Sethu Vijayakumar, Timothy Hospedales, and Adrian Haith
- Published in print: 2011
- Published Online: September 2012
- ISBN: 9780195387247
- eISBN: 9780199918379
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195387247.003.0004
- Subject: Psychology, Cognitive Neuroscience, Cognitive Psychology
This chapter argues that many aspects of human perception are best explained by adopting a modeling approach in which experimental subjects are assumed to possess a full generative probabilistic model of the task they are faced with, and that they use this model to make inferences about their environment and act optimally given the information available to them. It applies this generative modeling framework in two diverse settings—concurrent sensory and motor adaptation, and multisensory oddity detection—and shows, in both cases, that the data are best described by a full generative modeling approach.
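The "act optimally given the information available" claim has a standard concrete instance: fusing two noisy cues of the same quantity under a Gaussian generative model, where the optimal estimate is a precision-weighted average. A minimal sketch (illustrative only; the cue values and function name are invented, not taken from the chapter):

```python
def fuse_gaussian_cues(mu_a, var_a, mu_b, var_b):
    """Combine two noisy sensory estimates of the same quantity.

    Under a generative model where each cue equals the true value plus
    independent Gaussian noise, the posterior is Gaussian with a
    precision-weighted mean and summed precision.
    """
    prec_a, prec_b = 1.0 / var_a, 1.0 / var_b
    post_var = 1.0 / (prec_a + prec_b)
    post_mu = post_var * (prec_a * mu_a + prec_b * mu_b)
    return post_mu, post_var

# Visual cue: 10.0 with variance 1.0; auditory cue: 14.0 with variance 4.0.
mu, var = fuse_gaussian_cues(10.0, 1.0, 14.0, 4.0)
# The fused estimate (10.8) lies closer to the more reliable visual cue.
```

The same precision-weighting logic underlies optimal multisensory judgments of the kind the chapter models.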
Mark Newman
- Published in print: 2010
- Published Online: September 2010
- ISBN: 9780199206650
- eISBN: 9780191594175
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199206650.001.0001
- Subject: Physics, Theoretical, Computational, and Statistical Physics
The scientific study of networks, including computer networks, social networks, and biological networks, has received an enormous amount of interest in the last few years. The rise of the Internet and the wide availability of inexpensive computers have made it possible to gather and analyze network data on a large scale, and the development of a variety of new theoretical tools has allowed us to extract new knowledge from many different kinds of networks. The study of networks is broadly interdisciplinary, and important developments have occurred in many fields, including mathematics, physics, computer and information sciences, biology, and the social sciences. This book brings together the most important breakthroughs in each of these fields and presents them in a coherent fashion, highlighting the strong interconnections between work in different areas. Subjects covered include the measurement and structure of networks in many branches of science; methods for analyzing network data, including methods developed in physics, statistics, and sociology; the fundamentals of graph theory, computer algorithms, and spectral methods; mathematical models of networks, including random graph models and generative models; and theories of dynamical processes taking place on networks.
Timothy J. O’Donnell and Noah D. Goodman
- Published in print: 2015
- Published Online: May 2016
- ISBN: 9780262028844
- eISBN: 9780262326803
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262028844.003.0002
- Subject: Linguistics, Sociolinguistics / Anthropological Linguistics
This chapter consists of four parts. The first section discusses the ideas behind the modeling framework adopted in the book: structured probabilistic generative models. The second section discusses some theoretical and methodological issues in the interpretation of this approach to modeling. The third section develops the modeling framework from the perspective of probabilistic programming, using the Church programming language. The Church formalization makes explicit the relationship between the model and two important technical ideas from computer science (and, specifically, the theory of programming languages). The final section of the chapter provides more discussion of the four classes of model evaluated in the book.
M. E. J. Newman
- Published in print: 2010
- Published Online: September 2010
- ISBN: 9780199206650
- eISBN: 9780191594175
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199206650.003.0014
- Subject: Physics, Theoretical, Computational, and Statistical Physics
Generative network models describe the mechanisms by which networks are created. The idea behind such models is to explore hypothesized generative mechanisms to see what structures they produce. If the structures are similar to those of networks observed in the real world, this suggests — though does not prove — that similar generative mechanisms may be at work in the real networks. This chapter examines the best-known example of a generative network model: the ‘preferential attachment’ model for the growth of networks with power-law degree distributions. Exercises are provided at the end of the chapter.
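The preferential attachment mechanism described above can be simulated in a few lines. A sketch (not code from the book; the repeated-node-list sampling trick is a standard implementation choice, not prescribed by the chapter):

```python
import random

def preferential_attachment(n, m=1, seed=0):
    """Grow a network by preferential attachment.

    Each new node attaches to m existing nodes chosen with probability
    proportional to their current degree, implemented by sampling
    uniformly from a list that repeats each node once per unit of degree.
    """
    rng = random.Random(seed)
    targets = [0, 1]          # start from a single edge 0-1
    degree = {0: 1, 1: 1}
    for new in range(2, n):
        chosen = set()
        while len(chosen) < min(m, len(degree)):
            chosen.add(rng.choice(targets))
        for old in chosen:
            targets.extend([new, old])
            degree[new] = degree.get(new, 0) + 1
            degree[old] += 1
    return degree

deg = preferential_attachment(5000)
# High-degree hubs emerge: the maximum degree sits far above the mean
# degree of about 2, consistent with a heavy-tailed degree distribution.
```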
Bob Rehder
- Published in print: 2007
- Published Online: April 2010
- ISBN: 9780195176803
- eISBN: 9780199958511
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195176803.003.0013
- Subject: Psychology, Developmental Psychology
Essentialism is the view that kinds are defined by underlying properties or characteristics (an essence) that are shared by all category members and by members of no other categories, and that are presumed to generate, or cause, perceptual features. Although unobservable, essential features can nonetheless affect classification by changing the evidence that observable features provide for category membership. This chapter proposes treating essentialized categories as generative causal models and provides evidence for four phenomena that follow from this view: (a) classification as diagnostic reasoning; (b) classification as prospective reasoning; (c) boundary intensification; and (d) the effect of coherence on classification. The chapter also characterizes the development of conceptual knowledge in terms of an evolving set of causal models.
Kamal Nigam, Andrew McCallum, and Tom Mitchell
- Published in print: 2006
- Published Online: August 2013
- ISBN: 9780262033589
- eISBN: 9780262255899
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262033589.003.0003
- Subject: Computer Science, Machine Learning
This chapter explores the use of generative models for semi-supervised learning with labeled and unlabeled data in text classification domains. The widely used naive Bayes classifier for supervised learning defines a mixture-of-multinomials generative model. In some domains, model likelihood and classification accuracy are strongly correlated, despite the overly simplified generative model; here, expectation-maximization (EM) finds more likely models and improves classification accuracy. In other domains, likelihood and accuracy are not well correlated under the naive Bayes model; here, a more expressive generative model that allows multiple mixture components per class restores a moderate correlation between model likelihood and classification accuracy, and again EM finds more accurate models. Finally, even with a well-correlated generative model, local maxima are a significant hindrance for EM. Deterministic annealing does provide much higher-likelihood models, but often loses the correspondence with the class labels; when that correspondence is easily corrected, highly accurate models result.
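The E-step/M-step loop behind this approach can be illustrated with a toy analogue: a two-class 1-D Gaussian mixture in place of the chapter's multinomial text model. Labeled points keep hard class labels; unlabeled points get soft labels each E-step. All data and parameter values here are invented for illustration:

```python
import math, random

def semi_supervised_em(labeled, unlabeled, iters=50):
    """Semi-supervised EM for a two-class 1-D Gaussian mixture (unit variance)."""
    # Initialize each class mean from the labeled data only.
    mu = [sum(x for x, y in labeled if y == c) /
          max(1, sum(1 for _, y in labeled if y == c)) for c in (0, 1)]
    pi, var = [0.5, 0.5], 1.0

    def lik(x, c):
        return pi[c] * math.exp(-(x - mu[c]) ** 2 / (2 * var))

    for _ in range(iters):
        # E-step: responsibilities (hard for labeled, soft for unlabeled).
        resp = [(x, 1.0 if y == 1 else 0.0) for x, y in labeled]
        for x in unlabeled:
            p1 = lik(x, 1) / (lik(x, 0) + lik(x, 1))
            resp.append((x, p1))
        # M-step: re-estimate class priors and means from all data.
        n1 = sum(r for _, r in resp)
        n0 = len(resp) - n1
        pi = [n0 / len(resp), n1 / len(resp)]
        mu = [sum(x * (1 - r) for x, r in resp) / n0,
              sum(x * r for x, r in resp) / n1]
    return mu, pi

rng = random.Random(1)
labeled = [(rng.gauss(-2, 1), 0) for _ in range(3)] + \
          [(rng.gauss(+2, 1), 1) for _ in range(3)]
unlabeled = [rng.gauss(-2, 1) for _ in range(100)] + \
            [rng.gauss(+2, 1) for _ in range(100)]
mu, pi = semi_supervised_em(labeled, unlabeled)
# The class means, recovered from mostly unlabeled data, approach -2 and +2.
```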
Martin V. Butz and Esther F. Kutter
- Published in print: 2017
- Published Online: July 2017
- ISBN: 9780198739692
- eISBN: 9780191834462
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198739692.003.0009
- Subject: Psychology, Cognitive Models and Architectures, Cognitive Psychology
While bottom-up visual processing is important, the brain integrates this information with top-down, generative expectations from very early on in the visual processing hierarchy. Indeed, our brain should not be viewed as a classification system, but rather as a generative system, which perceives something by integrating sensory evidence with the available, learned, predictive knowledge about that thing. The involved generative models continuously produce expectations over time, across space, and from abstracted encodings to more concrete encodings. Bayesian information processing is the key to understanding how information integration must work computationally, at least in approximation, in the brain. Bayesian networks in the form of graphical models allow the modularization of information and the factorization of interactions, which can strongly improve the efficiency of generative models. The resulting generative models essentially produce state estimations in the form of probability densities, which are well suited to integrating multiple sources of information, including top-down and bottom-up ones. A hierarchical neural visual processing architecture illustrates this point further. Finally, some well-known visual illusions are shown, and the perceptions are explained by means of generative, information-integrating perceptual processes, which in all cases combine top-down prior knowledge and expectations about objects and environments with the available bottom-up visual information.
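The combination of top-down priors with bottom-up evidence can be made concrete by inverting a tiny discrete generative model with Bayes' rule. The hypotheses, observations, and probabilities below are invented for illustration:

```python
def posterior(prior, likelihoods, evidence):
    """Invert a tiny generative model with Bayes' rule.

    prior: dict hypothesis -> P(h)          (top-down expectation)
    likelihoods: dict hypothesis -> dict observation -> P(o | h)
    evidence: list of conditionally independent observations (bottom-up)
    """
    unnorm = {}
    for h, p in prior.items():
        for o in evidence:
            p *= likelihoods[h][o]
        unnorm[h] = p
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Top-down expectation: cats are common in this scene; bottom-up
# evidence: "furry" and "meows" observations (illustrative numbers).
prior = {"cat": 0.7, "dog": 0.3}
lik = {"cat": {"furry": 0.9, "meows": 0.8},
       "dog": {"furry": 0.9, "meows": 0.05}}
post = posterior(prior, lik, ["furry", "meows"])
# P(cat | evidence) rises well above the prior alone (to about 0.97).
```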
Thomas P. Trappenberg
- Published in print: 2019
- Published Online: January 2020
- ISBN: 9780198828044
- eISBN: 9780191883873
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198828044.003.0008
- Subject: Neuroscience, Behavioral Neuroscience
This chapter presents an introduction to the important topic of building generative models. These are models that aim to capture the variability of a class such as cars or trees. A generative model should be able to generate feature vectors for instances of the class it represents, and such models should therefore be able to characterize the class with all its variations. The subject is discussed both in a Bayesian and in a deep learning context, and within both supervised and unsupervised settings. The area is related to important algorithms such as k-means clustering, expectation-maximization (EM), naïve Bayes, generative adversarial networks (GANs), and variational autoencoders (VAEs).
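A minimal sketch of the "generate feature vectors for instances of a class" idea, fitting per-feature Gaussians to invented 'tree' data and then sampling new instances (an illustration, not a model from the chapter):

```python
import math, random

class GaussianClassModel:
    """Minimal generative model of a class: independent per-feature Gaussians.

    Fitting estimates a mean and standard deviation for each feature;
    sampling then generates new feature vectors for the class.
    """
    def fit(self, vectors):
        n, d = len(vectors), len(vectors[0])
        self.mu = [sum(v[j] for v in vectors) / n for j in range(d)]
        self.sigma = [math.sqrt(sum((v[j] - self.mu[j]) ** 2
                                    for v in vectors) / n) for j in range(d)]
        return self

    def sample(self, rng):
        return [rng.gauss(m, s) for m, s in zip(self.mu, self.sigma)]

rng = random.Random(0)
# Hypothetical 'tree' class: features are (height_m, trunk_width_m).
trees = [[rng.gauss(10, 2), rng.gauss(0.5, 0.1)] for _ in range(500)]
model = GaussianClassModel().fit(trees)
new_tree = model.sample(rng)   # a plausible, never-before-seen tree
```

This is the same fit-then-generate pattern that richer models such as VAEs and GANs implement with learned nonlinear mappings instead of independent Gaussians.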
Timothy J. O’Donnell and Noah D. Goodman
- Published in print: 2015
- Published Online: May 2016
- ISBN: 9780262028844
- eISBN: 9780262326803
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262028844.003.0003
- Subject: Linguistics, Sociolinguistics / Anthropological Linguistics
This chapter presents the mathematical details of the models studied in this book. It also discusses the inference algorithms used for each of the models and various other issues of practical concern for the simulations reported later.
Reza Shadmehr and Sandro Mussa-Ivaldi
- Published in print: 2012
- Published Online: August 2013
- ISBN: 9780262016964
- eISBN: 9780262301282
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262016964.003.0008
- Subject: Neuroscience, Research and Theory
This chapter presents some useful ideas on how to encourage the process of learning. It illustrates that learning speeds up when the learner becomes more sensitive to prediction errors. The chapter suggests that when the brain is presented with a prediction error, it tries to learn a generative model, and that the generative model's trial-to-trial change includes a forgetting rate. Presumably, biological learning is both a process of state estimation and a process in which the brain learns the structure of the generative model.
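The trial-to-trial account sketched here is commonly formalized as a linear state-space model with a retention factor (equivalently, a forgetting rate) and an error-sensitivity term. A sketch with illustrative parameter values (not the book's own code or numbers):

```python
def adapt(perturbation, a=0.99, b=0.3, trials=100):
    """Trial-to-trial state-space model of adaptation.

    x is the learner's estimate of the perturbation.  Each trial the
    prediction error e = perturbation - x drives an update scaled by
    the error sensitivity b, while the retention factor a (forgetting
    rate 1 - a) decays the state.
    """
    x, history = 0.0, []
    for _ in range(trials):
        e = perturbation - x        # prediction error on this trial
        x = a * x + b * e           # retain, then learn from error
        history.append(x)
    return history

h = adapt(1.0)
# The state rises toward a steady state below the full perturbation:
# x* = b / (1 - a + b), here 0.3 / 0.31, so adaptation stays incomplete.
```

Raising b (greater error sensitivity) speeds convergence, which is the "encourage learning" point above.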
Sugato Basu, Mikhail Bilenko, Arindam Banerjee, and Raymond Mooney
- Published in print: 2006
- Published Online: August 2013
- ISBN: 9780262033589
- eISBN: 9780262255899
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262033589.003.0005
- Subject: Computer Science, Machine Learning
This chapter discusses semi-supervised clustering: clustering tasks in which limited supervision is available in the form of pairwise constraints. Semi-supervised clustering is an instance of semi-supervised learning stemming from a traditional unsupervised learning setting. Several algorithms exist for enhancing clustering quality by using supervision in the form of constraints; they typically use the pairwise constraints either to modify the clustering objective function or to learn the clustering distortion measure. This chapter describes an approach that employs hidden Markov random fields (HMRFs) as a probabilistic generative model for semi-supervised clustering, thereby providing a principled framework for incorporating constraint-based supervision into prototype-based clustering. The HMRF-based model allows the use of a broad range of clustering distortion measures, including Bregman divergences and directional distance measures, making it applicable to a number of domains.
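A rough sketch of the constraint-based assignment idea, in the spirit of HMRF-based clustering but greatly simplified (1-D points, squared distance, a fixed violation penalty; everything here is invented for illustration and is not the chapter's actual algorithm):

```python
import random

def constrained_kmeans(points, k, must, cannot, w=30.0, iters=20, seed=0):
    """Pairwise-constrained k-means sketch.

    Each point is assigned to the cluster minimizing squared distance
    plus a penalty w for every must-link / cannot-link constraint it
    would violate, given the other points' current assignments.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    assign = [min(range(k), key=lambda c: (p - centers[c]) ** 2)
              for p in points]
    for _ in range(iters):
        for i, p in enumerate(points):
            def cost(c):
                d = (p - centers[c]) ** 2
                d += sum(w for a, b in must
                         if i in (a, b) and assign[b if a == i else a] != c)
                d += sum(w for a, b in cannot
                         if i in (a, b) and assign[b if a == i else a] == c)
                return d
            assign[i] = min(range(k), key=cost)
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = sum(members) / len(members)
    return assign, centers

points = [0.0, 0.2, 0.4, 5.0, 5.2, 5.4]
# A cannot-link between two nearby points (indices 1 and 2) forces
# point 2 into the distant cluster despite the distance cost.
assign, _ = constrained_kmeans(points, 2, must=[(0, 1)], cannot=[(1, 2)])
```

In the full HMRF formulation this penalized assignment step is the posterior (MAP) inference step of the generative model, and the distortion measure need not be squared Euclidean distance.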
Giovanni Pezzulo
- Published in print: 2016
- Published Online: June 2016
- ISBN: 9780190241537
- eISBN: 9780190241551
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780190241537.003.0013
- Subject: Psychology, Cognitive Psychology
The ubiquity of predictive processing in the brain suggests that it is functionally oriented toward the future. Mechanisms for predictive processing, such as internal generative models, can give rise to internally generated brain dynamics, thus permitting the brain to “detach,” at least partially, from the here and now of the current sensorimotor context. One example is the internally generated sequences of neural activity in the rodent hippocampus, which can be produced and replayed in the absence of external cues and have been linked to flexible decisions, planning, and memory functions. In this chapter the author considers the idea that other, more sophisticated kinds of detached cognition, such as counterfactual thinking and some forms of episodic simulation, might also be based on internally generated dynamics and use an internal model that originally supported predictive processing.
Franck Jovanovic and Christophe Schinckus
- Published in print: 2017
- Published Online: December 2016
- ISBN: 9780190205034
- eISBN: 9780190205065
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780190205034.003.0005
- Subject: Economics and Finance, Financial Economics
Chapter 5 identifies the potential contributions of econophysics to financial economics. First, the ways econophysics models are or could be used in trading rooms are discussed. Then the theoretical contributions of these models are analyzed from the viewpoint of a financial economist. The rest of the chapter scrutinizes recent developments in econophysics, presenting them as potential alternative solutions to existing issues in financial economics. To this end, two crucial issues are studied: on the one hand, the generative models explaining the emergence of power laws in financial data, and, on the other hand, the development of new statistical tests for validating new tools that capture non-Gaussian uncertainty.
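One concrete task behind "generative models explaining the emergence of power laws in financial data" is estimating a tail exponent. A sketch using the standard maximum-likelihood (Hill-type) estimator on synthetic Pareto data; the data and the choice of alpha = 3 (a value often reported for financial return tails) are illustrative:

```python
import math, random

def powerlaw_mle(xs, xmin):
    """Continuous power-law exponent by maximum likelihood.

    For data with density p(x) ~ x^(-alpha) above xmin, the MLE is
    alpha = 1 + n / sum(ln(x_i / xmin)).
    """
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Generate synthetic heavy-tailed 'returns' by inverse-CDF sampling
# (u^(-1/(alpha-1)) has survival function x^-(alpha-1)), then recover alpha.
rng = random.Random(0)
alpha, xmin = 3.0, 1.0
xs = [xmin * rng.random() ** (-1.0 / (alpha - 1.0)) for _ in range(20000)]
est = powerlaw_mle(xs, xmin)
# est recovers the true exponent of 3 to within sampling error.
```

Statistical tests of the kind the chapter mentions then ask whether such a fitted power law is actually a better description of the data than, say, a lognormal.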
Vsevolod Kapatsinski
- Published in print: 2018
- Published Online: September 2019
- ISBN: 9780262037860
- eISBN: 9780262346313
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262037860.003.0006
- Subject: Psychology, Cognitive Psychology
This chapter describes the evidence for the existence of dimensions, focusing on the difference in difficulty between attention shifts to a previously relevant vs. a previously irrelevant dimension. It discusses the representation of continuous dimensions in the associationist framework, including population coding and thermometer coding, as well as the idea that learning can adjust the breadth of adjustable receptive fields. In phonetics, continuous dimensions have been argued to be split into categories via distributional learning. The chapter reviews what we know about distributional learning and argues that it relies on several distinct learning mechanisms, including error-driven learning at two distinct levels and building a generative model of the speaker. The emergence of perceptual equivalence regions from error-driven learning is discussed, and implications for language change are briefly noted with an iterated-learning simulation.
- Published in print: 2010
- Published Online: June 2013
- ISBN: 9780804770552
- eISBN: 9780804775625
- Item type: chapter
- Publisher: Stanford University Press
- DOI: 10.11126/stanford/9780804770552.003.0017
- Subject: Society and Culture, Jewish Studies
This chapter examines the place of the generative model of “the Jews” in Isaak Babel's artistic self-image through the close reading of his two stories, “Moi pervyi gus” and “Guy de Maupassant.” It discusses Babel's depiction of “the Jews” as epitomizing the physical and spiritual decline of the European male, and considers his fictional and nonfictional writings as a coherent autobiographical narrative that presents in the symbolic language of a post-Christian culture a portrait of the artist as a former Jew. The chapter also argues that Babel's artistic affronts to Jewish and Christian sensibilities indicate his break from the traditional choices of the Russian-Jewish intelligentsia.
Mark Newman
- Published in print:
- 2018
- Published Online:
- October 2018
- ISBN:
- 9780198805090
- eISBN:
- 9780191843235
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198805090.003.0013
- Subject:
- Physics, Theoretical, Computational, and Statistical Physics
This chapter describes models of the growth or formation of networks, with a particular focus on preferential attachment models. It starts with a discussion of the classic preferential attachment model for citation networks introduced by Price, including a complete derivation of the degree distribution in the limit of large network size. Subsequent sections introduce the Barabási-Albert model and various generalized preferential attachment models, including models with addition or removal of extra nodes or edges and models with nonlinear preferential attachment. Also discussed are node copying models and models in which networks are formed by optimization processes, such as delivery networks or airline networks.
- Published in print:
- 2010
- Published Online:
- June 2013
- ISBN:
- 9780804770552
- eISBN:
- 9780804775625
- Item type:
- chapter
- Publisher:
- Stanford University Press
- DOI:
- 10.11126/stanford/9780804770552.003.0003
- Subject:
- Society and Culture, Jewish Studies
This chapter examines the archetypal description of the Jews in Christian imagination. It explains that the Jews are considered homologous to Satan and Eve by virtue of their embodiment of the “bad father” and the “bad mother.” The chapter analyzes the descriptive and narrative implications of these archetypal associations for the generative model of the Jews, and provides a summary of the invariant situations and motifs arising from the theological and archetypal structure of the “Jewish” image. It explains that these situations and motifs are the core narrative and descriptive features of the actor called “the Jews.”
- Published in print:
- 2010
- Published Online:
- June 2013
- ISBN:
- 9780804770552
- eISBN:
- 9780804775625
- Item type:
- chapter
- Publisher:
- Stanford University Press
- DOI:
- 10.11126/stanford/9780804770552.003.0011
- Subject:
- Society and Culture, Jewish Studies
This chapter examines the role of the Jewish character of Iankel as a helper-figure in Nikolai Gogol's “Taras Bul'ba.” It explains the archetypal conception of “the Jews” as keepers of knowledge and traffickers of information in this story, and highlights their use of information not only as a commodity but also as a weapon. The chapter discusses how Gogol conveyed the demonic nature of the “Jewish” Helper-figure using the descriptive vocabulary furnished by the generative model of “the Jews,” and describes the alterity of the language spoken by Iankel and his kind in the story.