Raymond L. Chambers and Robert G. Clark
- Published in print:
- 2012
- Published Online:
- May 2012
- ISBN:
- 9780198566625
- eISBN:
- 9780191738449
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198566625.003.0011
- Subject:
- Mathematics, Probability / Statistics
Inference for non-linear population parameters develops model-based prediction theory for target parameters that are not population totals or means. The development initially is for the case where the target parameter can be expressed as a differentiable function of finite population means, and a Taylor series linearisation argument is used to get a large sample approximation to the prediction variance of the substitution-based predictor. This Taylor linearisation approach is then generalised to target parameters that can be expressed as solutions of estimating equations. An application to inference about the median value of a homogeneous population serves to illustrate the basic approach, and this is then extended to the stratified population case.
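For orientation, the display below records the generic Taylor-series (delta-method) linearisation step that underlies the argument summarised above; the notation is ours, and the finite-population and prediction-specific details treated in the book are suppressed.

```latex
% Generic Taylor linearisation step (illustrative notation, not the book's).
% theta = f(m_1,...,m_k) is a differentiable function of finite population means,
% and \hat{\theta} = f(\hat{m}_1,...,\hat{m}_k) is the substitution-based predictor.
\[
\hat{\theta} - \theta \;\approx\; \sum_{j=1}^{k}
  \frac{\partial f}{\partial m_j}\bigg|_{\mathbf{m}} \left(\hat{m}_j - m_j\right),
\qquad
\operatorname{Var}\!\left(\hat{\theta} - \theta\right) \;\approx\;
  \nabla f(\mathbf{m})^{\top}\,
  \operatorname{Var}\!\left(\hat{\mathbf{m}} - \mathbf{m}\right)\,
  \nabla f(\mathbf{m}),
\]
% so a large-sample approximation to the prediction variance of \hat{\theta}
% follows from the prediction variances and covariances of the estimated means.
```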
Christopher G. Small and Jinfang Wang
- Published in print:
- 2003
- Published Online:
- September 2007
- ISBN:
- 9780198506881
- eISBN:
- 9780191709258
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198506881.003.0007
- Subject:
- Mathematics, Probability / Statistics
This chapter explores the relationship between the numerical methods described in earlier chapters and the theory of dynamical systems. An estimating function defines a dynamical estimating system, whose domains of attraction and repulsion can be studied in relation to the estimation problem. For instance, the stability of roots of an estimating equation can be studied using the linearization method or Liapunov's method. In particular, it can be shown that a consistent root of an estimating equation is an asymptotically stable fixed point of the associated dynamical estimating system. The Newton-Raphson method is reexamined in detail from the perspective of the theory of dynamical systems, and derivations of and formal proofs for the properties of the modified Newton methods are given. This chapter also explores the Julia sets and domains of attraction of estimating functions, taking the estimation of the correlation coefficient for bivariate normal data as an example.
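A minimal sketch of the dynamical-systems reading described above, using our own toy estimating function rather than the authors' bivariate-normal example: the Newton-Raphson update is iterated as a discrete map, whose fixed point is the root of the estimating equation and whose near-zero derivative at a simple root is what makes that fixed point asymptotically stable.

```python
# Minimal sketch: the Newton-Raphson update for a 1-D estimating equation g(theta) = 0,
# viewed as a discrete dynamical system theta_{k+1} = N(theta_k).
# The estimating function g and the simulated data are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.5, scale=1.0, size=200)      # toy sample

def g(theta):
    """A smooth location estimating function (mean of tanh residuals)."""
    return np.mean(np.tanh(x - theta))

def newton_map(theta, h=1e-6):
    """One Newton-Raphson step N(theta) = theta - g(theta)/g'(theta)."""
    dg = (g(theta + h) - g(theta - h)) / (2 * h)  # numerical derivative of g
    return theta - g(theta) / dg

theta = 0.0                                       # start inside the domain of attraction
for _ in range(20):
    theta = newton_map(theta)

# At a simple root the derivative of the Newton map is ~0, so the root is an
# asymptotically stable fixed point: nearby starting values are attracted to it.
h = 1e-4
slope = (newton_map(theta + h) - newton_map(theta - h)) / (2 * h)
print("root:", theta, " g(root):", g(theta), " |N'(root)|:", abs(slope))
```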
Nomi Erteschik‐Shir
- Published in print:
- 2010
- Published Online:
- May 2010
- ISBN:
- 9780199556861
- eISBN:
- 9780191722271
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199556861.003.0003
- Subject:
- Linguistics, Syntax and Morphology, Phonetics / Phonology
MON types an 'uncertainty' question and non‐typing MON adds 'uncertainty' to questions typed by an overt or silent performative. This paper accounts for the placement of MON in both these cases and also explains why V‐2 is not triggered when MON occurs sentence-initially. The explanation rests on the idea that MON is linearized in the phonology, as are other adverbs, and that V‐2 is phonologically motivated.
Steven Franks
- Published in print:
- 2010
- Published Online:
- May 2010
- ISBN:
- 9780199556861
- eISBN:
- 9780191722271
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199556861.003.0007
- Subject:
- Linguistics, Syntax and Morphology, Phonetics / Phonology
Minimalism reevaluates the division of labor between syntax and PF; much that is traditionally treated as syntactic is actually a response to Spell-Out demands. This paper examines (largely Slavic) phenomena caused by the Spell-Out exigencies of (i) imposition of linear order on syntactically concatenated elements, (ii) determination of which copy to pronounce, and (iii) prosodification.
Yvonne Choquet-Bruhat
- Published in print:
- 2008
- Published Online:
- May 2009
- ISBN:
- 9780199230723
- eISBN:
- 9780191710872
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199230723.003.0007
- Subject:
- Mathematics, Applied Mathematics
This chapter begins with a discussion of the concepts of linearization and stability. It then covers the conformally formulated (CF) constraints; solutions on compact manifolds, including solution of the momentum constraint, the Lichnerowicz equation, and the coupled system of constraints; solutions on asymptotically Euclidean manifolds, again covering the momentum constraint, the solution of the Lichnerowicz equation, and solutions of the system of constraints; and the gluing of solutions of the constraint equations.
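For readers who have not met it, the Lichnerowicz equation named repeatedly above is, in one common convention for the vacuum conformal method (signs and coefficients vary between sources, and the form below is not quoted from the book), the semilinear elliptic equation for the conformal factor:

```latex
% One common form of the vacuum Lichnerowicz equation in the conformal method,
% with g = \varphi^4 \bar{g} in three space dimensions; \bar{A} is the traceless part
% of the (conformally rescaled) second fundamental form and \tau the mean curvature.
% Conventions differ between sources.
\[
8\,\Delta_{\bar{g}}\varphi \;-\; R_{\bar{g}}\,\varphi
\;+\; |\bar{A}|^{2}_{\bar{g}}\,\varphi^{-7}
\;-\; \tfrac{2}{3}\,\tau^{2}\,\varphi^{5} \;=\; 0 .
\]
```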
D. Gary Miller
- Published in print:
- 2010
- Published Online:
- January 2011
- ISBN:
- 9780199583430
- eISBN:
- 9780191595288
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199583430.003.0001
- Subject:
- Linguistics, Historical Linguistics, Theoretical Linguistics
Positional correlations and linearization changes mark the transition to morphology and syntax. On most theoretical accounts, morphology is not autonomous, but interacts with at least three other domains: (i) phonology and perception, (ii) the lexicon/culture, and (iii) syntax. The first is treated extensively in Volume I. The second is illustrated with the rise of the feminine gender in Indo‐European, and the third by documentation of the changes from Latin to Romance in the shift from morphological to syntactic coding of reflexive, anticausative, middle, and passive. In morphological change, both inflectional and derivational markers are shown to spread by lexical diffusion. Syntactic change is (micro)parametric and is typically motivated by changes in lexical features combined with morphological attrition and/or principles of efficient computation. The latter are especially important for frequent crosslinguistic changes, including the numerous shifts from lexical to functional content as well as changes within functional categories. The volume closes with the genesis of creole inflectional, derivational, and syntactic categories, involving the interaction of contact phenomena with morphological and syntactic change.
D. Gary Miller
- Published in print:
- 2010
- Published Online:
- January 2011
- ISBN:
- 9780199583430
- eISBN:
- 9780191595288
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199583430.003.0002
- Subject:
- Linguistics, Historical Linguistics, Theoretical Linguistics
Core Data. As groundwork for the theoretical discussion, this taxonomic chapter presents the main linearization patterns and the properties of the three most frequent language types: SOV, SVO, VSO. Preferences for complement and relative clause types issue from processing complexity. Adjectives tend to follow nouns crosslinguistically. Genitives, determiners, and numerals are positionally variable. One kind of cross‐category harmony affects the position of functional heads with respect to LPs (lexical phrases). Another affects the linearization within FPs (functional phrases) and LPs. In English, the order of genitives before nouns and Deg(ree) before AP (so good) is predicted by the parametrization of functional heads before LPs.
D. Gary Miller
- Published in print:
- 2010
- Published Online:
- January 2011
- ISBN:
- 9780199583430
- eISBN:
- 9780191595288
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199583430.003.0003
- Subject:
- Linguistics, Historical Linguistics, Theoretical Linguistics
This chapter begins with hypotheses about linearization and proceeds to examples of word‐order change. The theoretical portion focuses on the main formal theories, the most interesting of which involves feature‐driven parameters of movement in combination with the Linear Correspondence Axiom as an interface default. The main typological change treated is Germanic, which shifted from V‐final by gradual loss of object‐fronting cues. Changes in the genitive are also discussed. The phrasal genitive developed in contact with Danes in northeast England. Finally, numerous changes from V‐final to non‐V‐final are reviewed and contrasted with the rarity of changes to a head‐final language.
Klaus Böhmer
- Published in print:
- 2010
- Published Online:
- January 2011
- ISBN:
- 9780199577040
- eISBN:
- 9780191595172
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199577040.001.0001
- Subject:
- Physics, Theoretical, Computational, and Statistical Physics
Nonlinear elliptic problems play an increasingly important role in mathematics, science and engineering, and create an exciting interplay. Other books discuss nonlinearity by a very few important examples. This is the first and only book to prove, in a systematic and unifying way, stability and convergence results and methods for solving nonlinear discrete equations via discrete Newton methods for the different numerical methods for all these problems. The proofs use linearization, compact perturbation of the coercive principal parts, or monotone operator techniques, and approximation theory. This is exemplified for problems ranging from linear to fully nonlinear (where the highest derivatives occur nonlinearly) and for the most important space discretization methods: conforming and nonconforming finite element, discontinuous Galerkin, finite difference and wavelet methods. The proof of stability for nonconforming methods employs the anticrime operator as an essential tool. For all these methods, approximate evaluation of the discrete equations and eigenvalue problems are discussed. The numerical methods are based upon analytic results for this wide class of problems, guaranteeing existence, uniqueness and regularity of the exact solutions. In the next book, results for spectral and mesh‐free methods, and convergence for bifurcation and center manifolds, are proved for all these combinations. Specific long-open problems solved here include numerical methods for fully nonlinear elliptic problems, wavelet and mesh‐free methods for nonlinear problems, and more general nonlinear boundary conditions. Adaptivity is discussed for finite element and wavelet methods with totally different techniques.
Klaus Böhmer
- Published in print:
- 2010
- Published Online:
- January 2011
- ISBN:
- 9780199577040
- eISBN:
- 9780191595172
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199577040.003.0002
- Subject:
- Physics, Theoretical, Computational, and Statistical Physics
Chapter 2 summarizes general linear, special semilinear, semilinear, quasilinear, and fully nonlinear elliptic differential equations and systems of order 2m, m ≥ 1, e.g. the above equations. Essential are existence, uniqueness, and regularity of their solutions and linearization. Many important arguments for linearization are discussed. It is assumed that the derivative of the nonlinear operator, evaluated in the exact (isolated) solution, is boundedly invertible, closely related to the numerically necessary condition of a (locally) well-conditioned problem. Bifurcation problems are delayed to the next book; ill-conditioned problems are not considered. Linearization is applicable to nearly all nonlinear elliptic problems. Its bounded invertibility yields the Fredholm alternative and the stability of space discretization methods. Some nonlinear, monotone problems exclude linearization.
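Schematically (in our notation, not the chapter's), the role of the boundedly invertible linearization can be seen in the abstract Newton iteration for F(u) = 0:

```latex
% Abstract Newton step for F(u) = 0; F'(u_k) is the (Frechet) derivative at u_k.
\[
F'(u_k)\,\delta_k \;=\; -\,F(u_k), \qquad u_{k+1} \;=\; u_k + \delta_k .
\]
% If F'(u^*) is boundedly invertible at the exact, isolated solution u^* (and F'
% is locally Lipschitz), these steps are well defined near u^* and converge to it;
% the same invertibility is what yields the Fredholm alternative and the
% stability of the space discretization methods referred to above.
```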
Klaus Böhmer
- Published in print:
- 2010
- Published Online:
- January 2011
- ISBN:
- 9780199577040
- eISBN:
- 9780191595172
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199577040.003.0003
- Subject:
- Physics, Theoretical, Computational, and Statistical Physics
A new general discretization theory unifies the generalized Petrov-Galerkin method and one of the classical methods. Linearization is a main tool: the derivative of the operator in the exact solution has to be boundedly invertible. For quasilinear problems, in Sobolev spaces W^{m,p}(Ω) with 2 ≤ p < ∞, this chapter obtains stability and convergence results with respect to discrete H^m(Ω) norms. This is complemented by the monotone approach for 1 ≤ p < ∞ with W^{m,p}(Ω) convergence. Our approach allows a unified proof for stability, convergence and Fredholm results for the discrete solutions and their computation. A few well-known basic concepts from functional analysis and approximation theory are combined: coercive bilinear forms or monotone operators, their compact perturbations, interpolation, best approximation and inverse estimates for approximating spaces yield the classical “consistency and stability imply convergence”. The mesh independence principle is the key for an efficient solution for all discretizations of all nonlinear problems considered here.
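As a toy instance of the pattern analysed in this chapter, under our own assumptions (a one-dimensional semilinear model problem, a plain finite-difference discretization, and a manufactured solution), the sketch below solves the discrete nonlinear equations by a discrete Newton method in which the linearization is inverted at each step:

```python
# Toy discrete Newton method for a semilinear two-point boundary value problem
#   -u'' + u^3 = f on (0,1),  u(0) = u(1) = 0     (illustrative model problem only)
import numpy as np

n = 99                                   # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

u_exact = np.sin(np.pi * x)              # manufactured solution
f = (np.pi ** 2) * u_exact + u_exact ** 3

# Standard second-difference approximation of -d^2/dx^2 with Dirichlet conditions
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h ** 2

def F(u):
    """Discrete nonlinear residual F_h(u) = A u + u^3 - f."""
    return A @ u + u ** 3 - f

u = np.zeros(n)                          # initial guess
for it in range(10):
    J = A + np.diag(3.0 * u ** 2)        # linearization (Jacobian) at the current iterate
    delta = np.linalg.solve(J, -F(u))    # solve the linearized system
    u += delta
    if np.linalg.norm(delta, np.inf) < 1e-12:
        break

print("Newton iterations:", it + 1,
      " discretization error (max norm):", np.abs(u - u_exact).max())
```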
Dirk Bury and Hiroyuki Uchida
- Published in print:
- 2012
- Published Online:
- September 2012
- ISBN:
- 9780199644933
- eISBN:
- 9780191741609
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199644933.003.0002
- Subject:
- Linguistics, Theoretical Linguistics, Syntax and Morphology
This chapter replaces traditional phrase structure trees with a new type of set-based representation. Typical labelled trees permit copying of one category onto distinct syntactic nodes, and excessive copying possibilities need to be filtered out by independent means. Such additional means not only increase the complexity of the grammar but also blur the distinction between syntactic and semantic constraints. The chapter therefore presents an alternative structure representation system that can express syntactic copying in only one specific configuration, whose importance is exemplified by German V2 data. The system is incompatible with certain current approaches to phrasal movement, namely with multidominance systems and with (certain versions of) the copy theory of movement. Its implications are discussed with regard to wh-extraction and reconstruction. Finally, the chapter compares the authors' representation system with dependency graphs as well as other set-based structure representations proposed in Chomskyan syntax.
Martina Gračanin‐Yuksek
- Published in print:
- 2012
- Published Online:
- September 2012
- ISBN:
- 9780199644933
- eISBN:
- 9780191741609
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199644933.003.0005
- Subject:
- Linguistics, Theoretical Linguistics, Syntax and Morphology
According to Kayne's (1994) Linear Correspondence Axiom (LCA), the linear order of terminals in a structure depends on asymmetric c-command relations among the non-terminals. Since the LCA is incompatible with multidominance (MD), the questions arise: how are MD structures linearized, and what constrains MD? Recently, several proposals have argued that an MD representation is admissible if it is linearizable by (some version of) the LCA. This chapter argues against this view. The chapter shows that an MD-compatible version of the LCA cannot linearize Croatian non-MD structures containing the clitic je. The problem re-emerges in Croatian coordinated wh-questions (Q&Qs) and in German Subjektlücke in finiten Sätzen ('subject gap in finite clauses') constructions. The chapter concludes that in MD representations, linear order is not predictable from the structure. Consequently, the claim that MD is constrained by linearization becomes moot. The data are explained if we adopt the claim that what constrains MD is a syntactic constraint, the COSH.
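For reference, Kayne's LCA can be paraphrased as follows (our schematic statement, not the chapter's formulation):

```latex
% Schematic paraphrase of the Linear Correspondence Axiom (LCA).
% T = the set of terminals; X, Y range over non-terminals;
% d(X) = the set of terminals dominated by X.
\[
A \;=\; \{\langle X, Y\rangle : X \text{ asymmetrically c-commands } Y\},
\qquad
d(A) \;=\; \bigcup_{\langle X,Y\rangle \in A} d(X)\times d(Y).
\]
% LCA: d(A) is a linear ordering of T. Multidominance is problematic here because
% a multiply dominated element can end up ordered both before and after other
% terminals, so d(A) fails to be an antisymmetric (hence linear) ordering.
\[
\textbf{LCA:}\quad d(A)\ \text{is a linear ordering of } T.
\]
```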
Theresa Biberauer and Michelle Sheehan
- Published in print:
- 2012
- Published Online:
- September 2012
- ISBN:
- 9780199644933
- eISBN:
- 9780191741609
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199644933.003.0009
- Subject:
- Linguistics, Theoretical Linguistics, Syntax and Morphology
This chapter addresses the crucial issue of how hierarchical structure relates to linear order, and provides evidence that the two are universally mediated by a version of Kayne’s (1994) Linear Correspondence Axiom (LCA). The discussion focuses on new data in support of the Final-over-Final Constraint (FOFC), an apparent gap in disharmonic word orders. The data in question relate to the embedding of various types of clauses in OV languages. Almost universally, the FOFC-violating order (*[VP [CP C TP] V]) fails to surface, and what we see instead is extraposition, i.e. superficially: [VP V [CP C TP]]. Based on these data, the chapter argues that: (i) in such cases, obligatory extraposition comes about as an indirect result of FOFC; (ii) any adequate explanation of FOFC and its effects will need to refer to the LCA; and (iii) the pattern provides evidence for the independently proposed idea that certain CPs can be embedded under nominal structure.
John M. Anderson
- Published in print:
- 2011
- Published Online:
- January 2012
- ISBN:
- 9780199608324
- eISBN:
- 9780191732041
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199608324.003.0002
- Subject:
- Linguistics, Syntax and Morphology, Phonetics / Phonology
This chapter is concerned with the lexicon as an interface between syntax and lexical phonology. Various levels of lexical representation are reflected in different spelling systems. The interface itself may have internal structure at which bracketing into formatives is introduced: morphology. Lexicon and syntax share categorization in terms of unary features and dependency relations between the categories, but in the lexicon the categories do not come to be linearized. The syntactic representations introduced in Volume I of the trilogy are briefly reintroduced, and there is a preliminary account of the phonological structure employed in Volume III. The lexicon again shares the phonological categories of the phonology, but these are only partially ordered and structured. The crucial role of the paradigm in what follows is emphasized.
Edward P. Herbst and Frank Schorfheide
- Published in print:
- 2015
- Published Online:
- October 2017
- ISBN:
- 9780691161082
- eISBN:
- 9781400873739
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691161082.003.0002
- Subject:
- Economics and Finance, Econometrics
This chapter considers how DSGE models are turned into Bayesian models by specifying a probability distribution for the innovations of the exogenous shock processes. There exists a wide variety of numerical techniques to solve DSGE models, but the chapter elaborates on a technique that involves the log-linearization of the equilibrium conditions and the solution of the resulting linear rational expectations difference equations. The approximate solution takes the form of a vector autoregressive process for the model variables, which is driven by the innovations to the exogenous shock processes, and is used as a set of state-transition equations in the state-space representation of the DSGE model. Under the assumption that these innovations are normally distributed, the log-linearized DSGE model takes the form of a linear Gaussian state-space model.
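To make the last sentence concrete, the sketch below evaluates the Gaussian likelihood of a generic linear state-space model with a textbook Kalman filter; the matrices (Phi, R, Q, D, Z, H) are placeholders standing in for whatever a log-linearized DSGE solution delivers, and the code is ours, not the book's.

```python
# Kalman filter log-likelihood for a linear Gaussian state-space model
#   s_t = Phi s_{t-1} + R eps_t,   eps_t ~ N(0, Q)
#   y_t = D + Z s_t + u_t,         u_t  ~ N(0, H)
# All matrices here are illustrative placeholders, not a particular DSGE solution.
import numpy as np

def kalman_loglik(y, Phi, R, Q, D, Z, H):
    n_s = Phi.shape[0]
    s = np.zeros(n_s)
    # Unconditional state covariance (assumes eigenvalues of Phi inside the unit circle):
    # solve the Lyapunov equation P = Phi P Phi' + R Q R'.
    P = np.linalg.solve(np.eye(n_s ** 2) - np.kron(Phi, Phi),
                        (R @ Q @ R.T).ravel()).reshape(n_s, n_s)
    loglik = 0.0
    for y_t in y:
        s = Phi @ s                              # state prediction
        P = Phi @ P @ Phi.T + R @ Q @ R.T
        v = y_t - (D + Z @ s)                    # forecast error
        F = Z @ P @ Z.T + H                      # forecast error covariance
        loglik += -0.5 * (len(v) * np.log(2 * np.pi)
                          + np.log(np.linalg.det(F))
                          + v @ np.linalg.solve(F, v))
        K = P @ Z.T @ np.linalg.inv(F)           # Kalman gain
        s = s + K @ v                            # measurement update
        P = P - K @ Z @ P
    return loglik

# Tiny illustrative example: one state, one observable
Phi = np.array([[0.9]]); R = np.array([[1.0]]); Q = np.array([[0.04]])
D = np.array([0.0]);     Z = np.array([[1.0]]); H = np.array([[0.01]])
y = [np.array([0.10]), np.array([0.05]), np.array([-0.02])]
print(kalman_loglik(y, Phi, R, Q, D, Z, H))
```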
Edward P. Herbst and Frank Schorfheide
- Published in print:
- 2015
- Published Online:
- October 2017
- ISBN:
- 9780691161082
- eISBN:
- 9781400873739
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691161082.003.0007
- Subject:
- Economics and Finance, Econometrics
This chapter presents computational techniques that can be used to estimate DSGE models that have been solved with nonlinear techniques, such as higher-order perturbation methods or projection methods. From the perspective of Bayesian estimation, the key difference between DSGE models that have been solved with a linearization technique and models that have been solved nonlinearly is that in the former case the resulting state-space representation is linear, whereas in the latter case it takes a general nonlinear form. The chapter also highlights some of the features that researchers have introduced into DSGE models to capture important nonlinearities in the data, using the small-scale New Keynesian DSGE model as an illustrative example.
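One generic way to evaluate the likelihood of such a nonlinear state-space model is a bootstrap particle filter; the sketch below is our own toy scalar example (the transition g, measurement h, and noise scales are assumptions), not the chapter's New Keynesian application or its particular algorithms.

```python
# Bootstrap particle filter log-likelihood for a generic nonlinear state-space model
#   s_t = g(s_{t-1}) + eps_t,   y_t = h(s_t) + u_t     (toy scalar example)
import numpy as np

rng = np.random.default_rng(1)

def g(s):
    return 0.7 * np.tanh(s)              # assumed nonlinear state transition

def h(s):
    return np.exp(0.5 * s)               # assumed nonlinear measurement equation

def bootstrap_pf_loglik(y, n_particles=2000, sig_eps=0.3, sig_u=0.2):
    particles = rng.normal(0.0, 1.0, n_particles)        # initial particle cloud
    loglik = 0.0
    for y_t in y:
        # Propagate particles through the state transition
        particles = g(particles) + rng.normal(0.0, sig_eps, n_particles)
        # Incremental weights: measurement density of y_t given each particle
        w = np.exp(-0.5 * ((y_t - h(particles)) / sig_u) ** 2) / (sig_u * np.sqrt(2 * np.pi))
        loglik += np.log(w.mean() + 1e-300)
        # Multinomial resampling
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        particles = particles[idx]
    return loglik

# Simulate a short series from the same toy model, then evaluate its likelihood
s, y = 0.0, []
for _ in range(50):
    s = g(s) + rng.normal(0.0, 0.3)
    y.append(h(s) + rng.normal(0.0, 0.2))
print(bootstrap_pf_loglik(np.array(y)))
```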
Christof Koch
- Published in print:
- 1998
- Published Online:
- November 2020
- ISBN:
- 9780195104912
- eISBN:
- 9780197562338
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195104912.003.0012
- Subject:
- Computer Science, Mathematical Theory of Computation
The vast majority of nerve cells generate a series of brief voltage pulses in response to vigorous input. These pulses, also referred to as action potentials or spikes, originate at or close to the cell body, and propagate down the axon at constant velocity and amplitude. Fig. 6.1 shows the shape of the action potential from a number of different neuronal and nonneuronal preparations. Action potentials come in a variety of shapes; common to all is the all-or-none depolarization of the membrane beyond 0 mV. That is, if the voltage fails to exceed a particular threshold value, no spike is initiated and the potential returns to its baseline level. If the voltage threshold is exceeded, the membrane executes a stereotyped voltage trajectory that reflects membrane properties and not the input. As evident in Fig. 6.1, the shape of the action potential can vary enormously from cell type to cell type. When inserting an electrode into a brain, the small all-or-none electrical events one observes extracellularly are usually due to spikes that are initiated close to the cell body and that propagate along the axons. When measuring the electrical potential across the membrane, these spikes peak between +10 and +30 mV and are over (depending on the temperature) within 1 or 2 msec. Other all-or-none events, such as the complex spikes in cerebellar Purkinje cells or bursting pyramidal cells in cortex, show a more complex wave form with one or more fast spikes superimposed onto an underlying, much slower depolarization. Finally, under certain conditions, the dendritic membrane can also generate all-or-none events that are much slower than somatic spikes, usually on the order of 50-100 msec or longer. We will treat these events and their possible significance in Chap. 19. Only a small fraction of all neurons is unable—under physiological conditions—to generate action potentials, making exclusive use of graded signals. Examples of such nonspiking cells, usually spatially compact, can be found in the distal retina (e.g., bipolar, horizontal, and certain types of amacrine cells) and many neurons in the sensory-motor pathway of invertebrates (Roberts and Bush, 1981).
Christof Koch
- Published in print:
- 1998
- Published Online:
- November 2020
- ISBN:
- 9780195104912
- eISBN:
- 9780197562338
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195104912.003.0013
- Subject:
- Computer Science, Mathematical Theory of Computation
The previous chapter provided a detailed description of the currents underlying the generation and propagation of action potentials in the squid giant axon. The Hodgkin-Huxley (1952d) model captures these events in terms of the dynamical behavior of four variables: the membrane potential and three state variables determining the state of the fast sodium and the delayed potassium conductances. This quantitative, conductance-based formalism reproduces the physiological data remarkably well and has been extremely fertile in terms of providing a mathematical framework for modeling neuronal excitability throughout the animal kingdom (for the current state of the art, see McKenna, Davis, and Zornetzer, 1992; Bower and Beeman, 1998; Koch and Segev, 1998). Collectively, these models express the complex dynamical behaviors observed experimentally, including pulse generation and threshold behavior, adaptation, bursting, bistability, plateau potentials, hysteresis, and many more. However, these models are difficult to construct and require detailed knowledge of the kinetics of the individual ionic currents. The large number of associated activation and inactivation functions and other parameters usually obscures the contributions of particular features (e.g., the activation range of the sodium activation particle) toward the observed dynamic phenomena. Even after many years of experience in recording from neurons or modeling them, it is a dicey business predicting the effect that varying one parameter, say, the amplitude of the calcium-dependent slow potassium current (Chap. 9), has on the overall behavior of the model. This precludes the development of insight and intuition, since the numerical complexity of these models prevents one from understanding which important features in the model are responsible for a particular phenomenon and which are irrelevant. Qualitative models of neuronal excitability, capturing some of the topological aspects of neuronal dynamics but at a much reduced complexity, can be very helpful in this regard, since they highlight the crucial features responsible for a particular behavior. By topological aspects we mean those properties that remain unchanged in spite of quantitative changes in the underlying system. These typically include the existence of stable solutions and their basins of attraction, limit cycles, bistability, and the existence of strange attractors.
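A classic example of the kind of reduced qualitative model alluded to here is the two-variable FitzHugh-Nagumo system; the sketch below integrates it with forward Euler under standard textbook parameters, as a generic illustration rather than a reproduction of anything in the chapter.

```python
# FitzHugh-Nagumo: a two-variable qualitative reduction of excitable membrane dynamics.
#   dv/dt = v - v^3/3 - w + I
#   dw/dt = eps * (v + a - b*w)
# Standard textbook parameters; simple forward-Euler integration.
import numpy as np

a, b, eps = 0.7, 0.8, 0.08
dt, T = 0.05, 400.0
steps = int(T / dt)

def simulate(I):
    """Integrate the model for constant input current I and return the v trace."""
    v, w = -1.2, -0.6               # start near the resting state
    vs = np.empty(steps)
    for k in range(steps):
        dv = v - v ** 3 / 3 - w + I
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        vs[k] = v
    return vs

# Below a critical input the trajectory settles onto a stable fixed point;
# above it the fixed point loses stability and a limit cycle (repetitive
# "spiking") appears: the sort of topological distinction that qualitative
# models make easy to see.
for I in (0.0, 0.5):
    v_trace = simulate(I)
    late_range = np.ptp(v_trace[steps // 2:])
    print(f"I = {I}: peak-to-peak range of v over the second half = {late_range:.2f}")
```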
John Ross, Igor Schreiber, and Marcel O. Vlad
- Published in print:
- 2006
- Published Online:
- November 2020
- ISBN:
- 9780195178685
- eISBN:
- 9780197562277
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195178685.003.0015
- Subject:
- Chemistry, Physical Chemistry
There is enormous interest in the biology of complex reaction systems, be it in metabolism, signal transduction, gene regulatory networks, protein synthesis, or many other areas. The field of the interpretation of experiments on such systems by application of the methods of information science, computer science, and biostatistics is called bioinformatics (see for a presentation of this subject). Part of it is an extension of the chemical approaches that we have discussed for obtaining information on the reaction mechanisms of complex chemical systems to complex biological and genetic systems. We present here a very brief introduction to this field, which is exploding with scientific and technical activity. No review is intended, only an indication of several approaches on the subject of our book, with apologies for the omission of vast numbers of publications. A few reminders: The entire complement of DNA molecules constitutes the genome, which consists of many genes. RNA is generated from DNA in a process called transcription; the RNA that codes for proteins is known as messenger RNA, abbreviated to mRNA. Other RNAs code for functional molecules such as transfer RNAs, ribosomal components, and regulatory molecules, or even have enzymatic function. Protein synthesis is regulated by many mechanisms, including those for transcription initiation, RNA splicing (in eukaryotes), mRNA transport, translation initiation, post-translational modifications, and degradation of mRNA. Proteins perform perhaps most cellular functions. Advances in microarray technology, with the use of cDNA or oligonucleotides immobilized in a predefined organization on a solid phase, have led to measurements of mRNA expression levels on a genome-wide scale (see chapter 3). The results of the measurements can be displayed on a plot on which a row represents one gene at various times, a column the whole set of genes, and the time of gene expression is plotted along the axis of rows. The changes in expression levels, as measured by fluorescence, are indicated by colors, for example green for decreased expression, black for no change in expression, and red for increased expression. Responses in expression levels have been measured for various biochemical and physiological conditions. We turn now to a few methods of obtaining information on genomic networks from microarray measurements.
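As a small illustration of the display convention just described (green for decreased, black for unchanged, red for increased expression), the snippet below renders a toy genes-by-time matrix of log-ratios with a green-black-red colormap; the data are random placeholders, not microarray measurements.

```python
# Toy green-black-red heatmap of expression log-ratios (rows = genes, columns = time points).
# Random placeholder data; this only illustrates the colour-coding convention.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

rng = np.random.default_rng(2)
log_ratios = rng.normal(0.0, 1.0, size=(20, 8))   # 20 genes x 8 time points

gbr = LinearSegmentedColormap.from_list("green_black_red", ["green", "black", "red"])
plt.imshow(log_ratios, cmap=gbr, vmin=-2, vmax=2, aspect="auto")
plt.colorbar(label="log2 expression ratio")
plt.xlabel("time point")
plt.ylabel("gene")
plt.title("green = decreased, black = unchanged, red = increased")
plt.show()
```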