Nicolas Loeuille and Michel Loreau
- Published in print:
- 2009
- Published Online:
- February 2010
- ISBN:
- 9780199228973
- eISBN:
- 9780191711169
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199228973.003.00013
- Subject:
- Biology, Ecology
Evolutionary dynamics may help us understand the structure and dynamics of food webs. The theoretical understanding of empirical food web patterns faces a dilemma, as it is difficult to account simultaneously for dynamical components (demography, evolution) and for the complexity of these systems (species number, connectance). Current knowledge of food web structures is dominated by many-species models that do not incorporate any dynamical aspects, and by models that detail demographic or evolutionary dynamics of species but consider communities composed of few species. Community evolution models incorporate both the dynamical components of food webs and the complexity that is necessary to understand empirical food web data.
Robert G. Lawson
- Published in print:
- 2017
- Published Online:
- May 2018
- ISBN:
- 9780813174624
- eISBN:
- 9780813174655
- Item type:
- book
- Publisher:
- University Press of Kentucky
- DOI:
- 10.5810/kentucky/9780813174624.001.0001
- Subject:
- Law, Legal History
Betty Gail Brown was nineteen years old in 1961, a second-year student at Transylvania University. On the evening of October 26, 1961, she drove to campus to study with friends for an exam. Around midnight, she left the campus, but at some point she returned and parked her car in a driveway near the center of campus. By 3:00 a.m., she was the victim of one of the most sensational killings ever to occur in the Bluegrass. She was found dead in her car, strangled by her own brassiere. Kentuckians from across the state became engrossed in the case, as lead after lead went nowhere. Four years later, the police investigation had stalled. In 1965, a drifter named Alex Arnold confessed to the killing while in jail on other charges in Oregon. Arnold was brought to Lexington and put on trial, where he entered a plea of not guilty. Robert Lawson was a young attorney at a local firm when a senior member asked him to help defend Arnold. In Who Killed Betty Gail Brown?, Lawson meticulously details the police search and Arnold’s trial. Since 1965, new leads have come and gone, and Betty Gail Brown’s murder remains unsolved. A written transcription of the court’s proceedings does not exist, and thus Lawson, drawing upon police and court records, newspaper articles, and his own notes, provides an invaluable record of an important piece of local history about one of Kentucky’s most famous cold cases.
Judith A. Layzer and Alexis Schulman
- Published in print:
- 2017
- Published Online:
- May 2018
- ISBN:
- 9780262036580
- eISBN:
- 9780262341585
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262036580.003.0007
- Subject:
- Environmental Science, Environmental Studies
Popularized by scientists in the 1970s, adaptive management is an integrative, multi-disciplinary approach to managing landscapes and natural resources. Despite its broad appeal, many critics complain that adaptive management rarely works in practice as prescribed in theory. This chapter traces the history and evolution of the concept and assesses its implementation challenges. One reason adaptive management has not always delivered on its promise to make natural resource management more “rational” is that, in the real world of policymaking, scientists and natural resource managers must contend with advocates who hold conflicting values and goals. Scientists and managers also operate in the context of institutions that create particular constraints and opportunities, and that are generally inflexible and resistant to change. In recognition of these sociopolitical realities, the focus of much adaptive management practice and scholarship has shifted to governance, particularly collaboration with stakeholders, transformation of the institutions responsible for management, and the process of social learning.
Bernice Kurchin
Diane F. George (ed.)
- Published in print:
- 2019
- Published Online:
- September 2019
- ISBN:
- 9780813056197
- eISBN:
- 9780813053950
- Item type:
- book
- Publisher:
- University Press of Florida
- DOI:
- 10.5744/florida/9780813056197.001.0001
- Subject:
- Archaeology, Historical Archaeology
In situations of displacement, disruption, and difference, humans adapt by actively creating, re-creating, and adjusting their identities using the material world. This book employs the discipline of historical archaeology to study this process as it occurs in new and challenging environments. The case studies furnish varied instances of people wresting control from others who wish to define them and of adaptive transformation by people who find themselves in new and strange worlds. The authors consider multiple aspects of identity, such as race, class, gender, and ethnicity, and look for ways to understand its fluid and intersecting nature. The book seeks to make the study of the past relevant to our globalized, postcolonized, and capitalized world. Questions of identity formation are critical in understanding the world today, in which boundaries are simultaneously breaking down and being built up, and humans are constantly adapting to the ever-changing milieu. This book tackles these questions not only in multiple dimensions of earthly space but also in a panorama of historical time. Moving from the ancient past to the unknowable future and through numerous temporal stops in between, the reader travels from New York to the Great Lakes, Britain to North Africa, and the North Atlantic to the West Indies.
Harry S. Laver and Jeffrey J. Matthews (eds)
- Published in print:
- 2017
- Published Online:
- May 2018
- ISBN:
- 9780813174723
- eISBN:
- 9780813174778
- Item type:
- book
- Publisher:
- University Press of Kentucky
- DOI:
- 10.5810/kentucky/9780813174723.001.0001
- Subject:
- History, Military History
The Art of Command provides biographical and topical portraits of exceptional leaders from all four branches of the United States armed forces. Laver and Matthews have identified eleven core characteristics of effective leadership, such as vision, charisma, determination, and integrity, and apply them to significant figures in American military history. In doing so, they argue that leadership is a learned and practiced skill, developed through conscious effort and mentoring by superiors. Tracing the careers, traits, and behaviors of eleven legendary leaders, including Ulysses Grant, George Marshall, Henry Arnold, and David Shoup, each chapter provides detailed critical analysis of a leader's personal development and leadership style. This historically grounded exploration delivers an insightful examination of various military command styles that transcend time, place, rank, and branch of service.
Sylvia Richardson, Leonardo Bottolo, and Jeffrey S. Rosenthal
- Published in print:
- 2011
- Published Online:
- January 2012
- ISBN:
- 9780199694587
- eISBN:
- 9780191731921
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199694587.003.0018
- Subject:
- Mathematics, Probability / Statistics
This paper considers the task of building efficient regression models for sparse multivariate analysis of high-dimensional data sets; in particular, it focuses on cases where the number q of responses Y = (y_k, 1 ≤ k ≤ q) and the number p of predictors X = (x_j, 1 ≤ j ≤ p) to be analysed jointly are both large with respect to the sample size n, a challenging bi-directional task. The analysis of such data sets arises commonly in genetical genomics, with X linked to DNA characteristics and Y corresponding to measurements of fundamental biological processes such as transcription, protein, or metabolite production. Building on the Bayesian variable selection set-up for the linear model and the associated efficient MCMC algorithms developed for single responses, we discuss the generic framework of hierarchical related sparse regressions, where parallel regressions of y_k on the set of covariates X are linked in a hierarchical fashion, in particular through the prior model of the variable selection indicators γ_kj, which indicate, among the covariates x_j, those associated with the response y_k in each multivariate regression. Structures for the joint model of the γ_kj, corresponding to different compromises between the aim of controlling sparsity and that of enhancing the detection of predictors associated with many responses (“hot spots”), will be discussed, and a new multiplicative model for the probability structure of the γ_kj will be presented. To perform inference for these models in high-dimensional set-ups, novel adaptive MCMC algorithms are needed. As sparsity is paramount and most associations are expected to be zero, new algorithms that progressively focus on the part of the space where the most interesting associations occur are of great interest. We shall discuss their formulation and theoretical properties, and demonstrate their use on simulated and real data from genomics.
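The “hot spot” idea behind a multiplicative prior on the selection indicators can be sketched numerically. In the toy model below, the probability that predictor j is selected for response k factorises into a per-response sparsity effect and a per-predictor propensity effect; this particular factorisation, and all parameter values, are illustrative assumptions, not the paper’s exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

q, p = 8, 20  # responses y_k, predictors x_j (small for illustration)

# Assumed multiplicative structure: Pr(gamma_kj = 1) ~ omega_k * rho_j,
# where omega_k controls sparsity per response and rho_j is a predictor
# propensity that is large for "hot spot" predictors.
omega = rng.uniform(0.05, 0.15, size=q)   # per-response sparsity levels
rho = np.ones(p)
rho[3] = 5.0                              # predictor 3 is a hot spot

prob = np.clip(np.outer(omega, rho), 0.0, 1.0)  # q x p selection probabilities
gamma = rng.uniform(size=(q, p)) < prob         # binary indicators gamma_kj

# A hot-spot predictor tends to be associated with many responses at once:
hits_per_predictor = gamma.sum(axis=0)
```

The point of the factorised form is that a single large rho_j raises the selection probability of predictor j across every response simultaneously, which is how the prior encourages detection of hot spots without sacrificing overall sparsity.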
Mark Huber and Sarah Schott
- Published in print:
- 2011
- Published Online:
- January 2012
- ISBN:
- 9780199694587
- eISBN:
- 9780191731921
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199694587.003.0009
- Subject:
- Mathematics, Probability / Statistics
Finding the integrated likelihood of a model given the data requires the integration of a nonnegative function over the parameter space. Classical Monte Carlo methods for numerical integration require a bound or estimate of the variance in order to determine the quality of the output. The method called the product estimator does not require knowledge of the variance in order to produce a result of guaranteed quality, but requires a cooling schedule that must have certain strict properties. Finding a cooling schedule can be difficult, and finding an optimal cooling schedule is usually computationally out of reach. TPA is a method that solves this difficulty, creating an optimal cooling schedule automatically as it is run. This method has its own set of requirements; here it is shown how to meet these requirements for problems arising in Bayesian inference. This gives guaranteed accuracy for integrated likelihoods and posterior means of nonnegative parameters.
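The baseline task, integrating a nonnegative likelihood over the parameter space, can be illustrated with naive Monte Carlo over the prior. The Normal/Uniform toy model below is an assumption for illustration only; the variance that this simple estimator implicitly depends on for its error bounds is precisely what product-estimator and TPA-style schemes avoid having to know.

```python
import math
import random

random.seed(1)

# Toy model: one observation y ~ Normal(theta, 1), prior theta ~ Uniform(0, 1).
# The integrated likelihood is the average of the likelihood over the prior,
# so drawing theta from the prior and averaging gives a naive MC estimate.
y = 0.3

def likelihood(theta):
    return math.exp(-0.5 * (y - theta) ** 2) / math.sqrt(2 * math.pi)

n = 100_000
est = sum(likelihood(random.uniform(0.0, 1.0)) for _ in range(n)) / n
```

Assessing the quality of `est` classically requires a bound on the variance of `likelihood(theta)` under the prior, which is easy here but typically unknown; that gap is the motivation for the guaranteed-quality methods the chapter describes.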
Peter C. R. Lane
- Published in print:
- 2007
- Published Online:
- April 2010
- ISBN:
- 9780195178845
- eISBN:
- 9780199893751
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195178845.003.0005
- Subject:
- Psychology, Cognitive Psychology
This chapter provides details about order effects in neural networks, a commonly used modeling approach. It examines two networks in detail, the Adaptive Resonance Theory (ART) architecture and Jeff Elman's recurrent networks. The ART model shows that about a 25% difference in recognition rates can arise from using different orders. Elman's recurrent network shows that, with the wrong order, a task might not even be learnable. The chapter also discusses why these effects arise, which is important for understanding the impact and claims of computational models.
Sean H. Rice
- Published in print:
- 2013
- Published Online:
- December 2013
- ISBN:
- 9780199595372
- eISBN:
- 9780191774799
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199595372.003.0018
- Subject:
- Biology, Evolutionary Biology / Genetics
Sewall Wright originally conceived of his Adaptive Landscape as a visual device to capture the consequences of non-linear (epistatic) interactions between genes. A useful way to visualise a multivariate non-linear function is through a ‘landscape’, an important factor to consider when applying Adaptive Landscape models to questions about the evolution of development. This chapter examines how a phenotype landscape (also known as phenotypic landscape or developmental landscape) can explicitly map genetic and developmental traits to the phenotypic traits upon which selection acts. After outlining the basic properties of phenotype landscapes, it considers how they are used in concert with an Adaptive Landscape to study the evolution of development. It then describes the formal theory for evolution on phenotype landscapes and how it generalises the quantitative genetic approaches that are often applied to Adaptive Landscapes. The chapter concludes by illustrating how phenotype landscape theory can be used to study the evolution of genetic covariance, heritability, and novelty.
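The core idea of mapping underlying developmental traits through a phenotype landscape to fitness can be sketched with the chain rule: selection on an underlying trait u_i is (dw/dφ)(dφ/du_i), so non-additivity in the map φ(u) shapes the response to selection. The particular map and fitness function below are arbitrary choices for illustration.

```python
# Toy phenotype landscape: two underlying (developmental) traits u1, u2 map
# to one phenotype phi, and selection acts on phi through fitness w.

def phi(u1, u2):
    return u1 * u2              # a non-additive (epistatic) map, for illustration

def w(p, optimum=1.0):
    return -(p - optimum) ** 2  # stabilising (quadratic) selection on the phenotype

def selection_gradient(u1, u2, eps=1e-6):
    # Numerical chain rule: finite-difference dw/du_i through the map phi.
    base = w(phi(u1, u2))
    g1 = (w(phi(u1 + eps, u2)) - base) / eps
    g2 = (w(phi(u1, u2 + eps)) - base) / eps
    return g1, g2

g1, g2 = selection_gradient(0.5, 1.0)
```

Because φ is multiplicative, the selection gradient on each underlying trait depends on the current value of the other trait, which is the kind of interaction that a purely additive genotype-phenotype map would hide.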
Erik I. Svensson and Ryan Calsbeek
- Published in print:
- 2013
- Published Online:
- December 2013
- ISBN:
- 9780199595372
- eISBN:
- 9780191774799
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199595372.003.0019
- Subject:
- Biology, Evolutionary Biology / Genetics
Sewall Wright’s classic Adaptive Landscape has been a highly successful metaphor and scientific concept in evolutionary biology. It has influenced many different research subdisciplines in evolutionary biology and inspired generations of researchers, even though it has also sparked deep scientific and philosophical controversies. Among such subdisciplines are population genetics, evolutionary ecology, quantitative genetics, experimental evolution, conservation biology, speciation and macroevolutionary dynamics, mimicry, saltational evolution, behavioural ecology, molecular biology, protein networks, and theoretical studies on development.
Zhong-Lin Lu and Barbara Dosher
- Published in print:
- 2013
- Published Online:
- May 2014
- ISBN:
- 9780262019453
- eISBN:
- 9780262314930
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262019453.003.0011
- Subject:
- Psychology, Vision
Adaptive procedures are developed to reduce the burden of data collection in psychophysics by creating more efficient experimental test designs and methods of estimating either statistics or parameters. In some cases, these adaptive procedures may reduce the amount of testing by as much as 80% to 90%. This chapter begins with a description of classical staircase procedures for estimating the threshold and/or slope of the psychometric function, followed by a description of modern Bayesian adaptive methods for optimizing psychophysical tests. We introduce applications of Bayesian adaptive procedures for the estimation of psychophysically measured functions and surfaces. The bias, precision, and efficiency of the estimates are considered. Each method is accompanied by an illustrative example, sample results, and a discussion of the practical requirements of the procedure.
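A classical staircase of the kind described can be sketched as follows. The logistic psychometric function, its threshold and slope, and the step size are all assumptions made for the simulation, not values from the chapter; a 2-down/1-up rule is one standard variant among several.

```python
import math
import random

random.seed(0)

def p_correct(intensity, threshold=1.0, slope=3.0):
    # Assumed logistic psychometric function (for simulation only),
    # rising from 50% (chance) to 100% correct.
    return 0.5 + 0.5 / (1.0 + math.exp(-slope * (intensity - threshold)))

def two_down_one_up(start=2.0, step=0.1, n_trials=400):
    # 2-down/1-up staircase: lower the intensity after two consecutive
    # correct responses, raise it after any error. The track converges
    # near the ~70.7%-correct point of the psychometric function.
    x, correct_run, track = start, 0, []
    for _ in range(n_trials):
        track.append(x)
        if random.random() < p_correct(x):
            correct_run += 1
            if correct_run == 2:
                x, correct_run = x - step, 0
        else:
            x, correct_run = x + step, 0
    # Average the later trials, after the staircase has settled.
    return sum(track[n_trials // 2:]) / (n_trials // 2)

threshold_estimate = two_down_one_up()
```

The efficiency gain comes from concentrating trials near the convergence point instead of spreading them over a fixed grid of intensities, which is the motivation the chapter gives for adaptive testing in general.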
Marianne E. Krasny and Keith G. Tidball
- Published in print:
- 2015
- Published Online:
- September 2015
- ISBN:
- 9780262028653
- eISBN:
- 9780262327169
- Item type:
- chapter
- Publisher:
- The MIT Press
- DOI:
- 10.7551/mitpress/9780262028653.003.0011
- Subject:
- Environmental Science, Environmental Studies
Three general steps move civic ecology practices from small local innovations to broader policy innovations: giving a label to the phenomenon (in our case “civic ecology”); becoming more effective as local providers of ecosystem services and contributors to community well-being through partnerships with scientists; and government and larger NGOs formulating policies that allow civic ecology practices to spread. Civic ecology practices are small social or “social-ecological innovations,” whereas larger NGOs and government agencies are policy entrepreneurs who shape the policy environment. Policy entrepreneurs can also bridge between multiple civic ecology practices and larger management initiatives to form regional adaptive and collaborative resource management systems.
Bruce I. Blum
- Published in print:
- 1996
- Published Online:
- November 2020
- ISBN:
- 9780195091601
- eISBN:
- 9780197560662
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195091601.003.0016
- Subject:
- Computer Science, Software Engineering
Now that the foundation has been laid, I can turn to the principal concern of this book: software design. I use the word design in its most expansive sense. That is, design is contrasted with discovery; it encompasses all deliberate modifications of the environment, in this case modifications that employ software components. Thus, software design should not be interpreted as a phase in the development of a product — an activity that begins after some prerequisite is complete and that terminates with the acceptance of a work product. The context of software design in Part III is extended to include all aspects of the software process, from the design of a response to a real-world need (which ultimately may be expressed as a requirements document) through the design of changes to the product (i.e., lifetime maintenance). This broader use of “design” can be confusing, and the reader may think of software design as the equivalent of the software process. In what follows, the goal is to discover the essential nature of software design, which I also shall refer to as the software process. What of the foundation constructed so laboriously during the first two parts of the book? It is not one of concrete and deep pilings. Rather, it is composed of crushed rock. It can support a broad-based model of software design, but it may be unstable when it comes to specifics. The foundation has been chipped from the monolith of Positivism, of Technical Rationality. Its constituents are solid and cohesive models, but they defy unification and resist integration. We interpret them as science, technology, culture, philosophy, cognition, emotion, art; they comprise the plural realities from which we compose human knowledge. Unfortunately, my description of the foundation holds little promise of broad, general answers. Indeed, it suggests that science may be of limited help to design and that we may never discover the essence of design. That is, we must accept design as a human activity; whatever answers we may find will be valid within narrow domains where knowledge is determined by its context. Thus, Parts I and II prepare us to accept that the study of software design may not be amenable to systematic analysis.
Subrata Dasgupta
- Published in print:
- 2018
- Published Online:
- November 2020
- ISBN:
- 9780190843861
- eISBN:
- 9780197559826
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190843861.003.0010
- Subject:
- Computer Science, History of Computer Science
Human Problem Solving (1972) by Allen Newell and Herbert Simon of Carnegie-Mellon University, a tome of over 900 pages, was the summa of some 17 years of research by Newell, Simon, and their numerous associates (most notably Cliff Shaw, a highly gifted programmer at Rand Corporation) into “how humans think.” “How humans think” of course belonged historically to the psychologists’ turf. But what Newell and Simon meant by their project of “understanding . . . how humans think” was very different from how psychologists envisioned the problem before these two men invaded their milieu in 1958 with a paper on human problem solving in the prestigious Psychological Review. Indeed, professional psychologists must have looked at them askance. Neither was formally trained in psychology. Newell was originally trained as a mathematician, Simon as a political scientist. They both disdained disciplinary boundaries. Their curricula vitae proclaimed loudly their intellectual heterodoxy. At the time Human Problem Solving was published, Newell’s research interests straddled artificial intelligence, computer architecture, and (as we will see) what came to be called cognitive science. Simon’s multidisciplinary creativity—his reputation as a “Renaissance man”—encompassing administrative theory, economics, sociology, cognitive psychology, computer science, and the philosophy of science—was of near-mythical status by the early 1970s. Yet, for one prominent historian of psychology it would seem that what Newell and Simon did had nothing to do with the discipline: the third edition of Georgetown University psychologist Daniel N. Robinson’s An Intellectual History of Psychology (1995) makes no mention of Newell or Simon. Perhaps this was because, as Newell and Simon explained, their study of thinking adopted a pointedly information processing perspective. Information processing: Thus entered the computer into this conversation. 
But, Newell and Simon hastened to clarify, they were not suggesting a metaphor of humans as computers. Rather, they would propose an information processing system (IPS) that would serve to describe and explain how humans “process task-oriented symbolic information.” In other words, human problem solving, in their view, is an instance of representing information as symbols and processing them.
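A minimal sketch may suggest what processing “task-oriented symbolic information” can look like in practice. The fragment below is our illustration, not Newell and Simon’s actual formalism: it searches a symbolic state space for the classic two-jug measuring puzzle, with states as symbol structures and operators that rewrite them. The puzzle, the names, and the choice of breadth-first search are all illustrative.

```python
from collections import deque

# Symbolic states: (contents of 3-litre jug, contents of 4-litre jug).
# Operators rewrite one symbolic state into another -- the kind of
# symbol manipulation an "information processing system" performs.
CAP = (3, 4)

def successors(state):
    a, b = state
    yield (CAP[0], b)          # fill jug A
    yield (a, CAP[1])          # fill jug B
    yield (0, b)               # empty jug A
    yield (a, 0)               # empty jug B
    pour = min(a, CAP[1] - b)  # pour A into B
    yield (a - pour, b + pour)
    pour = min(b, CAP[0] - a)  # pour B into A
    yield (a + pour, b - pour)

def solve(start, goal_test):
    """Breadth-first search: returns the shortest sequence of states."""
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

# Task: measure exactly 2 litres in the 4-litre jug.
path = solve((0, 0), lambda s: s[1] == 2)
print(path)
```

Nothing here is specific to water jugs: the solver only manipulates symbol structures under a goal test, which is the point of the IPS view.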
Bruce I. Blum
- Published in print:
- 1996
- Published Online:
- November 2020
- ISBN:
- 9780195091601
- eISBN:
- 9780197560662
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195091601.003.0006
- Subject:
- Computer Science, Software Engineering
Fifty years ago there were no stored-program binary electronic computers. Indeed, in the mid 1940s computer was a job description; the computer was a person. Much has happened in the ensuing half-century. Whereas the motto of the 1950s was “do not bend, spindle, or mutilate,” we now have become comfortable with GUI WIMP (i.e., Graphic User Interface; Windows, Icons, Mouse, and Pointers). Whereas computers once were maintained in isolation and viewed through large picture windows, they now are visible office accessories and invisible utilities. Whereas the single computer once was a highly prized resource, modern networks now hide even the machines’ geographic locations. Naturally, some of our perceptions have adapted to reflect these changes; however, much of our understanding remains bound to the concepts that flourished during computing’s formative years. For example, we have moved beyond thinking of computers as a giant brain (Martin 1993), but we still hold firmly to our faith in computing’s scientific foundations. The purpose of this book is to look forward and speculate about the place of computing in the next fifty years. There are many aspects of computing that make it very different from all other technologies. The development of the microchip has made digital computing ubiquitous; we are largely unaware of the computers in our wrist watches, automobiles, cameras, and household appliances. The field of artificial intelligence (AI) sees the brain as an organ with some functions that can be modeled in a computer, thereby enabling computers to exhibit “intelligent” behavior. Thus, AI research seeks to extend the role of computers through applications in which they perform autonomously or act as active assistants. (For some recent overviews of AI see Waldrop 1987; Crevier 1993.)
In the domain of information systems, Zuboff (1988) finds that computers can both automate (routinize) and informate, that is, produce new information that serves as “a voice that symbolically renders events, objects, and processes so that they become visible, knowable, and sharable in a new way” (p. 9).
Christof Koch
- Published in print:
- 1998
- Published Online:
- November 2020
- ISBN:
- 9780195104912
- eISBN:
- 9780197562338
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195104912.003.0024
- Subject:
- Computer Science, Mathematical Theory of Computation
Now that we have quantified the behavior of the cell in response to current pulses and current steps as delivered by the physiologist's microelectrode, let us study the behavior of the cell responding to a more physiological input. For instance, a visual stimulus in the environment will activate cells in the retina and its target, neurons in the lateral geniculate nucleus. These, in turn, make on the order of 50 excitatory synapses onto the apical tree of a layer 5 pyramidal cell in primary visual cortex such as the one we use throughout the book, and about 100-150 synapses onto a layer 4 spiny stellate cell (Peters and Payne, 1993; Ahmed et al., 1994, 1996; Peters, Payne, and Rudd, 1994). All of these synapses will be triggered within a fraction of a millisecond (Alonso, Usrey, and Reid, 1996). Thus, any sensory input to a neuron is likely to activate on the order of 10² synapses, rather than one or two very specific synapses as envisioned in Chap. 5 in the discussion of synaptic AND-NOT logic. This chapter will reexamine the effect of synaptic input to a realistic dendritic tree. We will commence by considering a single synaptic input as a sort of baseline condition. This represents a rather artificial condition; but because the excitatory postsynaptic potential and current at the soma are frequently experimentally recorded and provide important insights into the situation prevailing in the presence of massive synaptic input, we will discuss them in detail. Next we will treat the case of many temporally dispersed synaptic inputs to a leaky integrate-and-fire model and to the passive dendritic tree of the pyramidal cell. In particular, we are interested in uncovering the exact relationship between the temporal input jitter and the output jitter.
The bulk of this chapter deals with the effect of massive synaptic input onto the firing behavior of the cell, by making use of the convenient fiction that the detailed temporal arrangement of action potentials is irrelevant for neuronal information processing. This allows us to derive an analytical expression relating the synaptic input to the somatic current and ultimately to the output frequency of the cell.
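The leaky integrate-and-fire model mentioned above can be sketched in a few lines. The parameter values below are illustrative placeholders, not those used in the book: a membrane voltage decays toward rest with time constant tau_m, is driven by an injected current, and emits a spike (and resets) when it crosses threshold.

```python
# Minimal leaky integrate-and-fire neuron (illustrative parameters:
# tau_m in ms, R_m in megaohms, voltages in mV, current in nA).
def lif_spike_times(current_nA, t_stop_ms, dt=0.1,
                    tau_m=20.0, R_m=40.0, v_rest=-70.0,
                    v_thresh=-54.0, v_reset=-70.0):
    v, spikes = v_rest, []
    for i in range(int(t_stop_ms / dt)):
        # Forward-Euler step of tau_m * dV/dt = -(V - v_rest) + R_m * I
        dv = (-(v - v_rest) + R_m * current_nA) / tau_m
        v += dv * dt
        if v >= v_thresh:
            spikes.append(i * dt)
            v = v_reset  # spike, then reset
    return spikes

# A 0.5 nA step drives the steady-state voltage above threshold, so the
# cell fires repetitively; 0.3 nA settles below threshold and is silent.
print(len(lif_spike_times(0.5, 500)), len(lif_spike_times(0.3, 500)))
```

The subthreshold/suprathreshold contrast is the model's defining feature: output rate depends on how far the input current pushes the steady-state voltage past threshold.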
Thomas K. Budge and Arian Pregenzer
- Published in print:
- 2005
- Published Online:
- November 2020
- ISBN:
- 9780195139853
- eISBN:
- 9780197561720
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195139853.003.0022
- Subject:
- Earth Sciences and Geography, Environmental Geography
As biodiversity, ecosystem function, and ecosystem services become more closely linked with human well-being at all scales, the study of ecology takes on increasing social, economic, and political importance. However, when compared with other disciplines long linked with human well-being, such as medicine, chemistry, and physics, the technical tools and instruments of the ecologist have generally lagged behind those of the others. This disparity is beginning to be overcome with the increasing use of biotelemetric techniques, microtechnologies, satellite and airborne imagery, geographic information systems (GIS), and both regional and global data networks. We believe that the value and efficiency of ecosystem studies can advance significantly with more widespread use of existing technologies, and with the adaptation of technologies currently used in other disciplines to ecosystem studies. More importantly, the broader use of these technologies is critical for contributing to the preservation of biodiversity and the development of sustainable natural resource use by humans. The concept of human management of biodiversity and natural systems is a contentious one. However, we assert that as human population and resource consumption continue to increase, biodiversity and resource sustainability will only be preserved by increasing management efforts—if not of the biodiversity and resources themselves, then of human impacts on them. The technologies described in this chapter will help enable better management efforts. In this context, biodiversity refers not only to numbers of species (i.e., richness) in an arbitrarily defined area, but also to species abundances within that area. Sustainability refers to the maintenance of natural systems, biodiversity, and resources for the benefit of future generations. 
Arid-land grazing systems support human social systems and economies in regions all over the world, and can be expected to play increasingly critical roles as human populations increase. Further, grazing systems represent a nexus of natural and domesticated systems. In these systems, native biodiversity exists side by side with introduced species and populations, and in fact can benefit from them.
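The contrast the authors draw between species richness and abundance-weighted biodiversity can be made concrete with a standard index such as Shannon diversity. The index choice and the numbers below are ours, not the chapter's:

```python
import math

def richness(counts):
    """Number of species present (ignores abundances)."""
    return sum(1 for n in counts if n > 0)

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i); abundance-sensitive."""
    total = sum(counts)
    props = [n / total for n in counts if n > 0]
    return -sum(p * math.log(p) for p in props)

even = [25, 25, 25, 25]    # four species, evenly abundant
skewed = [97, 1, 1, 1]     # same four species, one dominant

print(richness(even), richness(skewed))  # same richness
print(shannon(even) > shannon(skewed))   # evenness raises H'
```

Two communities with identical richness can thus differ sharply in diversity once abundances are taken into account, which is why the authors define biodiversity over both.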
William A. Mitchell and Burt P. Kotler
- Published in print:
- 2005
- Published Online:
- November 2020
- ISBN:
- 9780195139853
- eISBN:
- 9780197561720
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195139853.003.0009
- Subject:
- Earth Sciences and Geography, Environmental Geography
Despite their apparent simplicity, arid environments can be quite heterogeneous. From small-scale variation in substrate and slope to large-scale geographic variation in solar input and productivity, drylands and deserts provide organisms with a tremendous range of ecological challenges (Schmidt-Nielsen 1964, Huggett 1995). Any single species is unable to meet all of these challenges equally well. A species will do better in some environments than others because evolution in heterogeneous environments is constrained by fitness tradeoffs. Such tradeoffs prevent the evolution of a versatile species, competitively superior to all other species across the entire spectrum of heterogeneity (Rosenzweig 1987). Although fitness tradeoffs may hinder species’ evolution in heterogeneous environments, they are a blessing for biodiversity. The source of biodiversity that we address in this chapter is the interplay of heterogeneity, tradeoffs, and density dependence. While we focus on species interactions at the local scale, our presentation includes a model that predicts changes in local diversity as a function of climate. The model’s predictions are based on changes in the nature of competition wrought by changes in productivity levels and climatic regimes. Cast in terms of evolutionarily stable strategies (ESSs), the predictions refer to evolutionary as well as ecological patterns. A mechanism of coexistence consists of an axis of environmental heterogeneity together with an axis that indicates a tradeoff in the abilities of species to exploit different parts of the axis. In the absence of some kind of heterogeneity, there is only one environmental type, and whatever species is best adapted to it will competitively exclude others. In the absence of a tradeoff, one species could evolve competitive superiority over the full range of heterogeneity, again resulting in a monomorphic community.
Consider some examples of mechanisms of species’ coexistence from dryland communities (Kotler and Brown 1988, Brown et al. 1994). For many taxa, spatial heterogeneity in predation risk is a consequence of the pattern of bushy and open areas common in drylands. In certain rodent communities, some species are able to exploit the relatively riskier open microhabitats by virtue of antipredator morphologies (Kotler 1984).
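The role of tradeoffs in coexistence can be illustrated with a textbook Lotka-Volterra competition model, a generic sketch rather than the ESS model of the chapter: when a tradeoff keeps interspecific competition weaker than intraspecific competition (competition coefficients below 1), both species persist; without that tradeoff, one species excludes the other. All parameter values are invented for illustration.

```python
# Two-species Lotka-Volterra competition, integrated by forward Euler.
# a12 = effect of species 2 on species 1; a21 = the reverse.
def simulate(a12, a21, r=0.5, K=100.0, steps=10000, dt=0.01):
    n1 = n2 = 10.0
    for _ in range(steps):
        dn1 = r * n1 * (1 - (n1 + a12 * n2) / K)
        dn2 = r * n2 * (1 - (n2 + a21 * n1) / K)
        n1 = max(n1 + dn1 * dt, 0.0)
        n2 = max(n2 + dn2 * dt, 0.0)
    return n1, n2

# Tradeoff: each species limits itself more than it limits the other.
coexist = simulate(a12=0.5, a21=0.5)   # both settle near K(1-a)/(1-a^2)
# No tradeoff for species 2: it suppresses species 1 everywhere.
exclude = simulate(a12=1.5, a21=0.5)   # species 1 driven toward zero
print(coexist, exclude)
```

The tradeoff case converges to the interior equilibrium (about 66.7 each for these values); the asymmetric case converges to exclusion, with species 2 alone at its carrying capacity.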
Mary Jane West-Eberhard
- Published in print:
- 2003
- Published Online:
- November 2020
- ISBN:
- 9780195122343
- eISBN:
- 9780197561300
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195122343.003.0022
- Subject:
- Earth Sciences and Geography, Palaeontology: Earth Sciences
Preceding chapters have discussed evolutionary transitions as changes in the expression of discrete, modular traits. This chapter discusses transitions that are due to shifts in the magnitude, rather than the time, place, or repetition, of trait expression. Especially, it considers examples where environmental extremes induce quantitative change in the expression of continuously variable plastic traits. Quantitative shifts can produce novel extremes, novel combinations of extremes, or simultaneously opposite shifts due to negative correlations between traits, as in trade-offs. As pointed out by Brien (1969; see also chapter 7), correlated adaptive shifts can produce major changes in which large steps are not lethal because the usual adaptive plasticity of the organism accommodates the kind of change that occurs when “a new type of organization is born”. The same developmental plasticity that is responsible for phenotypic accommodation and homeostatic stability (see chapter 3) can produce correlated change as well. Correlations among the environmental responses of plastic traits mean that several quantitative traits can change at once, if they respond simultaneously to the same mutation or environmental factor. The two-legged goat described in chapter 3 shows how the multidimensional plasticity of the phenotype can produce a strikingly novel form that, lacking intermediates, appears to be a qualitative change—a change in kind, not merely degree. The fact that the same plasticity-mediated changes would occur whether the cause of shortened front legs were due to a mutation or to an environmental effect early in development illustrates the interchangeability of genetic and environmental factors in inducing correlated change. 
Raff and Kaufman (1983, p. 202) called correlated effects due to developmental relationships among continuously variable traits “relational pleiotropy.” Positive relational pleiotropy can result when numerous positively correlated traits respond in unison to a single stimulus or condition, such as variation in size. Negative relational pleiotropy can give rise to trade-offs, or negative fitness effects among traits such that an increase in the magnitude of one means a decrease in the magnitude of one or more others.
Mary Jane West-Eberhard
- Published in print:
- 2003
- Published Online:
- November 2020
- ISBN:
- 9780195122343
- eISBN:
- 9780197561300
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195122343.003.0032
- Subject:
- Earth Sciences and Geography, Palaeontology: Earth Sciences
Ever since Darwin, there has been a tension between selectionists and developmentalists over the question of gradualism versus saltation and selection versus variation. Does form evolve by a series of small modifications, each one mediated by selection? Or does complex change in form originate suddenly due to a developmental change? Why should there have been such an enduring controversy over these questions? There would seem to be no intrinsic conflict between a belief that development produces variations, some of them discrete and phenotypically complex, and a belief that selection chooses among them. The significance of gradualism for Darwin’s argument regarding natural selection has often been lost from view. The gradualism versus saltation question is not just a debate over whether or not large variants can occur and be selected, although this might seem to be the issue from the dichotomy “gradual versus saltatory.” One may get the impression from Bateson’s (1894) compendium of complex developmental anomalies (figure 24.1), and Goldschmidt’s (1940 [1982]) discussion of hopeful monsters, that Darwin overlooked the evolutionary importance of large developmental variants. In fact, Darwin (1868b [1875b]) extensively reviewed developmental anomalies, including meristic freaks such as those emphasized by Bateson, and considered large qualitative variants likely important in artificial selection producing certain breeds of dogs. In essence, the gradualism-saltation debate is a debate over the causes of adaptive design. Is adaptive design primarily the result of selection, which molds the phenotype step by small step, as Darwin argued? Or is it mainly due to the nature of developmentally mediated variation, with selection playing only a minor, if any, role in the creation of form, as argued by Bateson? Bateson (1894) was among the first to articulate the variationist position: . . . 
the crude belief that living beings are plastic conglomerates of miscellaneous attributes, and that order of form . . . has been impressed upon this medley by Selection alone; and that by Variation any of these attributes may be subtracted or any other attribute added in indefinite proportion, is a fancy which the Study of Variation does not support. . . .