Michael J. North and Charles M. Macal
- Published in print: 2007
- Published Online: September 2007
- ISBN: 9780195172119
- eISBN: 9780199789894
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195172119.003.0006
- Subject: Business and Management, Strategy
This chapter details how to find and document agent behaviors in systems, including the use of knowledge engineering and the Unified Modeling Language (UML).
Sylvain Baillet
- Published in print: 2010
- Published Online: September 2010
- ISBN: 9780195307238
- eISBN: 9780199863990
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195307238.003.0005
- Subject: Neuroscience, Behavioral Neuroscience, Techniques
This chapter reviews the statistical tools available for the analysis of distributed activation maps defined either on the 2D cortical surface or throughout the 3D brain volume. Statistical analysis of MEG data bears a great resemblance to the analysis of functional magnetic resonance imaging (fMRI) or positron emission tomography (PET) activation maps; much of the methodology can therefore be borrowed or adapted from the functional neuroimaging literature. In particular, the General Linear Modeling (GLM) approach is described, in which the MEG data are first mapped into brain space and then fitted to a univariate or multivariate model at each surface or volume element. A desired contrast of the estimated parameters produces a statistical map, which is then thresholded for evidence of an experimental effect. The chapter also describes several approaches that can produce corrected thresholds and control for false positives: Bonferroni, Random Field Theory (RFT), permutation tests, and the False Discovery Rate (FDR).
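As a rough illustration of the mass-univariate GLM-plus-FDR workflow summarized above, the following Python sketch fits a two-condition design at every source, forms a contrast t-map, and applies a Benjamini-Hochberg FDR threshold. The simulated data, design matrix, and q = 0.05 threshold are illustrative assumptions, not material from the chapter.

```python
# Sketch: mass-univariate GLM at each source, contrast t-map, Benjamini-Hochberg FDR.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_sources = 40, 500                      # hypothetical trials x cortical sources
X = np.column_stack([np.ones(n_trials),            # intercept
                     np.repeat([0, 1], n_trials // 2)])  # condition regressor
Y = rng.normal(size=(n_trials, n_sources))
Y[X[:, 1] == 1, :20] += 0.8                        # plant an effect in the first 20 sources

# Fit the GLM at every source element: beta = (X'X)^-1 X'Y
beta, ssr, _, _ = np.linalg.lstsq(X, Y, rcond=None)
dof = n_trials - X.shape[1]
sigma2 = ssr / dof                                 # residual variance per source
c = np.array([0.0, 1.0])                           # contrast: condition effect
var_c = c @ np.linalg.inv(X.T @ X) @ c
t = (c @ beta) / np.sqrt(sigma2 * var_c)
p = 2 * stats.t.sf(np.abs(t), dof)

# Benjamini-Hochberg FDR correction at q = 0.05
q = 0.05
order = np.argsort(p)
below = p[order] <= q * np.arange(1, n_sources + 1) / n_sources
n_reject = below.nonzero()[0].max() + 1 if below.any() else 0
significant = np.zeros(n_sources, dtype=bool)
significant[order[:n_reject]] = True
print(f"{significant.sum()} of {n_sources} sources survive FDR at q = {q}")
```

A permutation test or RFT correction, as mentioned in the abstract, would replace the last block with a different rule for setting the threshold on the same t-map.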
Robert E. Rudd
- Published in print: 2009
- Published Online: February 2010
- ISBN: 9780199233854
- eISBN: 9780191715532
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199233854.003.0005
- Subject: Mathematics, Applied Mathematics
Coarse-grained molecular dynamics (CGMD) is a computer modeling technique that couples conventional molecular dynamics (MD) in some spatial regions of the simulation to a more coarse-grained description in others. This concurrent multiscale modeling approach allows a more efficient use of computer power as it focuses only on those degrees of freedom that are physically relevant. In the spirit of finite element modeling (FEM), the coarse-grained regions are modeled on a mesh with variable mesh size. CGMD is derived solely from the MD model, however, and has no continuum parameters. As a result, the coupling is smooth, and the errors that arise at the interface between the atomistic and coarse-grained regions can be controlled. In this chapter, we review the formulation of CGMD, describing how coarse graining, the systematic removal of irrelevant degrees of freedom, is accomplished for a finite-temperature system. We then describe the practical implementation of CGMD for large-scale simulations and some tests of validity. We conclude with an outlook on some of the directions future development may take.
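The following toy Python sketch illustrates only the coarse-graining idea mentioned above, namely projecting many atomistic degrees of freedom onto a smaller set of nodal values on a non-uniform mesh via linear finite-element shape functions; it is not the CGMD formulation itself, and the chain, mesh, and displacement field are invented for illustration.

```python
# Toy illustration of coarse graining: atomistic displacements projected onto a
# non-uniform mesh using linear finite-element ("hat") shape functions.
import numpy as np

n_atoms = 200
x_atoms = np.linspace(0.0, 1.0, n_atoms)             # atomic positions on a unit chain
u_atoms = 0.01 * np.sin(2 * np.pi * x_atoms)         # an atomistic displacement field

# Non-uniform mesh: fine on the left (an "atomistic" region), coarse on the right
x_nodes = np.concatenate([np.linspace(0.0, 0.3, 13), np.linspace(0.35, 1.0, 6)])

# Linear hat shape functions: column j is the piecewise-linear interpolant of the
# j-th unit vector over the mesh, evaluated at the atomic positions
N = np.column_stack([np.interp(x_atoms, x_nodes, np.eye(len(x_nodes))[j])
                     for j in range(len(x_nodes))])

# Least-squares projection of the atomistic field onto the coarse mesh
u_nodes, *_ = np.linalg.lstsq(N, u_atoms, rcond=None)
print(f"{n_atoms} atomic degrees of freedom reduced to {len(x_nodes)} nodal values")
print("Max interpolation error of the coarse field:",
      float(np.max(np.abs(N @ u_nodes - u_atoms))))
```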
Royal Skousen
- Published in print: 2009
- Published Online: September 2009
- ISBN: 9780199547548
- eISBN: 9780191720628
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199547548.003.0008
- Subject: Linguistics, Psycholinguistics / Neurolinguistics / Cognitive Linguistics, Computational Linguistics
In Analogical Modeling, language prediction is closely determined by the specific variables used. The kinds of structures that must be dealt with in a full theory of analogical prediction include strings of characters, scalar variables, syntactic trees, and semantic variables. These structures as well as a number of procedural issues are discussed in this chapter.
Robert W. Batterman
- Published in print: 2021
- Published Online: August 2021
- ISBN: 9780197568613
- eISBN: 9780197568644
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780197568613.003.0006
- Subject: Philosophy, Philosophy of Science
This chapter begins with a discussion of Julian Schwinger’s “engineering approach” to particle physics. Schwinger argued from a number of perspectives that the very theory for which he won a Nobel Prize (Quantum Electrodynamics) was inadequate. Further, he claimed that an intermediate theory between the fundamental and the phenomenological was superior. Such a theory focuses on a few parameters at intermediate or mesoscales that we employ to organize the world. Schwinger’s motivations were avowedly pragmatic, although he did offer nonpragmatic reasons for preferring such a mesoscale approach. This engineering approach fits well with the idea that the introduction of order parameters in condensed matter physics introduced a natural foliation of the world into microscopic, mesoscopic, and macroscopic levels. It further suggests that a middle-out approach to many-body systems is superior to a bottom-up reductionist approach. The chapter also discusses a middle-out approach to multiscale modeling in biology.
Johanna Drucker
- Published in print: 2009
- Published Online: February 2013
- ISBN: 9780226165073
- eISBN: 9780226165097
- Item type: chapter
- Publisher: University of Chicago Press
- DOI: 10.7208/chicago/9780226165097.003.0003
- Subject: Literature, Film, Media, and Cultural Studies
This chapter discusses Temporal Modeling, the first project at SpecLab. This project was designed to test whether humanistic principles could be used in the design and implementation of digital projects, and whether graphical means could serve as a primary mode of knowledge production. Temporal Modeling was built with a team of players that included Bethany Nowviskie, who guided the design process; freelance Flash designer Jim Allman; and designer Petra Michel. The project has demonstrated that a visual theater for knowledge production could create primary information and analysis, not merely serve as its display.
Sarah S. Richardson
- Published in print: 2013
- Published Online: May 2014
- ISBN: 9780226084688
- eISBN: 9780226084718
- Item type: chapter
- Publisher: University of Chicago Press
- DOI: 10.7208/chicago/9780226084718.003.0001
- Subject: History, History of Science, Technology, and Medicine
This chapter situates the sex chromosomes within the history of twentieth-century theories of sex, gender, and sexuality. The author shows how the X and Y chromosomes, thought of as “sex itself,” came to anchor a conception of sex as a biologically fixed and unalterable binary. The chapter frames the book’s major questions, locating them within scholarship in feminist science studies and social, historical, and philosophical research on the social dimensions of science. The book’s theoretical and methodological innovations, including the concepts of “modeling gender in science,” “gender analysis,” “gender criticality,” and “gender valence,” are introduced and defined.
Zhong-Lin Lu and Barbara Dosher
- Published in print: 2013
- Published Online: May 2014
- ISBN: 9780262019453
- eISBN: 9780262314930
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262019453.003.0010
- Subject: Psychology, Vision
This chapter provides a guidebook to the basic issues in quantitative data analysis and modeling. It shows how we test the quality of a model and fit the model to observed data. The quality of a model or theory includes qualitative assessments related to internal consistency, breadth of application, and the ability to make new and useful predictions. Another assessment of the quality of a model is its ability to predict or fit the observed behavioral data in relevant domains quantitatively. Two criteria for fitting a model to data are considered, a least-squared error criterion and a maximum likelihood criterion, along with methods of estimating the best-fitting parameters of the models. Bootstrap methods are used to estimate the variability of derived data and model parameters. Several methods of comparing and selecting between models are considered. The chapter uses typical psychophysical testing situations to illustrate several standard applications of these methods of data analysis and modeling.
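To make the fitting and bootstrap ideas concrete, here is a hedged Python sketch: a Weibull psychometric function is fitted to simulated two-alternative forced-choice data by maximum likelihood, and a parametric bootstrap estimates the variability of the parameters. The stimulus levels, trial counts, and starting values are assumptions, not values from the chapter; a least-squares fit would simply swap the likelihood for a squared-error objective.

```python
# Sketch: maximum-likelihood fit of a Weibull psychometric function plus a
# parametric bootstrap of parameter variability (simulated 2AFC data).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
contrasts = np.array([0.01, 0.02, 0.04, 0.08, 0.16, 0.32])  # hypothetical stimulus levels
n_trials = 80                                                # trials per level
guess = 0.5                                                  # 2AFC guessing rate

def p_correct(c, thresh, slope):
    """Weibull psychometric function for proportion correct."""
    return guess + (1 - guess) * (1 - np.exp(-(c / thresh) ** slope))

# Simulate observed data from known "true" parameters
n_correct = rng.binomial(n_trials, p_correct(contrasts, 0.05, 2.0))

def fit_ml(k):
    """Maximum-likelihood estimates of (threshold, slope) for observed counts k."""
    def nll(params):
        thresh, slope = params
        if thresh <= 0 or slope <= 0:
            return np.inf
        p = np.clip(p_correct(contrasts, thresh, slope), 1e-6, 1 - 1e-6)
        return -np.sum(k * np.log(p) + (n_trials - k) * np.log(1 - p))
    return minimize(nll, x0=[0.04, 1.5], method="Nelder-Mead").x

estimates = fit_ml(n_correct)
print("ML estimates (threshold, slope):", np.round(estimates, 4))

# Parametric bootstrap: resample data from the fitted model and refit each time
boot = np.array([fit_ml(rng.binomial(n_trials, p_correct(contrasts, *estimates)))
                 for _ in range(200)])
print("Bootstrap SDs (threshold, slope):", np.round(boot.std(axis=0), 4))
```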
Vladan Starcevic, MD, PhD
- Published in print: 2009
- Published Online: November 2020
- ISBN: 9780195369250
- eISBN: 9780197562642
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195369250.003.0009
- Subject: Clinical Medicine and Allied Health, Psychiatry
Specific phobias (also referred to as simple phobias and isolated phobias) represent a heterogeneous group of disorders characterized by excessive and/or irrational fear of one of relatively few and usually related objects, situations, places, phenomena, or activities (phobic stimuli). The phobic stimuli are either avoided or endured with intense anxiety or discomfort. People with specific phobias are aware that their fear is unreasonable, but this does not diminish the intensity of the fear. Rather, they are quite distressed about being afraid or feel handicapped by their phobia. Specific phobias are frequently encountered in the general population, but they are relatively uncommon in the clinical setting. Most phobias have a remarkable tendency to persist, prompting an assumption that they cannot be easily extinguished because of their “purpose” to protect against danger. Specific phobias are deceptively simple, as they are easy to describe and recognize but often difficult to understand. There are several conceptual problems and a number of issues associated with specific phobias:
1. Where are the boundaries of specific phobias? How can we develop better criteria on the basis of which specific phobia could be distinguished as a psychiatric disorder from fears and avoidance considered to be within the realm of “normality”?
2. How can specific phobias be taken seriously by both the sufferers and clinicians?
3. In view of the considerable differences between various types of specific phobias, should they continue to be grouped together?
4. Should specific phobias be grouped on the basis of whether they are driven by fear or disgust?
5. In view of its unique features, should the blood-injection-injury type of specific phobia be given a separate psychopathological, diagnostic, and nosological status?
6. Considering a significant overlap between situational phobias and agoraphobia, should they be grouped together, along a hypothetical situational phobia/agoraphobia spectrum?
7. What is the relationship between specific phobias and other psychopathology? Are they relatively isolated from other disorders, both cross-sectionally and longitudinally, or should they more appropriately be conceptualized as a predisposition to or a risk factor for some psychiatric conditions?
8. How specific are pathways that lead to specific phobias?
9. Has the dominant treatment model for specific phobias, based on exposure therapy, exhausted its potential? Is the tendency for specific phobias to persist adequately addressed by treatments derived from learning theory?
Thanh V. Tran and Keith T. Chan
- Published in print: 2021
- Published Online: September 2021
- ISBN: 9780190888510
- eISBN: 9780190888527
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780190888510.003.0006
- Subject: Social Work, Crime and Justice, Communities and Organizations
In this chapter, we focus on the use of Structural Equation Modeling (SEM) to compare path models across two or more cultural groups. SEM can be used to test the goodness of fit of a causal model, as well as to test equivalence of causal relationships among variables of interest across various cultural groups. We will demonstrate SEM through the use of Stata for these purposes. We begin with a rationale for path model analysis, move on to provide context for the construction of theories in path models using SEM, and then present examples of SEM models for various cultural groups for comparison. We also provide examples of Stata commands for examining differences in direct and indirect effects, along with goodness-of-fit statistics, across various cultural groups using SEM.
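The chapter’s worked examples are in Stata; as a language-neutral illustration of one of the quantities involved, the sketch below (Python, with simulated data and invented group labels) estimates the indirect effect a*b of a simple X -> M -> Y path model in two hypothetical groups and bootstraps the difference between them. It is an assumption-laden toy, not the chapter’s SEM code.

```python
# Toy sketch: product-of-coefficients indirect effect in two groups, with a
# bootstrap confidence interval for the between-group difference.
import numpy as np

rng = np.random.default_rng(4)

def simulate_group(n, a, b, c_prime):
    """Generate (x, m, y) from a mediation model with paths a (X->M), b (M->Y), c' (X->Y)."""
    x = rng.normal(size=n)
    m = a * x + rng.normal(size=n)
    y = b * m + c_prime * x + rng.normal(size=n)
    return x, m, y

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate of the indirect effect a*b."""
    a = np.polyfit(x, m, 1)[0]                          # regression of M on X
    design = np.column_stack([m, x, np.ones_like(x)])   # regression of Y on M and X
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]
    return a * b

groups = {"group_1": simulate_group(300, a=0.5, b=0.4, c_prime=0.2),
          "group_2": simulate_group(300, a=0.2, b=0.4, c_prime=0.2)}

diffs = []
for _ in range(1000):                       # bootstrap the between-group difference
    resampled = {}
    for name, (x, m, y) in groups.items():
        idx = rng.integers(0, len(x), len(x))
        resampled[name] = indirect_effect(x[idx], m[idx], y[idx])
    diffs.append(resampled["group_1"] - resampled["group_2"])

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"95% bootstrap CI for the indirect-effect difference: [{lo:.3f}, {hi:.3f}]")
```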
Matthew C. Fitzpatrick and Aaron M. Ellison
- Published in print: 2017
- Published Online: February 2018
- ISBN: 9780198779841
- eISBN: 9780191825873
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198779841.003.0028
- Subject: Biology, Plant Sciences and Forestry, Ecology
Climatic change likely will exacerbate current threats to carnivorous plants. However, estimating the severity of climatic change is challenged by the unique ecology of carnivorous plants, including habitat specialization, dispersal limitation, small ranges, and small population sizes. We discuss and apply methods for modeling species distributions to overcome these challenges and quantify the vulnerability of carnivorous plants to rapid climatic change. Results suggest that climatic change will reduce habitat suitability for most carnivorous plants. Models also project increases in habitat suitability for many species, but the extent to which these increases may offset habitat losses will depend on whether individuals can disperse to and establish in newly suitable habitats outside of their current distribution. Reducing existing stressors and protecting habitats where numerous carnivorous plant species occur may ameliorate impacts of climatic change on this unique group of plants.
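As a minimal, hypothetical example of the kind of correlative species distribution modeling discussed above, the Python sketch below relates simulated presence/absence records to two climate predictors and projects habitat suitability under a warming scenario. The predictors, response, and +3 C scenario are invented; a real analysis would use field occurrences, downscaled climate grids, and more flexible model classes.

```python
# Sketch: correlative species distribution model on simulated data, projected
# under current climate and a +3 C warming scenario.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_sites = 500
temp = rng.uniform(5, 25, n_sites)        # hypothetical mean annual temperature (C)
precip = rng.uniform(600, 2000, n_sites)  # hypothetical annual precipitation (mm)
X = np.column_stack([temp, precip])

# Simulated presence/absence for a cool-, wet-adapted species
logit = -0.6 * (temp - 12) + 0.004 * (precip - 1200)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)

# Habitat suitability under current climate and under a +3 C warming scenario
current = model.predict_proba(X)[:, 1]
future = model.predict_proba(np.column_stack([temp + 3.0, precip]))[:, 1]
print("Mean suitability, current vs. +3 C:",
      round(current.mean(), 3), round(future.mean(), 3))
```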
Eaton E. Lattman, Thomas D. Grant, and Edward H. Snell
- Published in print: 2018
- Published Online: September 2018
- ISBN: 9780199670871
- eISBN: 9780191749575
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780199670871.003.0004
- Subject: Physics, Soft Matter / Biological Physics
This chapter discusses recovering shape or structural information from SAXS data. Key to any such process is the ability to generate a calculated intensity from a model, and to compare this curve with the experimental one. Models for the particle scattering density can be approximated as pure homogeneous geometric shapes. More complex particle surfaces can be represented by spherical harmonics or by a set of close-packed beads. Sometimes structural information is known for components of a particle. Rigid body modeling attempts to rotate and translate structures relative to one another, such that the resulting scattering profile calculated from the model agrees with the experimental SAXS data. More advanced hybrid modeling procedures aim to incorporate as much structural information as is available, including modeling protein dynamics. Solutions may not always contain a homogeneous set of particles. A common case is the presence of two or more conformations of a single particle or a mixture of oligomeric species. The method of singular value decomposition can extract scattering for conformationally distinct species.
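To illustrate the singular value decomposition idea mentioned at the end of the abstract, the following Python sketch builds synthetic SAXS curves from mixtures of two sphere form factors and uses the singular values to count the independent scattering species. The q-range, particle radii, and noise level are assumptions for illustration only.

```python
# Sketch: SVD of a set of synthetic mixture scattering curves to count species.
import numpy as np

rng = np.random.default_rng(2)
q = np.linspace(0.01, 0.3, 200)                    # scattering vector (1/Angstrom)

def sphere_intensity(q, radius):
    """Scattering intensity of a homogeneous sphere (squared form factor)."""
    x = q * radius
    return (3 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2

basis = np.column_stack([sphere_intensity(q, 20.0),   # species 1
                         sphere_intensity(q, 35.0)])  # species 2
fractions = np.linspace(0, 1, 15)                      # titration of species 2
data = basis @ np.vstack([1 - fractions, fractions])   # (n_q, n_curves) mixture curves
data += rng.normal(scale=1e-4, size=data.shape)        # measurement noise

U, s, Vt = np.linalg.svd(data, full_matrices=False)
print("Leading singular values:", np.round(s[:5], 4))
# Two singular values stand well above the noise floor, indicating two distinct
# species; the corresponding columns of U span the basis scattering curves.
```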
Douglas A. Schenck and Peter R. Wilson
- Published in print: 1994
- Published Online: November 2020
- ISBN: 9780195087147
- eISBN: 9780197560532
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195087147.003.0010
- Subject: Computer Science, Software Engineering
Each information model is unique, as is the process of developing that model. In this Chapter we provide some broad guidelines to assist you in creating a quality model. We are basically recommending a policy of progressive refinement when modeling but the actual process usually turns out to be iterative. So, although one might start out with good intentions of using a top-down approach, one often ends up with a mixture of top-down, bottom-up, and middle-out strategies. The recommendations are principally cast in the form of check lists and give a skeleton outline of the process. Chapter 4 provides a complete worked example which puts some flesh on the bones. An information model may be created by a single person, given sufficient knowledge, or preferably and more likely by a team of people. An information model represents some portion of the real world. In order to produce such a model an obvious requirement is knowledge of the particular real world aspects that are of interest. People with this knowledge are called domain experts. The other side of the coin is that knowledge of information modeling is required in order to develop an information model. These people are called modeling experts. Typically, the domain experts are not conversant with information modeling and the modeling experts are not conversant with the subject. Hence the usual need for at least two parties to join forces. Together the domain and modeling experts can produce an information model that satisfies their own requirements. However, an information model is typically meant to be used by a larger audience than just its creators. There is a need to communicate the model to those who may not have the skills and knowledge to create such a model but who do have the background to utilize it. Thus the requirement for a third group to review the model during its formative stages to ensure that it is understandable by the target audience. This is the review team who act somewhat like the editors in a publishing house, or like friendly quality control inspectors.
Douglas A. Schenck and Peter R. Wilson
- Published in print: 1994
- Published Online: November 2020
- ISBN: 9780195087147
- eISBN: 9780197560532
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195087147.003.0015
- Subject: Computer Science, Software Engineering
One of the assumptions underlying EXPRESS is that there is, somewhere, going to be an information base that contains instances of data corresponding to the information model. In this Chapter we examine aspects of this hypothetical information base. We also briefly note some of the software tools that could be useful when you are modeling using EXPRESS. We use the term information base in a very general sense; it is any repository that contains data corresponding to an EXPRESS (or EXPRESS-G) information model. The idea that probably comes to mind when hearing the term is that ‘an information base is a fancy name for a database.’ In our sense, an information base may be a database, but it may also be more than or less than a database. In fact, it may not even be computer-based at all! Some examples of information bases are:
- Intelligent Knowledgebases
- Knowledgebases
- Databases
- Computer files
- Printed documents

These examples are listed in approximately decreasing order of technical complexity and increasing order of technology age. Thus, intelligent knowledgebases are at, or even beyond, the leading edge of technology, while printed documents have been available for some centuries, although the technology for producing these has made dramatic strides over the last decade. Below we briefly describe various types of information bases, starting with databases. Knowledgebases are at the leading edge of the technology and are not treated; we merely note that there appears to be no fundamental reason why EXPRESS models should not be stored and instanced using this advanced technology. Databases provide a structured means of both storing data in a computer system and of querying the data in an efficient manner. Internally, the data may be structured in the form of a network, a hierarchy, or in tables following the relational model. Most new databases are relational, while older ones may be hierarchical or network based. Object-Oriented databases have recently appeared, but as yet there appears to be no consensus on exactly what an OODB is. Databases are designed so that they can be modified and queried by multiple users.
Raymond Fox
- Published in print: 2011
- Published Online: November 2020
- ISBN: 9780190616144
- eISBN: 9780197559680
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780190616144.003.0009
- Subject: Education, Adult Education and Continuous Learning
The critical role of the teacher is laying the paradigmatic groundwork for students learning to be professional. Teachers manifest in their comportment the intellectual, affective, and ethical bases of professional expertise. Their very conduct, enhanced by knowledge, embodies the essential message about how to be a helper. Three interwoven processes (modeling, mentoring, and mirroring) form the basis for professional education. They are converging and commingling processes, not independent elements in learning, as described here for intelligibility's sake; they are multidirectional in influence and spiral back on each other, comprising a wholesome and fulfilling professional educational venture. Each individual mode is important in and of itself, but their interrelationship is the compelling element. Modeling is a complex process involving observation, imitation, and identification by students of the teacher. It occurs whether you intend it or not. Many of the same skills and conditions that promote client growth promote student growth. Strive to create an ambiance that engages students. Seek to engross them at a level that allows them to take the concepts they learn, as well as the examples you provide, whether tacitly or explicitly, from seeing you practice with them in class, and transfer them to their contact with clients. The words you utter, the actions you take, the manner in which you conduct the class are carefully observed and considered by students. They attend to your preparation, enthusiasm, and relatedness as lived lessons about how to deliver these same attributes and functions with clients. They observe your unspoken feedback: how your tone and facial expression reveal whether you are attuned and on the right track. In your interaction with students, whether consciously or not, you continually display your own competence in your discipline. Students observe how you practice what you preach in your dealings with them, with colleagues, with syllabus material, nascent ideas, and theories. They inevitably appraise your ability to facilitate communication, manage dilemmas, encourage mutuality, and foster cooperation in working associations with others. They assess your patience, availability, and skill.
Robert W. Batterman
- Published in print: 2021
- Published Online: August 2021
- ISBN: 9780197568613
- eISBN: 9780197568644
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780197568613.003.0002
- Subject: Philosophy, Philosophy of Science
This chapter relates the philosophical concept of multiple realizability to the physics concept of universality. It discusses and responds to Elliott Sober’s defense of reductionism in the face of multiple realizability. Further, it introduces an important explanatory question (labelled AUT), which asks how systems that are heterogeneous at some micro-scale can exhibit the same pattern of behavior at the macro-scale. It is shown that reductionists do not have the resources to provide a successful answer. Two related answers are proposed: one involving Renormalization Group arguments, the other invoking the theory of homogenization.
Michel J. G. van Eeten and Emery Roe
- Published in print: 2002
- Published Online: November 2020
- ISBN: 9780195139686
- eISBN: 9780197561713
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195139686.003.0006
- Subject: Earth Sciences and Geography, Environmental Geography
The examples found at the beginning of this book are, to our minds, neither instances of a lack of societal commitment to saving the environment nor evidence of unreasonable demands for highly reliable services. If they were that, the obvious answer would then be to bite the bullet and take either the environment or the services more seriously. In our view, the examples really express the hard paradox of having to improve the environment while ensuring reliable services at the same time. Beyond specific examples, the strongest expressions of the paradox being taken seriously in terms of the budgets and stakes involved are those large-scale adaptive management initiatives proposed and undertaken in regions where they seem most difficult to implement; that is, where the reliable provision of services is a priority. Just what “reliability” is for the kinds of organizations we study is detailed in chapter 4. Here, we take a closer look at our case studies to see how the issues are articulated empirically. The paradox is even enshrined in law. The mandate of the Pacific Northwest Electric Power Planning and Conservation Act of 1980, for example, is to “protect, mitigate and enhance fish and wildlife affected by the development, operation, and management of [power generation] facilities while assuring the Pacific Northwest an adequate, efficient, economical, and reliable power supply.” But how to do this? Or, as one ecologist, Lance Gunderson (1999b, p. 27), phrased the paradox, “So how does one assess the unpredictable in order to manage the unmanageable?” The answer usually given by ecologists and others is to “undertake adaptive management” (chapter 2). The decision maker learns by experimenting with the system or its elements, systematically and step-by-step, in order to develop greater insight into what is known and not known for managing ecosystem functions and services. Learning more on the ground about the system to be managed is imperative, especially given imprecisely defined terms such as “restore,” “enhance,” and “reliable.” As the senior biologist planner at the Northwest Power Planning Council told us, the last clause of the Power Act “AERPS” (adequate, efficient, economical, and reliable power supply) “never has been quantified, so it is not very clear what it actually means.” He is not alone.
Michel J. G. van Eeten and Emery Roe
- Published in print: 2002
- Published Online: November 2020
- ISBN: 9780195139686
- eISBN: 9780197561713
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195139686.003.0008
- Subject: Earth Sciences and Geography, Environmental Geography
Many of the most expensive and important ecosystem management initiatives under way today are in “zones of conflict” between increasing human populations, resource utilization, and demands for environmental services. The four cases in this book—the San Francisco Bay-Delta, the Everglades, the Columbia River Basin, and the Green Heart of the western Netherlands—are no exception. Each combines the need for large-scale ecosystem restoration with the widespread provision of reliable ecosystem services. As seen in chapter 4, case-by-case management is the regime most suited for such contentious issues in zones of conflict. It is no small irony, therefore, that these ecosystem management initiatives are often presented as showcases for adaptive management (e.g., Gunderson et al. 1995; Johnson et al. 1999). This showcasing is understandable when we realize that here the paradox is at its sharpest. Consequently, the initiatives are unique in the considerable amount of resources made available to adaptive management or ecosystem management, precisely because the ecosystems are in zones of conflict. Much of the funds come not from natural resource or regulatory agencies, but from the organizations that produce and deliver services from these ecosystems, such as water-supply or power-generation companies. In southern Florida, the Army Corps of Engineers (ACE) and the South Florida Water Management District (SFWMD) estimate the costs of their proposed ecosystem restoration plan to be $7.8 billion; in the Bay-Delta, the CALFED Program expects to spend about $10 billion during this implementation, having already spent more than $300 million on ecosystem restoration in recent years; and in the Columbia River Basin, the Bonneville Power Administration (BPA) alone provides some $427 million per year for fish and wildlife measures. As a senior BPA planner remarked, “We are the largest fish and wildlife agency in the world.” Contrast these millions and billions to the funding problems often reported by “purer” forms of adaptive management for ecosystems towards the left of the gradient in figure 4.1. In short, although important services are derived from these ecosystems, the services do not override ecosystem functions, thus raising the resource demands of ecosystem management.