Rein Taagepera
- Published in print:
- 2007
- Published Online:
- September 2007
- ISBN:
- 9780199287741
- eISBN:
- 9780191713408
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199287741.003.0012
- Subject:
- Political Science, Democratization
The cube root law of assembly sizes applies to first or only chambers. It says that assembly size is approximately the cube root of the country's population, because this size minimizes the workload of a representative. This quantitatively predictive logical model agrees with the world averages. Smaller countries have fewer registered parties but more party members per 1,000 population.
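A minimal sketch of the relationship the abstract describes, assuming a hypothetical helper name and rounded population figures; the chamber sizes quoted in the comment for comparison are approximate and not taken from the chapter.

```python
def predicted_assembly_size(population: int) -> int:
    """Cube root law: predicted first-chamber size is roughly population ** (1/3)."""
    return round(population ** (1 / 3))

# Rounded, illustrative populations.
for country, pop in [("Estonia", 1_300_000), ("Denmark", 5_900_000), ("Japan", 125_000_000)]:
    print(country, predicted_assembly_size(pop))
# Predictions: ~109, ~181, ~500 -- close to the actual first-chamber sizes of
# roughly 101, 179, and 465 seats respectively.
```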
Rein Taagepera
- Published in print:
- 2007
- Published Online:
- September 2007
- ISBN:
- 9780199287741
- eISBN:
- 9780191713408
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199287741.003.0016
- Subject:
- Political Science, Democratization
The number of seats in the European Parliament roughly equals the cube root of the population of the European Union. This theoretically based ‘cube root law of assembly sizes’ also fits most national assemblies, and it could be made the official norm for the EP. Allocation of EP seats and Council of the EU voting weights among member states has for forty years closely approximated the distribution a ‘minority enhancement equation’ predicts, solely on the basis of the number and populations of member states plus the total number of seats/voting weights. This logically founded formula could be made the official norm, so as to save political wrangling. It may also be of use for some other supranational bodies and federal second chambers.
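A quick illustrative check of the cube-root relationship for the European Parliament; the population figure and the comparison seat count in the comment are approximate assumptions, not values taken from the chapter, and the minority enhancement equation itself is not reproduced here.

```python
# Rough post-2020 EU population; both figures below are approximate and illustrative.
eu_population = 447_000_000
predicted_seats = round(eu_population ** (1 / 3))
print(predicted_seats)   # ~765, the same order as the EP's roughly 700-750 seats
```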
Mathew Penrose
- Published in print:
- 2003
- Published Online:
- September 2007
- ISBN:
- 9780198506263
- eISBN:
- 9780191707858
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198506263.003.0007
- Subject:
- Mathematics, Probability / Statistics
This chapter is concerned with the minimum degree of the random geometric graph G(n,r), or equivalently with the threshold value of r at which the minimum degree achieves value k. Laws of large numbers are given for this threshold in cases where k is fixed or grows logarithmically, and the random points are placed in a smoothly bounded compact region in d-space with density bounded below on that region. Similar results are given when the random points are in a cube. The limiting constants depend on the minimum value of the density on the boundary, or on the interior of the region; on the cube, different types of boundary need to be considered separately.
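A small simulation sketch of the threshold in question, under simplifying assumptions (uniform points in the unit square, Euclidean distance): the smallest r at which G(n,r) has minimum degree at least k equals the largest k-th nearest-neighbour distance over all points. Function names and sample sizes are illustrative.

```python
import numpy as np

def min_degree_threshold(points: np.ndarray, k: int) -> float:
    """Smallest r such that the geometric graph G(n, r) on `points` has minimum
    degree >= k: the maximum, over points, of the distance to the k-th nearest
    neighbour."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))          # pairwise distances
    np.fill_diagonal(dists, np.inf)                # ignore self-distances
    kth_nn = np.sort(dists, axis=1)[:, k - 1]      # k-th nearest neighbour per point
    return kth_nn.max()

rng = np.random.default_rng(0)
pts = rng.random((2000, 2))                        # n = 2000 uniform points in the unit square
print(min_degree_threshold(pts, k=1))
```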
Warwick J. McKibbin
- Published in print:
- 2008
- Published Online:
- May 2008
- ISBN:
- 9780199235889
- eISBN:
- 9780191717109
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199235889.003.0002
- Subject:
- Economics and Finance, South and East Asia
This study examines the environmental consequences of rapid growth in China, focusing on the consequences of rising energy use. It explores the recent past as well as potential future developments and policy options. The chapter is organized as follows. Section 2.2 presents a brief overview of energy use in China. It also provides projections from the US Energy Information Administration of energy use in China up to 2030 as well as projections from the G-Cubed model for carbon emissions under different assumptions about the sources of economic growth in China. In addition to the environmental problems in China, Section 2.3 considers policy responses and some quantitative evaluation of these for greenhouse emissions.
Ralph Schroeder
- Published in print:
- 2010
- Published Online:
- January 2011
- ISBN:
- 9780195371284
- eISBN:
- 9780199865000
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195371284.003.0004
- Subject:
- Psychology, Human-Technology Interaction
This chapter focuses on how people collaborate in various kinds of multi-user virtual environments. It begins by reviewing findings on distributed work and findings about problems in collaborating in multi-user virtual environments, such as the limited field of view and in referencing objects. Next, it describes various trials which have compared how users perform the same task in different multi-user systems, including comparisons with face-to-face collaboration. Various combinations of systems, such as an immersive projection technology system linked to doing the task on a desktop computer, are also analyzed. A key finding that is reported is that an object manipulation task (putting together a Rubik's cube) can be done just as effectively by two people working at-a-distance in immersive projection technology systems as in a physical face-to-face setting. The chapter also discusses collaboration over longer periods in online virtual worlds.
Deborah A. Rockman
- Published in print:
- 2000
- Published Online:
- November 2020
- ISBN:
- 9780195130799
- eISBN:
- 9780197561447
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195130799.003.0008
- Subject:
- Education, Teaching of a Specific Subject
Perspective drawing is a system for creating a two-dimensional illusion of a three-dimensional subject or three-dimensional space. Information, whether observed (empirically based) or imagined (theoretically based), is translated into a language or system that allows three-dimensional forms and space to be illusionistically represented on a two-dimensional surface. Although Brunelleschi is credited with the discovery or creation of perspective theory during the Renaissance in Italy, it is Albrecht Dürer, a German artist, who is best known for his exploration of perspective theory in his prints and drawings. Perspective theory is often separated into two parts: TECHNICAL OR MECHANICAL PERSPECTIVE, which is based on systems and geometry and is the primary focus of this chapter; and FREEHAND PERSPECTIVE, which is based on perception and observation of forms in space and is a more intuitive exploration of perspective theory. Freehand perspective relies to a significant degree on the process of sighting to judge the rate of convergence, depth, angle, etc. Technical or mechanical perspective utilizes drafting tools such as T-squares, compasses, and triangles, while freehand perspective generally explores perspective principles without the use of technical tools. While it is useful to study perspective in its most precise form with the aid of drafting tools and a simple straight-edge, it is also useful to explore these same principles in a purely freehand fashion, which allows for a more relaxed application of perspective principles. In studying perspective, it also becomes important to make a distinction between linear perspective and atmospheric perspective. LINEAR PERSPECTIVE addresses how the shapes, edges, and sizes of objects change in appearance when seen from different positions relative to the observer—off to one side, directly in front, close or far away, above or below, or any number of infinite variations. ATMOSPHERIC PERSPECTIVE describes other characteristics seen in objects that are some distance from the observer. A veil of atmospheric haze affects and decreases clarity, contrast, detail, and color. Atmospheric perspective, which is not mathematically or geometrically based, is a powerful complement to linear perspective, and when used together the illusion of three-dimensionality and space can be powerful.
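The mechanics of linear perspective can be sketched numerically: a point in three-dimensional space is projected onto a two-dimensional picture plane by scaling its horizontal and vertical offsets by the ratio of the viewing distance to the point's depth. This is an illustrative sketch of the general principle only, not a reconstruction of the chapter's drafting constructions; the function name and numbers are assumptions.

```python
def project(point, viewing_distance=1.0):
    """One-point linear perspective: project (x, y, z) onto the picture plane
    z = viewing_distance, with the observer's eye at the origin looking along +z.
    Distant points (large z) land closer to the centre, producing convergence."""
    x, y, z = point
    scale = viewing_distance / z       # farther away => smaller on the page
    return (x * scale, y * scale)

# The near and far ends of a receding edge converge toward the vanishing point.
print(project((2.0, 1.0, 4.0)))   # (0.5, 0.25)
print(project((2.0, 1.0, 40.0)))  # (0.05, 0.025) -- much closer to the centre
```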
Torsten Reimer and Ulrich Hoffrage
- Published in print:
- 2012
- Published Online:
- May 2012
- ISBN:
- 9780195315448
- eISBN:
- 9780199932429
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195315448.003.0096
- Subject:
- Psychology, Cognitive Psychology, Human-Technology Interaction
This chapter applies the concept of ecological rationality to the context of groups and teams. A summary of agent-based computer simulations is provided in which groups integrated member opinions on the basis of a majority rule. The simulations demonstrate that the performance of a group may be strongly affected by the decision strategies used by its individual members, and specify how this effect is moderated by environmental features. Group performance strongly depended on the distribution of cue validities. When validities were linearly distributed, groups using a compensatory strategy achieved the highest accuracy. Conversely, when cue validities followed a J-shaped distribution, groups using a simple noncompensatory heuristic performed best. While these effects were robust across different quantities of shared information, the validity of shared information exerted stronger effects on group performance. Consequences for prescriptive theories of group decision making are discussed.
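A toy agent-based sketch in the spirit of the simulations summarized above, not the authors' implementation: each member chooses between two options using either a compensatory strategy (a validity-weighted sum of cues) or a simple noncompensatory heuristic (follow the single most valid cue), and the group aggregates choices by majority rule. Members draw independent cue readings, so shared information is not modelled; cue validities, group size, and all other parameters are illustrative assumptions. With these numbers the qualitative pattern described above tends to emerge.

```python
import random

def draw_cues(validities, rng):
    """Each cue 'votes' for the objectively better option A with probability
    equal to its validity, otherwise for option B (+1 -> A, -1 -> B)."""
    return [1 if rng.random() < v else -1 for v in validities]

def compensatory(cues, validities):
    """Weighted-additive strategy: sum cue votes weighted by validity."""
    return 1 if sum(c * v for c, v in zip(cues, validities)) >= 0 else -1

def noncompensatory(cues, validities):
    """Single-best-cue heuristic: follow the most valid cue only."""
    best = max(range(len(validities)), key=lambda i: validities[i])
    return cues[best]

def group_accuracy(validities, strategy, group_size=5, trials=10_000, seed=1):
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        votes = [strategy(draw_cues(validities, rng), validities)
                 for _ in range(group_size)]
        if sum(votes) > 0:          # majority rule; option A is correct by construction
            correct += 1
    return correct / trials

linear = [0.55, 0.62, 0.69, 0.76, 0.83, 0.90]    # linearly spread validities
j_shaped = [0.55, 0.56, 0.57, 0.58, 0.60, 0.90]  # one dominant cue, the rest weak

for name, env in [("linear", linear), ("J-shaped", j_shaped)]:
    print(name,
          "compensatory:", group_accuracy(env, compensatory),
          "noncompensatory:", group_accuracy(env, noncompensatory))
```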
Richard Cohn
- Published in print:
- 2012
- Published Online:
- May 2012
- ISBN:
- 9780199772698
- eISBN:
- 9780199932238
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199772698.003.0005
- Subject:
- Music, Theory, Analysis, Composition
Chapter 5 combines the two preliminary models into a single connected model of the triadic universe. Their points of intersection are theorized as eight voice leading zones. Motion within a zone involves contrary motion, and is neutral; motion between zones involves either upshifting or downshifting voice leading. Upshifting and downshifting are viewed in terms of a melodic dualism which supervenes on the physically apocryphal and metaphysically obsolete harmonic dualism of much nineteenth-century theory. The zones are assigned numbers, whose differences modulo 12 are equivalent to the amount of semitonal work required for their constituent triads. Progressions through the connected model are traced on two related geometric layouts, the Tonnetz and Douthett’s Cube Dance, each of which has distinct advantages. Analyses focus on chromatic sequences, some of them perturbed, in music of Chopin, Liszt, Brahms, Schubert, and Bruckner.
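The zone arithmetic rests on counting semitonal voice-leading work between triads. As an illustrative sketch only (not the chapter's formal apparatus), the minimal total semitone displacement between two three-note chords can be computed by trying every one-to-one pairing of voices; the pitch-class encoding and function names are assumptions.

```python
from itertools import permutations

def semitone_distance(a: int, b: int) -> int:
    """Shortest distance between two pitch classes on the mod-12 circle."""
    d = abs(a - b) % 12
    return min(d, 12 - d)

def voice_leading_work(chord_a, chord_b) -> int:
    """Minimal total semitonal displacement mapping one triad onto another,
    minimised over all one-to-one pairings of the voices."""
    return min(
        sum(semitone_distance(x, y) for x, y in zip(chord_a, perm))
        for perm in permutations(chord_b)
    )

C_MAJOR = (0, 4, 7)    # C E G
A_MINOR = (9, 0, 4)    # A C E
E_MINOR = (4, 7, 11)   # E G B
print(voice_leading_work(C_MAJOR, A_MINOR))  # 2: G moves up a whole step to A
print(voice_leading_work(C_MAJOR, E_MINOR))  # 1: C moves down a semitone to B
```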
Barry M. McCoy
- Published in print:
- 2009
- Published Online:
- February 2010
- ISBN:
- 9780199556632
- eISBN:
- 9780191723278
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199556632.003.0007
- Subject:
- Physics, Theoretical, Computational, and Statistical Physics
This chapter derives the modification of the Mayer expansion made by Ree and Hoover. Analytic expressions for the virial coefficients B2, B3, and B4 are given and Monte Carlo results for Bn for 5 ≤ n ≤ 10 in dimensions 1 ≤ D ≤ 10 are presented. Various approximate equations of state used to ‘fit’ these coefficients are summarized. Low-order virial coefficients for hard squares, cubes and hexagons are given. Open questions relating to the signs of the virial coefficients for hard spheres and discs and to the relation of virial expansions to freezing are discussed.
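As a toy illustration of the Monte Carlo flavour of such calculations (not the Ree–Hoover scheme itself), the second virial coefficient of hard spheres, B2 = (2/3)πσ³, can be estimated by sampling the Mayer f-function, which equals -1 when two spheres of diameter σ overlap and 0 otherwise. Function name and sample size are illustrative.

```python
import math
import random

def b2_hard_spheres_mc(sigma=1.0, samples=1_000_000, seed=0):
    """Monte Carlo estimate of B2 for hard spheres of diameter sigma:
    B2 = -(1/2) * integral of the Mayer f-function, with f(r) = -1 for r < sigma
    and 0 otherwise. The separation vector is sampled uniformly in a cube of
    side 2*sigma, which contains the entire overlap region."""
    rng = random.Random(seed)
    side = 2 * sigma
    hits = 0
    for _ in range(samples):
        x, y, z = (rng.uniform(-sigma, sigma) for _ in range(3))
        if x * x + y * y + z * z < sigma * sigma:
            hits += 1
    return 0.5 * side ** 3 * hits / samples

print(b2_hard_spheres_mc())   # stochastic estimate
print(2 * math.pi / 3)        # exact value (2/3)*pi*sigma**3 ≈ 2.094 for sigma = 1
```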
Subrata Dasgupta
- Published in print:
- 2018
- Published Online:
- November 2020
- ISBN:
- 9780190843861
- eISBN:
- 9780197559826
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190843861.003.0011
- Subject:
- Computer Science, History of Computer Science
At first blush, computing and biology seem an odd couple, yet they formed a liaison of sorts from the very first years of the electronic digital computer. Following a seminal paper published in 1943 by neurophysiologist Warren McCulloch and mathematical logician Walter Pitts on a mathematical model of neuronal activity, John von Neumann of the Institute for Advanced Study, Princeton, presented at a symposium in 1948 a paper that compared the behaviors of computer circuits and neuronal circuits in the brain. The resulting publication was the fountainhead of what came to be called cellular automata in the 1960s. Von Neumann’s insight was the parallel between the abstraction of biological neurons (nerve cells) as natural binary (on–off) switches and the abstraction of physical computer circuit elements (at the time, relays and vacuum tubes) as artificial binary switches. His ambition was to unify the two and construct a formal universal theory. One remarkable aspect of von Neumann’s program was inspired by the biology: His universal automata must be able to self-reproduce. So his neuron-like automata must be both computational and constructive. In 1955, invited by Yale University to deliver the Silliman Lectures for 1956, von Neumann chose as his topic the relationship between the computer and the brain. He died before being able to deliver the lectures, but the unfinished manuscript was published by Yale University Press under the title The Computer and the Brain (1958). Von Neumann’s definitive writings on self-reproducing cellular automata, edited by his one-time collaborator Arthur Burks of the University of Michigan, were eventually published in 1966 as the book Theory of Self-Reproducing Automata. A possible structure of a von Neumann–style cellular automaton is depicted in Figure 7.1. It comprises a (finite or infinite) configuration of cells in which a cell can be in one of a finite set of states. The state of a cell at any time t is determined by its own state and those of its immediate neighbors at the preceding point of time t – 1, according to a state transition rule.
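The closing sentences describe the generic cellular-automaton update scheme. Below is a minimal sketch of that scheme, using Conway's Game of Life as the transition rule purely for concreteness (von Neumann's own automaton used 29 cell states and is not reproduced here); the function name and grid size are assumptions.

```python
def step(grid):
    """One synchronous update of a 2-D cellular automaton on a toroidal grid:
    each cell's next state depends on its own state and its eight immediate
    neighbours at the previous time step. The rule used here is Conway's Game
    of Life, chosen only as a simple concrete transition rule."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            live_neighbours = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            if grid[r][c] == 1:
                nxt[r][c] = 1 if live_neighbours in (2, 3) else 0
            else:
                nxt[r][c] = 1 if live_neighbours == 3 else 0
    return nxt

# A "blinker": three live cells in a row oscillate with period 2.
grid = [[0] * 5 for _ in range(5)]
for c in (1, 2, 3):
    grid[2][c] = 1
print(step(grid))
```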
R. M. Steinman, W. Menezes, and A. N. Herst
- Published in print:
- 2005
- Published Online:
- March 2012
- ISBN:
- 9780195172881
- eISBN:
- 9780199847570
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195172881.003.0012
- Subject:
- Psychology, Cognitive Psychology
It is possible to measure human gaze control, especially under restricted conditions. This chapter provides a PowerPoint presentation and a proprietary eye movement visualization to elaborate on human gaze control.
David Krackhardt
- Published in print:
- 2003
- Published Online:
- November 2020
- ISBN:
- 9780195159509
- eISBN:
- 9780197562017
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195159509.003.0008
- Subject:
- Computer Science, Computer Architecture and Logic Design
In 1973, Mark Granovetter proposed that weak ties are often more important than strong ties in understanding certain network-based phenomena. His argument rests on the assumption that strong ties tend to bond similar people to each other and these similar people tend to cluster together such that they are all mutually connected. The information obtained through such a network tie is more likely to be redundant, and the network is therefore not a channel for innovation. By contrast, a weak tie more often constitutes a “local bridge” to parts of the social system that are otherwise disconnected, and therefore a weak tie is likely to provide new information from disparate parts of the system. Thus, this theory argues, tie strength is curvilinear with a host of dependent variables: no tie (or an extremely weak tie) is of little consequence; a weak tie provides maximum impact, and a strong tie provides diminished impact. Subsequent research has generally supported Granovetter’s theory (Granovetter 1982), but two issues have been neglected in the research stream. First, there is considerable ambiguity as to what constitutes a strong tie and what constitutes a weak tie. Granovetter laid out four identifying properties of a strong tie: “The strength of a tie is a (probably linear) combination of the amount of time, the emotional intensity, the intimacy (mutual confiding), and the reciprocal services which characterize the tie” (1973:1361). This makes tie strength a linear function of four quasi-independent indicators. At what point is a tie to be considered weak? This is not simply a question for the methodologically curious. It is an important part of the theory itself, since the theory makes a curvilinear prediction. If we happen to be on the very left side of the continuum of tie strength, then increasing the strength of the tie (going from no tie to weak tie) will increase the relevant information access. On the other hand, at some point making the ties stronger will theoretically decrease their impact. How do we know where we are on this theoretical curve? Do all four indicators count equally toward tie strength? In practice, tie strength has been measured many different ways.
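Granovetter's definition quoted above treats tie strength as a (probably linear) combination of four indicators, and the passage asks where on that continuum a tie stops being "weak". A minimal sketch of that measurement problem follows; the weights, the 0-1 scales, and the cut-off are all explicitly arbitrary assumptions, which is exactly the ambiguity the chapter highlights.

```python
def tie_strength(time_spent, emotional_intensity, intimacy, reciprocal_services,
                 weights=(0.25, 0.25, 0.25, 0.25)):
    """Linear combination of Granovetter's four indicators, each scored on 0-1.
    Equal weights are an arbitrary assumption; the theory does not fix them."""
    indicators = (time_spent, emotional_intensity, intimacy, reciprocal_services)
    return sum(w * x for w, x in zip(weights, indicators))

s = tie_strength(0.2, 0.3, 0.1, 0.4)
# Any weak/strong threshold (0.5 here) is a research decision, not something
# the theory supplies.
print(s, "weak" if s < 0.5 else "strong")
```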
Woodrow Barfield and David Zeltzer
- Published in print:
- 1995
- Published Online:
- November 2020
- ISBN:
- 9780195075557
- eISBN:
- 9780197560310
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195075557.003.0023
- Subject:
- Computer Science, Human-Computer Interaction
Recent developments in display technology, specifically head-mounted displays slaved to the user’s head position, techniques to spatialize sound, and computer-generated tactile and kinesthetic feedback allow humans to experience impressive visual, auditory, and tactile simulations of virtual environments. However, while technological advancements in the equipment to produce virtual environments have been quite impressive, what is currently lacking is a conceptual and analytical framework in which to guide research in this developing area. What is also lacking is a set of metrics which can be used to measure performance within virtual environments and to quantify the level of presence experienced by participants of virtual worlds. Given the importance of achieving presence in virtual environments, it is interesting to note that we currently have no theory of presence, let alone a theory of virtual presence (feeling like you are present in the environment generated by the computer) or telepresence (feeling like you are actually “there” at the remote site of operation). This in spite of the fact that students of literature, the graphic arts, the theater arts, film, and TV have long been concerned with the observer’s sense of presence. In fact, one might ask, what do the new technological interfaces in the virtual environment domain add, and how do they affect this sense, beyond the ways in which our imaginations (mental models) have been stimulated by authors and artists for centuries? Not only is it necessary to develop a theory of presence for virtual environments, it is also necessary to develop a basic research program to investigate the relationship between presence and performance using virtual environments. To develop a basic research program focusing on presence, several important questions need to be addressed. The first question to pose is, how do we measure the level of presence experienced by an operator within a virtual environment? We need to develop an operational, reliable, useful, and robust measure of presence in order to evaluate various techniques used to produce virtual environments. Second, we need to determine when, and under what conditions, presence can be a benefit or a detriment to performance.
Philip J. Stewart
- Published in print:
- 2018
- Published Online:
- November 2020
- ISBN:
- 9780190668532
- eISBN:
- 9780197559765
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190668532.003.0006
- Subject:
- Chemistry, Physical Chemistry
Amateurs have made valuable contributions to various sciences, including astronomy, geology, biology, and engineering. In chemistry they have been drawn to the periodic system of the elements, with its deceptive simplicity, its density of information, its aesthetic potential, and its implication of deep order in the universe. They have suggested novel ways of representing it visually, in particular spirals and lemniscates in two or three dimensions. However, in the course of a century and a half, professional chemists have generally ignored the amateur versions of the table, and contented themselves with a couple of utilitarian tabulations. Edward Mazurs, one of the two great historians of the periodic system, surveyed approximately 700 graphic representations produced between 1862 and 1972. He was obsessed with classification, and he counted 146 different types. It seems astonishing that out of all these only two have ever attained any lasting and widespread currency among professionals. Dmitri Mendeleev’s short form was rapidly taken up by chemists and remained the standard for half a century. It was compact and easy to read, and by the clever device of combining three or four groups of elements as column VIII it concealed the difference in length between what we now call the p block and the d block. Indeed it confused the two; Mendeleev’s predicted properties of scandium (in the d block) were based on those of boron (in the p block). His inability to deal with the f block did not attract attention because as yet so few lanthanoids were known and because the early actinoids behave rather like the first members of the d block. In the interwar years, the standard medium-long form gradually displaced Mendeleev’s short form, and since the 1940s it has become ubiquitous. Mazurs classified it as his type IIC2-4, and he referenced it 67 times—more than any other type (pp. 175–180)—but he thought so poorly of it that he accorded less than a page to discussing it.
Brian Bayly
- Published in print:
- 1993
- Published Online:
- November 2020
- ISBN:
- 9780195067644
- eISBN:
- 9780197560211
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195067644.003.0017
- Subject:
- Earth Sciences and Geography, Geochemistry
The suggestion in view is that when volume is lost by diffusive mass transfer, the consequent shortening rate along some direction n is controlled by ∇²σₙₙ regardless of the spatial variations in other stress components. The nature of the argument advanced is comparable with the one on which the theory of relativity is based: “At two separate points in a universe, it is not reasonable to suppose that the fundamental laws of behavior will be different at one point from the other.” If it is only in respect to some reference frame set up by an observer that point P differs from point Q, one should not expect behavior at P to differ from behavior at Q. It is convenient to use anthropomorphic phrasing: “If there is nothing intrinsic about point P to tell the material there to behave differently, the material at P will behave in the same way as the material at Q.” The theme of this chapter is that the material process for diffusive mass transfer is almost indistinguishable from the process for volume-conserving viscous change of shape at a point. In fact it will be argued that the two processes are so similar that it is not reasonable to suppose that behavior will be governed by different laws in the two modes: only an observer can distinguish one process from the other. Again anthropomorphically, “The moving material itself has no means of knowing which process it is involved in. Hence, if it is direction-dependent quantities such as σₙₙ that control behavior in change of shape at a point, it must also be direction-dependent quantities such as σₙₙ that control diffusive mass transfer.” In presenting the argument, it is convenient to imagine an atomic material for purposes of example, and for the sake of concreteness; but it is emphasized at the outset that the atoms are of minimal significance—the objective is a theory for a continuum. We wish to treat a continuum in which diffusion occurs, and even a continuum with only one component in which self-diffusion occurs, and most people find that this requires imagining division of the continuum into particles on some scale: but we need this division only in the most abstract sense, just enough to permit the idea that the continuum is self-diffusive.
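The controlling quantity in the suggestion above is ∇²σₙₙ, the Laplacian of a normal-stress component. A minimal finite-difference sketch of that quantity on a regular 2-D grid follows; the stress field, grid spacing, and names are illustrative assumptions, not the chapter's own calculation.

```python
import numpy as np

def laplacian(field, spacing=1.0):
    """Five-point finite-difference Laplacian of a 2-D scalar field
    (here a normal-stress component sigma_nn sampled on a regular grid),
    evaluated at interior points only."""
    return (
        field[:-2, 1:-1] + field[2:, 1:-1] +
        field[1:-1, :-2] + field[1:-1, 2:] -
        4.0 * field[1:-1, 1:-1]
    ) / spacing ** 2

# Illustrative smooth "stress" field; the Laplacian drives the predicted shortening rate.
x, y = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
sigma_nn = np.sin(np.pi * x) * np.cos(np.pi * y)
print(laplacian(sigma_nn, spacing=1 / 49).shape)   # (48, 48) interior values
```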
John Gaventa
- Published in print:
- 2018
- Published Online:
- September 2018
- ISBN:
- 9780813175324
- eISBN:
- 9780813175676
- Item type:
- chapter
- Publisher:
- University Press of Kentucky
- DOI:
- 10.5810/kentucky/9780813175324.003.0005
- Subject:
- Society and Culture, Cultural Studies
The political scientist John Gaventa’s prizewinning analysis of power and powerlessness was a foundational study in the early development of Appalachian studies. In this chapter he outlines a new, multidimensional conception of power (the “power cube”) to understand the “power of place” and the “place of power.” He suggests that effective efforts at place-based social transformation must operate on three dimensions that challenge the forms, spaces, and levels of power. He also describes how the places in which he has worked and lived, including African nations, Appalachia, Canada, and the United Kingdom, have influenced his thinking about power dynamics.
Max Boisot and Michel Fiol
- Published in print:
- 2013
- Published Online:
- September 2013
- ISBN:
- 9780199669165
- eISBN:
- 9780191749346
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199669165.003.0010
- Subject:
- Business and Management, Organization Studies, Knowledge Management
The Learning Cube is a simple diagnostic tool that can be used to analyze and evaluate training programs. It comprises three dimensions: (1) abstraction-concreteness; (2) direction-autonomy; (3) individual-interactive. The Learning Cube is presented in outline, with a description of its dimensions and their meanings. Action learning, which encourages the learner and the trainer to share responsibility for developing a suitable learning strategy, is characterized by a combination of concreteness, autonomy, and interaction. This was the philosophy of the China-EEC Management Program (CEMP) initiated in 1984, which contrasted with traditional Chinese views of knowledge transfer. The CEMP situation is assessed in terms of the Learning Cube, and suggestions for reconciling action learning with traditional Chinese values and skills are presented.
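A minimal sketch of the Learning Cube as a data structure, scoring a program on the three dimensions listed above; the 0-1 scale, field names, and example scores are assumptions, not values from the chapter.

```python
from dataclasses import dataclass

@dataclass
class LearningCubePosition:
    """Position of a training program on the Learning Cube's three axes,
    each scored from 0 to 1 toward the second pole named in the comment."""
    concreteness: float   # 0 = abstract,    1 = concrete
    autonomy: float       # 0 = directed,    1 = autonomous
    interaction: float    # 0 = individual,  1 = interactive

# Action learning, as characterized in the chapter, combines concreteness,
# autonomy, and interaction; the numbers here are purely illustrative.
action_learning = LearningCubePosition(concreteness=0.9, autonomy=0.8, interaction=0.9)
traditional_lecture = LearningCubePosition(concreteness=0.3, autonomy=0.2, interaction=0.2)
print(action_learning, traditional_lecture)
```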
John E. Kelly and Steve Hamm
- Published in print:
- 2013
- Published Online:
- November 2015
- ISBN:
- 9780231168564
- eISBN:
- 9780231537278
- Item type:
- chapter
- Publisher:
- Columbia University Press
- DOI:
- 10.7312/columbia/9780231168564.003.0005
- Subject:
- Business and Management, Information Technology
This chapter examines the development of the data-centric computer that will greatly reduce the amount of movement required in data processing. This new design puts the data, rather than the microprocessor, at the center. It will perform computations faster, make sense of large amounts of data, and be more energy efficient. In a concept called the hybrid memory cube, engineers will integrate memory with logic and messaging in dense, three-dimensional packages. The design will require ninety percent less space than today's memory chips, and seventy percent less energy. In time, memory cubes will likely have their own processors attached to them. They will be used in next-generation server farms to greatly reduce the amount of space and electricity required. Furthermore, the technology will make it possible to pack an immense amount of computing resources into smartphones, tablets, and other portable devices.
John T. E. Richardson
- Published in print:
- 2011
- Published Online:
- November 2015
- ISBN:
- 9780231141680
- eISBN:
- 9780231512114
- Item type:
- chapter
- Publisher:
- Columbia University Press
- DOI:
- 10.7312/columbia/9780231141680.003.0011
- Subject:
- Psychology, Cognitive Psychology
This chapter focuses on the wider use of intelligence tests throughout the 1920s and 1930s until they were generally superseded by David Wechsler’s scales from 1939 onward. From May 1912 to May 1916, Howard Andrew Knox and his colleagues produced an array of psychological tests to estimate mental deficiency among emigrants at Ellis Island in New York. These tests were later borrowed and adapted in the test batteries that were devised to measure intelligence. The publication of Rudolf Pintner and Donald Gildersleeve Paterson’s A Scale of Performance Tests (1917) and of Clarence Stone Yoakum and Robert Mearns Yerkes’ manual, Army Mental Tests (1920) brought many of Knox’s tests to the attention of psychologists. This chapter considers James Drever and Mary Collins’s “series of non-linguistic tests”, Harriet Babcock’s test of mental efficiency, and other performance scales of the 1930s, along with tests that measured race, ethnicity, and performance. It also describes the Wechsler Intelligence Scales and the many variants of the Cube Imitation Test before concluding with an assessment of the demise of performance scales.
Ethan Berkove, David Cervantes-Nava, Daniel Condon, Andrew Eickemeyer, Rachel Katz, and Michael J. Schulman
- Published in print:
- 2017
- Published Online:
- May 2018
- ISBN:
- 9780691171920
- eISBN:
- 9781400889136
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691171920.003.0008
- Subject:
- Mathematics, History of Mathematics
This chapter analyzes a puzzle related to a classic problem first posed by English mathematician Percy MacMahon. Given a palette of six colors, a 6-color cube is one where each face is one color and all six colors appear on some face. It is a straightforward counting argument to show that there are exactly thirty distinct 6-color cubes up to rigid isometry. MacMahon introduced this set of cubes and posed a number of questions about it. The most natural one—and the motivating problem for this chapter—was whether one could take twenty-seven cubes from the set and build a 3 × 3 × 3 cube where each face was one color. The chapter first provides background and terminology, including the coloring condition and a description of a useful partial order on cubes. It then applies these tools to solve the 2-color case, the 3-color problem, and the problem for frames of all sizes.
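The counting claim can be checked by brute force: enumerate all 6! = 720 ways to paint six colors onto six faces and identify colorings related by one of the cube's 24 rotations. This is an independent illustrative check, not the chapter's argument; the face labeling and function names are assumptions.

```python
from itertools import permutations

# Faces indexed 0..5: up, down, front, back, right, left.
ROT_Z = (0, 1, 4, 5, 3, 2)   # quarter turn about the vertical axis
ROT_X = (2, 3, 1, 0, 4, 5)   # quarter turn about the left-right axis

def compose(p, q):
    """Permutation composition: apply q first, then p."""
    return tuple(p[q[i]] for i in range(6))

def rotation_group():
    """Generate the cube's rotation group (24 face permutations) by closure."""
    group = {tuple(range(6)), ROT_Z, ROT_X}
    while True:
        new = {compose(a, b) for a in group for b in group} - group
        if not new:
            return group
        group |= new

def canonical(colouring, group):
    """Lexicographically smallest rotated copy of a colouring (orbit representative)."""
    images = []
    for g in group:
        rotated = [None] * 6
        for face, colour in enumerate(colouring):
            rotated[g[face]] = colour
        images.append(tuple(rotated))
    return min(images)

group = rotation_group()
assert len(group) == 24
distinct = {canonical(c, group) for c in permutations(range(6))}
print(len(distinct))   # 30 distinct 6-color cubes up to rotation
```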