Charles D. Bailyn
- Published in print:
- 2014
- Published Online:
- October 2017
- ISBN:
- 9780691148823
- eISBN:
- 9781400850563
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691148823.003.0006
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
This chapter discusses the formation and evolution of black holes. Stellar-mass black holes are generally understood to be created in supernova explosions that mark the end of the life of a massive star. However, many supernovae create neutron stars rather than black holes, and the precise conditions under which black holes form are still not fully understood. If the black hole is to be detected, further events are required, such as the formation of a binary star system of a kind that can be observed, and in which the existence of a black hole can be demonstrated. In contrast with stellar-mass black hole formation, there is no obvious route to the creation of a supermassive black hole directly from collapsing interstellar gas. Most discussions of the origin and evolution of supermassive black holes posit an initial “seed” black hole of relatively low mass, which then grows over time.
Valeri P. Frolov and Andrei Zelnikov
- Published in print:
- 2011
- Published Online:
- January 2012
- ISBN:
- 9780199692293
- eISBN:
- 9780191731860
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199692293.003.0001
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
Gravity is the weakest interaction known in physics. Nevertheless, it plays the leading role in astrophysics and cosmology. We discuss specific properties of gravity, explaining why this is so, and introduce the notion of a black hole. We describe the final states of stellar evolution and the conditions under which a massive star collapses and forms a black hole. The chapter also contains a brief review of the astrophysical evidence for the existence of black holes, and describes the methods used to identify stellar-mass and supermassive black holes. At the end of the chapter we review the status of black holes in modern theoretical physics, unsolved problems of black hole physics, and new ideas on how to use black holes as probes of extra dimensions.
Luciano Rezzolla and Olindo Zanotti
- Published in print:
- 2013
- Published Online:
- January 2014
- ISBN:
- 9780198528906
- eISBN:
- 9780191746505
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198528906.003.0012
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
The last chapter of the book deals with physical systems whose conditions require the solution both of the Einstein equations and of the hydrodynamics equations. The first examples considered are those of stationary isolated stars, including gravastars and rotating stars, followed by the analysis of compact stars collapsing to a black hole, which are treated both through the dust solution of Oppenheimer–Snyder and through fluid solutions. Since the nonlinearity and complexity of the equations that need to be solved make it increasingly difficult to obtain analytic solutions, the role of numerical simulations becomes increasingly important. Numerical simulations are indeed crucial for the investigation of complex systems such as neutron-star binaries and black-hole–neutron-star binaries, which are treated with an eye on their possible detection through the emission of gravitational waves.
Andrea Belgrano, Ursula M. Scharler, Jennifer Dunne, and Robert E. Ulanowicz (eds)
- Published in print:
- 2005
- Published Online:
- September 2007
- ISBN:
- 9780198564836
- eISBN:
- 9780191713828
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198564836.001.0001
- Subject:
- Biology, Aquatic Biology
This book provides a synthesis of theoretical and empirical food web research. Whether they are binary systems or weighted networks, food webs are of particular interest to ecologists by providing a macroscopic view of ecosystems. They describe interactions between species and their environment, and subsequent advances in the understanding of their structure, function, and dynamics are of vital importance to ecosystem management and conservation. This book covers issues of structure, function, scaling, complexity, and stability in the contexts of conservation, fisheries, and climate. Although the focus of this volume is upon aquatic food webs (where many of the recent advances have been made), many other issues are addressed.
H. Asada, T. Futamase, and P. A. Hogan
- Published in print:
- 2010
- Published Online:
- January 2011
- ISBN:
- 9780199584109
- eISBN:
- 9780191723421
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199584109.003.0004
- Subject:
- Physics, Theoretical, Computational, and Statistical Physics
For a binary star system, the periastron advance per orbital period is calculated, as well as the Shapiro time delay in radar arrival time due to the passage of the signal through the gravitational field of the binary. The effect of gravitational radiation reaction on the orbital period and on the shape of the orbit is derived by studying the perturbations of the osculating elements of the orbit, and alternatively by using traditional energy and angular momentum balance arguments. The gravitational interaction of spin is also discussed.
Meghann Meeusen
- Published in print:
- 2020
- Published Online:
- January 2021
- ISBN:
- 9781496828644
- eISBN:
- 9781496828699
- Item type:
- chapter
- Publisher:
- University Press of Mississippi
- DOI:
- 10.14325/mississippi/9781496828644.003.0002
- Subject:
- Literature, 20th-century and Contemporary Literature
Chapter two describes an important cause for binary polarization: children’s films often focalize around a single theme from the source text and make it a driving element of the adaptation, amplifying the weight and intensity of that theme. First, this chapter explores binaries in Henry Selick’s adaptation of Neil Gaiman’s Coraline to claim that instead of distorting some of Gaiman’s themes, Selick makes them stronger, leading to a widening of independent/dependent, real/other, and adult/child binaries. The chapter next highlights how the movie adaptation of The Tale of Despereaux amplifies a set of overlapping binary systems, and then uses the film version of How to Train Your Dragon to illustrate how thematic amplification is culturally bound and historically situated. Overall, the chapter suggests that when film adaptors select a theme of the novel and use it as a cornerstone in the adaptation, the result is binary polarization.
Roger Penrose and Martin Gardner
- Published in print:
- 1989
- Published Online:
- November 2020
- ISBN:
- 9780198519737
- eISBN:
- 9780191917080
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198519737.003.0010
- Subject:
- Computer Science, Artificial Intelligence, Machine Learning
What precisely is an algorithm, or a Turing machine, or a universal Turing machine? Why should these concepts be so central to the modern view of what could constitute a ‘thinking device’? Are there any absolute limitations to what an algorithm could in principle achieve? In order to address these questions adequately, we shall need to examine the idea of an algorithm and of Turing machines in some detail. In the various discussions which follow, I shall sometimes need to refer to mathematical expressions. I appreciate that some readers may be put off by such things, or perhaps find them intimidating. If you are such a reader, I ask your indulgence, and recommend that you follow the advice I have given in my ‘Note to the reader’ on p. viii! The arguments given here do not require mathematical knowledge beyond that of elementary school, but to follow them in detail, some serious thought would be required. In fact, most of the descriptions are quite explicit, and a good understanding can be obtained by following the details. But much can also be gained even if one simply skims over the arguments in order to obtain merely their flavour. If, on the other hand, you are an expert, I again ask your indulgence. I suspect that it may still be worth your while to look through what I have to say, and there may indeed be a thing or two to catch your interest. The word ‘algorithm’ comes from the name of the ninth-century Persian mathematician Abu Ja’far Mohammed ibn Mûsâ al-Khowârizm, who wrote an influential mathematical textbook, in about 825 AD, entitled ‘Kitab al-jabr wa’l-muqabala’. The way that the name ‘algorithm’ has now come to be spelt, rather than the earlier and more accurate ‘algorism’, seems to have been due to an association with the word ‘arithmetic’. (It is noteworthy, also, that the word ‘algebra’ comes from the Arabic ‘al-jabr’ appearing in the title of his book.) Instances of algorithms were, however, known very much earlier than al-Khowârizm’s book.
Leslie Bow
- Published in print:
- 2010
- Published Online:
- March 2016
- ISBN:
- 9780814791325
- eISBN:
- 9780814739129
- Item type:
- book
- Publisher:
- NYU Press
- DOI:
- 10.18574/nyu/9780814791325.001.0001
- Subject:
- Society and Culture, Cultural Studies
Arkansas, 1943. The Deep South during the heart of Jim Crow-era segregation. A Japanese-American person boards a bus, and immediately is faced with a dilemma. Not white. Not black. Where to sit? By elucidating the experience of interstitial ethnic groups such as Mexican, Asian, and Native Americans—groups that are held to be neither black nor white—this book explores how the color line accommodated—or refused to accommodate—“other” ethnicities within a binary racial system. Analyzing pre- and post-1954 American literature, film, autobiography, government documents, ethnography, photographs, and popular culture, the book investigates the ways in which racially “in-between” people and communities were brought to heel within the South's prevailing cultural logic, while locating the interstitial as a site of cultural anxiety and negotiation. Spanning the pre- to the post-segregation eras, this book traces the compelling history of “third race” individuals in the U.S. South, and in the process forces us to contend with the multiracial panorama that constitutes American culture and history.
Luciano Rezzolla and Olindo Zanotti
- Published in print:
- 2013
- Published Online:
- January 2014
- ISBN:
- 9780198528906
- eISBN:
- 9780191746505
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198528906.001.0001
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
The book provides a lively and approachable introduction to the main concepts and techniques of relativistic hydrodynamics in a form which will appeal to physicists at advanced undergraduate and postgraduate levels. The book is divided into three parts. The first part deals with the physical aspects of relativistic hydrodynamics, touching on fundamental topics such as kinetic theory, equations of state, mathematical aspects of hyperbolic partial differential equations, linear and nonlinear waves in fluids, reaction fronts, and the treatment of non-ideal fluids. The second part provides an introductory but complete description of those numerical methods currently adopted in the solution of the relativistic-hydrodynamic equations. Starting from traditional finite-difference methods, modern high-resolution shock-capturing methods are discussed with special emphasis on Godunov upwind schemes based on Riemann solvers. High-order schemes are also treated, focusing on essentially non-oscillatory and weighted non-oscillatory methods, Galerkin methods and on modern ADER approaches. Finally, the third part of the book is devoted to applications and considers several physical and astrophysical systems for which relativistic hydrodynamics plays a crucial role. Several non-self-gravitating systems are first studied, including self-similar flows, relativistic blast waves, spherical flows onto a compact object, relativistic accreting disks, relativistic jets and heavy-ion collisions. Self-gravitating systems are also considered, from isolated stars, to more dynamical configurations such as the collapse to a black hole or the dynamics of binary systems. The book is especially recommended to astrophysicists, particle physicists and applied mathematicians.
Angel Adams Parham
- Published in print:
- 2017
- Published Online:
- March 2017
- ISBN:
- 9780190624750
- eISBN:
- 9780190624781
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780190624750.003.0001
- Subject:
- Sociology, Race and Ethnicity, Migration Studies (including Refugee Studies)
The introduction presents the St. Domingue/Haiti to Louisiana migration case, which traces the integration of white and free black refugees and their descendants over the course of two hundred years. The St. Domingue refugees initially reinforced Louisiana’s triracial system. Then, over the course of the nineteenth century and into the twentieth, the binary Anglo-American racial system came to dominate as the Anglo-American population grew and their racial practices asserted increased pressure on the Latin/Caribbean system. The introduction discusses the ways these immigrants and their descendants coped with contrasting understandings of race and draws parallels between this historical case and the situation of contemporary immigrants from Latin America and the Caribbean who often resist the binary logic of the Anglo-American US system.
Subrata Dasgupta
- Published in print:
- 2014
- Published Online:
- November 2020
- ISBN:
- 9780199309412
- eISBN:
- 9780197562857
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199309412.003.0011
- Subject:
- Computer Science, History of Computer Science
On February 15, 1946, a giant of a machine called the ENIAC, an acronym for Electronic Numerical Integrator And Computer, was commissioned at a ceremony at the Moore School of Electrical Engineering at the University of Pennsylvania, Philadelphia. The name is noteworthy. We see that the word computer—to mean the machine and not the person—had cautiously entered the emerging vocabulary of computer culture. Bell Laboratories named one of its machines Complex Computer; another, Ballistic Computer (see Chapter 5, Section I). Still, the embryonic world of computing was hesitant; the terms “calculator”, “calculating machine”, “computing machine”, and “computing engine” still prevailed. The ENIAC’s full name (which, of course, would never be used after the acronym was established) seemed, at last, to flaunt the fact that this machine had a definite identity, that it was a computer. The tale of the ENIAC is a fascinating tale in its own right, but it is also a very important tale. Computer scientists and engineers of later times may be ignorant about the Bell Laboratories machines, they may be hazy about the Harvard Mark series, they may have only an inkling about Babbage’s dream machines, but they will more than likely have heard about the ENIAC. Why was this so? What was it about the ENIAC that admits its story into the larger story? It was not the first electronic computer; the Colossus preceded the ENIAC by 2 years. True, no one outside the Bletchley Park community knew about the Colossus, but from a historical perspective, for historians writing about the state of computing in the 1940s, the Colossus clearly took precedence over the ENIAC. In fact (as we will soon see), there was another electronic computer built in America that preceded the ENIAC. Nor was the ENIAC the first programmable computer. Zuse’s Z3 and Aiken’s Harvard Mark I, as well as the Colossus, well preceded the ENIAC in this realm. As for that other Holy Grail, general purposeness, this was, as we have noted, an elusive target (see Chapter 6, Section III).
Angel Adams Parham
- Published in print:
- 2017
- Published Online:
- March 2017
- ISBN:
- 9780190624750
- eISBN:
- 9780190624781
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780190624750.003.0007
- Subject:
- Sociology, Race and Ethnicity, Migration Studies (including Refugee Studies)
Chapter 6 examines the contemporary experience of Creoles of African descent. Changing conceptions of blackness during and after the civil rights and Black Power movements challenged Creole identity ...
More
Chapter 6 examines the contemporary experience of Creoles of African descent. Changing conceptions of blackness during and after the civil rights and Black Power movements challenged Creole identity and made it more difficult for many Creoles of color to see themselves as distinct from other black Americans. These tensions are explored by examining the stories of Creoles of color as gathered from interviews and participant observation with Creole cultural organizations. These stories show three different responses Creoles of color have had to the pressures to assimilate to the binary US racial system: (1) adopt a black American identity; (2) pass as white; or (3) resist the categories of black and white. The chapter concludes by considering similarities between Louisiana’s Creoles of color and Latino immigrants of color who have experienced many of the same tensions and misunderstandings as they have struggled with Anglo-American conceptions of whiteness and blackness.Less
Leslie Bow
- Published in print:
- 2010
- Published Online:
- March 2016
- ISBN:
- 9780814791325
- eISBN:
- 9780814739129
- Item type:
- chapter
- Publisher:
- NYU Press
- DOI:
- 10.18574/nyu/9780814791325.003.0004
- Subject:
- Society and Culture, Cultural Studies
This chapter looks at narratives articulating Chinese caste elevation in the Mississippi Delta within academic studies, popular culture, film, and memoir. James Loewen's The Mississippi Chinese argues that when faced with a binary racial system that had no accommodation for a third race, the Chinese engineered a shift in status from “colored” to white in the course of one generation. The chapter highlights what becomes repressed in positing racial uplift in response to intermediate status. In contrast to European immigrant groups, the Asian's supposed caste rise can only be characterized as a registered incompletion, as near-whiteness. This incompletion is likewise reflected in the discourses that have sought to represent such status, the scholarship surrounding and generated by Loewen's thesis, including the 1982 documentary film Mississippi Triangle. The chapter thus examines what discursive contradictions were generated in the incomplete attempts to convince of African American disassociation, specifically, the repression of Chinese-Black intimacy.
Subrata Dasgupta
- Published in print:
- 2014
- Published Online:
- November 2020
- ISBN:
- 9780199309412
- eISBN:
- 9780197562857
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199309412.003.0010
- Subject:
- Computer Science, History of Computer Science
By the end of World War II, independent of one another (and sometimes in mutual ignorance), a small assortment of highly creative minds—mathematicians, engineers, physicists, astronomers, and even an actuary, some working in solitary mode, some in twos or threes, others in small teams, some backed by corporations, others by governments, many driven by the imperative of war—had developed a shadowy shape of what the elusive Holy Grail of automatic computing might look like. They may not have been able to define a priori the nature of this entity, but they were beginning to grasp how they might recognize it when they saw it. Which brings us to the nature of a computational paradigm. Ever since the historian and philosopher of science Thomas Kuhn (1922–1996) published The Structure of Scientific Revolutions (1962), we have all become ultraconscious of the concept and significance of the paradigm, not just in the scientific context (with which Kuhn was concerned), but in all intellectual and cultural discourse. A paradigm is a complex network of theories, models, procedures and practices, exemplars, and philosophical assumptions and values that establishes a framework within which scientists in a given field identify and solve problems. A paradigm, in effect, defines a community of scientists; it determines their shared working culture as scientists in a branch of science and a shared mentality. A hallmark of a mature science, according to Kuhn, is the emergence of a dominant paradigm to which a majority of scientists in that field of science adhere and broadly, although not necessarily in detail, agree on. In particular, they agree on the fundamental philosophical assumptions and values that oversee the science in question; its methods of experimental and analytical inquiry; and its major theories, laws, and principles. 
A scientist “grows up” inside a paradigm, beginning from his earliest formal training in a science in high school, through undergraduate and graduate schools, through doctoral work into postdoctoral days. Scientists nurtured within and by a paradigm more or less speak the same language, understand the same terms, and read the same texts (which codify the paradigm).
Subrata Dasgupta
- Published in print:
- 2014
- Published Online:
- November 2020
- ISBN:
- 9780199309412
- eISBN:
- 9780197562857
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199309412.003.0014
- Subject:
- Computer Science, History of Computer Science
In August 1951, David Wheeler submitted a PhD dissertation titled Automatic Computing with the EDSAC to the faculty of mathematics (D. F. Hartley, personal communication, September 7, 2011) at the University of Cambridge. The year after, in November 1952, another of Maurice Wilkes’s students, Stanley Gill, submitted a thesis titled The Application of an Electronic Digital Computer to Problems in Mathematics and Physics. Wheeler’s was not the first doctoral degree awarded on the subject of computing. That honor must surely go to Herman Hollerith for his thesis submitted to Columbia University in 1890 on his invention of an electrical tabulating system (see Chapter 3, Section IV). Nor was Wheeler’s the first doctoral degree on a subject devoted to electronic computing. In December 1947, Tom Kilburn (codesigner with Frederic C. Williams of the Manchester Mark I [see Chapter 8, Section XIII]) had written a report on the CRT-based memory system he and Williams had developed (which came to be called the Williams tube). This report was widely distributed in both Britain and the United States (and even found its way to Russia), and it became the basis for Kilburn’s PhD dissertation awarded in 1948 by the University of Manchester (S. H. Lavington, personal communication, August 31, 2011). Wheeler’s doctoral dissertation, however, was almost certainly the first on the subject of programming. And one might say that the award of these first doctoral degrees in the realm of computer “hardware” (in Kilburn’s case) and computer “software” (in Wheeler’s case) made the invention and design of computers and computing systems an academically respectable university discipline. As we have witnessed before in this story, establishing priority in the realm of computing is a murky business, especially at the birth of this new discipline.
Thus, if by “computer science” we mean the study of computers and the phenomena surrounding computers (as three eminent computer scientists, Allen Newell, Alan Perlis (1922–1990), and Herbert Simon, suggested in 1967), then—assuming we agree on what “computers” are—the boundary between hardware and software, between the physical computer and the activity of computing, dissolves.