Seb Franklin
- Published in print: 2015
- Published Online: May 2016
- ISBN: 9780262029537
- eISBN: 9780262331135
- Item type: book
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262029537.001.0001
- Subject: Society and Culture, Technology and Society
This book addresses the conditions of knowledge that make the concept of the “information economy” possible while at the same time obscuring its deleterious effects on material social spaces. In so doing, the book traces three intertwined threads: the relationships among information, labor, and social management that emerged in the nineteenth century; the mid-twentieth-century diffusion of computational metaphors; and the appearance of informatic principles in certain contemporary socioeconomic and cultural practices. Drawing on critical theory, media theory, and the history of science, the book names control as the episteme grounding late capitalism. Beyond any specific device or set of technically mediated practices, digitality functions within this episteme as the logical basis for reshaped concepts of labor, subjectivity, and collectivity, as well as for the intensification of older modes of exclusion and dispossession. In tracking the pervasiveness of this logical mode into the present, the book locates the cultural traces of control across a diverse body of objects and practices, from cybernetics to economic theory and management styles, and from concepts of language and subjectivity to literary texts, films, and video games.
Larry A. Sklar
- Published in print: 2005
- Published Online: November 2020
- ISBN: 9780195183146
- eISBN: 9780197561898
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195183146.003.0004
- Subject: Chemistry, Physical Chemistry
Flow cytometry is a mature technology: Instruments recognizable as having elements of modern flow cytometers date back at least 30 years. There are many good sources for information about the essential features of flow cytometers, how they operate, and how they have been used. For the purposes of this book, it is necessary to know that flow cytometers have fluidic, optical, electronic, computational, and mechanical features. The main function of the fluidic components is to use hydrodynamic focusing to create a stable particle stream in which particles are aligned in single file within a sheath stream, so that the particles can be analyzed and sorted. The main functions of the optical components are to allow the particles to be illuminated by one or more lasers or other light sources and to allow scattered light as well as multiple fluorescence signals to be resolved and routed to individual detectors. The electronics coordinate these functions, from the acquisition of the signals (pulse collection, pulse analysis, triggering, time delay, data, gating, detector control) to forming and charging individual droplets, and to making sort decisions. The computational components are directed at post-acquisition data display and analysis, analysis of multivariate populations and multiplexing assays, and calibration and analysis of time-dependent cell or reaction phenomena. Mechanical components are now being integrated with flow cytometers to handle plates of samples and to coordinate automation such as the movement of a cloning tray with the collection of the droplets. The reader is directed to a concise description of these processes in Robinson’s article in the Encyclopedia of Biomaterials and Biomedical Engineering. This book was conceived to provide a perspective on the future of flow cytometry, and particularly its application to biotechnology. It attempts to answer the question I heard repeatedly, especially during my association with the National Institutes of Health–funded National Flow Cytometry Resource at Los Alamos National Laboratory: What is the potential for innovation in flow cytometer design and application? This volume brings together those approaches that identify the unique contributions of flow cytometry to the modern world of biotechnology.
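The post-acquisition analysis and gating that the abstract attributes to the computational components can be pictured, in software terms, as membership tests over per-event signal values. The following Python fragment is a minimal illustrative sketch only: the channel names (FSC, SSC), gate thresholds, and simulated event counts are assumptions chosen for demonstration and are not drawn from the chapter.

```python
# Minimal sketch of post-acquisition rectangular gating on simulated scatter signals.
# Channel names, thresholds, and event counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 10,000 events with forward-scatter (FSC) and side-scatter (SSC) intensities.
events = {
    "FSC": rng.lognormal(mean=6.0, sigma=0.5, size=10_000),
    "SSC": rng.lognormal(mean=5.0, sigma=0.7, size=10_000),
}

def rectangular_gate(data, channel, lo, hi):
    """Return a boolean mask selecting events whose signal falls inside [lo, hi]."""
    return (data[channel] >= lo) & (data[channel] <= hi)

# Combine gates on both channels to define one population of interest.
in_gate = rectangular_gate(events, "FSC", 300.0, 1500.0) & rectangular_gate(events, "SSC", 80.0, 600.0)

print(f"{in_gate.sum()} of {in_gate.size} events ({in_gate.mean():.1%}) fall inside the gate")
```

Real analysis software layers many such regions (polygons, ellipses, hierarchies of gates) over multivariate data, but the underlying operation is the same selection of events by measured signal.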
Adam Treister
- Published in print: 2005
- Published Online: November 2020
- ISBN: 9780195183146
- eISBN: 9780197561898
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195183146.003.0012
- Subject: Chemistry, Physical Chemistry
Flow cytometry is a result of the computer revolution. Biologists used fluorescent dyes in microscopy and medicine almost a hundred years before the first flow cytometer. Only after electronics became sophisticated enough to control individual cells and computers became fast enough to analyze the data coming out of the instrument, and to make a decision in time to deflect the stream, did cell sorting become viable. Since the 1970s, the capabilities of computers have grown exponentially. According to the famed Moore’s Law, the size of the computer, as tracked by the number of transistors on a chip, doubles every 18 months. This rule has held for three decades so far, and new technologies continue to appear to keep that growth on track. The clock speed of chips is now measured in gigahertz—billions of cycles per second—and hard drives are now available with capacities measured in terabytes. Having computers so powerful, cheap, and ubiquitous changes the nature of scientific exploration. We are in the early steps of a long march of biotechnology breakthroughs spawned from this excess of compute power. From genomics to proteomics to high-throughput flow cytometry, the trend in biological research is toward mass-produced, high-volume experiments. Automation is the key to scaling their size and scope and to lowering their cost per test. Each step that was previously done by human hands is being delegated to a computer or a robot for the implementation to be more precise and to scale efficiently. From making sort decisions in milliseconds to creating data archives that may last for centuries, computers control the information involved with cytometry, and software controls the computers. As the technology matures and the size and number of experiments increase, the emphasis of software development switches from instrument control to analysis and management. The challenge for computers is not in running the cytometer any more. The more modern challenge for informatics is to analyze, aggregate, maintain, access, and exchange the huge volume of flow cytometry data. Clinical and other regulated use of cytometry necessitates more rigorous data administration techniques. These techniques introduce issues of security, integrity, and privacy into the processing of data.
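The growth figures quoted above imply roughly 30 / 1.5 = 20 doublings over three decades, a factor of about 2^20, or one million. A short illustrative calculation follows; the doubling period and time span are taken from the abstract, while the starting transistor count is an assumption loosely based on an early-1970s microprocessor, not a figure from the chapter.

```python
# Illustrative Moore's Law arithmetic: doubling every 18 months over 30 years.
# The starting count is an assumption for demonstration only.
doubling_period_years = 1.5
span_years = 30
doublings = span_years / doubling_period_years    # 20 doublings
growth_factor = 2 ** doublings                    # about 1,048,576x

initial_transistors = 2_300                       # roughly an early-1970s chip
print(f"{doublings:.0f} doublings -> growth factor ~{growth_factor:,.0f}x")
print(f"Projected count: ~{initial_transistors * growth_factor:,.0f} transistors")
```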
Colin Koopman
- Published in print: 2019
- Published Online: January 2020
- ISBN: 9780226626444
- eISBN: 9780226626611
- Item type: chapter
- Publisher: University of Chicago Press
- DOI: 10.7208/chicago/9780226626611.003.0004
- Subject: Philosophy, General
The central argument of How We Became Our Data is that over the past century we have become informational persons whose lives are increasingly conducted through an information politics. This chapter begins with the project of real estate redlining in the United States in the 1930s to tell the story of the production of an informatics of race in the twentieth century that has been used to pursue racism by other, more subtle and insidious, means. During the 1910s and 1920s, the social categorization of race was enrolled in data technologies that to this day always announce our race for us in advance of any arrival. Interrogating the formation of these data technologies offers a way of seeing how sometimes-transformable race was made into ever-obdurate data. Building on recent contributions to the study of racializing technology, especially Simone Browne’s studies of racialized surveillance in light of Michel Foucault’s history of the Panopticon, this chapter offers a detailed inventory of a technology of racialization rooted in data. This critical genealogy excavates the informational conditions that help facilitate the ongoing persistence of race, and hence of racism, in our contemporary data-obsessed moment.