Walter Phillip Bailey, Amy Brock Martin, Elizabeth H. Corley, Daniel J. Friedman, and R. Gibson Parrish II
- Published in print:
- 2005
- Published Online:
- September 2009
- ISBN:
- 9780195149289
- eISBN:
- 9780199865130
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195149289.003.0007
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
Traditional health statistics provide a baseline of information that helps characterize the health of a population or population subgroup. Where health statistics often fall short is in their ability to identify factors that affect population health but fall outside the traditional purview of public health agencies or health care providers. The term complementary data is used to refer to data on those factors that affect population health and yet are not generally collected by, or analyzed within, U.S. public health agencies. Examples include data on air and water quality monitoring, transportation, employment, crime, abuse and neglect, tax revenues from the sale of tobacco and alcohol, and housing characteristics. This chapter describes the major types of complementary data and presents examples of the use of such data. The examples provide an understanding of the variety of ways in which complementary data can be used. When both traditional health data and complementary data contain sufficient detail to enable linkage, powerful tools for assessing communities, designing and targeting programs, evaluating programs, creating knowledge, and informing the public emerge.
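To picture the kind of linkage the chapter describes, here is a minimal sketch in Python, assuming hypothetical county-level tables (an asthma hospitalization table from a health agency and a PM2.5 table from an air-quality monitoring network); the column names and values are invented for illustration, not drawn from the chapter.

```python
import pandas as pd

# Hypothetical county-level inputs: asthma hospitalization rates from a
# health agency and annual PM2.5 averages from an air-quality network.
health = pd.DataFrame({
    "fips": ["45001", "45003", "45005"],
    "year": [2003, 2003, 2003],
    "asthma_rate_per_100k": [112.0, 98.5, 134.2],
})
air = pd.DataFrame({
    "fips": ["45001", "45003", "45005"],
    "year": [2003, 2003, 2003],
    "mean_pm25": [11.2, 9.8, 13.6],
})

# The shared geographic identifier and year supply the "sufficient
# detail" the chapter says linkage requires.
linked = health.merge(air, on=["fips", "year"], how="inner")

# A simple community-assessment use of the linked data: how does air
# quality co-vary with asthma burden across counties?
print(linked[["mean_pm25", "asthma_rate_per_100k"]].corr())
```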
Hirokazu Yoshikawa and Marybeth Shinn
- Published in print:
- 2008
- Published Online:
- April 2010
- ISBN:
- 9780195327892
- eISBN:
- 9780199301478
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195327892.003.0019
- Subject:
- Psychology, Clinical Child Psychology / School Psychology
This chapter summarizes three kinds of intervention goals and strategies that can improve youth-serving social settings. First, organizations and communities can come together in participatory ways to plan and implement setting-level change. Processes of buy-in, collaboration, and capacity-building are considered from both organizational and community perspectives. Second, organizations and communities can better use setting-level data to monitor progress, rather than relying on the typical bean-counting approaches to measure youth participation, or single youth indicators like high-stakes testing. Third, organizations and communities can increase both the representation of diverse groups of youth in social settings and the quality of their experience. Examples are drawn from the other chapters in the volume.
Sarah Brayne
- Published in print:
- 2020
- Published Online:
- October 2020
- ISBN:
- 9780190684099
- eISBN:
- 9780190684129
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190684099.003.0003
- Subject:
- Sociology, Law, Crime and Deviance, Science, Technology and Environment
This chapter discusses dragnet surveillance, which is the collection and analysis of information on everyone, rather than merely those under suspicion. Dragnet surveillance—and the data it produces—can be useful for law enforcement to solve crimes. Dragnet surveillance widens and deepens social oversight: it includes a broader swath of people and can follow any single individual across a greater range of institutional settings. It is associated with three key transformations in the practice of policing: the shift from query-based to alert-based systems makes it possible to systematically surveil an unprecedentedly large number of people; individuals with no direct police contact are now included in law enforcement systems, lowering the threshold for inclusion in police databases; and institutional data systems are integrated, with police now collecting and using information gleaned from institutions not typically associated with crime control. However, dragnet surveillance is not an inevitable result of mass digitization. Rather, it is the result of choices that reflect the social and political positions of the subjects and the subject matter under surveillance.
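The shift from query-based to alert-based systems can be sketched schematically. The data model and matching rule below are invented for illustration and are not drawn from any actual law enforcement system; the point is the structural contrast the chapter identifies.

```python
# Query-based vs. alert-based surveillance, as a toy model.
from dataclasses import dataclass

@dataclass
class Record:
    plate: str
    location: str

database: list[Record] = []
watchlist = {"ABC123"}

def query(plate: str) -> list[Record]:
    """Query-based: an officer actively asks about one known subject."""
    return [r for r in database if r.plate == plate]

def ingest(record: Record) -> None:
    """Alert-based: every incoming record is checked automatically, so
    people with no direct police contact are swept in passively."""
    database.append(record)
    if record.plate in watchlist:
        print(f"ALERT: {record.plate} seen at {record.location}")

ingest(Record("XYZ789", "5th & Main"))   # stored silently, no alert
ingest(Record("ABC123", "Grand Ave"))    # stored and triggers an alert
print(query("XYZ789"))                   # explicit lookup, the older model
```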
Ana Aizcorbe, Colin Baker, Ernst R. Berndt, and David M. Cutler
- Published in print:
- 2018
- Published Online:
- January 2019
- ISBN:
- 9780226530857
- eISBN:
- 9780226530994
- Item type:
- chapter
- Publisher:
- University of Chicago Press
- DOI:
- 10.7208/chicago/9780226530994.003.0001
- Subject:
- Economics and Finance, Econometrics
Medical care costs account for nearly 18% of Gross Domestic Product (GDP) and 20% of government spending. As a country, we know a lot about where the medical dollar goes: 38% of medical care dollars are paid to hospitals, 31% for professional services, 12% for outpatient pharmaceuticals, and so forth. But this is not really what we value. The goal of medical care is not to poke, prod, or take pictures of our insides; rather, it is to improve our wellbeing. To really understand health care, we need to determine what it is doing for our health. Health accounting is not easy. Academics and statistical agencies have struggled with it for decades. Questions range from the mundane—how do colonoscopy prices vary across payers?—to the fundamental—to what extent is medical care improving the population’s health? With this much uncertainty about the value of medical care, it is incumbent on public and private researchers alike to regularly survey the landscape. What do we know about medical care costs and output? Where can we make improvements in our measurement systems? What areas remain unexplored? These issues are studied in this volume.
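As a small worked example of the breakdown quoted above, the residual share of the medical dollar follows by arithmetic from the figures the abstract reports; the 19% remainder stands in for the categories behind "and so forth" and is inferred, not stated in the source.

```python
# Allocation of one "medical dollar" across the reported categories.
shares = {
    "hospitals": 0.38,
    "professional services": 0.31,
    "outpatient pharmaceuticals": 0.12,
}
# Inferred remainder: everything the abstract folds into "and so forth".
shares["all other categories"] = round(1.0 - sum(shares.values()), 2)  # 0.19

for category, share in shares.items():
    print(f"{category}: {share:.0%} of the medical dollar")
```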
Ron Avi Astor, Linda Jacobson, Stephanie L. Wrabel, Rami Benbenishty, and Diana Pineda
- Published in print:
- 2017
- Published Online:
- November 2020
- ISBN:
- 9780190845513
- eISBN:
- 9780197559833
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190845513.003.0007
- Subject:
- Education, Care and Counseling of Students
For schools to be more proactive about addressing the needs of transitioning students and families, it’s important that district officials have a good sense of how often students are changing schools, who these students are, where they’re coming from, and where they’re going. Currently, there is wide variation in how states handle mobility in their student data systems. While some states have a specific definition of mobility, there are also differences in those definitions. By law, states track migrant and homeless students, but not all flag other groups of students that are likely to be mobile, such as military-connected students or those in foster care. Another complication is that when students move, schools do not mark the reason for the transition. Without knowing the reason for the change, all mobile students are lumped into one category—movers. But, as the previous chapter showed, the circumstances surrounding a move can affect students in different ways and have implications for how schools respond. If a move is proactive, for example, the family and the child may feel less stress and the student might feel more positive about the experience. If the change into a new school is reactive—caused perhaps by a difficult financial situation or leaving a negative situation at another school—the student and parents might feel more anxiety about the new school and need additional support and friendship during that time. Current data systems and the information they provide make it very difficult for researchers to separate the effect of the school move from the effect of the circumstances surrounding the move. These are important distinctions for educators to consider. Data systems do allow for researchers and practitioners to understand if a student moved during the summer or during the academic year. The timing of moves may be suggestive of the type of move a student is making; proactive moves may be more likely to occur in the summer months when learning will not be disrupted. Mid-year moves may have a proactive element, such as families moving for a better job, but they may also be reactive in nature, such as a loss of housing.
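A minimal sketch of how a student data system might flag move timing, assuming hypothetical exit and entry dates and an invented June-August summer window; as the chapter stresses, timing only suggests, and does not establish, whether a move was proactive or reactive.

```python
from datetime import date

def classify_move_timing(exit_date: date, entry_date: date) -> str:
    """Label a school change as a summer move or a mid-year move."""
    # Hypothetical rule: moves completed within the June-August window
    # (allowing September re-entry) count as summer moves.
    if exit_date.month in (6, 7, 8) and entry_date.month in (6, 7, 8, 9):
        return "summer move (less likely to disrupt learning)"
    return "mid-year move (may warrant additional support)"

print(classify_move_timing(date(2016, 6, 10), date(2016, 8, 25)))
print(classify_move_timing(date(2017, 1, 15), date(2017, 1, 20)))
```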
Margaret Spinelli
- Published in print:
- 2014
- Published Online:
- November 2020
- ISBN:
- 9780199676859
- eISBN:
- 9780191918346
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199676859.003.0030
- Subject:
- Clinical Medicine and Allied Health, Psychiatry
Child abuse is a major cause of morbidity and mortality in the United States and other countries. It is the second leading cause of death among children in the US. All 50 States, the District of Columbia, and the US Territories have mandatory child abuse and neglect reporting laws that require certain professionals and institutions to report suspected maltreatment to a child protective services (CPS) agency. Four major types of maltreatment are considered: neglect, physical abuse, psychological maltreatment, and sexual abuse (Centers for Disease Control and Prevention 2010). Once an allegation or referral of child abuse is received by a CPS agency, the majority of reports receive investigations to establish whether or not an intervention is needed. Some reports receive an alternative response in which safety and risk assessments are conducted, but the focus is on working with the family to address issues. Investigations involve gathering evidence to substantiate the alleged maltreatment. Data from reports on child abuse are derived from the National Child Abuse and Neglect Data System (NCANDS), which aggregates and publishes statistics from state child protection agencies. The first report from NCANDS was based on data for 1990. Case-level data include information about the characteristics of reports of abuse and neglect that are made to CPS agencies, the children involved, the types of maltreatment that are alleged, the dispositions of the CPS responses, the risk factors of the child and the caregivers, the services that are provided, and the perpetrators (Centers for Disease Control and Prevention 2010). During 2010, NCANDS reported that an estimated 3.3 million referrals, involving approximately 5.9 million children, were received by CPS agencies. Of the nearly 2 million reports that were screened and received a CPS response, 90.3% received an investigation response and 9.7% received an alternative response (Centers for Disease Control and Prevention 2010). Of the 1,793,724 reports that received an investigation in 2010, 436,321 were substantiated; 24,976 were found to be indicated (likely but unsubstantiated); and 1,262,118 were found to be unsubstantiated. Three-fifths of reports of alleged child abuse and neglect were made by professionals.
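The 2010 disposition figures above imply the following rates; the "other dispositions" line is simply the arithmetic remainder and covers outcomes the abstract does not break out.

```python
# Disposition rates among the 1,793,724 investigated reports (2010).
investigated = 1_793_724
dispositions = {
    "substantiated": 436_321,
    "indicated (likely but unsubstantiated)": 24_976,
    "unsubstantiated": 1_262_118,
}

for label, n in dispositions.items():
    print(f"{label}: {n / investigated:.1%}")

# Remainder: dispositions not itemized in the abstract.
other = investigated - sum(dispositions.values())
print(f"other dispositions: {other:,} ({other / investigated:.1%})")
```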
José van Dijck, Thomas Poell, and Martijn de Waal
- Published in print:
- 2018
- Published Online:
- October 2018
- ISBN:
- 9780190889760
- eISBN:
- 9780190889807
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190889760.001.0001
- Subject:
- Literature, Film, Media, and Cultural Studies
Individuals all over the world can use Airbnb to rent an apartment in a foreign city, check Coursera to find a course on statistics, join PatientsLikeMe to exchange information about one’s disease, hail a cab using Uber, or read the news through Facebook’s Instant Articles. In The Platform Society, Van Dijck, Poell, and De Waal offer a comprehensive analysis of a connective world where platforms have penetrated the heart of societies—disrupting markets and labor relations, transforming social and civic practices, and affecting democratic processes. The Platform Society analyzes intense struggles between competing ideological systems and contesting societal actors—market, government, and civil society—asking who is or should be responsible for anchoring public values and the common good in a platform society. Public values include, of course, privacy, accuracy, safety, and security; but they also pertain to broader societal effects, such as fairness, accessibility, democratic control, and accountability. Such values are the very stakes in the struggle over the platformization of societies around the globe. The Platform Society highlights how these struggles play out in four private and public sectors: news, urban transport, health, and education. Some of these conflicts highlight local dimensions, for instance, fights over regulation between individual platforms and city councils, while others address the geopolitical level where power clashes between global markets and (supra-)national governments take place.
Bruce S. Edwards and Larry A. Sklar
- Published in print:
- 2005
- Published Online:
- November 2020
- ISBN:
- 9780195183146
- eISBN:
- 9780197561898
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195183146.003.0007
- Subject:
- Chemistry, Physical Chemistry
The flow cytometer is unique among biomedical analysis instruments in its ability to make multiple correlated optical measurements on individual cells or particles at high rates. Moreover, an ever-expanding arsenal of fluorescent probes enables the modern flow cytometer to quantify a large and growing diversity of cell-associated macromolecules and physiological processes. Modern flow cytometers have achieved such a level of sophistication and reliability that unattended operation by automated systems is a practical reality. From its inception, flow cytometry has been in the vanguard of automation in cytological analysis. One of the most powerful automated features is cell sorting, an operation in which highly purified subsets of cells or particles are isolated from heterogeneous source populations on the basis of a targeted, multiparameter phenotype. The method most widely used for sorting today, which is based on electrostatic deflection of charged droplets, was developed over 30 years ago and led to commercial flow cytometers that were capable of sorting cells at rates of hundreds of cells per second. Influenced by the need of the Human Genome Project for efficient isolation of purified chromosomes, a high-speed chromosome flow sorter was developed and patented in 1982 that increased sort rates to tens of thousands of events per second (13). Commercial systems subsequently became available in the 1990s that permitted sorting of cells at such high rates (www.bdbiosciences.com; www.dakocytomation.com). Thus, since the initial development of the technology, the throughput of automated cell sorting has increased by nearly two orders of magnitude. In single cell analysis and sorting, throughput is determined by the rate at which the flow cytometer can process individual cells as they pass single file through the point of detection. Another aspect of flow cytometer throughput concerns the rate at which the flow cytometer can sequentially process multiple discrete collections of cells. This component of throughput will be important, for example, in the screening of collections of test compounds for their effects on bulk populations of cells. This is of particular relevance for modern drug discovery, in which there is a need to test cellular targets against millions of potentially valuable compounds that may bind cellular receptors to effect clinically therapeutic cellular responses.
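A back-of-the-envelope check on the "nearly two orders of magnitude" throughput claim, using illustrative round numbers consistent with the text (hundreds versus tens of thousands of cells per second); the specific rates and cell count are assumptions, not figures from the chapter.

```python
# Illustrative sort rates: early electrostatic droplet sorters vs.
# the high-speed commercial systems of the 1990s.
early_rate = 500      # cells per second (hundreds)
modern_rate = 25_000  # cells per second (tens of thousands)

cells_to_sort = 10_000_000  # a hypothetical sorting job

print(f"fold increase: {modern_rate / early_rate:.0f}x")             # 50x, ~2 orders
print(f"early sorter:  {cells_to_sort / early_rate / 3600:.1f} h")   # ~5.6 hours
print(f"modern sorter: {cells_to_sort / modern_rate / 60:.1f} min")  # ~6.7 minutes
```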
Adrian F. Tuck
- Published in print:
- 2008
- Published Online:
- November 2020
- ISBN:
- 9780199236534
- eISBN:
- 9780191917462
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199236534.003.0008
- Subject:
- Earth Sciences and Geography, Atmospheric Sciences
During the last two missions performed by the ER-2 in the Arctic lower stratosphere, POLARIS in the summer of 1997 and SOLVE during the winter of 1999–2000, an unexpected correlation emerged when the data were subjected to analysis by generalized scale invariance. It was between the intermittency of temperature, a number which can be determined for each segment of analysable flight from the temperature measurements, and the average over the flight segment of the photodissociation rate of ozone, which was calculable as a time series along the flight segment by taking the product of the 1 Hz measurements of the local ozone concentration and the 1 Hz measurements of the ozone photodissociation coefficient. In searching for a physical explanation of this correlation, it was realized that the common link between the quantities was that ozone photodissociation produces photofragments of atomic and molecular oxygen that recoil very fast, while temperature itself is the integral of the translational energy of all air molecules. The next step therefore was to ask if the intermittency of temperature was correlated with the average of the temperature itself over the flight segment: it was. One might think that because ozone is present at about 20 km altitude in mixing ratios of about 2–3 × 10⁻⁶, the rapid quenching of the translational energies of the recoiling photofragments by molecular nitrogen and molecular oxygen would prevent any possible effects from showing up in the bulk, observed temperature. However, during the POLARIS mission, it was possible to fly the ER-2 near the terminator, the boundary between day and night, because at Arctic latitudes the planet was rotating slowly enough that it could fly legs in the same, stagnant air mass in both sunlight and darkness. These flights showed that the heating rate was significant, about 0.2 K per hour, and since heating in the stratosphere arises from the absorption of solar radiation by ozone, which leads to photodissociation, there is a prima facie case for considering non-local thermodynamic equilibrium effects from the recoiling fast photofragments. Two arguments may be deployed at this point, both from the theoretical literature; there are as yet no experiments on the translational speed distributions of atmospheric molecules.
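The rate calculation described above reduces to an elementwise product of two 1 Hz time series, followed by a flight-segment average. The sketch below uses synthetic arrays in place of the actual ER-2 measurements; the array lengths, units, and distributions are stand-ins chosen only to make the computation concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3600  # one hour of 1 Hz samples along a flight segment

# Synthetic stand-ins for the two measured 1 Hz series:
o3_concentration = rng.lognormal(mean=0.0, sigma=0.1, size=n)  # [O3], arbitrary units
j_o3 = rng.lognormal(mean=-2.0, sigma=0.2, size=n)             # photodissociation coefficient

# Photodissociation rate time series: pointwise product of the two
# 1 Hz series, then averaged over the flight segment. The segment
# average is the quantity correlated with temperature intermittency.
photodissociation_rate = j_o3 * o3_concentration
segment_mean_rate = photodissociation_rate.mean()
print(f"segment-averaged O3 photodissociation rate: {segment_mean_rate:.4f}")
```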