Erika Balsom
- Published in print:
- 2017
- Published Online:
- January 2019
- ISBN:
- 9780231176934
- eISBN:
- 9780231543125
- Item type:
- book
- Publisher:
- Columbia University Press
- DOI:
- 10.7312/columbia/9780231176934.001.0001
- Subject:
- Film, Television and Radio, Film
Images have never been as freely circulated as they are today. They have also never been so tightly controlled. As with the birth of photography, digital reproduction has created new possibilities for the duplication and consumption of images, offering greater dissemination and access. But digital reproduction has also stoked new anxieties concerning authenticity and ownership. From this contemporary vantage point, After Uniqueness traces the ambivalence of reproducibility through the intersecting histories of experimental cinema and the moving image in art, examining how artists, filmmakers, and theorists have found in the copy a utopian promise or a dangerous inauthenticity—or both at once. From the sale of film in limited editions on the art market to the downloading of bootlegs, from the singularity of live cinema to video art broadcast on television, Erika Balsom investigates how the reproducibility of the moving image has been embraced, rejected, and negotiated by major figures including Stan Brakhage, Leo Castelli, and Gregory Markopoulos. Through a comparative analysis of selected distribution models and key case studies, she demonstrates how the question of image circulation is central to the history of film and video art. After Uniqueness shows that distribution channels are more than neutral pathways; they determine how we encounter, interpret, and write the history of the moving image as an art form.
Bradley E. Alger
- Published in print:
- 2019
- Published Online:
- February 2021
- ISBN:
- 9780190881481
- eISBN:
- 9780190093761
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190881481.003.0007
- Subject:
- Neuroscience, Techniques
This chapter reviews and evaluates reports that scientists often cannot repeat, or “reproduce,” published work. It begins by defining what “reproducibility” means and how reproducibility applies to various kinds of science. The focus then shifts to the Reproducibility Project: Psychology, which was a systematic effort to repeat published findings in psychology, and which gave rise to many of the present concerns about reproducibility. The chapter critically examines the Reproducibility Project and points out how the nature of science and the complexity of nature can stymie the best attempts at reproducibility. The chapter also reviews the statistical criticisms of science that John Ioannidis and Katherine Button and their colleagues have raised. The hypothesis is a central issue because it is inconsistently defined across various branches of science. The statisticians’ strongest attacks are directed against work that differs from most laboratory experimental science. A weak point in the reasoning behind the Reproducibility Project and the statistical arguments is the assumption that a multi-pronged scientific investigation can be legitimately criticized by close examination of one of its components. Experimental science relies on multiple tests and multiple hypotheses to arrive at its conclusions. Reproducibility is a valid concern for science; it is not a “crisis.”
Andrew Benjamin
- Published in print:
- 2013
- Published Online:
- September 2014
- ISBN:
- 9780748634347
- eISBN:
- 9780748695287
- Item type:
- book
- Publisher:
- Edinburgh University Press
- DOI:
- 10.3366/edinburgh/9780748634347.001.0001
- Subject:
- Philosophy, General
This book provides a highly original approach to the writings of the twentieth-century German philosopher Walter Benjamin by one of his most distinguished readers. It develops the idea of ‘working with’ Benjamin, seeking both to read his corpus and to put it to work – to show how a reading of Benjamin can open up issues that may not themselves be immediately at stake in his texts. The defining elements in Benjamin’s writings that the author isolates – history, experience, translation, technical reproducibility, and politics – are put to work; that is, their utility is established in engaging the works of others. The question is how utility is understood. As the author argues, utility involves demonstrating the different ways in which Benjamin is a central thinker within the project of understanding the nature of modernity. This is best achieved by noting connections and points of differentiation between his work and the writings of Adorno and Heidegger. However, the more demanding project is that ‘working with’ Benjamin necessitates deploying the implicit assumptions within his writings as well as demanding of his formulations more than is provided by their initial presentation. What is at stake is not the application of Benjamin’s thought. Rather what counts is its use. The book engages with the themes central to Benjamin’s work while at the same time situating those themes within current academic and cultural debates.
R. Barker Bausell
- Published in print:
- 2021
- Published Online:
- February 2021
- ISBN:
- 9780197536537
- eISBN:
- 9780197536568
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780197536537.001.0001
- Subject:
- Psychology, Social Psychology
This book tells the story of how a cadre of dedicated, iconoclastic scientists raised awareness of a long-recognized preference for publishing positive, eye-catching, but irreproducible results to the status of a genuine scientific crisis. Most famously encapsulated in 2005 by John Ioannidis’s iconic title, “Why Most Published Research Findings Are False,” the crisis came into full bloom around 2011–2012, when a veritable flood of supporting empirical and methodological work began appearing in the scientific literature, detailing both its extent and how it could be ameliorated. Perhaps most important were several mass replications of large sets of published findings: 100 psychology experiments replicated by the Open Science Collaboration, 53 preclinical cancer experiments that a large pharmaceutical company considered sufficiently promising to pursue if the original results were reproducible, and 67 similarly promising studies that an even larger pharmaceutical company decided to replicate before initiating the expensive and time-consuming development process. Shockingly, fewer than 50% of these 220 study results could be replicated, providing unwelcome support for the projections of Ioannidis (and of others, both earlier and later) that more than half of published scientific results are false and cannot be reproduced by other scientists. Fortunately, these demonstrations and projects were accompanied by a plethora of practical, procedural recommendations quite capable of greatly reducing the prevalence of future irreproducible results.
The primary purpose of this book is therefore to use these impressive labors of hundreds of methodologically oriented scientists to guide practicing and aspiring scientists in (a) changing the way science has historically been conducted and reported, in order to avoid producing false-positive, irreproducible results in their own work, and (b), ultimately, changing those institutional practices (primarily, but not exclusively, the traditional journal publishing process and the academic reward system) that have unwittingly contributed to the present crisis. What is actually needed is nothing less than a change in the scientific culture itself, to one that prioritizes conducting research correctly in order to get things right, rather than simply to get published. Hopefully this book can make a small contribution to that end.
Ayelet Shavit
- Published in print:
- 2017
- Published Online:
- September 2017
- ISBN:
- 9780300209549
- eISBN:
- 9780300228038
- Item type:
- chapter
- Publisher:
- Yale University Press
- DOI:
- 10.12987/yale/9780300209549.003.0018
- Subject:
- History, History of Science, Technology, and Medicine
This epilogue provides a practical flowchart for interpreting the best practices for replication. Taking the specific actions shown in the flowchart will help researchers to bridge, albeit not completely and permanently close, the gaps inherent in replication. At each branch point, making the “wrong” decision—for example, ignoring (that is, not recording) or conflating (that is, not recording separately) the relevant details—closes the door to replication. Making the “right” decision, however, at best only clarifies and quantifies how much further away we remain from exact replication. Either way, any attempt to replicate a project perfectly carries an implicit hubris and is fated to fail.
ANGELO GAVEZZOTTI
- Published in print:
- 2006
- Published Online:
- January 2010
- ISBN:
- 9780198570806
- eISBN:
- 9780191718779
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198570806.003.0010
- Subject:
- Physics, Atomic, Laser, and Optical Physics
Chemical applications of force field simulations, of quantum mechanics, as well as X-ray data processing, lattice dynamics, and molecular dynamics simulations are all made possible by fast and reliable numerical computation. Therefore, electronic computers are a theoretical chemist's vital tool, and very few — if any — quantitative results can be obtained without them. Computers handle a very large and very diversified range of tasks on a surprisingly small fundamental basis: the electric representation of only two numbers, zero and one, called binary digits (or bits). Computers use bits to represent numbers in binary notation. In a very successful metaphoric style, all items of the computer world that have to do with programs are called software, while all the rest (electronic parts, wires, input-output devices) are called hardware. This chapter provides an overview of computers, operating systems, computer programming, bugs, program checking and validation, and reproducibility.
Walter Willett
- Published in print:
- 1998
- Published Online:
- September 2009
- ISBN:
- 9780195122978
- eISBN:
- 9780199864249
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195122978.003.06
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter deals with the approaches used to evaluate dietary questionnaires, the design of validation studies, and the analysis and presentation of data from validation studies. The interpretation of any study of diet and disease can be substantially enhanced by quantitative information on the validity of the method used to measure dietary intake. Unless the method employed has been previously studied with respect to validity in a similar population, any major dietary study should include a validation component.
Walter Willett
- Published in print:
- 1998
- Published Online:
- September 2009
- ISBN:
- 9780195122978
- eISBN:
- 9780199864249
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195122978.003.10
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter addresses the uses of measures of body size and composition, with special emphasis on epidemiologic applications. It begins with an overview of weight and height, including their relationships to nutritional status, their use in epidemiologic studies, and the reproducibility and validity of these measurements. The concept of major body compartments is discussed, and methods of measuring them are considered. The main part of the chapter addresses the assessment of relative body composition, specifically fatness, using densitometry, combinations of weight and height, skinfold thicknesses, and the newer methods of bioelectric resistance and dual-energy X-ray absorptiometry. Finally, the evaluation of body fat distribution is reviewed, and the use of such measurements in epidemiologic analyses is examined.
Barrie M. Margetts and Michael Nelson
- Published in print:
- 1997
- Published Online:
- September 2009
- ISBN:
- 9780192627391
- eISBN:
- 9780191723704
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780192627391.003.0001
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter sets out the basic principles of nutritional epidemiology as they apply to the study of relationships between diet, nutrition, and health outcomes. It describes the main types of epidemiological studies and their relationships, strengths, and weaknesses; the nature of epidemiological measurements; the relationships between diet and health at international, national, household, and individual level; concepts of validity, reproducibility, calibration, bias, and confounding; and distinguishes between bias, confounding, effect modification, and interaction. Finally, it covers interpretation of findings, systematic reviews and meta-analysis.
Tim J. Cole
- Published in print:
- 1997
- Published Online:
- September 2009
- ISBN:
- 9780192627391
- eISBN:
- 9780191723704
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780192627391.003.0003
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
Precise definitions of study populations and samples are key to the interpretation and generalizability of findings. This chapter describes types of sampling, how to deal with non-response, and validity of measures (including problems relating to bias and variance). Details are then given of how sample size relates to the testing of the null hypothesis, and the meaning and definition of significance level and power. This is followed by detailed techniques for the determination of sample size for different types of epidemiological studies (ecological, cross-sectional, case-control studies, cohort studies, and experimental studies). It defines sample size and power in relation to measures of difference between matched and unmatched samples, correlation, odds ratio, and relative risk.
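The chapter's sample-size material can be illustrated with a minimal sketch. The function below is not taken from the chapter itself (which should be consulted for the full range of study designs); it implements the standard normal-approximation formula for the per-group sample size needed to detect a difference between two means at a given significance level and power.

```python
import math
from statistics import NormalDist

def sample_size_two_means(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided, two-sample comparison of means,
    via the normal approximation:
        n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sigma / delta)^2
    where delta is the smallest difference worth detecting and sigma the
    common standard deviation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Detecting a 5-unit difference with SD 10 at alpha = 0.05 and 80% power:
print(sample_size_two_means(5, 10))  # 63 per group
```

Raising the required power (or shrinking the detectable difference) drives the sample size up sharply, which is the trade-off the chapter's detailed techniques are designed to quantify for each study type.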
David Clayton and Caroline Gill
- Published in print:
- 1997
- Published Online:
- September 2009
- ISBN:
- 9780192627391
- eISBN:
- 9780191723704
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780192627391.003.0004
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
The key problem in epidemiological studies is the identification of relevant exposures and outcomes, and the relationship (and error modelling) between observed and relevant measures, and between relevant measures and relevant outcomes. Measurement errors and bias undermine the ability to detect diet-disease relationships, and the problem is particularly acute in nutritional epidemiological studies because of the difficulties of measuring relevant dietary exposures accurately, and the particular problem of differential misclassification (i.e., the differences in bias in the measurements between individuals, or between one subset of a sample and another). Correcting for measurement error is a controversial topic, but the chapter provides illustrations of the extent of attenuation or misrepresentation of diet-disease associations.
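The attenuation of diet-disease associations mentioned above can be illustrated with a small simulation. This is a generic sketch of classical, non-differential measurement error, not the chapter's own error models: random noise added to the exposure measurement pulls the estimated regression slope toward zero by the factor lambda = var_true / (var_true + var_error).

```python
import random

def attenuation_factor(var_true, var_error):
    # Classical (non-differential) error in the exposure attenuates a
    # regression slope by lambda = var_true / (var_true + var_error).
    return var_true / (var_true + var_error)

def slope(u, v):
    # Ordinary least-squares slope of v on u.
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    var = sum((a - mu) ** 2 for a in u)
    return cov / var

rng = random.Random(42)
n = 50_000
x = [rng.gauss(0, 1) for _ in range(n)]          # true exposure, var 1
w = [xi + rng.gauss(0, 1) for xi in x]           # measured exposure, error var 1
y = [2 * xi + rng.gauss(0, 0.5) for xi in x]     # outcome; true slope = 2

# Regressing on the noisy measurement shrinks the slope by
# lambda = 1 / (1 + 1) = 0.5, so the estimate lands near 2 * 0.5 = 1.0.
print(round(slope(x, y), 2), round(slope(w, y), 2))
```

The simulated slope on the error-free exposure recovers the true value, while the slope on the noisy measurement is roughly halved; this is the attenuation that correction methods attempt to undo, and differential misclassification (where the error itself varies between subgroups) is harder still, since the bias then need not be a simple shrinkage toward zero.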
Michael Nelson and Sheila A. Bingham
- Published in print:
- 1997
- Published Online:
- September 2009
- ISBN:
- 9780192627391
- eISBN:
- 9780191723704
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780192627391.003.0006
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter takes a comprehensive look at the techniques, strengths, and weaknesses of approaches to estimating food consumption and nutrient intake at the household and individual level. At the household level, it provides a detailed description of the methods available and the ways in which the data can be interpreted for epidemiological purposes, including techniques for estimating the distribution of food consumption and nutrient intake in individuals when data are collected at the household level. Similarly, it describes in detail the techniques for estimating food consumption and nutrient intake at the individual level, and focuses strongly on the repeatability and validity of measures, including an analysis of the sources of error such as portion size estimation, day-to-day variations in diet, estimates of the frequency of consumption, and the determination of current and past (relevant) diet. It addresses issues of under-reporting and energy adjustment, and concludes with an analysis of the effects of measurement error on validity.
Chris J. Bates, David I. Thurnham, Sheila A. Bingham, Barrie M. Margetts, and Michael Nelson
- Published in print:
- 1997
- Published Online:
- September 2009
- ISBN:
- 9780192627391
- eISBN:
- 9780191723704
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780192627391.003.0007
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter takes a look at the utility of biochemical measurements in different body tissues for estimating dietary exposures (as opposed to the determination of nutritional status). Validity (accuracy) and reproducibility (precision) are defined in relation to biomarkers in light of natural variations in physiological levels within and between individuals. Definitions are given of measures of nutrients in blood, urine, and other tissues and compartments (e.g., hair, saliva, adipose tissue, fingernails, toenails), and the feasibility of predicting intake from each measure. The chapter then describes relevant measures, nutrient by nutrient, for vitamins, minerals, lipids, protein, and energy, and the problems relating to dietary fibre.
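A standard example of predicting intake from a biomarker is 24-hour urinary nitrogen as a recovery marker for protein. The sketch below uses the conventional literature values (6.25 g protein per g nitrogen, and roughly 81% urinary recovery of dietary nitrogen); these figures are common assumptions in the field, not quoted from this chapter.

```python
# Sketch: predicting protein intake from complete 24-hour urinary
# nitrogen, a widely used recovery biomarker. Conversion factors are
# conventional literature values, assumed here for illustration.

N_TO_PROTEIN = 6.25      # g protein per g nitrogen
URINARY_RECOVERY = 0.81  # approx. fraction of dietary N excreted in urine

def protein_intake_from_urinary_n(urinary_n_g_per_day):
    """Estimate habitual protein intake (g/day) from a complete
    24-h urinary nitrogen collection (g/day)."""
    dietary_n = urinary_n_g_per_day / URINARY_RECOVERY
    return dietary_n * N_TO_PROTEIN

# e.g. 12 g urinary N/day -> (12 / 0.81) * 6.25, about 93 g protein/day
```

In practice the completeness of the urine collection must itself be verified (for example with a marker such as PABA) before such a prediction is trustworthy.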
Michael Nelson
- Published in print:
- 1997
- Published Online:
- September 2009
- ISBN:
- 9780192627391
- eISBN:
- 9780191723704
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780192627391.003.0008
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
Validation considers the context within which measures are being made (national, household, individual) and helps to describe the extent to which observed values differ from the truth. This chapter outlines the general principles that should be applied in the design of validation studies of dietary measurements, and considers some of the statistical techniques that have been developed to overcome the problems which arise from the absence of a reference measure of ‘true’ dietary intakes. The concepts of validity and reproducibility are reviewed. The context of validation is considered (current or past intake; food consumption or nutrient intake; absolute or relative intakes; group versus individual intakes). Validation techniques, including the identification of relevant reference measures, validation procedures, and factors affecting the design of validation studies are considered. Statistical techniques for interpreting findings from validation studies are described in detail.
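One of the statistical techniques alluded to is correcting an observed validity correlation for attenuation caused by day-to-day variation in the reference measure. A minimal Python sketch of the classic de-attenuation formula follows; the function and variable names are illustrative assumptions, not the chapter's notation.

```python
import math

# Sketch: de-attenuating an observed correlation between a test method
# (e.g. a food-frequency questionnaire) and a reference measure based on
# a limited number of recording days per person. Within-person (day-to-
# day) variance in the reference weakens the observed correlation; the
# classic correction scales it back up.

def deattenuated_r(r_observed, s2_within, s2_between, n_days):
    """Estimated correlation between the test method and *true* intake,
    given r_observed against a reference of n_days records per person."""
    return r_observed * math.sqrt(1 + (s2_within / s2_between) / n_days)
```

For instance, with a within-to-between variance ratio of 2 and only 2 reference days per person, an observed r of 0.5 corresponds to an estimated true-intake correlation of about 0.71. The correction assumes errors in the two methods are uncorrelated, which is exactly the assumption that validation-study design tries to justify.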
Berthold Hoeckner
- Published in print:
- 2019
- Published Online:
- May 2020
- ISBN:
- 9780226649610
- eISBN:
- 9780226649894
- Item type:
- chapter
- Publisher:
- University of Chicago Press
- DOI:
- 10.7208/chicago/9780226649894.003.0001
- Subject:
- Music, Popular
The Introduction establishes how studying the nexus of film, music, and memory participates in a shift from traditional ontological and methodological approaches in film studies toward a “field paradigm” that is concerned with “novel interpretation over systematic theory” as well as a “topical emphasis within a particular critical field” (James Buhler after Francesco Casetti). As cinema became aware of its impact on cultural memory, music took on a vital role in the storage, retrieval, and affective experience of personal and collective remembrance. This warrants an extension of Walter Benjamin’s notion of the optical unconscious to that of an optical-acoustic unconscious. If photography and film could reveal something that would otherwise not be visible due to the limits of our perceptual apparatus, the addition of music and sound enabled film to capture, store, and release aspects of reality previously inaccessible to our audiovisual sensorium. This is exemplified in a striking quotation of an iconic moment from Chris Marker’s La Jetée (1962) in Cameron Crowe’s We Bought a Zoo (2011), which shows how sound and music can transform photographic stills into live action, thereby animating memory images into moving pictures.
Berthold Hoeckner
- Published in print:
- 2019
- Published Online:
- May 2020
- ISBN:
- 9780226649610
- eISBN:
- 9780226649894
- Item type:
- chapter
- Publisher:
- University of Chicago Press
- DOI:
- 10.7208/chicago/9780226649894.003.0002
- Subject:
- Music, Popular
Chapter 1 demonstrates how vinyl recordings that served as a mnemonic tool for the storage of autobiographical memory have been used as a plot device to illustrate how film created new forms of audiovisual memory. Through the technological reproducibility of temporal objects, records became not only instrumental in the transition from live to canned music; they were also deployed in sound films to show how cinema attaches a permanent visual component to auditory recollection, thereby creating a kind of phono-photograph akin to the workings of Henri Bergson’s memory cone. The chapter’s case studies focus on films that exemplify a new mode of such medium self-consciousness. The Legend of 1900 (1998) dramatizes the momentous shift from episodic memories residing uniquely in the musician’s body to being stored externally in vinyl recordings produced for the masses after the turn of the century. Penny Serenade (1941) uses an album of phonographic records to narrate the story of a marriage, where the records become memory objects whose thingness may retain a connection to the contingency of the remembered event.
Jeffrey S. Racine
- Published in print:
- 2019
- Published Online:
- January 2019
- ISBN:
- 9780190900663
- eISBN:
- 9780190933647
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190900663.001.0001
- Subject:
- Economics and Finance, Econometrics
This book is designed to facilitate reproducibility in econometrics. It does so by using open-source software (R) and recently developed tools (R Markdown and bookdown) that allow the reader to engage in reproducible research. Illustrative examples are provided throughout, and a range of topics is covered. Assignments, exams, slides, and a solution manual are available for instructors.
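The book's tooling is R-based (R Markdown, bookdown), but the underlying principle is language-agnostic: results are regenerated from code with a fixed seed at build time, never pasted in by hand. The Python stand-in below is purely illustrative of that principle and does not reflect the book's own code.

```python
import random

# Sketch: the core idea behind reproducible-research tooling such as
# R Markdown and bookdown -- the prose and the computed numbers are
# produced together by the build, so reruns are byte-identical.

def build_report(seed=42, n=1000):
    rng = random.Random(seed)          # fixed seed => identical reruns
    sample = [rng.gauss(0, 1) for _ in range(n)]
    mean = sum(sample) / n
    # The sentence and the statistic it reports are generated together:
    return f"Sample mean over {n} draws: {mean:.4f}"

# Building the document twice yields exactly the same text:
assert build_report() == build_report()
```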
D. Brynn Hibbert
- Published in print:
- 2007
- Published Online:
- November 2020
- ISBN:
- 9780195162127
- eISBN:
- 9780197562093
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195162127.003.0009
- Subject:
- Chemistry, Analytical Chemistry
No matter how carefully a laboratory scrutinizes its performance with internal quality control procedures, testing against other laboratories increases confidence in a laboratory’s results, both within the laboratory itself and among all the laboratories involved in comparison testing. Although, without independent knowledge of the value of the measurand, it is possible that all the laboratories involved are producing erroneous results, it is also comforting to know that your laboratory is not too different from its peers. An interlaboratory study is a planned series of analyses of a common test material performed by a number of laboratories, with the goal of evaluating the relative performances of the laboratories, the appropriateness and accuracy of the method used, or the composition and identity of the material being tested. The exact details of the study depend on the nature of the test, but all studies follow a common pattern: an organizing laboratory creates a test material and distributes it to the participants in the study for analysis, and the results are communicated back to the organizing laboratory. The results are statistically analyzed and a report of the findings circulated. Interlaboratory studies are increasingly popular. Ongoing rounds of interlaboratory studies are conducted by most accreditation bodies; the Key Comparison program of the Consultative Committee for Amount of Substance (CCQM) is one such interlaboratory study (BIPM 2006). There is a great deal of literature on interlaboratory studies (Hibbert 2005; Horwitz 1995; Hund et al. 2000; Lawn et al. 1997; Maier et al. 1993; Thompson and Wood 1993), and an ISO/IEC guide for the conduct of proficiency testing studies is available (ISO/IEC 1997). There are three principal groups of studies: studies that test laboratories (proficiency tests), studies that test methods, and studies that test materials (table 5.1).
Laboratories that participate in method and material studies are chosen for their ability to analyze the particular material using the given method. It is not desirable to discover any lacunae in the participating laboratories, and outliers cause lots of problems. The aim of the study is to obtain information about the method or material, so confidence in the results is of the greatest importance.
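In proficiency tests (the first of the three groups), each laboratory is commonly graded with a z-score against the assigned value, as in ISO 13528-style schemes. A minimal sketch follows; the example values and the conventional |z| cutoffs of 2 and 3 are standard assumptions for illustration, not figures from this chapter.

```python
# Sketch: the z-score used to grade laboratories in a proficiency-
# testing round: z = (result - assigned value) / sigma_pt, where
# sigma_pt is the standard deviation for proficiency assessment.

def z_score(result, assigned_value, sigma_pt):
    return (result - assigned_value) / sigma_pt

def grade(z):
    """Conventional interpretation of a proficiency z-score."""
    if abs(z) <= 2:
        return "satisfactory"
    if abs(z) < 3:
        return "questionable"
    return "unsatisfactory"

# e.g. assigned value 5.00 mg/kg with sigma_pt = 0.20:
# a laboratory reporting 5.30 mg/kg gets z = 1.5, graded "satisfactory"
```

The choice of sigma_pt is the scheme organizer's, fixed in advance from fitness-for-purpose considerations rather than from the spread of the participants' results alone.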
Erika Balsom
- Published in print:
- 2017
- Published Online:
- January 2019
- ISBN:
- 9780231176934
- eISBN:
- 9780231543125
- Item type:
- chapter
- Publisher:
- Columbia University Press
- DOI:
- 10.7312/columbia/9780231176934.003.0002
- Subject:
- Film, Television and Radio, Film
Film theory has largely overlooked the fact that film and video are founded in an economy of the multiple. In this chapter, I outline a theory of the moving image as a reproducible medium that considers the way an image may be copied repeatedly so as to facilitate circulation across distribution networks. By moving into this domain, one confronts not the questions of indexicality, documentary, and realism that so often get asked in relation to the image’s status as a copy of the profilmic; rather, issues of authority, originality, and authenticity become paramount. This chapter unfolds what is at stake in approaching the moving image in this way and will examine how its reproducibility has been conceived of as both a utopian promise and the site of a dangerous inauthenticity since its emergence in the late nineteenth century.
Victoria H. F. Scott (ed.)
- Published in print:
- 2020
- Published Online:
- May 2020
- ISBN:
- 9781526117465
- eISBN:
- 9781526150486
- Item type:
- chapter
- Publisher:
- Manchester University Press
- DOI:
- 10.7765/9781526117472.00022
- Subject:
- Art, Art History
Postmodernism is usually framed as a Western movement, with theoretical and philosophical roots in Europe. Victoria H. F. Scott’s chapter links artistic postmodernism to the influence of Maoism in the West, specifically through the dissemination and absorption of the content and form of Maoist propaganda. Taking into consideration the broad significance of Mao for art and culture in the West in the second half of the twentieth century, the chapter comes to terms with the material effects of a global propaganda movement which, combined with the remains of a personality cult, currently transcends the traditional political categories of the Left and the Right.