Michael Oppenheimer, Naomi Oreskes, Dale Jamieson, Keynyn Brysse, Jessica O’Reilly, Matthew Shindell, and Milena Wazeck
- Published in print:
- 2019
- Published Online:
- September 2019
- ISBN:
- 9780226601960
- eISBN:
- 9780226602158
- Item type:
- book
- Publisher:
- University of Chicago Press
- DOI:
- 10.7208/chicago/9780226602158.001.0001
- Subject:
- Environmental Science, Environmental Studies
Societies have long turned to experts for advice on controversial matters, but in the past, the arrangements to solicit expert advice were largely ad hoc. In recent years we have witnessed the development of an institutionalized system in which scientists offer knowledge in exchange for influence on the policy process, creating, in effect, a permanent assessment economy. We examine this process of expert assessment through detailed analyses of three groups of large, formal scientific assessments: the U.S. National Acid Precipitation Assessment Program, international assessments of ozone depletion, and assessments examining the potential disintegration of the West Antarctic Ice Sheet. We show that assessments not only summarize existing knowledge, but also can create new knowledge and set research agendas. Assessments can also impede the development of knowledge, particularly if scientists focus unduly on uncertainty or on achieving consensus. The desire to achieve consensus can also weaken assessment outcomes by leading scientists to converge on least common denominator results. Assessments often try to stay on the science side of a poorly defined and intermittently enforced boundary between science and policy because of a concern with objectivity and efficacy. Assessments often try to neutralize bias by being inclusive in terms of nationality, gender, and prior intellectual commitments—adopting what we call a “balance of bias” strategy. We conclude that the assessment process is one of expert discernment, but nevertheless surprisingly sensitive to the institutional arrangements that establish it.
Michael Oppenheimer, Naomi Oreskes, Dale Jamieson, Keynyn Brysse, Jessica O’Reilly, Matthew Shindell, and Milena Wazeck
- Published in print:
- 2019
- Published Online:
- September 2019
- ISBN:
- 9780226601960
- eISBN:
- 9780226602158
- Item type:
- chapter
- Publisher:
- University of Chicago Press
- DOI:
- 10.7208/chicago/9780226602158.003.0001
- Subject:
- Environmental Science, Environmental Studies
Consensus reports emerged in the mid-twentieth century as a means for scientists to give advice to governments. For the scientists involved, the goal of consensus reflects a belief in the power of univocality: that a single, consistent message would be more likely to be influential than alternatives, such as expressing majority and minority views. Scientists in the mid-twentieth century perhaps intuitively perceived what social science research has since demonstrated: that expert disagreement, or even the appearance of it, can undermine public confidence in experts and the science they are trying to communicate. However, the concept of consensus as displayed in the three cases studied here is not easily reducible to simple univocality; consensus appears to have taken multiple forms in various institutional contexts. There is no singular established definition of consensus at work within assessments, nor a universally accepted set of rules by which it should be achieved. Modern assessments are distinguished by their large scale and institutionalization; consensus is often a key element, but assessments also introduce scientists to policy concerns, help to guide research in policy-relevant directions, and provide scientists and policymakers with an area of overlapping concern.
Carlo Martini and Jan Sprenger
- Published in print:
- 2017
- Published Online:
- December 2017
- ISBN:
- 9780190680534
- eISBN:
- 9780190680565
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780190680534.003.0009
- Subject:
- Philosophy, Philosophy of Science, Metaphysics/Epistemology
Group judgments are often influenced by their members’ individual expertise. It is less clear, though, how individual expertise should affect the group judgment. This chapter surveys a wide range of models of opinion aggregation and group judgment: models where all group members have the same impact on the group judgment, models that take into account differences in individual accuracy, and models where group members revise their beliefs as a function of their mutual respect. The scope of these models covers the aggregation of propositional attitudes, probability functions, and numerical estimates. By comparing these different kinds of models and contrasting them with findings in psychology, management science, and the expert judgment literature, the chapter provides a better understanding of the role of expertise in group agency, both from a theoretical and from an empirical perspective.
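The three families of models surveyed here can be sketched in a few lines of code. The following is a minimal illustration, not the chapter's formalism: equal-weight pooling, accuracy-weighted pooling (the accuracy scores are assumed), and respect-based revision in the style of the DeGroot and Lehrer–Wagner models, where each member repeatedly averages the group's current opinions. All numbers are hypothetical.

```python
# Sketch of three opinion-aggregation schemes; all inputs are invented.
import numpy as np

probs = np.array([0.9, 0.6, 0.7])        # individual probability judgments

# 1. Equal weights: every member has the same impact on the group judgment.
equal_pool = probs.mean()

# 2. Accuracy weights: impact proportional to an (assumed) track record.
accuracy = np.array([0.8, 0.5, 0.65])
weights = accuracy / accuracy.sum()
weighted_pool = weights @ probs

# 3. Mutual respect: members repeatedly revise toward a weighted average
#    of everyone's current opinion; row i of W says how much member i
#    respects each member (rows sum to 1).
W = np.array([[0.60, 0.20, 0.20],
              [0.30, 0.50, 0.20],
              [0.25, 0.25, 0.50]])
beliefs = probs.copy()
for _ in range(50):                      # iterate toward (near) consensus
    beliefs = W @ beliefs

print(equal_pool, weighted_pool, beliefs)
```

Under mild conditions on the respect matrix (row-stochastic and suitably connected), the iterated averaging drives all beliefs to a single value, which is one formal sense in which mutual respect yields a group judgment.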
Marcel Boumans
- Published in print:
- 2015
- Published Online:
- May 2015
- ISBN:
- 9780199388288
- eISBN:
- 9780199388318
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199388288.001.0001
- Subject:
- Philosophy, Philosophy of Science
Measurement is the assignment of numbers to objects or events according to a rule. The rule should be such that the numbers provide reliable information about the objects or events. But the rules applicable in the field are different from the rules used in the laboratory. Methodologies appropriate for field measurement have to include instructions on how to replace control of the measurand and environment with control of the representing model, and how to deal with unscientific observations. Investigations of several measurement practices in different social field sciences show that for such methodologies expert judgment is indispensable. The statistical model can replace the laboratory to a certain extent, but not completely. The knowledge gap between an empirical model, however accurate it is, and the complex social field phenomenon has to be bridged by the intuitions of a field expert. But expert judgments are subjective and personal, and so tend to disagree and are not equally good. To make measurement outside the laboratory as objective as possible, one needs to find a consensus of the multiple expert judgments in a way that accounts for the performance of the individual experts. But to account for the quality of these performances in social science, instead of evaluating and comparing individual scientists, one has to evaluate and compare the expertise of institutions, each of which employs a team of experts and is the proprietor of the empirical model.
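One common way to make a consensus "account for the performance of the individual experts" is to weight each expert by how well they estimate quantities whose true values are known. The sketch below illustrates only that idea; it is not the book's procedure, and the seed quantities, estimates, and inverse-error score are all made up.

```python
# Performance-weighted consensus sketch; all data are placeholders.
import numpy as np

seed_truth = np.array([10.0, 4.0, 25.0])           # known "seed" values
seed_estimates = np.array([[11.0, 3.8, 27.0],      # one row per expert
                           [14.0, 5.5, 19.0],
                           [10.5, 4.1, 24.0]])
target_estimates = np.array([100.0, 130.0, 95.0])  # judgments on the measurand

# Score each expert by inverse mean relative error on the seeds
# (a crude stand-in for more refined calibration scoring).
rel_err = np.abs(seed_estimates - seed_truth) / seed_truth
scores = 1.0 / rel_err.mean(axis=1)
weights = scores / scores.sum()

consensus = weights @ target_estimates
print(weights.round(3), consensus.round(1))
```

More refined schemes score calibration and informativeness separately, but the structure is the same: performance on knowable quantities sets the weight an expert's judgment receives on the unknown one.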
Marcel Boumans
- Published in print:
- 2015
- Published Online:
- May 2015
- ISBN:
- 9780199388288
- eISBN:
- 9780199388318
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199388288.003.0007
- Subject:
- Philosophy, Philosophy of Science
The chapter provides a summary of the previous chapters’ conclusions and a survey of the book as a whole. Measurement is the assignment of numbers to objects or events according to a rule. The rule should be such that the numbers provide reliable information about the objects or events. But the rules applicable in the field are different from the rules used in the laboratory. Measurement practices in different social field sciences show that for measurement expert judgment is indispensable. But expert judgments tend to disagree and are not equally good. To make measurement outside the laboratory as objective as possible, one needs to find a consensus of the multiple expert judgments in a way that accounts for the performance of the individual experts. But to account for the quality of these performances in social science, instead of evaluating individual scientists, one has to evaluate the expertise of institutions.
András Körösényi, Péter Ondré, and András Hajdú
- Published in print:
- 2017
- Published Online:
- June 2017
- ISBN:
- 9780198783848
- eISBN:
- 9780191826498
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198783848.003.0005
- Subject:
- Political Science, Comparative Politics
The central puzzle of this chapter is the meteoric rise and abrupt fall in the popularity of Ferenc Gyurcsány, the Hungarian prime minister between 2004 and 2009. The chapter applies the Leadership Capital Index (LCI) to explain this riddle by analyzing his prime-ministerial career. The chapter also aims to contribute to the methodological refinement of the LCI. First, it introduces a milestone approach, which sets the data for six crucial moments in Gyurcsány’s political career to make the LCI a dynamic tool for the analysis. Second, in order to improve the reliability of the method and exclude researcher bias, it replaces researcher judgment with expert judgment in the cases of communicative performance and management skills, and with the fulfillment rate of the legislative program in the case of parliamentary effectiveness. The result of the research diverges from our initial expectations, since the aggregate value of the LCI decreased only moderately.
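The milestone approach amounts to scoring the index components at a handful of dated career moments and tracking the aggregate over time. A schematic sketch follows; the milestone labels, the five components shown, and every score are invented for illustration and are not the chapter's data (the actual LCI uses more indicators).

```python
# Schematic milestone-based index; labels and scores are illustrative only.
import numpy as np

milestones = ["2004 takeover", "2006 election", "2006 speech leak",
              "2008 referendum", "2009 reform package", "2009 resignation"]
# Rows: milestones; columns: component scores on a common 1-5 scale,
# e.g. polling standing, communicative performance (expert-rated),
# management skills (expert-rated), parliamentary effectiveness
# (legislative fulfillment rate), and party support.
scores = np.array([[4.5, 4.0, 4.0, 3.5, 4.0],
                   [4.0, 4.5, 3.5, 3.5, 4.5],
                   [2.5, 3.5, 3.5, 3.0, 3.5],
                   [2.0, 3.0, 3.0, 3.0, 3.0],
                   [2.0, 2.5, 3.0, 3.0, 2.5],
                   [1.5, 2.5, 3.0, 2.5, 2.0]])

lci = scores.mean(axis=1)                 # unweighted aggregate per milestone
for m, v in zip(milestones, lci):
    print(f"{m:20s} index = {v:.2f}")
```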
P.J. Lee
Jo Anne DeGraffenreid (ed.)
- Published in print:
- 2008
- Published Online:
- November 2020
- ISBN:
- 9780195331905
- eISBN:
- 9780197562550
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195331905.003.0007
- Subject:
- Earth Sciences and Geography, Geophysics: Earth Sciences
Petroleum resource evaluations have been performed by geologists, geophysicists, geochemists, engineers, and statisticians for many decades in an attempt to estimate resource potential in a given region. Because of differences in the geological and statistical methods used for assessment, and the amount and type of data available, resource evaluations often vary. Accounts of various methods have been compiled by Haun (1975), Grenon (1979), Masters (1985), Rice (1986), and Mast et al. (1989). In addition, Lee and Gill (1999) used the Michigan reef play data to evaluate the merits of the log-geometric method of the U.S. Geological Survey (USGS); the PETRIMES method developed by the Geological Survey of Canada (GSC); the Arps and Roberts method; Bickel, Nair, and Wang’s nonparametric finite population method; Kaufman’s anchored method; and the geo-anchored method of Chen and Sinding-Larsen. Information required for petroleum resource evaluation includes all available reservoir data and data derived from the drilling of exploratory and development wells. Other essential geological information comes from regional geological, geophysical, and geochemical studies, as well as from work carried out in analogous basins. Any comprehensive resource evaluation procedure must combine raw data with information acquired from regional analysis and comparative studies. The Hydrocarbon Assessment System Processor (HASP) has been used to blend available exploration data with previously gathered information (Energy, Mines and Resources Canada, 1977; Roy, 1979). HASP expresses combinations of exploration data and expert judgment as probability distributions for specific population attributes (such as pool area, net pay, porosity). Since this procedure was first implemented, demands on evaluation capability have steadily increased as evaluation results were increasingly applied to economic analyses. Traditional methods could no longer meet the new demands. A probabilistic formulation for HASP became necessary and was established by Lee and Wang (1983b). This formulation led to the development of the Petroleum Exploration and Resource Evaluation System, PETRIMES (Lee, 1993a, c, d; Lee and Tzeng, 1993; Lee and Wang, 1983a, b, 1984, 1985, 1986, 1987, 1990). Since then, new capabilities and features have been added to the evaluation system (Lee, 1997, 1998). A Windows version was also created (Lee et al., 1999).
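The step HASP performs, expressing exploration data and expert judgment as probability distributions for attributes such as pool area, net pay, and porosity, is naturally illustrated by Monte Carlo propagation. The sketch below is a generic illustration under assumed lognormal and normal distributions with placeholder parameters; it is not HASP or PETRIMES code.

```python
# Generic Monte Carlo propagation of attribute distributions to a pool
# pore-volume estimate; every distribution parameter is a placeholder.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Attribute distributions encoding data plus expert judgment (assumed):
area_m2   = rng.lognormal(mean=np.log(2.0e6), sigma=0.6, size=n)  # pool area
net_pay_m = rng.lognormal(mean=np.log(10.0),  sigma=0.4, size=n)  # net pay
porosity  = rng.normal(0.15, 0.03, size=n).clip(0.01, 0.35)       # fraction

volume_m3 = area_m2 * net_pay_m * porosity   # pore volume per trial

# Report exceedance-style percentiles, as probabilistic assessments do.
p90, p50, p10 = np.percentile(volume_m3, [10, 50, 90])
print(f"P90={p90:.3e}  P50={p50:.3e}  P10={p10:.3e} m^3")
```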