Andrew J. Connolly, Jacob T. VanderPlas, and Alexander Gray
- Published in print: 2014
- Published Online: October 2017
- ISBN: 9780691151687
- eISBN: 9781400848911
- Item type: chapter
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691151687.003.0009
- Subject: Physics, Particle Physics / Astrophysics / Cosmology
Chapter 6 described techniques for estimating joint probability distributions from multivariate data sets and for identifying the inherent clustering within the properties of sources. This approach can be viewed as the unsupervised classification of data. If, however, we have labels for some of these data points (e.g., an object is tall, short, red, or blue) we can utilize this information to develop a relationship between the label and the properties of a source. We refer to this as supervised classification, which is the focus of this chapter. The motivation for supervised classification comes from the long history of classification in astronomy. Possibly the best known of these classification schemes is that defined by Edwin Hubble for the morphological classification of galaxies based on their visual appearance. This chapter discusses generative classification, k-nearest-neighbor classifiers, discriminative classification, support vector machines, decision trees, and the evaluation of classifiers.
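As a hedged illustration of the supervised-classification setting the abstract describes (labels used to learn a label/feature relationship), here is a minimal sketch with a k-nearest-neighbor classifier in scikit-learn. The toy data, feature meanings, and choice of k are assumptions for illustration, not the chapter's own examples.

```python
# Minimal sketch of supervised classification with k-nearest neighbors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy "sources": two features (say, two colors) and a binary label.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5)  # k is a tuning choice
clf.fit(X_train, y_train)                  # learn the label/feature relationship
print("test accuracy:", clf.score(X_test, y_test))
```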
Lyn C. Thomas
- Published in print: 2009
- Published Online: May 2009
- ISBN: 9780199232130
- eISBN: 9780191715914
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199232130.003.0001
- Subject: Mathematics, Applied Mathematics, Mathematical Finance
This chapter outlines what is meant by a credit score, why it is an integral part of the decision process in lending to consumers, and how credit scoring systems are built. After describing the historical development of consumer credit and credit scoring, decision trees are used to model the credit granting process. In particular, return-on-capital-based models and their connection with the traditional expected profit model are introduced. The chapter defines what is meant by a credit score, why log odds scores have such useful properties, and how one can extend the definition of a credit score to time-dependent scores. It then goes through the development process of building a scorecard, discussing sample construction, reject inference, coarse classification, and variable selection. It concludes by looking at the different methodologies for building a scorecard, such as logistic regression, linear regression, classification trees, and linear programming.
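To make the log odds score idea concrete, the sketch below shows how a logistic-regression scorecard yields a score linear in the log odds of being a good risk. The data, the points-to-double-the-odds rescaling, and its constants are illustrative assumptions, not the chapter's own development.

```python
# Sketch: a logistic-regression scorecard produces a log-odds score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                 # applicant characteristics
p_good = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
y = (rng.random(500) < p_good).astype(int)    # 1 = good risk, 0 = bad risk

model = LogisticRegression().fit(X, y)

# The linear predictor *is* the log-odds score s(x) = ln(P(good)/P(bad)).
log_odds = model.decision_function(X)

# A conventional scorecard rescales it, e.g. "20 points doubles the odds";
# the base score and base odds below are made-up calibration constants.
pdo, base_score, base_odds = 20.0, 600.0, 50.0
score = base_score + pdo / np.log(2) * (log_odds - np.log(base_odds))
```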
Laura F. Martignon, Konstantinos V. Katsikopoulos, and Jan K. Woike
- Published in print: 2012
- Published Online: May 2012
- ISBN: 9780195315448
- eISBN: 9780199932429
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195315448.003.0106
- Subject: Psychology, Cognitive Psychology, Human-Technology Interaction
Naïve, fast, and frugal trees model simple classification strategies that ignore cue dependencies and process cues sequentially, one at a time. At every level of such a tree a classification is made for one of the considered cue values. This chapter demonstrates that naïve, fast, and frugal trees operate as lexicographic classifiers. On 30 data sets, the performance of such trees is compared with that of two commonly used classification methods: classification and regression trees (CART) and logistic regression. The naïve, fast, and frugal trees are surprisingly robust and their predictive accuracy is comparable to that of savvier competitors, especially when the training set is small. Given that such trees require less time and information and fewer calculations than more computationally complex methods, they represent an attractive option when classifications need to be made quickly and with limited resources.
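A minimal sketch of the kind of fast-and-frugal tree the chapter analyzes: cues are processed sequentially, and every level can exit immediately with a classification. The cue names, their order, and the exit directions are hypothetical.

```python
# Sketch of a fast-and-frugal tree: one cue per level, with an immediate
# exit possible at every level (hypothetical cues and exit structure).
def fast_frugal_tree(cues):
    """cues: dict of binary cue values for one case."""
    if cues["cue_a"]:          # level 1: a positive value exits with class 1
        return 1
    if not cues["cue_b"]:      # level 2: a negative value exits with class 0
        return 0
    # final level: the last cue classifies in both directions
    return 1 if cues["cue_c"] else 0

print(fast_frugal_tree({"cue_a": False, "cue_b": True, "cue_c": True}))  # -> 1
```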
Leslie R. Martin, Kelly B. Haskard-Zolnierek, and M. Robin DiMatteo
- Published in print: 2009
- Published Online: February 2010
- ISBN: 9780195380408
- eISBN: 9780199864454
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195380408.003.0005
- Subject: Psychology, Social Psychology
Few people conduct a truly thorough and thoughtful evaluation of the evidence before they make a health-related decision. This chapter describes and evaluates strategies for making decisions based on empirical evidence. It then overviews the elements that are needed in order to understand health and medical risk information (e.g., Bayesian methods, odds ratios, risk ratios, survival analyses, and hazard ratios) as that information is typically presented in medical journals, scientific articles, news reports, and advertisements. The relative power of aggregated data (through meta-analysis) is also discussed. Evidence supporting the crucial role of the patient in decision making is reviewed, and specific tools that can be used in decision making (e.g., decision trees, PREPARED™) are presented.
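For two of the risk measures the chapter covers, here is a short worked sketch of a risk ratio and an odds ratio computed from a 2x2 table; the counts are made up for illustration.

```python
# Sketch: risk ratio and odds ratio from a 2x2 table (made-up counts).
#                 disease   no disease
# exposed            a=30        b=70
# unexposed          c=10        d=90
a, b, c, d = 30, 70, 10, 90

risk_exposed = a / (a + b)                    # 30/100 = 0.30
risk_unexposed = c / (c + d)                  # 10/100 = 0.10
risk_ratio = risk_exposed / risk_unexposed    # 3.0

odds_ratio = (a / b) / (c / d)                # (30/70)/(10/90) ~= 3.86
print(risk_ratio, odds_ratio)
```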
Grahame R. Dowling
- Published in print: 2004
- Published Online: October 2011
- ISBN: 9780199269617
- eISBN: 9780191699429
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199269617.003.0013
- Subject: Business and Management, Marketing
This chapter focuses primarily on advertising agencies and market research firms, the two principal outside suppliers of professional services to most marketing managers. The issues that govern the working relationship between the organization and these two agents are similar to those for other service providers. To implement many of the organization's marketing programmes requires working with outside suppliers of services, such as consultants, distributors, advertising agencies, and market research firms. Being outside the organization enables them to look at the marketing issues with more detachment than most insiders. Good working relationships with service suppliers provide leverage for the marketing team's internal capabilities. However, to gain the most benefit from these professional service firms requires the development of a commercial arrangement that is based on sound economic foundations.
Brian Lukoff
- Published in print: 2011
- Published Online: September 2011
- ISBN: 9780195387476
- eISBN: 9780199914517
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195387476.003.0070
- Subject: Psychology, Social Psychology
This chapter examines methods that aim to reduce faking. These methods alter the delivery mechanism of the assessment by changing the way test questions are asked (verifiable biodata and situational judgment tests), the person they are asked to (third-party ratings), or the instructions that accompany the assessment itself (warning respondents not to fake). The chapter concludes by describing a new method that combines a real-time faking detection algorithm with warnings based on the results of the algorithm, which can potentially harness the power of warnings while avoiding some of their pitfalls.
Patrick L. Anderson
- Published in print: 2013
- Published Online: September 2013
- ISBN: 9780804758307
- eISBN: 9780804783224
- Item type: chapter
- Publisher: Stanford University Press
- DOI: 10.11126/stanford/9780804758307.003.0012
- Subject: Economics and Finance, Financial Economics
This chapter demonstrates the importance of management flexibility regarding the timing, scale, and type of investments, which is the basis for the study of “real options.” The chapter describes an opportunity and its contractual equivalent, an option; the history of option contracts; the classic Black-Scholes-Merton option model of the firm; and the formula for pricing, under ideal conditions, a pure financial call option. From this basis, the author draws the conclusion that the existence of an option premium alone renders invalid the Net Present Value rule for the value of the firm. The author then describes techniques for valuing “real options,” including extensions of financial options methods, Decision Tree Analysis, Monte Carlo, stochastic control, value functional models, and “good deal” bounds. Finally, it describes a recently proposed synthesis of traditional income methods and real options analysis, which the author calls “expanded net present value,” or XNPV.
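Since the abstract refers to the Black-Scholes-Merton formula for pricing a pure financial call option under ideal conditions, here is a brief sketch of that standard formula; the input values are illustrative.

```python
# Sketch: Black-Scholes-Merton price of a European call option, under the
# model's ideal conditions (lognormal prices, constant rate and volatility,
# no dividends). Inputs are illustrative.
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S, K, T, r, sigma):
    """S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: volatility."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

print(bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2))  # ~10.45
```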
Diana B. Petitti
- Published in print: 1999
- Published Online: September 2009
- ISBN: 9780195133646
- eISBN: 9780199863761
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195133646.003.16
- Subject: Public Health and Epidemiology, Public Health, Epidemiology
The published report of the results of a meta-analysis, decision analysis, or cost-effectiveness analysis is usually the only information about the study that is readily available to readers. Most readers do not have the technical expertise to identify all of the assumptions of the study. These methods are complex. For all of these reasons, the description of the study methods and procedures must be comprehensive, and the presentation of the study findings must be clear. This chapter describes the specific information that should be included in the published reports of systematic review and meta-analysis, decision analysis, and cost-effectiveness analysis. The recommendations for reporting of cost-effectiveness analysis follow the published guidelines developed by others. The chapter describes some of the graphical techniques that can be used to simplify the presentation of results of the studies that use these methods (forest plots, box plots, radial plots).
Arthur Benjamin, Gary Chartrand, and Ping Zhang
- Published in print: 2017
- Published Online: May 2018
- ISBN: 9780691175638
- eISBN: 9781400852000
- Item type: chapter
- Publisher: Princeton University Press
- DOI: 10.23943/princeton/9780691175638.003.0004
- Subject: Mathematics, Applied Mathematics
This chapter considers a class of graphs called trees and their construction. Trees are connected graphs containing no cycles. When dealing with trees, a vertex of degree 1 is called a leaf rather than an end-vertex. The chapter first provides an overview of trees and their leaves, along with the relevant theorems, before discussing a tree-counting problem, introduced by British mathematician Arthur Cayley, involving saturated hydrocarbons. It shows that counting the number of saturated hydrocarbons is the same as counting the number of certain kinds of nonisomorphic trees. It then revisits another Cayley problem, one that involved counting labeled trees, and describes Cayley's Tree Formula and the corresponding proof known as the Prüfer code. It also explores decision trees and concludes by looking at the Minimum Spanning Tree Problem and its solution, Kruskal's Algorithm.
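The chapter closes with the Minimum Spanning Tree Problem and Kruskal's Algorithm; a compact sketch of that standard algorithm follows, run on a made-up weighted graph.

```python
# Sketch: Kruskal's algorithm for the Minimum Spanning Tree Problem.
def kruskal(n, edges):
    """n: number of vertices labeled 0..n-1; edges: list of (weight, u, v)."""
    parent = list(range(n))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):     # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # keep the edge only if it adds no cycle
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3), (5, 1, 3)]
print(kruskal(4, edges))  # -> [(1, 2, 1), (2, 3, 2), (0, 2, 3)], total weight 6
```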
Therese M. Donovan and Ruth M. Mickey
- Published in print: 2019
- Published Online: July 2019
- ISBN: 9780198841296
- eISBN: 9780191876820
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198841296.003.0020
- Subject: Biology, Biomathematics / Statistics and Data Analysis / Complexity Studies
In the “Once-ler Problem,” the decision tree is introduced as a very useful technique that can be used to answer a variety of questions and assist in making decisions. This chapter builds on the “Lorax Problem” introduced in Chapter 19, where Bayesian networks were introduced. A decision tree is a graphical representation of the alternatives in a decision. It is closely related to Bayesian networks except that the decision problem takes the shape of a tree instead. The tree itself consists of decision nodes, chance nodes, and end nodes, which provide an outcome. In a decision tree, probabilities associated with chance nodes are conditional probabilities, which Bayes’ Theorem can be used to estimate or update. The calculation of expected values (or expected utility) of competing alternative decisions is provided on a step-by-step basis with an example from The Lorax.
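A minimal sketch of the expected-value "roll-back" the chapter steps through: chance nodes average their branches by probability, and decision nodes take the best alternative. The tree structure, probabilities, and payoffs below are hypothetical, not the Lorax example itself.

```python
# Sketch: rolling back a decision tree by expected value.
def expected_value(node):
    kind = node["kind"]
    if kind == "end":                  # end node: a known outcome value
        return node["value"]
    if kind == "chance":               # chance node: probability-weighted mean
        return sum(p * expected_value(child)
                   for p, child in node["branches"])
    # decision node: choose the alternative with the highest expected value
    return max(expected_value(child) for _, child in node["branches"])

tree = {"kind": "decision", "branches": [
    ("act",  {"kind": "chance", "branches": [
        (0.6, {"kind": "end", "value": 100}),
        (0.4, {"kind": "end", "value": -50})]}),
    ("wait", {"kind": "end", "value": 20}),
]}
print(expected_value(tree))  # 0.6*100 + 0.4*(-50) = 40, which beats 20
```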
Robert G. Reynolds
- Published in print: 2000
- Published Online: November 2020
- ISBN: 9780195131673
- eISBN: 9780197561492
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195131673.003.0016
- Subject: Archaeology, Archaeological Methodology and Techniques
A growing body of data indicates that armed conflict played a role in the creation of complex societies such as chiefdoms and states (Wright 1984; Spencer 1998). For example, according to Wright (1977:382), "most ethnographically reported chiefdoms seem to be involved in constant warfare," and large chiefdoms grew by absorbing their weaker neighbors. Marcus and Flannery suggest that warfare was often used to create a state out of rival chiefdoms: "We do not believe that a chiefdom simply turns into a state. We believe that states arise when one member of a group of chiefdoms begins to take over its neighbors, eventually turning them into subject provinces of a much larger polity" (Marcus and Flannery 1996:157). As an example of this process, the authors cite Kamehameha's creation of a Hawaiian state out of five to seven rival chiefdoms between 1782 and 1810. They suggest that something similar happened in the Valley of Oaxaca, Mexico, when a chiefdom in the Etla region seized the defensible mountaintop of Monte Albán and began systematically subduing rival chiefdoms in the southern and eastern parts of the valley. If this is the case, there should be a point in the sequence when considerations of defense began to influence settlement choice. In this chapter, our goal is to provide a preliminary description of our efforts in testing the suitability of this model to the Oaxacan case, and its potential use as the basis for a more general model of state formation. In order to test this hypothesis we need some way to operationalize it in terms of the archaeological record in the Valley of Oaxaca. The key phases of the model can be expressed as follows: 1. An early period in which raiding was minimal, and variables relevant to successful agriculture predominate in settlement choices. 2. A gradual rise in friction between social groups prior to state formation. This friction can be represented by archaeological evidence for raiding, the principal form of warfare in tribes and chiefdoms.
Tamar Lasky
- Published in print: 2007
- Published Online: September 2009
- ISBN: 9780195172638
- eISBN: 9780199865727
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195172638.003.12
- Subject: Public Health and Epidemiology, Public Health, Epidemiology
Regulatory decisions regarding food safety require analyses of the economic costs of changes at any point along the farm-to-table continuum, and cost analysis of the number of cases of illness caused or prevented by a given decision. This chapter provides an example of the type of data and approaches used in decision analysis and analyses used to consider options for a safer food supply.
Herman Philipse
- Published in print: 2012
- Published Online: May 2012
- ISBN: 9780199697533
- eISBN: 9780191738470
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199697533.003.0016
- Subject: Philosophy, Philosophy of Religion, Metaphysics/Epistemology
After a brief overview of the book, its main conclusions are stated. 1. Theism is not a meaningful theory. So we should become particular semantic atheists. 2. If we assume for the sake of argument that theism is a meaningful theory, it has no predictive power with regard to any existing evidence. Because the truth of theism is improbable given the scientific background knowledge concerning the dependence of mental life on brain processes, we should become strong particular atheists. 3. If we assume for the sake of argument that theism not only is meaningful but also has predictive power, we should become strong particular atheists as well, because the empirical arguments against theism outweigh the arguments that support it, and theism is improbable on our background knowledge. If we assume that either (1) or (2, 3) apply mutatis mutandis to all other gods that humanity has worshipped or still reveres, the ultimate conclusion of the book is that if we aim at being reasonable and intellectually conscientious, we should become strong disjunctive universal atheists.
David O. Brink
- Published in print: 2017
- Published Online: August 2017
- ISBN: 9780198805601
- eISBN: 9780191843563
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198805601.003.0010
- Subject: Philosophy, Moral Philosophy, Metaphysics/Epistemology
Attempted wrongdoing is wrong and deserves censure and sanction, provided the agent was responsible for her attempt. One conception of attempts, incorporated in the criminal law, treats them as bivalent. The important question is at what point in an agent’s planning, preparation, and execution of an offense the attempt is completed. However, bivalence fails to recognize partially complete attempts and is unable to give a satisfying account of the criminal law defense of abandonment. This essay explores an alternative conception of attempts as historical and scalar. On this view, attempts involve the implementation of temporally extended decision trees that pass through many nodes and terminate in a last act. This view rejects bivalence, because at many points within the decision tree there is only a partially complete attempt, and it provides a more satisfying account of abandonment, precisely because it can recognize attempts that are partially complete.
Arlindo Oliveira
- Published in print: 2017
- Published Online: September 2017
- ISBN: 9780262036030
- eISBN: 9780262338394
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262036030.003.0005
- Subject: Computer Science, Artificial Intelligence
This chapter addresses the question of whether a computer can become intelligent and how to test for that possibility. It introduces the idea of the Turing test, a test developed to determine, in an unbiased way, whether a program running in a computer is, or is not, intelligent. The development of artificial intelligence led, in time, to many applications of computers that are not possible using “non-intelligent” programs. One important area in artificial intelligence is machine learning, the technology that makes it possible for computers to learn, from existing data, in ways similar to the ways humans learn. A number of approaches to machine learning are addressed in this chapter, including neural networks, decision trees, and Bayesian learning. The chapter concludes by arguing that the brain is, in reality, a very sophisticated statistical machine aimed at improving the chances of survival of its owner.
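As a concrete taste of one approach the chapter surveys, here is a short sketch fitting a decision-tree classifier with scikit-learn; the data set and hyperparameters are illustrative assumptions.

```python
# Sketch: learning a decision tree from labeled examples with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)        # learn axis-aligned splits from the data
print("test accuracy:", tree.score(X_test, y_test))
```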
Therese Donovan and Ruth M. Mickey
- Published in print: 2019
- Published Online: July 2019
- ISBN: 9780198841296
- eISBN: 9780191876820
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198841296.001.0001
- Subject: Biology, Biomathematics / Statistics and Data Analysis / Complexity Studies
Bayesian Statistics for Beginners is an entry-level book on Bayesian statistics. It is like no other math book you’ve read. It is written for readers who do not have advanced degrees in mathematics and who may struggle with mathematical notation, yet need to understand the basics of Bayesian inference for scientific investigations. Intended as a “quick read,” the entire book is written as an informal, humorous conversation between the reader and writer—a natural way to present material for those new to Bayesian inference. The most impressive feature of the book is the sheer length of the journey, from introductory probability to Bayesian inference and applications, including Markov Chain Monte Carlo approaches for parameter estimation, Bayesian belief networks, and decision trees. Detailed examples in each chapter contribute a great deal, where Bayes’ Theorem is at the front and center with transparent, step-by-step calculations. A vast amount of material is covered in a lighthearted manner; the journey is relatively pain-free. The book is intended to jump-start a reader’s understanding of probability, inference, and statistical vocabulary that will set the stage for continued learning. Other features include multiple links to web-based material, an annotated bibliography, and detailed, step-by-step appendices.
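A short sketch of the kind of step-by-step Bayes' Theorem calculation the book keeps front and center; the prior and likelihoods are made-up numbers.

```python
# Sketch: a transparent Bayes' Theorem update (made-up numbers).
p_h = 0.01                    # prior: P(hypothesis)
p_e_given_h = 0.95            # likelihood: P(evidence | hypothesis)
p_e_given_not_h = 0.05        # P(evidence | not hypothesis)

# total probability of the evidence
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# posterior: P(hypothesis | evidence) = P(e|h) * P(h) / P(e)
p_h_given_e = p_e_given_h * p_h / p_e
print(round(p_h_given_e, 3))  # ~0.161
```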
Francis E. McGovern
- Published in print: 2015
- Published Online: April 2015
- ISBN: 9780199389735
- eISBN: 9780199389759
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199389735.003.0002
- Subject: Law, Human Rights and Immigration, Public International Law
This chapter examines the design of the UNCC from a variety of perspectives: its historical setting, the alternative design approaches that have been taken in other compensation contexts, its design details, and its role in the design of future claims resolution facilities. The progress of the UNCC was speedy by international standards. After its conception in April 1991, the Secretariat of the UNCC was established in July 1991, the first decisions of the Governing Council were made in August, and claim forms were distributed that December. The first Commissioners were appointed in March 1993, and the first report by Commissioners making decisions regarding claims was submitted to the Governing Council in April 1994. The UNCC completed the entire claim review process by June 2005. This chapter also examines the extent to which concepts of legitimacy and rough justice conflict or reinforce each other in the context of the UNCC.