N. Thompson Hobbs and Mevin B. Hooten
- Published in print:
- 2015
- Published Online:
- October 2017
- ISBN:
- 9780691159287
- eISBN:
- 9781400866557
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691159287.003.0008
- Subject:
- Biology, Ecology
This chapter shows how to make inferences using MCMC samples. Here, the process of inference begins with the assumption that a single model is being analyzed. The objective is to estimate parameters, latent states, and derived quantities based on that model and the data; all of these estimates are conditional on that single model. The chapter also returns to an example advanced in the first chapter, to illustrate choices of the specific distributions needed to implement the model, to show how informative priors can be useful, and to illustrate some of the inferential procedures described in this chapter: posterior predictive checks, marginal posterior distributions, estimates of derived quantities, and forecasting.
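The procedures the abstract names all reduce, in practice, to simple computations over the MCMC draws. A minimal sketch in Python, assuming simulated draws stand in for real MCMC output; the parameter names (r, K), the derived quantity, and the data model are hypothetical illustrations, not the chapter's own example:

```python
# A minimal sketch of inference from MCMC samples, assuming the draws
# are available as NumPy arrays. All names here are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for MCMC output: one draw of (r, K) per iteration of the chain.
r_draws = rng.normal(0.2, 0.02, size=5000)   # e.g., a growth rate
K_draws = rng.normal(1500, 100, size=5000)   # e.g., a carrying capacity

# Marginal posterior summaries: each parameter's draws are summarized
# individually, which integrates over the other parameters.
for name, draws in [("r", r_draws), ("K", K_draws)]:
    lo, hi = np.quantile(draws, [0.025, 0.975])
    print(f"{name}: mean={draws.mean():.3f}, 95% CrI=({lo:.3f}, {hi:.3f})")

# Derived quantity: any function of the parameters, computed draw by draw,
# inherits a full posterior distribution at no extra cost.
msy = r_draws * K_draws / 4   # maximum sustainable yield under logistic growth
print("MSY posterior mean:", msy.mean())

# Posterior predictive check: simulate replicate data from each draw and
# compare a test statistic with its observed value (Bayesian p-value).
y_obs = rng.poisson(300, size=20)                            # placeholder data
y_rep = rng.poisson(K_draws[:, None] / 5, size=(5000, 20))   # hypothetical data model
p_B = np.mean(y_rep.mean(axis=1) >= y_obs.mean())
print("Bayesian p-value for the mean:", p_B)
```

A Bayesian p-value near 0 or 1 would flag a discrepancy between the model and the data; values near 0.5 indicate the replicated data resemble the observations.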
Christopher Walton
- Published in print:
- 2006
- Published Online:
- November 2020
- ISBN:
- 9780199292486
- eISBN:
- 9780191917691
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780199292486.003.0009
- Subject:
- Computer Science, Computer Architecture and Logic Design
In the previous chapter we described three languages for representing knowledge on the Semantic Web: RDF, RDFS, and OWL. These languages enable us to create Web-based knowledge in a standard manner with a common semantics. We now turn our attention to the techniques that can utilize this knowledge in an automated manner. These techniques are fundamental to the construction of the Semantic Web, as without automation we gain no real benefit over the current Web. There are currently two views of the Semantic Web that have implications for the kind of automation we can hope to achieve: (1) an expert system with a distributed knowledge base, and (2) a society of agents that solve complex knowledge-based tasks. In the first view, the Semantic Web is essentially treated as a single-user application that reasons about some Web-based knowledge, for example, a service that queries the knowledge to answer specific questions. This is a perfectly acceptable view, and realizing it is a significant challenge. However, in this book we primarily subscribe to the second view. In this more general view, the knowledge is not treated as a single body, and it is not necessary to obtain a global view of the knowledge. Instead, the knowledge is exchanged and manipulated in a peer-to-peer (P2P) manner between different entities. These entities act on behalf of human users and require only enough knowledge to perform the task to which they are assigned. The use of such entities to solve complex problems on the Web is captured by the notion of an agent. In human terms, an agent is an intermediary who makes a complex organization externally accessible; for example, a travel agent simplifies the problem of booking a holiday. This concept of simplifying the interface to a complex framework is a key goal of the Semantic Web. We would like to make it straightforward for a human to interact with a wide variety of disparate sources of knowledge without becoming mired in the details. To accomplish this, we want to define software agents that act with characteristics similar to those of human agents.
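The second view, a society of agents exchanging knowledge peer to peer, can be caricatured in a few lines of code. A minimal sketch, assuming a toy query-forwarding protocol; the Agent class, the agents, and their facts are hypothetical stand-ins for real Semantic Web machinery:

```python
# A minimal sketch of the "society of agents" view: each agent holds
# only a fragment of the knowledge and forwards queries it cannot
# answer to its peers (P2P). All names here are hypothetical.
from __future__ import annotations

class Agent:
    def __init__(self, name: str, facts: dict[str, str]):
        self.name = name
        self.facts = facts            # this agent's local knowledge fragment
        self.peers: list[Agent] = []  # no agent holds a global view

    def query(self, subject: str, asked: set[str] | None = None) -> str | None:
        """Answer locally if possible; otherwise delegate to peers."""
        asked = asked or set()
        if self.name in asked:        # avoid cycles in the peer network
            return None
        asked.add(self.name)
        if subject in self.facts:
            return f"{self.facts[subject]} (via {self.name})"
        for peer in self.peers:
            answer = peer.query(subject, asked)
            if answer is not None:
                return answer
        return None

# The broker, like a travel agent, hides the complexity of its peers.
travel = Agent("travel", {"flight BA123": "departs 09:40"})
hotel = Agent("hotel", {"room 12": "booked for 3 nights"})
broker = Agent("broker", {})
broker.peers = [travel, hotel]

print(broker.query("room 12"))   # answered by the hotel agent, via the broker
print(broker.query("train X"))   # None: no agent in the society knows
```

The design point the sketch makes is the one in the text: the user-facing broker needs no global knowledge base, only enough structure to route the task to peers that can handle it.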
Arindam Bandyopadhyay
- Published in print:
- 2022
- Published Online:
- June 2022
- ISBN:
- 9780192849014
- eISBN:
- 9780191944260
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780192849014.003.0009
- Subject:
- Economics and Finance, Financial Economics
This chapter on model validation and calibration demonstrates key statistical tests that are useful for measuring the predictive power of risk models. It mainly assesses the critical steps, the quality of data inputs, and the discriminatory power of the models in predicting default or loss. Model validation has been a key task for risk-focused management across various business lines, and reliable rating systems require efficient validation strategies. The chapter explains in detail power-curve fitting techniques for assessing the discriminatory power of predictive models, methods for checking model errors, and the estimation of model accuracy. Separation-power checks based on information value and the Kolmogorov–Smirnov (KS) test, and their utility in scorecard development, are elaborated. The steps in the Hosmer–Lemeshow goodness-of-fit test for logistic models are described, along with other validation checks such as the Akaike information criterion, the Bayesian information criterion, and Kendall's tau. An independent and objective validation of the predictive power and efficacy of valuation and risk models through statistical tests is an integral part of a robust risk management system.
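Several of the checks named here are short computations once predicted default probabilities and observed default flags are in hand. A minimal sketch, assuming simulated data; the variable names and the decile grouping are illustrative, not the book's dataset or procedure:

```python
# A minimal sketch of three validation checks for a default-risk model,
# assuming simulated predicted PDs and default flags. Names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
pd_hat = rng.beta(2, 8, size=2000)     # predicted default probabilities
default = rng.binomial(1, pd_hat)      # observed default flags (0/1)

# KS statistic: maximum gap between the cumulative score distributions of
# defaulters and non-defaulters; larger means better separation.
order = np.argsort(pd_hat)
d = default[order]
cum_bad = np.cumsum(d) / d.sum()
cum_good = np.cumsum(1 - d) / (1 - d).sum()
print(f"KS statistic: {np.max(np.abs(cum_bad - cum_good)):.3f}")

# Information value over score deciles:
# IV = sum (pct_good - pct_bad) * ln(pct_good / pct_bad).
bins = np.quantile(pd_hat, np.linspace(0, 1, 11))
idx = np.digitize(pd_hat, bins[1:-1])  # decile index 0..9 per observation
iv = 0.0
for i in range(10):
    mask = idx == i
    pct_bad = default[mask].sum() / default.sum()
    pct_good = (1 - default[mask]).sum() / (1 - default).sum()
    if pct_bad > 0 and pct_good > 0:
        iv += (pct_good - pct_bad) * np.log(pct_good / pct_bad)
print(f"Information value: {iv:.3f}")

# Hosmer-Lemeshow statistic over the same deciles: compares observed and
# expected defaults per group (approx. chi-square with 8 df for 10 groups).
hl = 0.0
for i in range(10):
    mask = idx == i
    n_g, obs, exp = mask.sum(), default[mask].sum(), pd_hat[mask].sum()
    pbar = exp / n_g
    hl += (obs - exp) ** 2 / (n_g * pbar * (1 - pbar))
print(f"Hosmer-Lemeshow statistic: {hl:.2f}")
```

As rough scorecard conventions go, an information value above about 0.3 is usually read as strong separation, while a large Hosmer–Lemeshow statistic signals poor calibration of the predicted probabilities.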