Masashi Sugiyama and Motoaki Kawanabe
- Published in print: 2012
- Published Online: September 2013
- ISBN: 9780262017091
- eISBN: 9780262301220
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262017091.003.0001
- Subject: Computer Science, Machine Learning
This chapter provides an introduction to covariate shift adaptation toward machine learning in a non-stationary environment. It begins by discussing machine learning under covariate shift. It then describes the core idea of covariate shift adaptation, using an illustrative example. Next, it formulates the supervised learning problem, covering both regression and classification, with particular attention to covariate shift and model misspecification. An overview of the subsequent chapters is also presented.
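To make the setting concrete (a standard formulation consistent with the book's notation, not a quotation from the chapter): under covariate shift the training and test input densities differ while the conditional output distribution is shared, and the ratio of the two input densities, called the importance, is the central quantity that adaptation methods reweight by:

\[
p_{\mathrm{tr}}(\boldsymbol{x}) \neq p_{\mathrm{te}}(\boldsymbol{x}),
\qquad
p_{\mathrm{tr}}(y \mid \boldsymbol{x}) = p_{\mathrm{te}}(y \mid \boldsymbol{x}),
\qquad
w(\boldsymbol{x}) := \frac{p_{\mathrm{te}}(\boldsymbol{x})}{p_{\mathrm{tr}}(\boldsymbol{x})}.
\]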
Masashi Sugiyama and Motoaki Kawanabe
- Published in print: 2012
- Published Online: September 2013
- ISBN: 9780262017091
- eISBN: 9780262301220
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262017091.003.0007
- Subject: Computer Science, Machine Learning
This chapter discusses state-of-the-art applications of covariate shift adaptation techniques to various real-world problems. It covers non-stationarity adaptation in brain-computer interfaces; speaker identification robust to changes in voice quality; domain adaptation in natural language processing; age prediction from face images under changing illumination conditions; user adaptation in human activity recognition; and efficient sample reuse in autonomous robot control.
Masashi Sugiyama and Motoaki Kawanabe
- Published in print: 2012
- Published Online: September 2013
- ISBN: 9780262017091
- eISBN: 9780262301220
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262017091.003.0006
- Subject: Computer Science, Machine Learning
This chapter compares the covariate shift approach with the related formulation known as sample selection bias. Studies of correcting sample selection bias were initiated by Heckman, who received the Nobel Prize in economics for this achievement in 2000. Heckman's correction model is reviewed, and its relation to covariate shift adaptation is discussed.
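As a reference point for that review, here is the textbook form of Heckman's two-step selection model (stated under the standard joint-normality assumptions, not quoted from the chapter). The outcome is observed only when a latent selection variable is positive, and the resulting bias in the outcome regression is corrected by an inverse Mills ratio term estimated from a first-stage probit:

\[
y_i = \boldsymbol{x}_i^{\top}\boldsymbol{\beta} + \varepsilon_i
\quad \text{observed only if} \quad
\boldsymbol{z}_i^{\top}\boldsymbol{\gamma} + u_i > 0,
\]
\[
\mathbb{E}[y_i \mid \boldsymbol{x}_i, \text{selected}]
= \boldsymbol{x}_i^{\top}\boldsymbol{\beta}
+ \rho\,\sigma_{\varepsilon}\,\lambda(\boldsymbol{z}_i^{\top}\boldsymbol{\gamma}),
\qquad
\lambda(a) = \frac{\phi(a)}{\Phi(a)},
\]

where \(\phi\) and \(\Phi\) are the standard normal pdf and cdf, and \(\rho\) is the correlation between the error terms \(\varepsilon\) and \(u\).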
Masashi Sugiyama and Motoaki Kawanabe
- Published in print: 2012
- Published Online: September 2013
- ISBN: 9780262017091
- eISBN: 9780262301220
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262017091.003.0002
- Subject: Computer Science, Machine Learning
This chapter discusses function learning methods under covariate shift. Ordinary empirical risk minimization is not consistent under covariate shift for misspecified models, and this inconsistency can be resolved by considering importance-weighted loss functions. Various importance-weighted empirical risk minimization methods are introduced, including least squares and Huber's method for regression, and Fisher discriminant analysis, logistic regression, support vector machines, and boosting for classification. Their adaptive and regularized variants are also described. The numerical behavior of these importance-weighted learning methods is illustrated through experiments.
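The following is a minimal sketch of the importance-weighted least-squares idea in a toy setting. It assumes the true importance weights are available (here both input densities are known Gaussians); in practice the weights must be estimated, which is the topic of the importance estimation chapter. The model choice, density parameters, and function names are illustrative, not taken from the book.

```python
# Minimal sketch of importance-weighted least squares (IWLS) for linear
# regression under covariate shift. The importance weights
# w(x) = p_test(x) / p_train(x) are assumed known here; in practice they
# must be estimated from data.
import numpy as np

def iwls_fit(X, y, weights):
    """Solve min_theta sum_i w_i * (y_i - x_i^T theta)^2 in closed form."""
    W = np.diag(weights)
    # Normal equations of the weighted problem: (X^T W X) theta = X^T W y
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

def gauss_pdf(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)

# Training inputs ~ N(1, 1); test inputs ~ N(2, 0.5^2): the input
# distribution shifts, but the target function y = sinc(x) + noise is fixed.
x_tr = rng.normal(1.0, 1.0, 150)
y_tr = np.sinc(x_tr) + 0.1 * rng.normal(size=x_tr.size)

# True importance weights, computable because both densities are known here.
w = gauss_pdf(x_tr, 2.0, 0.5) / gauss_pdf(x_tr, 1.0, 1.0)

# Misspecified model: a straight line fitted to a nonlinear target.
X = np.column_stack([np.ones_like(x_tr), x_tr])
theta_ols = iwls_fit(X, y_tr, np.ones_like(w))  # ordinary least squares
theta_iw = iwls_fit(X, y_tr, w)                 # importance-weighted fit
print("OLS fit:", theta_ols, " IW fit:", theta_iw)
```

Because the linear model is misspecified, the unweighted fit targets the training distribution while the importance-weighted fit concentrates on the region where test inputs actually fall.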
Masashi Sugiyama and Motoaki Kawanabe
- Published in print: 2012
- Published Online: September 2013
- ISBN: 9780262017091
- eISBN: 9780262301220
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262017091.003.0004
- Subject: Computer Science, Machine Learning
This chapter discusses the problem of importance estimation. Importance-weighting techniques play essential roles in covariate shift adaptation. However, the importance values are usually unknown a priori, so they must be estimated from data samples. The chapter introduces importance estimation methods, including importance estimation via kernel density estimation, the kernel mean matching method, a logistic regression approach, the Kullback–Leibler importance estimation procedure, and the least-squares importance fitting methods. The latter methods allow one to estimate the importance weights directly, without going through density estimation. Since density estimation is known to be difficult, the direct importance estimation approaches tend to be more accurate and are preferable in practice. The numerical behavior of direct importance estimation methods is illustrated through experiments. Characteristics of importance estimation methods are also discussed.
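Here is a minimal sketch of one of the direct approaches named above, the logistic regression route to importance estimation. A probabilistic classifier is trained to separate training inputs from test inputs, and Bayes' rule turns its class-posterior ratio into the density ratio. The feature map, sample sizes, and densities below are illustrative assumptions, not the book's experimental setup.

```python
# Sketch of the logistic regression approach to direct importance
# estimation: train a classifier to separate training inputs (label 0)
# from test inputs (label 1). By Bayes' rule the density ratio is
#   w(x) = p_te(x)/p_tr(x) = (n_tr/n_te) * P(1 | x) / P(0 | x),
# so no explicit density estimation is required.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x_tr = rng.normal(1.0, 1.0, (500, 1))   # training inputs ~ N(1, 1)
x_te = rng.normal(2.0, 0.5, (500, 1))   # test inputs     ~ N(2, 0.5^2)

X = np.vstack([x_tr, x_te])
s = np.concatenate([np.zeros(len(x_tr)), np.ones(len(x_te))])

# Quadratic features let this linear-in-parameters classifier represent
# the log-ratio of two Gaussians exactly in this toy setting.
Phi = np.hstack([X, X ** 2])
clf = LogisticRegression().fit(Phi, s)

def importance(x):
    phi = np.hstack([x, x ** 2])
    p = clf.predict_proba(phi)          # columns: [P(train|x), P(test|x)]
    return (len(x_tr) / len(x_te)) * p[:, 1] / p[:, 0]

print(importance(np.array([[1.0], [2.0], [3.0]])))
```

The same classifier-based construction underlies several direct methods; KLIEP and least-squares importance fitting instead model the ratio function itself and fit it under different loss functions.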
Steffen Bickel, Michael Brückner, and Tobias Scheffer
- Published in print: 2008
- Published Online: August 2013
- ISBN: 9780262170055
- eISBN: 9780262255103
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262170055.003.0009
- Subject: Computer Science, Machine Learning
This chapter derives a discriminative model for learning under differing training and test distributions, and is organized as follows. Section 9.2 formalizes the problem setting. Section 9.3 reviews models for different training and test distributions. Section 9.4 introduces the discriminative model, and Section 9.5 describes the joint optimization problem. Primal and kernelized classifiers are derived for various training and test distributions in Sections 9.6 and 9.7. Section 9.8 analyzes the convexity of the integrated optimization problem. Section 9.9 provides empirical results, and Section 9.10 concludes.