Patrick Dattalo
- Published in print:
- 2009
- Published Online:
- February 2010
- ISBN:
- 9780195378351
- eISBN:
- 9780199864645
- Item type:
- book
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195378351.001.0001
- Subject:
- Social Work, Research and Evaluation
Random sampling (RS) and random assignment (RA) are considered by many researchers to be the definitive methodological procedures for maximizing external and internal validity. However, there is a daunting list of legal, ethical, and practical barriers to implementing RS and RA. While there are no easy ways to overcome these barriers, social workers should seek and utilize strategies that minimize sampling and assignment bias. This book is a single source of a diverse set of tools that will maximize a study's validity when RS and RA are neither possible nor practical. Readers are guided in selecting and implementing an appropriate strategy, including exemplar sampling, sequential sampling, randomization tests, multiple imputation, mean-score logistic regression, partial randomization, constructed comparison groups, propensity scores, and instrumental variables methods. Each approach is presented in such a way as to highlight its underlying assumptions, implementation strategies, and strengths and weaknesses.
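Propensity scores are among the tools the book lists; as a taste of how such an alternative to randomized assignment works, here is a minimal sketch of inverse-probability weighting on simulated data (the data-generating process, effect sizes, and variable names are invented for illustration, not taken from the book):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical setup: one observed covariate drives both who receives
# the service and the outcome, so treatment is not randomly assigned.
x = rng.normal(size=(n, 1))
treated = rng.random(n) < 1 / (1 + np.exp(-x[:, 0]))
y = 2.0 * treated + 1.5 * x[:, 0] + rng.normal(size=n)  # true effect = 2.0

# Naive comparison is biased: treated cases have higher x to begin with.
print(f"naive    {y[treated].mean() - y[~treated].mean():.2f}")  # inflated

# Propensity score: estimated P(treated | x), here via logistic regression.
ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]
w = np.where(treated, 1 / ps, 1 / (1 - ps))  # inverse-probability weights
ate = (np.average(y[treated], weights=w[treated])
       - np.average(y[~treated], weights=w[~treated]))
print(f"weighted {ate:.2f}")  # close to the true 2.0
```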
Marian Stamp Dawkins
- Published in print:
- 2007
- Published Online:
- January 2008
- ISBN:
- 9780198569350
- eISBN:
- 9780191717512
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198569350.003.0004
- Subject:
- Biology, Animal Biology
The same principles that apply to the design of experiments also apply to the design of an observational study, but instead of manipulating the animals, the observer controls the way he or she takes observations. The three principles are: independent replication; avoiding confounding of variables; and removing known sources of variation by blocking or matching. The chapter shows how these three principles can be applied to the design of observations, and discusses issues that often get in the way of valid designs, such as pseudoreplication, experimenter bias, and the choice of which animals to observe. Some simple observational designs are introduced.
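The cost of ignoring the first principle is easy to demonstrate in simulation: treating repeated observations on the same animals as if they were independent replicates (pseudoreplication) inflates the false-positive rate far above its nominal level. A sketch under invented group sizes and variance components:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trials = 2000
false_pos = {"pseudoreplicated": 0, "per-animal means": 0}

for _ in range(trials):
    # Two groups of 5 animals, 10 repeated observations per animal,
    # and no true group difference: any significant result is spurious.
    a = rng.normal(0, 1, (5, 1)) + rng.normal(0, 0.5, (5, 10))
    b = rng.normal(0, 1, (5, 1)) + rng.normal(0, 0.5, (5, 10))

    # Pseudoreplication: treat all 50 observations per group as independent.
    if stats.ttest_ind(a.ravel(), b.ravel()).pvalue < 0.05:
        false_pos["pseudoreplicated"] += 1
    # Independent replication: one summary value per animal, n = 5 per group.
    if stats.ttest_ind(a.mean(axis=1), b.mean(axis=1)).pvalue < 0.05:
        false_pos["per-animal means"] += 1

for k, v in false_pos.items():
    print(f"{k}: false-positive rate {v / trials:.2f}")
# The pseudoreplicated test rejects far more often than the nominal 0.05;
# the per-animal analysis stays close to it.
```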
Ken Binmore
- Published in print:
- 2007
- Published Online:
- May 2007
- ISBN:
- 9780195300574
- eISBN:
- 9780199783748
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195300574.003.0006
- Subject:
- Economics and Finance, Microeconomics
This chapter develops the idea of a mixed strategy, using entry into a sealed-bid auction as a non-trivial example. Reaction curves are first illustrated for the case of pure strategies and then applied to computing mixed Nash equilibria. The Hawk-Dove Game is equivalent to either the Prisoner's Dilemma or Chicken, depending on parameter values; the mixed-strategy reaction curves are plotted in each case. The interpretation of mixed Nash equilibria as polymorphic equilibria in a game played by a large population is considered. The matrix algebra necessary for handling mixed strategies is reviewed and illustrated with O'Neill's Card Game. Convexity ideas are reviewed and applied to the geometric representation of mixed strategies. Cooperative and noncooperative payoff regions are introduced and illustrated using Chicken and the Battle of the Sexes. Correlated equilibria are introduced after a discussion of self-policing agreements, cheap talk, and preplay randomization. The possibility of correlation without a referee, using techniques from cryptography, is discussed.
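For a 2 × 2 game, the mixed equilibrium can be found from the indifference conditions: each player mixes so that the opponent's two actions earn the same expected payoff. A minimal sketch with illustrative Chicken payoffs (the numbers and function name are mine, not Binmore's):

```python
import numpy as np

def mixed_equilibrium_2x2(A, B):
    """Fully mixed Nash equilibrium of a 2x2 bimatrix game.

    A, B: payoff matrices for the row and column player. Assumes a
    fully mixed equilibrium exists (denominators nonzero).
    Returns (p, q): each player's probability on their first action.
    """
    # Row's mix p leaves the column player indifferent between columns.
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])
    # Column's mix q leaves the row player indifferent between rows.
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[1, 0] - A[0, 1] + A[1, 1])
    return p, q

# Chicken: actions (Swerve, Straight); mutual Straight is the disaster.
A = np.array([[0.0, -1.0], [1.0, -10.0]])  # row player's payoffs
B = A.T                                     # symmetric game
p, q = mixed_equilibrium_2x2(A, B)
print(f"each player swerves with probability {p:.2f}")  # 0.90
```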
David Colquhoun
- Published in print:
- 2011
- Published Online:
- January 2013
- ISBN:
- 9780197264843
- eISBN:
- 9780191754050
- Item type:
- chapter
- Publisher:
- British Academy
- DOI:
- 10.5871/bacad/9780197264843.003.0012
- Subject:
- Sociology, Methodology and Statistics
The job of scientists is to try to distinguish what is true from what is false by means of observation and experiment. That job has been made difficult by some philosophers of science who appear to give academic respectability to relativist, and even postmodernist, postures. This chapter suggests that the contributions of philosophers to causal understanding have been unhelpful. It puts the case for randomised studies as the safest guarantee of the reliability of scientific evidence. It uses the case of hormone replacement therapy to illustrate the importance of randomisation, and the case of processed meat and cancer to illustrate the problems that arise in the absence of randomised tests. Finally, it discusses the opposition to randomisation that has come from some philosophers of science.
George Davey Smith and Shah Ebrahim
- Published in print:
- 2009
- Published Online:
- May 2010
- ISBN:
- 9780195398441
- eISBN:
- 9780199776023
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195398441.003.0021
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
Conventional risk factor epidemiology, which directly studies environmentally modifiable exposures that may influence disease risk, and genetic epidemiology have similarities and differences. The case-control design, for example, is currently more popular in genetic epidemiology than in conventional risk factor epidemiology. And while the importance of sample size is recognized in conventional epidemiology, the huge collaborative ventures currently being undertaken in genetic epidemiology have not been the norm there, since special attention has, appropriately, been paid to detailed exposure and outcome measurement. In genetic epidemiology, much recent attention has been paid to false-positive findings generated by multiple hypothesis testing against a background of inadequate statistical power, whereas in risk factor epidemiology, problems generated by confounding and bias have been to the forefront. This chapter deals with Mendelian randomization, a principle that underlies some of the differences between conventional risk factor and genetic epidemiology, and that also renders genetic epidemiology a useful tool for improving the identification of environmentally modifiable risk factors that are causally related to disease outcomes, and are therefore targets for therapeutic or preventative intervention.
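The core Mendelian randomization estimate is an instrumental-variable ratio: the gene-outcome association divided by the gene-exposure association. A minimal sketch on simulated data (allele frequency, effect sizes, and variable names are all invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A variant that raises the exposure but, like Mendel's alleles, is
# assigned independently of the unmeasured confounder u and has no
# direct effect on the outcome.
g = rng.binomial(2, 0.3, n)                 # 0, 1, or 2 risk alleles
u = rng.normal(size=n)                      # unmeasured confounder
x = 0.5 * g + u + rng.normal(size=n)        # exposure
y = 0.4 * x + u + rng.normal(size=n)        # outcome; true effect = 0.4

# Naive regression of y on x is confounded by u.
naive = np.cov(x, y)[0, 1] / np.var(x)
# Wald ratio: (gene-outcome slope) / (gene-exposure slope).
wald = np.cov(g, y)[0, 1] / np.cov(g, x)[0, 1]
print(f"naive slope   {naive:.2f}")         # biased upward by confounding
print(f"Wald estimate {wald:.2f}")          # close to the true 0.4
```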
Patrick Dattalo
- Published in print:
- 2009
- Published Online:
- February 2010
- ISBN:
- 9780195378351
- eISBN:
- 9780199864645
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195378351.003.0003
- Subject:
- Social Work, Research and Evaluation
This chapter describes the following alternatives and complements to RS in terms of their assumptions, implementations, strengths, and weaknesses: (1) randomization tests; (2) multiple imputation; and (3) mean-score logistic regression. Randomization tests are statistical alternatives to RS. Multiple imputation is a statistical supplement to RS. Mean-score logistic regression is a statistical alternative or supplement to RS.
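To make the multiple-imputation entry concrete: impute the missing values several times, analyze each completed data set, and pool with Rubin's rules. A sketch on a toy data set, using a deliberately crude hot-deck draw in place of the model-based imputations real MI software generates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: a continuous measure, 30% missing completely at random.
y = rng.normal(50, 10, size=200)
miss = rng.random(200) < 0.3
y_obs = y[~miss]

m = 20                                # number of imputed data sets
estimates, variances = [], []
for _ in range(m):
    filled = y.copy()
    # Hot-deck stand-in: draw replacements from the observed values.
    filled[miss] = rng.choice(y_obs, size=miss.sum(), replace=True)
    estimates.append(filled.mean())
    variances.append(filled.var(ddof=1) / len(filled))

# Rubin's rules: pool the m point estimates and their variances.
q_bar = np.mean(estimates)            # pooled estimate
w = np.mean(variances)                # within-imputation variance
b = np.var(estimates, ddof=1)         # between-imputation variance
total_var = w + (1 + 1 / m) * b
print(f"pooled mean {q_bar:.1f}, standard error {np.sqrt(total_var):.2f}")
```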
Diana C. Mutz
- Published in print:
- 2011
- Published Online:
- October 2017
- ISBN:
- 9780691144511
- eISBN:
- 9781400840489
- Item type:
- chapter
- Publisher:
- Princeton University Press
- DOI:
- 10.23943/princeton/9780691144511.003.0007
- Subject:
- Sociology, Social Research and Statistics
This chapter addresses three common problems that emerge in the analysis stage: the misguided practice of running randomization checks, whether and how to use survey weights, and the use and misuse of covariates. Investigators tend to approach the analysis and interpretation of population-based experiments from the perspective of usual practices in whatever their home discipline happens to be. As it turns out, however, some of these choices go hand in hand with common errors and faulty assumptions about the most appropriate way to analyze results from population-based experiments. Usual practices are not always appropriate, particularly in disciplines that favor observational methods. The chapter aims to provide guidance on best practices and to explain when and why they make a difference.
Robert V. Dodge
- Published in print:
- 2012
- Published Online:
- May 2012
- ISBN:
- 9780199857203
- eISBN:
- 9780199932597
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199857203.003.0020
- Subject:
- Economics and Finance, Behavioural Economics
This chapter considers randomization, beginning with its history as a source of “fairness” in decision making. It then examines “mixed strategies” and von Neumann's Minimax Theorem. A segment from Poe's The Purloined Letter makes the case for randomizing one's choice against an opponent who will outsmart any predictable decision, and a presentation of Schelling's class materials concerns randomizing to outsmart a burglar. The value of randomizing decisions is explained; these illustrations make the concept easy to comprehend without the complex mathematics often involved. The final section introduces the Nash equilibrium, easily recognized in a 2 × 2 matrix, with examples. Nash equilibria often apply to bargaining, and a matrix illustrates how they can lead firms to form duopolies. There can be more than one Nash equilibrium; one solution, posited by Schelling, is that certain outcomes have prominence and are selected among the equilibria. Schelling called these focal points.
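The "easily recognized in a 2 × 2 matrix" point can be shown by brute force: a cell is a pure Nash equilibrium when neither player gains by deviating unilaterally. A sketch with Battle-of-the-Sexes-style payoffs (numbers mine), which yields the two equilibria among which a Schelling focal point would select:

```python
import numpy as np

def pure_nash_2x2(A, B):
    """Enumerate pure-strategy Nash equilibria of a 2x2 bimatrix game."""
    equilibria = []
    for i in range(2):           # row player's action
        for j in range(2):       # column player's action
            row_ok = A[i, j] >= A[1 - i, j]  # row cannot gain by switching
            col_ok = B[i, j] >= B[i, 1 - j]  # column cannot gain by switching
            if row_ok and col_ok:
                equilibria.append((i, j))
    return equilibria

# Coordination game: both prefer meeting, but at different events.
A = np.array([[2, 0], [0, 1]])   # row player's payoffs
B = np.array([[1, 0], [0, 2]])   # column player's payoffs
print(pure_nash_2x2(A, B))       # [(0, 0), (1, 1)]: two equilibria
```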
Neil Duxbury
- Published in print:
- 1999
- Published Online:
- March 2012
- ISBN:
- 9780198268253
- eISBN:
- 9780191683466
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198268253.003.0007
- Subject:
- Law, Philosophy of Law
This book has illustrated the advantages and drawbacks of chance and randomized social decision-making, claiming that arguments concerning such decision-making more often than not require qualification. Detailed scrutiny of randomized legal decision-making compels us to confront difficult, sometimes uncomfortable, questions concerning the role of reason in law and how we conceptualize justice. Although we are often understandably wary of resorting to lotteries to determine outcomes of legal significance, the idea that randomization might be employed more extensively within legal decision-making contexts ought not to be dismissed cursorily. Rigid application of some particular decision-making criterion to settle disputes might render adjudication less fraught with complexity and ambiguity. Depending on the criterion used, such application might even make the process of adjudication less partial. If the criterion is easy to apply, moreover, the costs of decision-making are likely to be reduced. One criterion which offers all of these qualities is random selection.
Carolyn Emery
- Published in print:
- 2009
- Published Online:
- January 2010
- ISBN:
- 9780199561629
- eISBN:
- 9780191722479
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199561629.003.013
- Subject:
- Public Health and Epidemiology, Epidemiology, Public Health
A multitude of study designs are used to establish the efficacy of a preventive measure: randomised controlled trials, time-trend analyses, prospective trials, and so on. Each method has its benefits and drawbacks. This chapter provides a clear overview of the available designs and, drawing on the existing literature, shows the pros and cons of each.
Curtis L. Meinert and Susan Tonascia
- Published in print:
- 1986
- Published Online:
- September 2009
- ISBN:
- 9780195035681
- eISBN:
- 9780199864478
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195035681.003.0010
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
A valid trial requires a method for assigning patients to a test or control treatment that is free of selection bias. The best method for ensuring bias-free selection is a bona fide randomization scheme. This chapter discusses the principles and practices to be followed in administering the randomization schedule.
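One widely used bona fide scheme is randomization in permuted blocks, which keeps the arms balanced throughout enrollment while leaving individual assignments unpredictable. A minimal sketch (block size, arm labels, and seed are illustrative; the chapter's concern is administering such a schedule, not this particular construction):

```python
import random

def permuted_block_schedule(n, block_size=4, seed=2024):
    """Test/control assignment schedule in randomly permuted blocks.

    Each block holds equal numbers of test ('T') and control ('C')
    assignments, so arm sizes never differ by more than block_size / 2.
    """
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n:
        block = ["T", "C"] * (block_size // 2)
        rng.shuffle(block)       # permute within the block
        schedule.extend(block)
    return schedule[:n]

print(permuted_block_schedule(12))  # e.g. ['C', 'T', 'T', 'C', ...]
```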
- Published in print:
- 2006
- Published Online:
- March 2013
- ISBN:
- 9780226316130
- eISBN:
- 9780226315997
- Item type:
- chapter
- Publisher:
- University of Chicago Press
- DOI:
- 10.7208/chicago/9780226315997.003.0010
- Subject:
- Law, Constitutional and Administrative Law
The critiques set forth in this book reflect problems with the actuarial approach more generally, not just with specific types of stereotyping or profiles. This chapter sketches the contours and benefits of a more randomized universe of crime and punishment. Randomization is the only way to achieve a carceral population that reflects the offending population. Randomization in this context is a form of random sampling: random sampling on the highway, for instance, is the only way the police would obtain an accurate reflection of the offending population, and random sampling is the central virtue behind randomization. What randomization achieves, in essence, is to neutralize the perverse effects of prediction, both in terms of its possible effects on overall crime and in terms of other social costs.
J. Mark Elwood
- Published in print:
- 2007
- Published Online:
- September 2009
- ISBN:
- 9780198529552
- eISBN:
- 9780191723865
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198529552.003.06
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
Confounding is the most challenging issue in the interpretation of studies. This chapter is divided into three parts. The first part defines confounding and shows what effects it can produce. The second part deals with how confounding can be controlled. The third part considers some further applications of the logic of confounding. Self-test questions are provided at the end of the chapter.
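A short simulation makes the definition concrete: when a third variable raises both the exposure and the outcome, a crude comparison shows an association even though the exposure does nothing, and stratifying on the confounder makes it vanish. All rates below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Confounder z (say, smoking) raises both exposure and outcome;
# the exposure itself has no effect on the outcome.
z = rng.random(n) < 0.4
e = rng.random(n) < np.where(z, 0.7, 0.2)    # exposure depends on z
d = rng.random(n) < np.where(z, 0.3, 0.1)    # outcome depends only on z

def risk(mask):
    return d[mask].mean()

print(f"crude RR        {risk(e) / risk(~e):.2f}")            # ~1.7, spurious
print(f"RR within z = 0 {risk(e & ~z) / risk(~e & ~z):.2f}")  # ~1.0
print(f"RR within z = 1 {risk(e & z) / risk(~e & z):.2f}")    # ~1.0
```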
Curtis L. Meinert
- Published in print:
- 2011
- Published Online:
- September 2011
- ISBN:
- 9780199742967
- eISBN:
- 9780199897278
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199742967.003.0001
- Subject:
- Public Health and Epidemiology, Epidemiology, Public Health
This introductory chapter begins with a brief description of the origins of the term clinical trial and the concept of randomization. It then details the development of modern-day clinical trials, pioneered by Sir Austin Bradford Hill, whose 1962 book Statistical Methods in Clinical and Preventive Medicine represented an important milestone in the field. This is followed by a discussion of single- vs. multicenter clinical trials and the classification of trials by design.
Curtis L. Meinert and Susan Tonascia
- Published in print:
- 1986
- Published Online:
- September 2009
- ISBN:
- 9780195035681
- eISBN:
- 9780199864478
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195035681.003.0008
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter discusses the general principles underlying selection of study treatment, the choice of the outcome measure, and the roles of randomization and masking in data collection.
Curtis L. Meinert and Susan Tonascia
- Published in print:
- 1986
- Published Online:
- September 2009
- ISBN:
- 9780195035681
- eISBN:
- 9780199864478
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195035681.003.0014
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter focuses on recruitment. Topics discussed include recruitment goals, methods of patient recruitment, troubleshooting, the patient shake-down process, the ethics of recruitment, patient consent, randomization and initiation of treatment, and the Zelen consent procedure.
Curtis L. Meinert and Susan Tonascia
- Published in print:
- 1986
- Published Online:
- September 2009
- ISBN:
- 9780195035681
- eISBN:
- 9780199864478
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195035681.003.0019
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter contains a list of questions and short answers concerning the design, analysis, and interpretation of clinical trials. These include questions concerning the study design, the source of study patients, randomization, and masking.
Steve Selvin
- Published in print:
- 2001
- Published Online:
- September 2009
- ISBN:
- 9780195146189
- eISBN:
- 9780199864720
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780195146189.003.0003
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter evaluates the results of a randomized trial conducted to test the effectiveness of an experimental treatment in slowing the pathological changes in memory caused by Alzheimer's disease. Individuals with Alzheimer's disease were randomly divided into two groups: one group received a placebo (twenty-six patients), and the other received a treatment of lecithin (twenty-five patients). Comparison of “before” and “after” measures of memory among Alzheimer's disease patients indicates an important difference between the patients who received lecithin and the patients who received a placebo. Four statistical approaches (t-test, Wilcoxon, bootstrap, and randomization procedures) yield the same inference: the lecithin patients appear to have significantly less deterioration.
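For the mechanics of the four approaches, here is a sketch on invented scores with the same arm sizes (the chapter's actual measurements are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Invented before-minus-after memory scores: 26 placebo, 25 lecithin.
placebo = rng.normal(-8, 5, size=26)    # larger deterioration
lecithin = rng.normal(-4, 5, size=25)   # smaller deterioration
obs_diff = lecithin.mean() - placebo.mean()

# 1. Two-sample t-test.
_, p_t = stats.ttest_ind(lecithin, placebo)
# 2. Wilcoxon rank-sum (Mann-Whitney) test.
_, p_w = stats.mannwhitneyu(lecithin, placebo)
# 3. Bootstrap: resample each arm for a CI on the mean difference.
boots = [rng.choice(lecithin, 25).mean() - rng.choice(placebo, 26).mean()
         for _ in range(10_000)]
ci = np.percentile(boots, [2.5, 97.5])
# 4. Randomization test: reshuffle group labels under the null.
pooled = np.concatenate([lecithin, placebo])
perm = []
for _ in range(10_000):
    rng.shuffle(pooled)
    perm.append(pooled[:25].mean() - pooled[25:].mean())
p_r = np.mean(np.abs(perm) >= abs(obs_diff))

print(f"t-test p = {p_t:.3f}, Wilcoxon p = {p_w:.3f}, "
      f"bootstrap 95% CI = {np.round(ci, 1)}, randomization p = {p_r:.3f}")
```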
Sheila Bird
- Published in print:
- 2003
- Published Online:
- September 2009
- ISBN:
- 9780198508496
- eISBN:
- 9780191723797
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198508496.003.0006
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter considers the strengths and limitations of a range of evaluation study designs, focusing on the consumer principle of randomization. Although much of the chapter is devoted to randomized controlled trials (RCTs), non-randomized studies such as disease registries and cohort studies are included because of their importance in the evaluation of cost-effectiveness. The chapter also considers issues of informed consent that are relevant to choice of experimental design, and the need for database linkage to overcome informative loss to follow-up that might otherwise undermine randomized allocation.
William A. Silverman
- Published in print:
- 1999
- Published Online:
- September 2009
- ISBN:
- 9780192630889
- eISBN:
- 9780191723568
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780192630889.003.0023
- Subject:
- Public Health and Epidemiology, Public Health, Epidemiology
This chapter presents a 1993 commentary on treatment choices for neonates. It considers a proposed observational study dubbed a ‘preference trial’: the systematic follow-up of patient cohorts in which treatment assignments are made according to informed patient choice rather than by randomization. The proposal acknowledges doctors' limited ability to determine which treatments and which outcomes patients value most. Doctors provide continuously updated estimates of the probabilities of all outcomes of interest for each treatment, and each patient is free to make a choice based on the information provided.