Module - Introduction and Readings
LO 1: Identify key differences between randomized experiments and comparison group designs and acquire practical skills in using designs for program evaluation studies.
LO 2: Recognize key strengths and weaknesses of using randomized controlled trials (RCTs) as an evaluation tool and understand challenges inherent in randomized controlled trials.
LO 3: Describe case study methods, including single and multiple case designs, data sources, collection methods, analysis, and the choices necessary for structuring case study reports.
LO 4: Discuss the importance of an Institutional Review Board (IRB), the need to monitor recruitment and retention, and incentives used for participant motivation.
LO 5: Design and manage multisite evaluations, including: communication, monitoring, data collection, quality control, and synthesis and analysis of findings.
EVALUATION DESIGN
Module 2 focuses on evaluation design. Topics include research designs, experimental and quasi-experimental designs, the use of control and comparison groups, randomized controlled trials, case studies, recruitment and retention of program participants, and multisite evaluations. Here we focus on designing an evaluation plan. To prepare an evaluation plan that includes a research design, the evaluator should identify the causal theory, whether implicit or explicit, in the program design. When the program designers explicitly formalize their causal theory of how the program affects participants, the evaluator's work is simplified. When the causal theory is only implicit, the evaluator's job is more difficult, because the action theory the designers had in mind must be made explicit. To do so, the evaluator must understand the action theory underlying the program.
• Theories are used to describe, explain, and predict events, decisions, and behaviors. They may be stated formally or simply assumed. Causal theory is not the same as statistical theory. The conditions for causality are: (1) temporal priority, (2) concomitant variation, and (3) elimination of plausible alternatives. Statistical theory, by contrast, has formalized mathematical properties concerning the likelihood of random events; the underlying laws of probability allow statistical methods to be applied to causal relationships. In the final analysis, numbers do not speak for themselves, so theory-driven evaluations (Chen, 1990) have an advantage: they explicate causal relationships so that the numbers can be interpreted.
• An evaluation design is a formalized approach to organizing the investigation of a program; it provides the structure for hypothesis testing, model estimation, and statistical inference. Such research designs were pioneered by Campbell and Stanley (1963) and Cook and Campbell (1979). An evaluation design includes research questions restated as hypotheses derived from theory. The testable hypotheses may or may not be fully operationalized, but they should identify the expected causal relationships among the measured variables, for use in the statistical models to be estimated.
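As a minimal illustration of the kind of statistical model a control/comparison-group design implies, the sketch below computes Welch's t statistic for the difference in mean outcomes between a treated and a comparison group. The outcome scores are invented for illustration and are not from any study discussed in this module.

```python
import statistics
from math import sqrt

def welch_t(treatment, comparison):
    """Welch's t statistic for the difference in group means.

    H0: the program has no effect (equal group means);
    H1: treated participants' outcomes differ from the comparison group's.
    """
    m1, m2 = statistics.mean(treatment), statistics.mean(comparison)
    v1, v2 = statistics.variance(treatment), statistics.variance(comparison)
    n1, n2 = len(treatment), len(comparison)
    se = sqrt(v1 / n1 + v2 / n2)  # standard error of the mean difference
    return (m1 - m2) / se

# Hypothetical outcome scores, invented purely for illustration
treated = [78, 82, 75, 90, 85, 88]
comparison = [70, 74, 68, 80, 72, 75]
t = welch_t(treated, comparison)
```

In a real evaluation the hypothesis, the outcome measure, and the model would all be derived from the program's causal theory, not chosen after the fact.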
Module 2: Head Start Discussion
A major foundation has established a matching grant program for children in the Head Start program. The parents of children in six preschool centers will receive $2.00 from the foundation for every $1.00 they place in a special educational savings fund for their child. The money in the funds will become available to the children when they reach the age of 18. The foundation would like you to design an evaluation of how well the matching program works in the six centers over a five-year period. Pick a comparison group evaluation research design that you would like to use and prepare a presentation to the foundation directors justifying your choice.
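The match arithmetic in the scenario is straightforward; a minimal sketch (deposit amounts invented for illustration) shows how a child's fund balance accumulates under the 2:1 match:

```python
MATCH_RATE = 2.0  # the foundation contributes $2.00 per $1.00 parent deposit

def fund_balance(parent_deposits):
    """Total in the educational savings fund: parent deposits plus the 2:1 match."""
    parent_total = sum(parent_deposits)
    return parent_total * (1 + MATCH_RATE)

# A parent depositing $10/month over the five-year evaluation period (illustrative only)
deposits = [10.0] * (12 * 5)
balance = fund_balance(deposits)  # $600 deposited, tripled by the match to $1,800
```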
• Once you have posted your reply, please be sure to respond to two classmates.