Assignment:
After reading the excerpts below, respond to the reflection questions that follow.
The terms "assessment," "examination," "test," and "measurement" are treated as interchangeable in many discussions of education policy. Assessment, conceived broadly, is gathering information about what students know and can do for some educative purpose. Examinations and tests, as the terms will be used here, are particular ways of doing this. Measurement is different. Measurement is situating data from an assessment in a quantitative framework, to characterize the evidence the observations provide for the interpretations and inferences the assessment is meant to support. The most common form of educational assessment is a test, gathering data from 10 to 50 tasks, often multiple-choice items or open-ended exercises scored by humans. Each response gets a numerical score, often 0 or 1, and test scores are the total of the item scores.
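For instance, on a test of $n$ dichotomously scored items, this scoring rule amounts to nothing more than a simple sum:

$$X = \sum_{i=1}^{n} u_i, \qquad u_i \in \{0, 1\},$$

so a student who answers 32 of 40 items correctly receives a score of 32, and two students who answer entirely different subsets of items correctly can receive identical scores.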
The scores are then used to determine grades, make instructional decisions, license professionals, evaluate programs, and the like. Test-takers and score-users may think a lot about what is being assessed, but few spend much time pondering exactly what, if anything, is being "measured." Assessment practices are familiar, but measurement foundations are not. Indeed, debates among philosophers and measurement experts swirl beneath the surface, largely detached from a century of widespread use and methodological advances (Markus & Borsboom, 2013; Michell, 1999).
At issue are the fundamental nature of educational measurement, its grounding in psychology, and the meaning of scores. More advanced measurement models take item-level data as indicators for estimating parameters in more formal mathematical structures. Although their meanings and properties are also the subject of some controversy, they reveal shortcomings in the familiar testing and scoring practices sketched above. These practices may serve reasonably well for some inferences, but they can prove misleading for others, with untoward consequences for groups and individuals.
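One widely used example of such a mathematical structure, offered here only as an illustration since the passage does not name a particular model, is the Rasch model, which treats the probability of a correct response to item $i$ as a logistic function of a person parameter $\theta$ and an item difficulty $b_i$:

$$P(u_i = 1 \mid \theta) = \frac{\exp(\theta - b_i)}{1 + \exp(\theta - b_i)}.$$

Under a model of this kind, item-level responses are used to estimate both the person parameters and the item difficulties, and inferences are framed in terms of the latent parameter $\theta$ rather than the raw total score.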
Writing competency, or "written communication skill," is a fundamental learning outcome identified by institutions of higher education. Consequently, general education curricula often include programs such as first-year composition and writing across the curriculum/writing in the disciplines. Since notions of what it means to write have been complicated by socially grounded theories and digital media, these programs have also expanded to include more than just traditional essayistic literacy. Even in institutions that do not have designated writing programs, writing competency is still considered an important goal of undergraduate education. Although the focus on writing occurs primarily in the classroom, writing also plays a significant role in assessment beyond the classroom in composition placement, proficiency exams, and program reviews.
Although many psychometricians consider indirect methods such as short-answer and/or multiple-choice exams to be appropriate for the assessment of writing, writing teachers prefer exams that require students to write (e.g., Broad, 2003; CCCC, 2014; Diederich, 1974; O'Neill et al., 2009). In this chapter, we focus exclusively on this latter type, so-called "direct writing assessment." We begin with the theory and research that inform writing assessment and then review the related practices. Sampling or scoring methods should not be the driving force in designing writing assessments. Rather, assessment methods should be determined by language, linguistic, and sociolinguistic theories, as well as psychometric ones.
Imagine a several-year study of an institution and the programs within it that involves a large number of committees articulating goals, making a plan, executing parts of the plan, and writing a report for an accreditation peer review team. After the team leaves, the committees disband and the report is filed away without further thought about how it might be used to improve student learning. A significant amount of time and energy was expended on the study and report, yet it is quickly forgotten as the institution moves on to other initiatives. Unfortunately, this scenario is common on many college campuses, as most institutions undertake assessment of student learning simply to comply with external requirements (Kuh et al., 2014, 2015; Kuh & Ikenberry, 2009).
Whether for an accreditation study or for another compliance-driven reason, information about educational quality is often not used by, or reported broadly to, different audiences, whether internal or external to the institution. This breakdown in communication with varied audiences is unfortunate because it does not foster the use of assessment information for institutional improvement. Instead, documents and reports, whether created for accreditation, program review, committee work, or some other reporting purpose, could be used to tell a story about the institution's efforts to explore student learning, inform subsequent iterations of assessment, and lead to the improvement of student learning. Although many institutions still use assessment information primarily for compliance activities, some have begun to revamp their assessment processes by asking questions about student learning that are of interest to those within the institution rather than assessing for assessment's sake (Kuh et al., 2015).
- Did anything (concepts, ideas, responses, reactions, etc.) challenge your thinking?
- What concepts did you grapple with, and how did that make you feel?
- What have you learned?
- How did you adjust/change your approach to your learning or the concepts?
- How has this learning changed your approach, perceptions, or feelings about the topic?
- What will you do differently in your practice/the field based on what you have learned?
- How will you assess your progress?