Wednesday, August 28, 2013

What are Cut Scores and How Do They Impact My Students?

By Saler Axel, RME Research Assistant

Many researchers and practitioners believe that tests are used for accountability in education now more than ever before. The media often report the percentage of students placed into particular performance standards on high-stakes tests, and the resulting impact on students, schools, districts, and states can be considerable. We frequently hear how impactful high-stakes tests can be, but what about the assessments developed by teachers?

Teachers are the most frequent users and producers of tests (Nunnally, 1964). Teachers’ assessments account for at least 75 percent of all educational measures (Nunnally, 1964). They are responsible for testing students individually and interpreting student-related measurement data (Nunnally, 1964; Torgerson & Adams, 1954). If created well, classroom tests can be more useful than a standardized exam, particularly as a measure of content (Worthen et al., 1993). This is great news for those of us who want to formatively gauge students’ understanding of classroom content in a well-constructed and accurate manner! (See Beth Richardson’s blog on test development guidance.)

Beth’s blog highlights the components of a well-developed assessment. After reading it, your next consideration might be: What are cut scores and performance standards? How do I interpret my students’ test scores? What do these assessments tell me about my students? And ultimately, how do these test scores impact my students?

What are cut scores and performance standards?

Examinees are often classified in a pass-fail or “mastery-proficiency-competency” (Berk, 1980) manner. You have likely used these categories before in your own teaching. Researchers call these categories performance standards. Cut scores are the points between each grouping. Performance standards are defined as qualitative distinctions between adjacent levels of what test takers know and what they can do at specified levels (Kane, 2001). Cut scores, defined as quantitative points on a performance continuum, serve as operational versions of the corresponding performance standard (Kane, 2001). When you combine the two concepts, the cut score is a statement of how much knowledge of the content domain an examinee needs to demonstrate to fall within a particular performance standard (Haertel, 1985; Jorgensen & McBee, 2003).

How do I interpret my students’ test scores?

When you administer an assessment to your students, you are testing their knowledge of a particular construct. For example, you might test your first grade students’ knowledge of addition properties, such as commutativity and associativity, and their ability to use those properties to add whole numbers. Imagine that this assessment includes three performance standards and two cut scores.

Students who score below cut score 1 are considered competent users of addition properties. Students who score between cut score 1 and cut score 2 are considered proficient users of addition properties. Students who score above cut score 2 are considered to have mastered addition properties.
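The classification above can be sketched in a few lines of code. This is a minimal illustration, not anything from the original assessment: the 0–100 scale, the cut scores of 70 and 85, and the rule that a score exactly at a cut point falls into the higher category are all assumptions made for the example.

```python
def classify(score, cut1=70, cut2=85):
    """Map a raw test score to a performance standard.

    The cut scores (70 and 85) are hypothetical; a score at or above a
    cut point is placed in the higher category here, though a real
    assessment must state its own boundary rule explicitly.
    """
    if score < cut1:
        return "competent"    # below cut score 1
    elif score < cut2:
        return "proficient"   # between cut score 1 and cut score 2
    else:
        return "mastery"      # at or above cut score 2

print(classify(62))  # competent
print(classify(78))  # proficient
print(classify(91))  # mastery
```

Notice that the cut scores themselves carry all the weight: moving cut score 1 from 70 to 72 reclassifies every student scoring 70 or 71, which is why how cut points are set (and how boundary scores are handled) matters so much.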

What do these assessments tell me about my students?

Performance standards are similar to rubrics. They describe what concepts a student must understand, and what knowledge and skills a student must demonstrate, to place into a particular performance standard and receive a certain test score. Performance standards list characteristics of students’ skills. Using our prior example, a competent user of addition properties may be able to apply the properties when prompted but require scaffolded guidance to implement them. A proficient user may need prompting but, once reminded, no further assistance. A student who has mastered addition properties may be able to apply them without prompting or scaffolded guidance.

How do these test scores impact my students?

If a student’s test score is inaccurately interpreted, the student may be placed into a performance standard that does not reflect his or her true knowledge and skill level. This can cause unintended consequences (AERA, APA, & NCME, 1999), such as inappropriate course placement or even denied access to special instruction (AERA et al., 1999). As a result, when you take the time to create a test, make sure that you have considered all intended and possible unintended consequences that may arise from your students placing into performance standards that do not accurately reflect their true knowledge and skills.

Questions for consideration

Reflect on a test that you have recently administered in your classroom.
  • Did you take the time to really consider the performance standard categories and what their impact on your student might be? 
  • How can you apply your new (or enhanced!) knowledge of performance standards to the next test you administer in your classroom? 
  • What types of things will you do to inform your instruction after calculating your students’ scores?
References

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

Berk, R. A. (1980). Introduction. In R. A. Berk (Ed.), Criterion-Referenced Measurement: The State of the Art (pp. 3-9). Baltimore, MD: The Johns Hopkins University Press. 

Haertel, E. (1985). Construct validity and criterion-referenced testing. Review of Educational Research, 55(1), 23-46. 

Jorgensen, M. A., & McBee, M. (2003). The new NRT model.

Kane, M. T. (2001). So much remains the same: Conception and status of validation in setting standards. In G. J. Cizek (Ed.), Setting performance standards: Concepts, methods, and perspectives (pp. 53-88). Mahwah, NJ: Lawrence Erlbaum Associates.

Nunnally, J. C. (1964). Educational measurement and evaluation. New York, NY: McGraw-Hill Book Company. 

Torgerson, T. L. & Adams, G. S. (1954). Measurement and evaluation for the elementary-school teacher. New York, NY: The Dryden Press. 

Worthen, B. R., Borg, W. R., & White, K. R. (1993). Measurement and evaluation in the schools. New York, NY: Longman Publishing Group.
