Showing posts with label Assessment. Show all posts

Thursday, November 20, 2014

Analyzing Assessment Items

By Dr. Pooja Shivraj, RME Educational Assessment Researcher

Much of the work we do at Research in Mathematics Education involves the development of assessments used by educators to identify students who may be struggling with algebra-readiness knowledge and skills, so that teachers can provide additional instructional support. The research process we use is rigorous and begins with an assessment blueprint, then item writing, internal reviews and external expert reviews, followed by a pilot test and finally the development of the test forms. The pilot test is given to a large number of students in order to determine the validity of the assessment items. Our researchers receive the results of the pilot and perform an extensive statistical analysis to determine if an item is good, psychometrically speaking.

The point of obtaining item statistics is to develop a pool of items that function well from which future tests can be designed. There are two kinds of analyses that can be performed: a Classical Test Theory (CTT) analysis, which is sample-dependent and non-model based, or an Item Response Theory (IRT) analysis, which is sample-independent and model-based. Regardless of the type of analysis performed, three primary statistics are used to determine if an item is psychometrically good. The ranges listed below are the acceptable norms found in the literature.

(1) The item should have a strong correlation between each item score and the total score. In other words, the correlation should show that the test-takers choosing the correct answer on the item are likely to receive a higher score. This statistic is measured by the point-biserial correlation (CTT) or the point-measure correlation (IRT). A good item would have a point-biserial correlation of >0.2 or a point-measure correlation of >0.25.
(2) The difficulty of the item, measured by the proportion of students answering the item correctly (CTT), should fall between 30% and 80%. In IRT, the difficulty parameter, b, should be between -4 and +4.
[Figure: An item characteristic curve depicting the discrimination parameter (a) and the difficulty parameter (b) in an IRT model]
(3) The discrimination of the item, also measured by the point-biserial correlation (CTT), should be higher for the correct response than for the distractors. In IRT, the discrimination parameter, a, should be between 0.5 and 1.5. The greater the discrimination, the better the item distinguishes between lower-ability and higher-ability students.
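To make these statistics concrete, here is a minimal sketch (in Python with NumPy, using made-up response data) of a classical item analysis: it computes each item's difficulty and point-biserial correlation and checks them against the thresholds above, and it includes the 2PL item characteristic curve that the figure depicts. The response matrix and the flag messages are illustrative, not from an actual pilot.

```python
import numpy as np

# Hypothetical scored responses: rows = students, columns = items (1 = correct).
scores = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 0],
])
total = scores.sum(axis=1)  # each student's total score

for item in range(scores.shape[1]):
    p = scores[:, item].mean()  # CTT difficulty: proportion answering correctly
    # Point-biserial: Pearson correlation between the item score and the total
    # score (a "corrected" version would exclude the item from the total).
    r_pb = np.corrcoef(scores[:, item], total)[0, 1]
    flags = []
    if not 0.30 <= p <= 0.80:
        flags.append("difficulty outside 30%-80%")
    if r_pb <= 0.20:
        flags.append("point-biserial not > 0.2")
    print(f"item {item + 1}: p = {p:.2f}, r_pb = {r_pb:.2f}", flags or "OK")

def p_correct_2pl(theta, a, b):
    """2PL item characteristic curve: probability that a student of ability
    theta answers correctly, given discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))
```

With a real pilot, the response matrix would come from the scored test file, and the same loop would flag candidate items for the content review described below.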

What can you do with items that don't function well? The first step is to review the data. Are items functioning poorly because the majority of students are choosing the correct answer? Is one distractor not being chosen at all? Are the majority of students choosing a single distractor more often than the other options? These patterns are all red flags. The next step is to review the content of the items that don't function well, especially those flagged in the previous step. What about the content led students to choose or not choose a particular response option?
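A first pass at those red-flag questions can be automated. The sketch below (plain Python, with hypothetical responses and an assumed answer key of "B") tallies how often each option was chosen and flags the patterns just described; the 90% "too easy" cutoff is an arbitrary illustrative threshold.

```python
from collections import Counter

# Hypothetical raw responses for one multiple-choice item; the key is assumed "B".
responses = ["B", "B", "C", "B", "C", "C", "C", "A", "B", "C"]
key = "B"

counts = Counter(responses)
n = len(responses)

flags = []
if counts[key] / n > 0.90:                 # nearly everyone chose the key
    flags.append("item may be too easy")
for option in "ABCD":
    if option == key:
        continue
    if counts[option] == 0:                # a distractor nobody picked
        flags.append(f"distractor {option} never chosen")
    elif counts[option] > counts[key]:     # a distractor outdrawing the key
        flags.append(f"distractor {option} chosen more often than the key")

for option in "ABCD":
    print(f"{option}: {counts[option] / n:.0%}")
print(flags)
```

For this made-up item, the tally flags distractor D (never chosen) and distractor C (chosen more often than the key), exactly the kinds of items that deserve a content review.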

Using this process of analyzing data, reviewing items, and adjusting the content of the items, a pool of items that function well can be developed for use in the future.

Note: In addition to the statistics described above, many others (e.g., IRT fit statistics such as chi-square, infit, and outfit) can be used to determine whether an item functions well, and some also provide information at the test level. Please feel free to email me at pshivraj@smu.edu if you would like more information.

Tuesday, September 30, 2014

Closing the Learning Gaps: Strategies to ensure your students will be successful with the new TEKS


By Brea Ratliff, RME Secondary Math Research Coordinator

The Texas Essential Knowledge and Skills (TEKS) are the state standards that identify the information students should learn and the academic proficiencies they should demonstrate in each grade level or course. The newly adopted math TEKS are evidence of increased expectations for mathematics education in the state of Texas. Although several changes have been incorporated into the math TEKS, our students do not have to enter the next grade or course with gaps in their understanding of mathematics. As educators, we are charged with the difficult task of meeting students where they are through our reflective practice, which includes the development of instructional techniques designed to support students as they learn mathematics. The biggest, and perhaps the most important step in this process, is for educators and administrators alike to analyze and become familiar with the new math TEKS. In this blog series, we will examine strategies to help teachers and students experience success with the new math TEKS.

Strategy 1: Identify your resources

The Texas Education Agency (TEA) has published several resources for math teachers and administrators to help them transition into teaching the new math TEKS. These resources can be used to plan lessons, develop an understanding of the knowledge and skills addressed in a particular grade level / course, and foster conversations with parents and other stakeholders in your community about the changes in the state math standards.
  • Side-by-Side TEKS Comparison - this document compares the revised TEKS (adopted in 2012) to the previously adopted TEKS (revised in 2006) and allows the reader to see all of the major changes and shifts made to the math content and mathematical process standards. Documents for grades K through 8, Algebra 1, Geometry, and Algebra 2 are available on Project Share. (Side-by-Side TEKS)
  • Vertical alignment charts - TEA has published four vertical alignment documents, which organize the TEKS by major concepts and show how these ideas are connected across grade levels / courses. These charts can also be accessed on Project Share. (Vertical Alignment)
  • STAAR Mathematics Resources - changes in the math standards have also impacted the state mathematics assessments. The State of Texas Assessments of Academic Readiness (STAAR®) assessments Assessed Curriculum, Blueprints, and Reference Materials documents have been updated to reflect these changes. (STAAR Math Resources)
  • Texas Response to Curriculum Focal Points - Revised in 2013, this document guides mathematics teachers in understanding the topics within each grade level that require the most emphasis, and can be used to inform instructional pacing and lesson development. This document is also available on Project Share. (TXRCFP)
After reviewing these resources, please share any questions or comments you might have with us via email or on Twitter (@RME_SMU). In the next blog, we will examine how these documents can be used to impact math instruction, with a specific emphasis on Number and Operation.

Thursday, April 10, 2014

Math STAAR: Strategies for Success

By Dawn Woods, RME Elementary Mathematics Coordinator

As every math teacher across the state of Texas knows, the State of Texas Assessments of Academic Readiness (STAAR) testing window is upon us. You have worked diligently, teaching vocabulary, concepts, and skills through the lens of the mathematical process standards, thereby empowering your students to use mathematics in everyday life as well as to perform on this assessment. The strategies listed in this blog are suggestions that could enhance your students' success.
  1. Teach goal setting. Research suggests that when students are taught to set specific academic goals, they make progress in learning skills and content, discover how to self-regulate learning, and improve their self-efficacy and interest in the task (Bandura & Schunk, 1981). Through this goal-setting and self-assessment process, students can monitor and evaluate their performance during a lesson, unit of instruction, or review of course material, thereby increasing performance and instilling responsibility for their own learning.
  2. Teach “timed” test strategies. A few strategies include:
    • Listen to the test proctor’s directions.
    • Budget time appropriately. Work quickly but do not rush.
    • Work the problems in the test book, not in your head!  Double check if you copied numbers correctly, if the units are similar, and if you applied the appropriate formulas. Use good handwriting so you do not misread your answer.
    • Do not be too quick to trust your computed answer just because it appears among the answer choices! Test makers know which wrong answers are commonly computed and include them as options. So check your work before marking your answer on the answer sheet!
    • Do not panic. If a question is difficult, return to it later. Maybe another question will jog your memory on how to answer the difficult one.
    • Position the answer sheet next to the test booklet so that you can mark answers quickly while checking that the number next to the circle on your answer sheet is the SAME as the number next to the question you are answering.
    • Before turning in your test, double-check your answers.
    • Make sure you bubbled in the answers correctly on your answer sheet.
    •  Don’t be disturbed by other students finishing before you. Extra points are not given for finishing early!
  3. Communicate with parents and students to encourage healthy pre-test behaviors. A few pre-test behaviors include:
    • Relaxing for a few hours before bedtime.
    • Getting enough sleep the night before a test.
    • Eating a healthy breakfast and avoiding foods that could make you groggy or hyper.
    • Don’t stress!  You’ve worked hard and are prepared for the test.
Works Cited:
Bandura, A., & Schunk, D.H. (1981). Cultivating competence, self-efficacy, and intrinsic interest through proximal self-motivation. Journal of Personality and Social Psychology, 41(3), 586-598.

Wednesday, April 2, 2014

Open Ended Assessments: Part 2 - Grade 8 Math

By Brea Ratliff, Secondary Mathematics Research Coordinator

With the assessment season upon us, many teachers and administrators are looking for strategies to ensure their students are successful with all of the concepts being assessed. This blog describes a few ideas for open-ended assessments that build on this student expectation:

8.7(C) – The student is expected to use pictures or models to demonstrate the Pythagorean theorem.

Level 1 – Assessments designed to develop proficiency in one student expectation. Assessments built around one particular skill are often helpful when introducing a concept or providing targeted intervention.

Level 2 – Assessments designed to develop proficiency in 2 or more student expectations. The assessments for this level can vary in degree. While some may be designed to assess a combination of content knowledge, others may be written to include the process skills. This assessment addresses a wealth of knowledge and skills, and could possibly be used for several class periods.

8.2(D) - use multiplication by a given constant factor (including unit rate) to represent and solve problems involving proportional relationships including conversions between measurement systems. 

8.6(B) - graph dilations, reflections, and translations on a coordinate plane. 

8.7(C) – The student is expected to use pictures or models to demonstrate the Pythagorean Theorem. 

8.7(D) - The student is expected to locate and name points on a coordinate plane using ordered pairs of rational numbers. 

8.15(all) - The student communicates about Grade 8 mathematics through informal and mathematical language, representations, and models.



Friday, January 24, 2014

Open-ended Assessments: Part 1

By Brea Ratliff, Secondary Mathematics Research Coordinator

As schools prepare for standardized testing this spring, many educators often wonder which instructional strategies will be most effective in terms of ensuring student success.

Beyond the scope of content, state mathematics assessments are measuring students’ ability to problem solve, recognize appropriate conjectures, communicate and analyze knowledge, and understand how mathematical ideas connect.

In short, these tests evaluate whether or not students are able to demonstrate the appropriate processing skills necessary for mathematics at each grade level.

On the STAAR Mathematics assessment for grades 3-8, mathematical processing skills are “incorporated into at least 75% of the test questions…and [are] identified along with content standards” (TEA, 2013). In order for students to perform well on an assessment designed this way, they should show success on formative assessments where they are challenged to apply their content knowledge while confidently using these skills. Over the next several weeks, we will explore a variety of open-ended formative assessments for students in grades 5 and 8, and students who are taking Algebra 1. These assessments can be implemented in a variety of ways. Depending on the makeup of your classroom, they could be a bell ringer, or the performance indicator during or following direct instruction. Many teachers also use them to start meaningful discussions with their students, as well as for an exit ticket or homework assignment. The possibilities are endless.

This week, let’s begin by looking at a few open-ended assessment ideas for 5th grade – all of which build upon student expectation 5.10(C) from the TEKS:

5.10(C) – The student is expected to select and use appropriate units and formulas to measure length, perimeter, area, and volume.

Level 1 – Assessments designed to develop proficiency in one student expectation. Assessments built around one particular skill are often helpful when introducing a concept or providing targeted intervention.

Level 2 – Assessments designed to develop proficiency in two or more student expectations. The assessments for this level can vary in degree. While some may be designed to assess a combination of content skills, others may be written to include process skills. In this next example, we continue to look at 5.10(C), but also student expectations 5.3(A) and 5.3(B):

5.3(A) – The student is expected to use addition and subtraction to solve problems involving whole numbers and decimals.

5.3(B) – The student is expected to use multiplication to solve problems involving whole numbers (no more than three digits times two digits without technology).
For this next example, we assess students’ logical reasoning while addressing student expectation 5.16(A):

5.16(A) – The student is expected to make generalizations from patterns or sets of examples and nonexamples.
This assessment covers two content standards we have already addressed [5.3(A), 5.10(C)], and introduces two standards from Probability and Statistics and Underlying Processes and Mathematical Tools:

5.13(B) – The student is expected to describe characteristics of data presented in tables and graphs including median, mode, and range.

5.14(B) – The student is expected to solve problems that incorporate understanding the problem, making a plan, carrying out the plan, and evaluating the solution for reasonableness.
These are just a few examples, but they will challenge your students to apply what they know about length, perimeter, area, and volume in a new way.

Texas Education Agency. (2013). STAAR Assessed Curriculum, Grade 5. Retrieved from http://www.tea.state.tx.us/WorkArea/linkit.aspx?LinkIdentifier=id&ItemID=2147488330&libID=2147488329

Wednesday, October 23, 2013

ESTAR and MSTAR Universal Screener Window Extended for Eligible Districts

Research in Mathematics Education partners with the Texas Education Agency and Education Service Center Region 13 on the development of the ESTAR and MSTAR Universal Screeners and soon-to-be launched Diagnostic Assessments. The Universal Screeners are designed to be an efficient method for helping to determine 1) whether students are at risk, and 2) the level of support a student may require to be successful in a pre-algebra domain. The end of the fall assessment window is rapidly approaching. Participating schools are encouraged to complete the assessment by October 31. The following message was provided by the state:

"The ESTAR/MSTAR Universal Screener will remain accessible to any district that was in process of uploading students and/or administering the screener before the fall window closed on October 18. All eligible districts are encouraged to complete the testing by Oct. 31. If circumstances prevent your school district from meeting this targeted end-date, then please contactmathtx@esc13.net . Technical assistance will be provided upon request.

Also, please note that growth that is expected between fall and winter will likely be less for those tested at the end of the window. For example, if a student completes the fall screener on October 26 and then begins the winter screener on January 15, the observed growth will likely be less than for a student who completed the fall screener when the window opened on August 26."

Friday, September 20, 2013

ESTAR & MSTAR Assessments - Professional Development

By Savannah Hill, RME Professional Development Coordinator

Today, I want to spend some time talking about some of the professional development opportunities available through the Texas Algebra Ready Initiative. We have talked before about the assessments: currently available are the ESTAR and MSTAR Universal Screeners (grades 2-4 and 5-8); coming in January are the MSTAR Diagnostic Assessments (grades 5-8), followed by the ESTAR Diagnostic Assessments (grades 2-4) next year.

But in order to correctly implement these assessments and interpret the reports they generate, there is a learning process. Many teachers may not know that professional development is available. It is essential that teachers understand why and how to use the Universal Screener and the MSTAR Diagnostic Assessment and how they can support a Response to Intervention approach. Here are some of the available courses that teachers should take before giving an ESTAR or MSTAR assessment.

ESTAR and MSTAR Universal Screeners: The ESTAR and MSTAR Universal Screeners are a formative assessment system administered to students in grades 2-4 and 5-8, respectively, to help teachers determine whether students are on track or at risk for meeting curricular expectations in algebra and algebra readiness. Currently, a course is available to prepare teachers to administer the ESTAR and MSTAR Universal Screener - Overview of the Universal Screeners. Training on the use of the Universal Screener is available through Project Share. An updated version (v4.0) will be released soon.

MSTAR Diagnostic Assessments (grades 5-8): This assessment, designed to follow the MSTAR Universal Screener, is administered to those students identified as at-risk on the Universal Screener. The Diagnostic Assessment will help identify WHY students are struggling with algebra-related core content, and provide information that can be used to plan supplemental instruction. Two courses will be available: MSTAR Learning Progressions and Overview of the MSTAR Diagnostic Assessments. Information on how to access these courses, which will provide suggestions on how to prepare for administration of the MSTAR Diagnostic Assessments and guidance on how to interpret results following administration, will be made available through various list-servs and Project Share Groups over the coming weeks.

All courses are online and can be done individually. PLCs could also use meeting time to review the course material and plan remediation. For more information, contact your local Education Service Center or visit www.projectsharetexas.org.

Wednesday, August 28, 2013

What are Cut Scores and How Do They Impact My Students?

By Saler Axel, RME Research Assistant

Many researchers and practitioners believe that tests are used for accountability in education now more than ever before. The media often report the percentage of students placed into particular performance standards on high-stakes tests and the resulting impact on students, schools, districts, and states can be considerable. We frequently hear how impactful high-stakes tests can be, but what about those assessments developed by teachers?

Teachers are the most frequent users and producers of tests (Nunnally, 1964). Teachers' assessments account for at least 75 percent of all educational measures (Nunnally, 1964). They are responsible for testing students individually and interpreting student-related measurement data (Nunnally, 1964; Torgerson & Adams, 1954). If created well, classroom tests can be more useful than a standardized exam, particularly as a measure of content (Worthen et al., 1993). This is great news for those of us who want to formatively gauge students' understanding of classroom content in a well-constructed and accurate manner! (See Beth Richardson's blog on test development guidance.)

Beth’s blog highlights the components of a well-developed assessment. After reading it, your next consideration might be: What are cut scores and performance standards? How do I interpret my students’ test scores? What do these assessments tell me about my students? And ultimately, how do these test scores impact my students?

What are cut scores and performance standards?

Examinees are often classified in a pass-fail or “mastery-proficiency-competency” (Berk, 1980) manner. You have likely used these categories before in your own teaching. Researchers call these categories performance standards. Cut scores are the points between each grouping. Performance standards are defined as qualitative distinctions between adjacent levels of what test takers know and what they can do at specified levels (Kane, 2001). Cut scores, defined as quantitative points on a performance continuum, serve as operational versions of the corresponding performance standard (Kane, 2001). When you combine the two concepts, the cut score is a statement of how much knowledge of the content domain an examinee needs to demonstrate to fall within a particular performance standard (Haertel, 1985; Jorgensen & McBee, 2003).

How do I interpret my students’ test scores?

In other words, when you administer an assessment to your students, you are testing their knowledge of a particular construct. This may include your first grade students’ knowledge and ability to use addition properties, such as commutativity and associativity, to add whole numbers. Imagine that this assessment includes three performance standards and two cut scores.

Students who score below cut score 1 are considered competent users of addition properties. Students who score between cut score 1 and cut score 2 are considered proficient users of addition properties. Students who score above cut score 2 are considered to have mastered addition properties.
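In code, this classification is a simple lookup against the cut scores. The sketch below uses hypothetical cut scores on an imaginary 0-20 raw-score scale and assumes the common convention that scoring at or above a cut attains the higher standard; your own scale and convention may differ.

```python
import bisect

# Hypothetical cut scores on a 0-20 point raw-score scale (illustrative only).
cut_scores = [8, 15]                     # cut score 1, cut score 2
standards = ["competent", "proficient", "mastery"]

def classify(raw_score):
    """Map a raw score to a performance standard: below cut 1 -> competent,
    between cut 1 and cut 2 -> proficient, at or above cut 2 -> mastery."""
    return standards[bisect.bisect_right(cut_scores, raw_score)]

for score in (5, 8, 12, 15, 20):
    print(score, classify(score))
```

Note that the real work is not this lookup but setting defensible cut scores in the first place, which is what standard-setting methods like those Kane (2001) describes are for.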

What do these assessments tell me about my students?

Performance standards are similar to rubrics. They describe what concepts a student must understand (and the knowledge and skills they must demonstrate) to place into a particular performance standard and receive a certain test score. Performance standards list characteristics of students' skills. Using our prior example, a competent user of addition properties may be able to utilize the properties when prompted but require scaffolded guidance to implement them. Proficient users may need only prompting but, once reminded, require no further assistance. Students who have mastered addition properties may be able to utilize them without prompting or scaffolded guidance.

How do these test scores impact my students?

If a student’s test score is inaccurately interpreted, they may place into a performance standard that does not reflect their true knowledge and skill level. This can cause unintended consequences (AERA, APA, & NCME, 1999) that may include inaccurate course placement or even denied access to special instruction (AERA et al., 1999). As a result, when you take time to create a test, make sure that you have considered all intended and possible unintended consequences that may arise from your students placing into performance standards that do not accurately reflect their true knowledge and skills.

Questions for consideration

Reflect on a test that you have recently administered in your classroom.
  • Did you take the time to really consider the performance standard categories and what their impact on your student might be? 
  • How can you apply your new (or enhanced!) knowledge of performance standards to the next test you administer in your classroom? 
  • What types of things will you do to inform your instruction after calculating their scores?
References 
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association. 

Berk, R. A. (1980). Introduction. In R. A. Berk (Ed.), Criterion-Referenced Measurement: The State of the Art (pp. 3-9). Baltimore, MD: The Johns Hopkins University Press. 

Haertel, E. (1985). Construct validity and criterion-referenced testing. Review of Educational Research, 55(1), 23-46. 

Jorgensen, M. A., & McBee, M. (2003). The new NRT model. Retrieved from http://www.pearsonassessments.com/NR/rdonlyres/3D156C81-BA45-4C8B-96B3-8A177BEE0B34/0/NewNRTModel_Rev1_Final.pdf

Kane, M. T. (2001). So much remains the same: Conception and status of validation in setting standards. In G. J. Cizek (Ed.), Setting performance standards: Concepts, methods, and perspectives (pp. 53-88). Mahwah, NJ: Lawrence Erlbaum Associates.

Nunnally, J. C. (1964). Educational measurement and evaluation. New York, NY: McGraw-Hill Book Company. 

Torgerson, T. L. & Adams, G. S. (1954). Measurement and evaluation for the elementary-school teacher. New York, NY: The Dryden Press. 

Worthen, B. R., Borg, W. R., & White, K. R. (1993). Measurement and evaluation in the schools. New York, NY: Longman Publishing Group.

Thursday, August 22, 2013

How to Write a Smart Test

By Beth Richardson, RME High School Mathematics Coordinator

My career in education began as a high school math teacher. Throughout my teaching, I wrote countless math “questions” to check my students’ understanding, from daily bell-ringers to full-length tests. However, it wasn’t until I became immersed in the world of assessments that I learned some important components of a well-written test. First of all, the “questions” that make up a test are commonly referred to as items by researchers in the field of assessment, which is what I’ll call them from here on.

Test Math Knowledge in Different Ways

There are many different levels at which the brain engages with mathematical concepts. The book Adding It Up: Helping Children Learn Mathematics (2001) identifies five specific types of thinking that together determine a person's proficiency in math. Here's a brief explanation of each, along with examples of items assessing the same skill (slope) at the different proficiency levels:
  • Conceptual Understanding – comprehension of mathematical concepts, operations, and relations
  • Procedural Fluency – carrying out procedures flexibly, accurately, efficiently, and appropriately
  • Strategic Competence – ability to formulate, represent, and solve mathematical problems 
  • Adaptive Reasoning – capacity for logical thought, reflection, explanation, and justification
  • Productive Disposition – habitual inclination to see math as sensible, useful, and worthwhile, coupled with a belief in one’s own efficacy. 
[Sample items: Conceptual, Procedural, Strategic, Adaptive]

As teachers, we can only test the first four of these in a traditional test setting. However, productive disposition is something you can learn about each of your students as you interact with them daily.

Multiple-Choice Tests

Basic Multiple-choice Item Components:
  • Skill: Comes from TEKS, district curriculum, etc.
  • Mathematical Proficiency Level: Procedural, Conceptual, Strategic, or Adaptive
  • Stem (Text/Graphic): Make sure the text and graphics you use are purposeful and relevant to the underlying mathematical skill/concept being assessed
  • 4 Response Options: 1 correct response and 3 well-thought-out distractors - no throwaway distractors!
Write Plausible Distractors

For multiple-choice tests, the responses you provide are just as important as the question you ask.

Take the time to write distractors that are based on students’ common mistakes and misconceptions. To help ensure the distractors are plausible, write a rationale for each distractor. Also, avoid using give-away distractors that do not relate to the item. Here’s an example of a spreadsheet that can be used when writing a test. This spreadsheet can easily be copied and changed to create multiple forms of the same test. The specific details in the stem can be changed, but the same distractor rationales can be used. This will allow you to analyze the knowledge of all students even across different test forms. You can also use this spreadsheet for free response items (ex: items 11 and 12).
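One possible layout for such a spreadsheet is sketched below as CSV, built in Python. The item, options, and rationales are invented for illustration; real rows would come from your own items and distractor analysis.

```python
import csv
import io

# Hypothetical distractor-rationale rows: one row per response option,
# recording the misconception each distractor is meant to capture.
rows = [
    {"item": "1", "option": "A", "correct": "yes",
     "rationale": "key"},
    {"item": "1", "option": "B", "correct": "no",
     "rationale": "added the rates instead of multiplying"},
    {"item": "1", "option": "C", "correct": "no",
     "rationale": "reversed the ratio"},
    {"item": "1", "option": "D", "correct": "no",
     "rationale": "dropped the unit conversion"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["item", "option", "correct", "rationale"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Because the rationales are keyed to misconceptions rather than to specific numbers, the same rows can be reused across multiple forms of the same test, which is what makes cross-form analysis of student errors possible.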

Where to find the most common mistakes your students will make:

1) Your students. Daily: during class discussion or student activities, take note of how students explain and talk about concepts. Previous assessments: while grading homework, quizzes, and tests, take note of the most common errors your students make and the misconceptions they have about particular operations or topics.

2) Research-based resources: IES Practice Guides; Adding It Up; And many more…

Summing it All Up:
When writing any assessment, it is important to include items that test students’ conceptual understanding, procedural fluency, strategic competence, and adaptive reasoning skills because each of these components is equally important in their overall math proficiency. When writing items for multiple-choice tests, make sure to be purposeful in the response options you include.

National Research Council. (2001). Adding it up: Helping children learn mathematics. J. Kilpatrick, J. Swafford, and B. Findell (Eds.). Mathematics Learning Study Committee, Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.

Siegler, R., Carpenter, T., Fennell, F., Geary, D., Lewis, J., Okamoto, Y., Thompson, L., Wray, J. (2010). Developing effective fractions instruction for kindergarten through 8th grade (NCEE 2010-4039). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/wwc/practiceguide.aspx?sid=15

Friday, June 28, 2013

Math Journal Writing and Blogs as an Assessment Tool

By Dawn Woods, RME Elementary Mathematics Coordinator 

Assessment is more than a test at the end of a unit to gauge learning; it is an integral component of classroom instruction (National Council of Teachers of Mathematics, 2013). Not only does assessment provide the information needed to adjust teaching and learning as they are happening, but it becomes a fundamental part of mathematics instruction, done for students instead of to students. However, it is important to consider that some students can produce correct answers on a test item but may not understand the solution or the question behind it (Garfield, 1994). Therefore, classroom assessments should follow multiple approaches, allowing students to showcase their strengths while focusing on how students think about mathematics.

According to Marilyn Burns (2005), mathematics teachers gain a wealth of information by investigating the thinking behind students' answers, not just when they are wrong but also when they are correct. One way to investigate that thinking is through journal writing. Keeping a journal helps students think about problem solving in a meaningful context while giving teachers insight into their learning (Barlow & Drake, 2008). Open-ended journal writing enables students to analyze, synthesize, and evaluate the lesson, thereby growing in mathematical achievement! Through writing prompts or open-ended questions, math journals can
  • Stretch students' thinking. 
  • Help students make sense of problems. 
  • Express feelings and thoughts about mathematics. 
  • Reveal conceptual understanding. 
  • Personalize learning. 
  • Evaluate progress and recognize strengths.
But why not put a technology twist on the traditional written math journal, since technology plays such an important role in the everyday lives of our digital-native students? According to the National Council of Teachers of Mathematics (2008), technology is vital in teaching and learning mathematics. Through the use of technology, mathematics instruction and assessment strategies can be transformed to seize the attention of today's wired students. Through blogging, students can
  • Post solutions to problems. 
  • Post mathematical insights. 
  • Post questions. 
  • Interact with peers and teachers through online discussions. 
  • Embellish online entries with sounds, video, and graphics, creating potential for project-based learning. 
  • Nurture higher-order thinking skills.
To sum up, journal writing and math blogs provide an authentic, motivating assessment tool in which students communicate and explain mathematical concepts through writing, taking ownership of mathematical ideas in a creative and meaningful way.

Resources to Get You Started

Writing in Mathematics

Drake, J.M., & Barlow, A.T. (2008). Assessing students' levels of understanding multiplication through problem writing. Teaching Children Mathematics, 14(5), 272-277.

Kawas, T. (2010). Writing in Mathematics. Retrieved June 24, 2013 from http://mathwire.com/writing/writing1.html

Blogging in Mathematics

Tubbs, J. (2007). Blogs in the mathematics classroom. Retrieved June 24, 2013 from http://futureofmath.misterteacher.com/blogs2.html

Pyon, S.M. (2008). Why math blogs. Teaching Children Mathematics, 14(6), 331-335.

References

Barlow, A.T., & Drake, J.M. (2008). Assessing understanding through problem writing. Mathematics Teaching in the Middle School, 13(6), 326-332.

Burns, M. (2005). Looking at how students reason. Educational Leadership, 63(3), 26-31.

Garfield J. (1994). Beyond testing and grading: Using assessment to improve student learning. Journal of Statistics Education, 2(1). Retrieved June 24, 2013 from http://www.amstat.org/publications/jse/v2n1/garfield.html.

National Council of Teachers of Mathematics (2013). The assessment principle. Retrieved June 24, 2013 from http://www.nctm.org/standards/content.aspx?id=26803

National Council of Teachers of Mathematics (2008). The role of technology in the teaching and learning of mathematics. Retrieved June 24, 2013 from http://www.nctm.org/about/content.aspx?id=14233.

Thursday, June 13, 2013

Writing to Learn Mathematics

By Yetunde Zannou, RME Postdoctoral Research Fellow

As a mathematics teacher, were you ever blind-sided by students’ performance on an assessment when they seemed so knowledgeable in class? I most certainly was! I decided to do an action research project in several of my classes to better understand that gap and how to bridge it.

My initial investigation led me to look more closely at my assessment techniques and what other teachers were doing to “know what their students know.” I decided to deliberately, and regularly, incorporate writing into my mathematics instruction as a tool to help me support students during the learning process, not near the end of a unit. Writing ultimately helped me to help my students learn.

Some Benefits of Writing to Learn Mathematics

Writing to learn mathematics benefits students and teachers. For students, the process of writing can help them think through and explain their reasoning, which may not always happen when the goal is to find a solution. Students learn over time how to be good note-takers, but often struggle to apply their notes meaningfully. Writing about their thinking forces this process to happen. For instance, instead of asking students to find a solution, ask them to evaluate whether a solution is correct or not and to justify their reasoning. Students have to examine a solution carefully, identify whether or not there are any errors, and explain their position. This type of assignment requires students to access their knowledge in a new way, which facilitates real ownership of new knowledge, not just regurgitation.

As a teacher, my students’ writing made clear where they struggled with a concept or procedure. Particularly as someone who understands mathematics and enjoys it, it was difficult to see some of the little things that may confuse students. However, their writing made those visible to me. As a result, I could address common misconceptions in future classes, provide additional specific support for individual students, and strategically reorder problems in consideration of possible challenges.

Practical Application

There is a wealth of information available on using writing to learn mathematics. In the upper grades, students may be unaccustomed to writing about their mathematical knowledge; however, with sufficient guidance, clear examples, and regular opportunities to practice, they can do well. A good initial writing activity is a mathematics biography. I’ve seen this done primarily with numbers or as a history of students’ experiences in mathematics. I used it as a history and gained valuable insight into my students’ feelings about their mathematical ability. It may be helpful to write your own and share! Writing can be used at almost any stage of a lesson or unit. I’ve used writing to get students thinking about previous lessons and connections between concepts at the beginning of class. I’ve also used writing as an on-the-spot assessment, as well as on summative assessments. Here are my top five assessment prompts:
  1. Evaluate a problem and its solution for accuracy, and justify your response.
  2. Create a problem to certain specifications (solvable or not), include a correct response, and identify possible errors a friend might make.
  3. Write a note to a friend who was absent, describing what you learned today. Include an example problem, its solution, and detailed steps to solve it.
  4. Describe how you would solve a problem and your reasoning, without including the answer. Great for scale problems.
  5. Describe what would happen if… A good prompt for geometry, proportions, rate, and many others.
At the end of a lesson sometimes I would ask students to describe: one thing you know well, one thing you sort of get, and one thing you don’t get at all. I’d use these to revamp the next day’s lesson.

Making it Work

For writing to learn mathematics to work, you and your students have to “buy in.” Students have to understand the value of writing as a tool to help them “know what they know,” as well as a way to assist you in determining how to proceed. You have to be committed to learning from students in this way. Incorporating writing takes class time and time to evaluate assignments. Here are some suggestions to make it work:
  • Plan writing assessments in advance. Students will take writing exercises seriously when they see that you do. Avoid assigning writing merely to "pass time" or "on the fly." Even if you decide an on-the-spot writing assessment is needed, you should already have an idea of a prompt to use or a question that could work as a written assignment.
  • Make sure prompts are clear and direct. Writing alone does not guarantee these benefits. Make sure instructions clearly communicate what you want so that students know how to respond and their writing serves as an effective assessment tool.
  • Decide in advance how and when you will evaluate it. Planning to write should be accompanied by a plan for evaluation and action. The likelihood of reviewing a written assignment in a timely manner increases when you know what you're looking for. If you want to know common errors across classes, sampling entries is better than reading each student's. I've given written assignments in lieu of traditional homework problems at times and discussed them in class the next day before moving on.
  • Provide feedback, individually or to the class. Even if you don’t provide each student with a detailed response, students should be aware that you are reading their answers. This will help them to see the value of their writing efforts and the immediate impact it has on your instruction.
What are some other advantages of using writing to learn and assess in mathematics? What might be challenges to incorporating it into your instruction? What supports are available to make it work?

Friday, April 26, 2013

Focus on Research: A Discussion on Learning Progressions for Instruction and Assessment

By Dr. Deni Basaraba, RME Assessment Coordinator

The need for differentiated instruction to meet the needs of all learners is one source of evidence that students' learning is not linear and that not all students follow the same pathway to mastering content. Learning progressions describe the successively more sophisticated ways students think about an idea as they learn, providing a description, in words and examples, of what it means to move over time toward a more "expert" understanding of a given topic or content area (Duschl, Schweingruber, & Shouse, 2007).

In addition to including descriptions of students’ understanding as they move from novice to expert understanding, learning progressions also often include descriptions of common misconceptions students may have about the content of interest that may hinder or impede their understanding; these misconceptions can then provide the focus for targeted instruction (Alonzo & Gearhart, 2006).

Because learning new content is neither linear nor the same for every student, this complexity is best represented graphically as a map or network of connections and interactions rather than a linear path. Such a map allows for the fact that there is no single "best" pathway: some students may take a different path than others to attain proficiency with the same content. A map of a sample learning progression shows not only the development of students' thinking as they move through the progression (i.e., the increasing sophistication of their skills and understanding) but also the interaction and integration of knowledge.

In addition to relatedness among constructs in the learning progression, there are also connections between the knowledge and skills at one level and the next. For example, if the target skill at one level of a learning progression is the ability to recall multiple strategies for single-digit addition (e.g., making tens, doubles), the prerequisite skill might be a count-on strategy, whereby students count on from an initial term (e.g., 5) to make a larger number (e.g., 5, 6, 7, 8). Finally, the most foundational skill in this hypothesized learning progression might be the ability to count all, that is, to start counting at 1 all the way to the desired sum (e.g., when asked what 5 + 3 equals, the student counts from one: 1, 2, 3, 4, 5, 6, 7, 8).
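The three levels of this hypothesized progression can be sketched in code. This is purely an illustration of the strategies named above (count all, count on, and recall), not an implementation of any actual assessment; the function names and the tiny "fact bank" are invented for the example.

```python
def count_all(a, b):
    """Foundational level: count every object, starting from 1, up to the sum."""
    total = 0
    for _ in range(a):   # count the first addend: 1, 2, ..., a
        total += 1
    for _ in range(b):   # keep counting through the second addend
        total += 1
    return total

def count_on(a, b):
    """Prerequisite level: start from the first addend and count on."""
    total = a            # start at 5 ...
    for _ in range(b):   # ... then count on: 6, 7, 8
        total += 1
    return total

# Target level: recall strategies such as "making tens" or "doubles".
KNOWN_FACTS = {(5, 5): 10, (5, 3): 8}  # a tiny, hypothetical fact bank

def recall(a, b):
    """Target level: retrieve the sum from known facts when possible."""
    # Fall back to the earlier count-on strategy for unknown facts.
    return KNOWN_FACTS.get((a, b), count_on(a, b))
```

Each function computes the same sum; what changes from level to level is the efficiency of the strategy, mirroring how the progression describes increasingly sophisticated thinking about the same content.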


How can learning progressions inform instruction and assessment?
Learning progressions can be a critical cog in the machinery of instruction and assessment. If, for example, we know that learning progressions provide ordered descriptions of students' understanding, we can use that information to identify the "landmarks," the essential knowledge and skills students need to learn as part of the mathematics content, which can inform instructional planning (e.g., what content to teach and when to teach it).

In addition, because learning progressions often describe the target knowledge and skills as well as the common misconceptions or errors in thinking that we hypothesize may be interfering with students' acquisition of a particular skill or mastery of specific content, they can provide valuable insight into how students think about that content. Together, these pieces of information can help determine an appropriate sequence for instruction (e.g., focusing first on foundational, prerequisite skills and gradually increasing complexity) and inform classroom-based assessment items that focus on the knowledge and skills taught during instruction.

Alonzo, A. C., & Gearhart, M. (2006). Considering learning progressions from a classroom assessment perspective. Measurement: Interdisciplinary Research & Practice, 14(1-2), 99-104.

Duschl, R. A., Schweingruber, H. A., & Shouse, A. W. (Eds.) (2007). Taking science to school: Learning and teaching science in grades K-8. Washington, DC: National Academies Press.

Monday, April 15, 2013

RTI in a Middle School Mathematics Classroom

By Lindsey Perry, RME Research Assistant

Are you looking for tools and resources to help you reach all students, including those who are struggling in mathematics? Are you seeking out professional development to help you grow in your teaching? The Middle-school Students in Texas: Algebra Ready (MSTAR) initiative can help you learn instructional strategies to assist students struggling with mathematics, assess student understanding, and meet the needs of all learners.

The MSTAR initiative, funded by the Texas Legislature and developed by the Texas Education Agency, is a comprehensive project that provides teachers and administrators with assessments, professional development, and intervention lessons to improve grades 5–8 mathematics achievement in Texas and to sustain the implementation of Response to Intervention (RTI).

An important step in the RTI process is assessing student understanding. To do just that, the MSTAR initiative provides teachers with screening and diagnostic instruments, the MSTAR Universal Screener and the MSTAR Diagnostic. The MSTAR Universal Screener assists teachers in determining if a student is at-risk or on-track for meeting grade level algebra-readiness expectations and the level of support the student may need in order to be successful. The MSTAR Universal Screener is administered three times per year in order to monitor student progress and is administered online at mstar.epsilen.com. The spring administration window is April 8 – May 10, 2013. To find out more, visit http://www.txar.org/assessment/mstar_screener.htm or email universalscreener@region10.org.

The MSTAR Diagnostic Assessment is currently in development. The MSTAR Diagnostic should be administered to students who have been identified by the MSTAR Universal Screener as at-risk for meeting algebra-readiness expectations. This instrument provides teachers with information about why students are struggling and the misconceptions students may have. We are currently seeking a small set of classrooms to participate in the MSTAR Diagnostic beta test. These classes must have already taken the MSTAR Universal Screener at least once this year. Although this is a beta test, teachers will receive data on how their students performed. If you are interested, please email us at rme@smu.edu.

The MSTAR Initiative also includes numerous online and face-to-face professional development opportunities. Trainings are available that focus on providing all students with quality Tier I instruction (MSTAR Academy I), strategies for Tier II instruction (Academy II), and data-driven decision making (Implementation Tools). Trainings on topics such as addressing the needs of English language learners, addressing the College and Career Readiness Standards, and teaching fraction/decimal relationships are also available, among many others. Many of the trainings are now available online at www.projectsharetexas.org. For more information, contact your Education Service Center or search the Project Share course catalog at http://projectsharetexas.org/about.

The MSTAR Initiative can help you improve your teaching and help you better understand your students’ needs and how to meet those needs. We encourage you to check out the MSTAR assessments and professional development offerings!

For detailed information about the initiative and the Response to Intervention framework, we invite you to click the link for a copy of “Supporting Students’ Algebra Readiness: A Response to Intervention Approach” in Texas Mathematics Teacher.

Wednesday, March 6, 2013

Screener vs. Diagnostic

By Savannah Hill, RME Professional Development Coordinator

One project we are involved with at RME is an initiative with the Texas Education Agency and Education Service Center, Region 13, called Middle School Students in Texas Algebra Ready (MSTAR). It began in the summer of 2010 with the goals of (1) improving overall mathematics instruction and (2) impacting student achievement. MSTAR comprises three components, structured and integrated to support students and teachers in grades 5 through 8 in achieving mathematics success: the MSTAR Universal Screener, the MSTAR Diagnostic Assessment, and MSTAR Professional Development.

After talking with many teachers, we have found there is some confusion about the different ways to use the MSTAR Universal Screener and the MSTAR Diagnostic Assessment. The intent of this post is to provide a short description of each component and how it should be implemented.

MSTAR Universal Screener 
The MSTAR Universal Screener is designed to be administered to all students and identifies students' level of risk for not being ready for algebra. The Universal Screener helps teachers make two important decisions within the Response to Intervention (RTI) process:
  • Identify students on-track or at-risk for meeting expectations in algebra and algebra-readiness.
  • Determine the degree of intensity of instructional support or supplemental intervention needed for students who are at-risk for not meeting expectations in algebra.
Teachers monitor students' risk status by administering comparable forms of the MSTAR Universal Screener in fall, winter, and early spring.

MSTAR Diagnostic Assessment
The MSTAR Diagnostic Assessment is designed for students identified as struggling (Tiers 2 and 3) and is administered after the MSTAR Universal Screener. Its purpose is to:
  • Inform educators where a student is on a learning progression.
  • Identify the underlying misconception(s) that caused the student to answer incorrectly.
  • Identify students' current understanding of algebra-related content.
None of the diagnostic assessments is tied to a particular grade level, because a 7th-grade student may be struggling with misconceptions about 5th-grade content. However, when the teacher decides which assessment a student will take, there will be some direction about which assessment may be most appropriate for each grade. The reports provide information that can be used to plan supplemental instruction. This assessment is not intended to provide screening information.
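The division of labor between the two instruments can be sketched as a simple decision flow. This is only an illustration of the routing described above; the function name and the label strings are invented for the example and are not part of any actual MSTAR scoring rule.

```python
def route_student(screener_result):
    """Map a Universal Screener result to the next step in the RTI process.

    screener_result: "on-track" or "at-risk" (illustrative labels only).
    """
    if screener_result == "on-track":
        # On-track students continue with quality Tier 1 core instruction.
        return "continue Tier 1 core instruction"
    # At-risk students (Tiers 2 and 3) take the diagnostic, which pinpoints
    # misconceptions so that supplemental instruction can be planned.
    return "administer MSTAR Diagnostic Assessment"
```

The key design point the sketch captures is that the screener answers "who needs help and how much," while the diagnostic answers "why," so only students flagged by the screener ever reach the diagnostic.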

MSTAR Professional Development
The MSTAR Professional Development provides tools for delivering instruction that helps all students achieve algebra readiness and supports informed decision-making based on the results of the MSTAR assessments. The MSTAR Professional Development academies were created to support teachers in preparing students for success in algebra. Trainings are available face-to-face and/or online. RME researchers, along with TEA, delivered professional development in three training sessions for the MSTAR project in spring and summer of 2011 and 2012; the trainings were then replicated across the state by certified trainers.

The MSTAR Universal Screener can be accessed through the Project Share Gateway at www.projectsharetexas.org. It can also be accessed directly at http://mstar.epsilen.com. This option will allow you to bypass the Project Share site entirely. Users will see an MSTAR icon after logging in. The same username and password is used for either option. For more information, you can also contact your local Educational Service Center.