The Data Says…What? (Or: Why we struggle to make sense of literacy assessment results)

Photo by Art Lasovsky on Unsplash

Our instructional leadership team recently analyzed the last two years of writing assessment data. We use a commercially developed rubric to score student writing in the fall and in the spring. As I presented the team with the results, now visualized in graphs and tables, we tried to make sense of the information.

It didn’t go as well as planned.

To start, we weren’t evaluating every student’s writing; for the sake of time and efficiency, we scored only half of the high, medium, and low pieces. This was a schoolwide evaluation, but it did not give teachers specific information to use. Also, the rubric changes as students get older: expectations increase even though the main tenets of writing quality stay the same, so it is hard to compare apples to apples. In addition, the subjective nature of writing, especially in a reader’s response, can cause frustration.

In the end, we decided to select one area of growth we would focus on as a faculty this year, while maintaining the gains already made.

Anytime I wade into the weeds of literacy assessment, I feel like I come out messier than when I entered. I often have more questions than answers. Problems go unresolved. Yet there have to be ways to evaluate our instructional impact on student literacy learning. It’s important that we validate our work and, more importantly, ensure students are growing as readers and writers.

One assessment, tried and true, is the running record for reading comprehension. It is now available as a standardized assessment through products such as Fountas & Pinnell. It is time-intensive, however, and even the best teachers struggle to carve out instructional time and manage the rest of the class while administering these assessments. Running records are the mainstay assessment tool for Reading Recovery teachers, who work one-on-one with 1st grade students.

Another method for evaluating student literacy skills at the classroom level is observation. This is not as formal as a running record. Teachers can witness a student’s interactions with a text: Do they frustrate easily? How well do they apply their knowledge of text features to a new book? The information is almost exclusively qualitative, which makes the results harder to analyze.

One tool for evaluating students as readers and writers that doesn’t get enough attention (in my opinion, anyway) is the student survey. How students feel about literacy and how they see themselves as readers and writers is very telling. The challenge here is that there are a lot of survey tools but not a lot of validity or reliability behind most of them. One tool, Me and My Reading Profile, is an example of a survey that is evidence-based.

To summarize, I don’t have an answer here so much as a challenge I think a lot of schools face: how do we really measure literacy success in an educational world that needs to quantify everything? Please share your ideas and experiences in the comments.

Author: Matt Renwick

Matt Renwick is an 18-year public educator who began as a 5th and 6th-grade teacher in Rudolph, WI. He now serves as an elementary principal for the Mineral Point Unified School District (http://mineralpointschools.org/). Matt also teaches online graduate courses in curriculum design and instructional leadership for the University of Wisconsin-Superior. He tweets @ReadByExample and writes for ASCD (www.ascd.org) and Lead Literacy (www.leadliteracy.com).

5 thoughts on “The Data Says…What? (Or: Why we struggle to make sense of literacy assessment results)”

  1. Matt, provocative and important post! So glad you mentioned including student surveys as part of data gathering. For great ideas and perspective on writing data, especially writing rubrics, I highly recommend Reimagining Writing Assessment: From Scales to Stories by Maja Wilson (Heinemann, 2018). I have just reread it; it presents assessment in a humane and student-centered manner that focuses on assessment-for-growth and values students’ intentions and decision making in the assessment process. The author also demonstrates how and why relying on writing traits cannot give us an accurate picture of the student-as-writer.

