Data-Driven Decision Making: Who’s the decider?

After I shared my previous post, describing my confusion about making sense of certain types of data, the International Literacy Association (ILA) replied with a link to a recent report on the topic:

It’s a short whitepaper/brief titled “Beyond the Numbers: Using Data for Instructional Decision Making”. The principal authors, Vicki Park and Amanda Datnow, make a not-so-provocative claim that may still cause consternation in education:

Rather than data driving the decision-making, student learning goals should drive what data are collected and how they are used.

The reason this philosophy might cause unrest among educators is that data-driven decision making is still a mainstay in schools. Response to Intervention depends on quantitative progress monitoring. School leaders too often discount the anecdotal notes and other qualitative information teachers collect. Sometimes the term “data-informed” replaces “data-driven”, but the approach largely remains aligned with the latter terminology and practice.

Our school is like many others. We get together three times a year, usually after screeners are administered. We create spreadsheets and make informed decisions on behalf of our students. Yet neither students nor their parents are involved in the process. Can we truly be informed if we are not also including the kids themselves in some way?

To be fair to ourselves and to other schools, making decisions about which students need more support, or about how teachers will adjust their instruction, is relatively new to education. As well, our assessments are not as clean as, say, a blood test you might take at the doctor’s office. Data-driven decision making is hard enough for professional educators. There are concerns that bringing in students and their families might only add to the confusion, through no one’s fault.

And yet there are teachers out there who are doing just this: positioning students as the lead assessors and decision-makers in their educational journey. For example, Samantha Mosher, a secondary special education teacher, guides her students in developing their own IEP goals and in using various tools to monitor their own progress. The ownership of the work rests largely on the students’ shoulders. Samantha provides the modeling, support, and supervision to ensure each student’s goals and plan are appropriate.

One outcome of releasing the responsibility for making data-informed decisions to students is that Samantha has become more of a learner herself. As she notes in her blog post:

I was surprised that many students didn’t understand why they got specific accommodations. I expected to have to explain what was possible, but didn’t realize I would have to explain what their accommodations meant.

“Yes, but older students are able to set their own goals and monitor their own progress. My kids are not mature enough yet to manage that responsibility.” I hear you, and I am going to disagree. I can say that because I have seen younger students do this work firsthand. It’s not a completely independent process, but the data-informed decision making is at least co-led by the students.

In my first book on digital portfolios, I profiled the speech and language teacher at my last school, Genesis Cratsenberg. She used Evernote to capture her students reading aloud weekly progress notes to their parents, then emailed the text of their reflections along with the audio home. Parents and students could hear firsthand the growth students were making over time, in the authentic context of a personalized student newsletter. It probably won’t surprise you that once Genesis started this practice, students on her caseload exited her program at a faster rate. (To read an excerpt from my book describing Genesis’s work, click here.)

I hope this post comes across as food for thought and not finger-wagging. I also don’t believe we should abandon our current approaches to data analysis; our hands are sometimes tied when it comes to state and federal rules regarding RtI and special education qualification. At the same time, we are free to expand our understanding and our beliefs about what counts as data and who should be at the table when making these types of decisions.

The Data Says…What? (Or: Why we struggle to make sense of literacy assessment results)

Photo by Art Lasovsky on Unsplash

Our instructional leadership team recently analyzed the last two years of writing assessment data. We use a commercial rubric to score student writing in the fall and in the spring. As I presented the team with the results, now visualized in graphs and tables, we tried to make sense of the information.

It didn’t go as well as planned.

To start, we weren’t evaluating every student’s writing; for the sake of time and efficiency, we scored only half of the high, medium, and low pieces. This gave us a schoolwide evaluation but did not give teachers specific information to use. Also, the rubric changes as students get older: expectations increase even though the main tenets of writing quality stay the same, so it is hard to compare apples to apples. In addition, the subjective nature of writing, especially in a reader’s response, can cause frustration.
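On the apples-to-apples problem specifically, one common workaround is to scale each grade’s mean score against that grade’s rubric maximum before comparing fall to spring. Here is a minimal sketch with entirely made-up numbers (not something our team actually did):

```python
# Sketch: make rubric scores comparable across grades by expressing each
# grade's mean score as a fraction of that grade's rubric maximum.
# All numbers below are invented for illustration.
import pandas as pd

scores = pd.DataFrame({
    "grade":      [3, 3, 4, 4, 5, 5],
    "season":     ["fall", "spring"] * 3,
    "mean_score": [14.1, 16.8, 18.2, 21.5, 22.0, 25.3],
})

# Each grade's rubric tops out at a different total, so raw means mislead.
rubric_max = {3: 24, 4: 28, 5: 32}
scores["pct_of_max"] = scores["mean_score"] / scores["grade"].map(rubric_max)

# Fall vs. spring, now in roughly comparable units.
print(scores.pivot(index="grade", columns="season", values="pct_of_max").round(2))
```

Even then, a rising percentage only means students are doing better against their own grade’s expectations, which is part of why these conversations stay messy.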

In the end, we decided to select one area of growth we would focus on as a faculty this year, while maintaining the gains already made.

Anytime I wade into the weeds of literacy assessment, I feel like I come out messier than when I entered. I often have more questions than answers. Problems go unresolved. Yet there have to be ways to evaluate our instructional impact on student literacy learning. It’s important that we validate our work and, more importantly, ensure students are growing as readers and writers.

One tried-and-true assessment is the running record for reading comprehension, now standardized through products such as Fountas & Pinnell. It is time-intensive, however, and even the best teachers struggle to find the instructional time and to manage the rest of the class while administering these assessments. Running records are the mainstay assessment tool for Reading Recovery teachers, who work one-on-one with 1st grade students.

Another method for evaluating student literacy skills at the classroom level is observation. This is not as formal as a running record. Teachers can witness a student’s interactions with a text: Do they frustrate easily? How well are they applying their knowledge of text features to a new book? The information is almost exclusively qualitative, which makes the results challenging to analyze.

One tool for evaluating students as readers and writers that doesn’t get enough attention (in my opinion, anyway) is the student survey. How students feel about literacy and how they see themselves as readers and writers is very telling. The challenge here is that there are a lot of tools but not a lot of validity or reliability evidence behind most of them. One tool, Me and My Reading Profile, is an example of a survey that is evidence-based.

To summarize, I don’t have an answer here so much as a challenge I think a lot of schools face: how do we really measure literacy success in an educational world that needs to quantify everything? Please share your ideas and experiences in the comments.

How we stopped using Accelerated Reader

This post describes how our school stopped using Accelerated Reader. This was not something planned; it seemed to happen naturally through our change process, like an animal shedding its skin. The purpose of this post is not to decry Accelerated Reader, although I do know this reading assessment/incentive program is not viewed favorably in some education circles. We ceased using a few other technologies as well, each for different reasons. The following timeline provides a basic outline of our process that led to this outcome.

  1. We developed collective commitments.

The idea of collective commitments comes from the Professional Learning Community literature, specifically Learning by Doing, 3rd edition. Collective commitments are similar to the norms you might find on a team; the difference is that collective commitments are focused on student learning. We commit to certain statements about our work on behalf of kids. They serve as concrete guidelines, drawn from our school’s mission and vision as well as from current thinking we find effective for education.

We started by reading one of four articles relevant to our work; the staff could choose which one to read. After discussing the contents of the articles in small groups and then as a whole group, we started crafting the statements. This was a smaller team of self-selected faculty, and staff who did not participate knew they might have to live with the outcomes of this work. Through lots of conversation and wordsmithing, we landed on seven statements that we all felt were important to our future work.


At the next staff meeting, we shared these commitments, answered questions about their meaning and intent, and then held an anonymous vote via Google Forms. We weren’t looking for unanimity but consensus. In other words, what is the will of the group? Although a few faculty members could not find a statement or two agreeable, the vast majority of teachers were on board. I shared the results while explaining that these statements were what we would all commit to, regardless of how we might feel about them.

  2. We identified a schoolwide literacy focus.

Using multiple assessments in the fall (STAR, Fountas & Pinnell), we found that our students needed more support in reading, specifically fluency. This meant that students needed to be reading and writing a lot more than they were, and to do so independently. Our instructional leadership team, a decision-making body whose members were selected through in-house interviews, started making plans to provide professional development for all faculty around the reading-writing connection. (For more information on instructional leadership teams and the reading-writing connection, see Regie Routman’s book Read, Write, Lead.)

  3. We investigated the effectiveness of our current programming.

Now that we had collective commitments along with a focus on literacy, I think our lens changed a bit. Maybe I can only speak for myself, but we started to take a more critical look at our current work. What was working and what wasn’t?

Around that time, I discovered a summary report from the What Works Clearinghouse, part of the Institute of Education Sciences within the U.S. Department of Education. The report reviewed the various studies on Accelerated Reader; using only the research that met its criteria for reliability and validity, it found mixed to low results for schools that used the program.

I shared this summary report with our leadership team. We had a thoughtful conversation about the information, looking at both the pros and cons of this technology tool. However, we didn’t make any decision to stop using it as a school. I also shared the report with Renaissance Learning, the maker of Accelerated Reader. As you might imagine, they took a more slanted view of the findings, in spite of the rigorous approach behind the evaluation of their product.

While we didn’t make a decision at that time based on the research, I think the fact that this report was shared with the faculty and discussed planted the seed for future conversations about the use of this product in our classrooms.

  4. We examined our beliefs about literacy.

The professional development program we selected to address our literacy needs, Regie Routman in Residence: The Reading-Writing Connection, asks educators to examine their beliefs regarding reading and writing instruction. Unlike with our collective commitments, we all had to agree on a literacy statement before we could own it and expect everyone to apply that practice in classrooms. We agreed upon three.

Beliefs Poster

This happened toward the end of the school year. It was a nice celebration of our initial efforts in improving literacy instruction. We will examine these beliefs again at the end of this school year, with the hope of agreeing upon a few more after completing this PD program. These beliefs served to align our collective philosophy about what our students truly need to become successful readers and writers. Momentum for change was on our side, which didn’t bode well for outdated practices.

  5. We started budgeting for next year.

It came as a surprise, at least to me, that money would be a primary factor in deciding not to continue using Accelerated Reader in our school.

With a finite budget and a seemingly infinite number of teacher resources we could utilize in the classroom, I started investigating the use of the different technologies currently in the building. Breaking Accelerated Reader usage down by class, I found that only a small minority of teachers were actually using the product, and that we were paying around $20 per student per year. (At that rate, a hypothetical enrollment of 300 students works out to roughly $6,000 a year.)

Given our limited school budget, I asked both the teachers on our leadership team and the teachers who used it whether they felt this was worth the cost. No one thought it was. (To be clear, the teachers who were using Accelerated Reader are good teachers. Just because they had their students taking AR quizzes does not mean they were ineffective; quite the opposite. I think it is worth pointing this out, as I have seen some shaming of teachers who use AR as a way to persuade them to stop using the tool. It’s not effective.)

With this information, we as a leadership team decided to end our subscription to Accelerated Reader. We made this decision within the context of our collective commitments and our literacy beliefs.

Next Steps

This story does not end with our school ceasing to use Accelerated Reader. For example, we realize we now have an assessment gap for our students and their independent reading. Lately, we have been talking about digital tools such as Kidblog and Biblionasium as platforms for students to write book reviews and share their reading lives with others. We have also discussed ways for teachers to assess their readers more authentically, such as through conferring.

While there is a feeling of discomfort right now, I feel a sense of possibility that maybe wasn’t there when Accelerated Reader was present in our building. As Peter Johnston notes in his book Opening Minds, “Uncertainty is the foundation for inquiry and research.” I look forward to where this new turn in instruction might lead us.

 

Better Data Days Are Ahead

We’ve all been there: we collect data and make beautiful color-coded spreadsheets detailing nearly every data point we could possibly collect on each child. We compare district data to state data, and nationally norm-referenced data to in-class assessments. We highlight each student’s projected growth needed to make adequate progress. We look at whole-class data and determine standards to re-teach. We attend collaboration and intervention meetings to discuss students who are receiving services and the progress being made. We create, update, and review a school data wall. We can name multiple data points on each student in our classes at the snap of a finger.

Face it, we are inundated with data. But are we always really looking at the data for all children and determining next steps?

Chapter 6, “Supporting Curriculum and Assessment”, made me pause and think about how important it is to take that next step with data. Jen dives deep in this chapter with some really important details to consider as literacy leaders in a building. Not only should we track student achievement for ALL learners, we should also carve out time periodically to review the data and determine next steps. Some prompting questions Jen outlines are as follows:

  • What are the strengths and needs of each student?
  • What students are you concerned about?
  • What students have made the most growth?
  • What observations can you make about your overall literacy data?

Jen suggests holding these literacy team meetings each fall, winter, and spring to ensure that no student falls through the cracks. Each person has a crucial role in the process: the teacher reflects on each student, the principal reviews the student’s cumulative folder, the assistant principal listens and takes notes for student placement, and the literacy leader takes notes on students who are still at risk of failure.
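To make that review concrete, here is a minimal sketch of the mechanical part, assuming the screener scores have been exported to a simple CSV. The file name, column names, and benchmark below are all hypothetical, not from Jen’s book:

```python
# Sketch: surface the students behind two of the prompting questions,
# given a hypothetical CSV with columns: student, fall, winter, spring.
import pandas as pd

df = pd.read_csv("screener_scores.csv")
BENCHMARK = 40  # hypothetical grade-level benchmark score

df["growth"] = df["spring"] - df["fall"]
df["below_benchmark"] = df["spring"] < BENCHMARK

# What students are we concerned about?
print(df[df["below_benchmark"]].sort_values("spring")[["student", "spring"]])

# What students have made the most growth?
print(df.nlargest(5, "growth")[["student", "growth"]])
```

A script like this only surfaces names and numbers; the conversation about each student’s next steps is the real work.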

As a result of reading this chapter, I have had some really great discussions with teachers and my administration about how we can create a better culture of data REVIEW. I am excited that our staff is ready to take the next steps in data review and to move beyond the idea of just being great collectors of data.

This is going to be a great year. Teachers are asking for the next step in our data process and are ready to take it on, make it our own, and make it meaningful. I am confident that as a result, our teachers will feel a better sense of direction and purpose. And once again, the work that goes on behind the scenes will play out in better classroom instruction and stronger relationships with our students and families, and will result in increased student achievement.

Coaching Work: Curriculum & Assessment by @danamurphy68 #litleaders

In Chapter Six of Becoming a Literacy Leader, Jennifer Allen outlines the various ways she supports teachers with curriculum and assessment in her role as an instructional coach. As anyone in the field of education knows, curriculum and assessment are the backbone of the school system: curriculum drives our teaching, and assessment helps us fine-tune it. I’d go so far as to say supporting curriculum and assessment is one of my top three duties as an instructional coach.

Allen dedicates pages 114–116 to explaining how she helps prepare assessment materials during each assessment cycle. I nodded to myself as I read, remembering how I spent an entire morning last year in the windowless copy room making copies of our running record forms for the staff. It certainly wasn’t inspiring work, but I agree with Jennifer that preparing assessment materials is important. When teachers are freed from the tedious jobs of copying, creating spreadsheets, and organizing assessment materials, they can concentrate on the hard work of administering and analyzing assessments. If I can do the busywork and provide that time and space for teachers to think deeply about their assessments, they can do the work that really matters, and I don’t mind spending a morning in a windowless copy room.


While reading Chapter Six, I thought about how I support curriculum and assessment in my school district. I do many of the things Allen wrote about, but what seems most important to me is helping teachers look at student work as formative assessment. On page 110, Allen wrote:

Students should be at the heart of our conversations around curriculum and assessment, and it’s important that we don’t let them define who students are or might become. 

This quote summarizes my driving belief as an instructional coach. It is easy to fall into the trap of believing we (instructional coaches) exist to support the teachers, but the truth is we are ultimately there for the students. In order to keep students at the heart of my work as a coach, I work hard to have student work present during any coaching conversation. This holds true at the end of an assessment cycle as well. It benefits everyone to slow down and take the time to review the assessments (not the scores, the actual assessments). Teachers bring their completed writing prompts or math unit exams or running records, and we use a protocol to talk about the work. An abundance of protocols is available from NSRF. I also highly recommend the Notice and Note protocol from The Practice of Authentic PLCs by Daniel R. Venables; it is my go-to protocol for looking at student work with a group of teachers.

Teachers are in the classroom, doing the hard work of implementing curriculum and administering assessments. Our job as literacy leaders is to support them by giving them the time and space to reflect on their hard work.

Do No Harm

When used casually, AR helps students’ reading abilities grow. When used thoughtfully and with proven techniques, it leads to tremendous gains and a lifelong love of reading. – Getting Results with Accelerated Reader, Renaissance Learning

I am currently reading aloud Millions by Frank Cottrell Boyce to my 10-year-old son. It is an interesting “what if” story: the main character and his older brother find a bag of money thrown off a train in England. The problem is that England’s currency is soon transitioning from pounds to the euro. To add a wrinkle to the narrative, the main character’s mother recently passed away. To add another, the main character can speak with saints canonized by the Catholic Church. This story is nothing if not interesting and hard to predict.

Reading aloud to my son sometimes leads to conversations about other books. For instance, I asked him about a fantasy series that also seemed to stretch one’s imagination. I thought it was right up his alley. Yet he declined. Pressed to explain why, my son finally admitted that he didn’t want to read that series because he failed an Accelerated Reader quiz after reading the first book. Here is our conversation:

Me: “When did you read the book in that series?”

Son: “Back at my older school.”

Me: “Why did you take a quiz on it?”

Son: “Because we had to take at least one quiz every month.”

Me: “Did you not understand the book?”

Son: “I thought I did. It was hard, but I liked it.”

This is an educational fail. When an assessment such as Accelerated Reader leaves a student not wanting to read, that should be a cause for concern. To be clear, Accelerated Reader is an assessment tool designed to measure reading comprehension. Yet it is not a valid tool for driving instruction. What Works Clearinghouse, a source for existing research on educational programming, found Accelerated Reader to have “mixed effects on comprehension and no discernible effects on reading fluency for beginning readers.” In other words, a school implementing Accelerated Reader should expect mixed results at best, with the possibility of no discernible impact on student learning. Consider this as you ponder other approaches to promoting independent reading.

It should also be noted that none of the studies reviewed examined the long-term effects of using Accelerated Reader on independent reading. That would make for an interesting study.

I realize that it makes intuitive sense to quiz students about their comprehension after they read a book. Why not? The problem is that when students see the results of said quiz, they appear to attribute their success or failure to their abilities as readers. Never mind that the text might have been boring and selected only for points, that the test questions were poorly written, that the teacher had prescribed the text to be read and tested without any input from the student, or that the results would count toward an arbitrary reading goal such as points. Any one of these situations may have skewed the results. Besides, why view not passing an AR quiz as a failure at all? It might be an opportunity to help the student unpack their reading experience in a constructive way.

What I would suggest is taking a step back from independent reading and appreciating it as a whole. What are we trying to do with this practice? Independent reading, as the phrase conveys, is meant to develop a habit of and love for lifelong, successful reading. This means the appropriate skills, strategies, and dispositions should be developed with and by students. Any assessment that leaves a student not wanting to read interferes with that process and causes more problems than benefits. Medicine has a guiding principle, often attributed to the Hippocratic Oath: “Do no harm”. That sounds like wisdom education should heed as well.

Suggestion for further reading: My Memory of The Giver by Dylan Teut

I didn’t meet my reading goal (and is that okay?)

2016 has come to a close. As in any year, there were events to celebrate along with a few experiences we may not care to reminisce over. One event that falls somewhere in the middle for me is the fact that I didn’t achieve my reading goal.

For the past two years, I have set a goal for the number of books to read from January to December. In 2015 I not only met my goal but surpassed it (50/53). This past year I decided to up the ante (more is better, right?) and set a goal of 60. I ended up reading 55 books. Not too shabby, considering my recent move and new job.


Goodreads, the online community where I, along with many other bibliophiles, post said goals, seems indifferent to this fact. “Better luck in 2017!” is all the feedback Goodreads offers. I can live with that. The site focuses more on all of the books I did read, covers facing out, along with the number of pages read and related statistics.

I guess I could have pushed through in December and quickly devoured some titles just to meet my goal, though they may not have been what I actually wanted to read. I also could have counted a few more books that my wife and I listened to with our kids while driving. But to be honest, I was half listening and didn’t feel I could count them.

I’m glad that I didn’t get caught up in meeting arbitrary goals. If that had been the case, I might have passed on longer, more complex works of fiction such as All the Light We Cannot See by Anthony Doerr. It’s fiction, yes, but it also helped me deepen my understanding of what it means to live in a nation that does not share your beliefs. If I had worried too much about meeting my reading goal, I might not have reread, and reread again, Last Stop on Market Street by Matt de la Peña. It still floors me how many ideas and perspectives a reader can glean from such a short text. If I had worried too much about meeting my reading goal, I might have avoided reference books about writing, such as Write What Matters by Tom Romano and A Writer’s Guide to Persistence by Jordan Rosenfeld. These are not texts you plow through, yet I come back to them for information and inspiration.

If I were teaching in the classroom again, I think I would adopt a Goodreads approach to independent reading. Students would still be expected to set some type of goal based on the number of books, but that number would not be the focus of independent reading. We would look at different data about their reading lives, including:

  • Variety of genres explored
  • Complexity of texts from fall to spring
  • Favorite authors, titles and series based on ratings and reviews
  • Classmates whose reading habits influenced their reading lives
  • Books on their to-read list
  • How they feel about reading in general

This data seems a lot more important than the number of books read. I do believe volume in reading is important, but what leads someone to read? We still confuse reading goals, such as the number of books read, with purpose. The purpose of a reading goal is to make a more concerted effort to read more and to read daily. The idea is that through habitual reading, we will discover new titles, authors, and genres that we come to enjoy and find valuable in our lives. I think about how I got hooked on reading: in 3rd grade, our teacher read aloud Tales of a Fourth Grade Nothing by Judy Blume. No reading goal, amount of guided reading, or immersion in a commercial program did that for me.
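If I were keeping these data in a simple digital form, a profile for each student might look something like the sketch below. Every field name here is hypothetical; the point is how many of these dimensions are qualitative rather than counts.

```python
# Sketch of a per-student reading profile. All field names are invented;
# note how few of these dimensions reduce to a single number.
from dataclasses import dataclass, field

@dataclass
class ReadingProfile:
    student: str
    genres_explored: set[str] = field(default_factory=set)
    complexity_fall: str = ""            # e.g., a level or a short description
    complexity_spring: str = ""
    favorite_authors: list[str] = field(default_factory=list)
    influential_classmates: list[str] = field(default_factory=list)
    to_read_list: list[str] = field(default_factory=list)
    feelings_about_reading: str = ""     # the student's own words
    books_finished: int = 0              # still tracked, just not the headline

profile = ReadingProfile(student="Avery")
profile.genres_explored.update({"graphic novels", "historical fiction"})
profile.feelings_about_reading = "I like books that feel like movies."
```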

As teachers take stock of reading goals with their students during the school year, I sincerely hope they look beyond mere numbers and work to understand their students as readers. Data that measures only quantity and disregards quality tells us very little about who our students are and who they might become as readers.


Suggestion for Further Reading: No AR, No Big Deal by Brandon Blom