How should we measure success in schools?

When I arrived at my current school as principal, one of the first ideas we implemented was a schoolwide writing assessment. At that time, writing was taught once a week as part of a life skills block. My thinking was that if we assessed writing more formally, student writing would improve: teachers would see it as a priority and start to teach it more frequently.

And at first, the approach seemed to work. Shortly after announcing the schoolwide writing assessment, one teacher came up to me and asked, “So writing is a focus for the building now?” I was noncommittal in my response, but I did not exactly correct her thinking.

The heightened sense of writing as a priority eventually gave way to reality. While writing instruction was happening more frequently in classrooms, due in part to dedicated professional development, the actual assessment results did not reveal a lot of helpful information. Our leadership team spent a whole day in the fall and again in the spring calibrating the rubric with samples, assessing writing pieces with partners, and then adding students’ scores to the spreadsheet. For all of this work, we could not tell whether students were collectively growing as writers.

This experience has led me to ask some hard questions. Why is it so hard to measure student success in complex tasks such as writing? Relatedly, why are we as teachers often the only ones making a determination of success? And if students should be a part of the assessment process, can the task itself be used for more than just measurement, such as motivating students to learn?

Defining Success

The dictionary defines success as “the accomplishment of an aim or purpose.” So knowing the aim of our work, along with its purpose, seems important for both students and teachers. Mindless assessment practices should be called into question, but not without a clear understanding of success in the broader sense of the word.

In their article “What Do We Measure and Why? Questions About the Uses of Measurement,” Margaret Wheatley and Myron Kellner-Rogers address the general culture of organizations and our infatuation with numbers. They note a disconnect between what is measured, what the group’s goals are, and how people move the organization toward those goals.

“The desire to be good managers has compelled many people to become earnest students of management. But are measures and numbers the right pursuit? Do the right measures make for better managers? Do they make for a stellar organization?”

In our case, it felt (to me at least) like we were scoring our students’ writing simply to have a valid and reliable way of measuring their abilities. The result was a collective effort that was misaligned with our greater goals.

  • There was misalignment between who was scoring the writing and who wrote it. For me, there was little context as I read and responded to each piece. For example, were students excited about the topic? If not, why not? Could the kids also present their work for an authentic audience? I wanted to pull each student aside and ask more questions about the process that led to what they produced.
  • There was misalignment between what was being taught and what was being assessed. Just because we had a common rubric did not mean the success criteria were clear for everyone in the school. Part of this issue is with rubrics themselves; they are lengthy and often too generic to use as a teaching tool. 
  • There was misalignment between our school’s mission and vision and how we were trying to realize these big goals. We were using simple instruments to give us information about complex work. Yes, kids had to respond to a task that had the potential to encourage them to write. But writing is more than words on paper. There is research involved, peer feedback and revision, and time to simply think: activities that were not part of the writing assessment.

Wheatley and Kellner-Rogers have also observed cognitive dissonance between the actions people must take to realize an organization’s vision and how leaders choose to evaluate the group’s success.

“We believe that these behaviors are never produced by measurement. They are performance capabilities that emerge as people feel connected to their work and to each other. They are capacities that emerge as colleagues develop a shared sense of what they hope to create together, and as they operate in an environment where everyone feels welcome to contribute to that shared hope. Each of these qualities and behaviors – commitment, focus, teamwork, learning, quality – is a choice that people make. Depending on how connected they feel to the organization or team, they choose to pay attention, to take responsibility, to innovate, to learn and share their learnings. People can’t be punished or paid into these behaviors. Either they are contributed or withheld by individuals as they choose whether and how they will work with us.”

Described in this way, assessment is framed as an essential part of the learning journey toward success. The qualities of a community of learners should be embedded in the mission and vision of a school. Making these qualities understandable to everyone requires a clear description of success, along with more authentic approaches to developing assessments that allow students to make their learning and thinking visible to all.

Engaging in this work is not necessarily more difficult. As Wheatley and Kellner-Rogers note about organizations that make assessment work for them, “their process was creative, experimental, and the measures they developed were often non-traditional.” It may instead require a shift in how we view success for all students, especially from their points of view. Maybe the word “measure” is itself too basic a label for what we are really trying to do: help students see themselves as genuinely successful and know how they arrived at this point.

Assessing and Celebrating School Culture #WSRA19

Below is a short article I wrote yesterday for my staff newsletter. We are at the midway point of the school year, and I wanted to highlight our successes as a school culture by documenting evidence of our work in writing. I’ll be speaking more about this topic at the Wisconsin State Reading Association Convention next week in Milwaukee. If you are also attending, I hope we are able to connect! -Matt

I’ve asked a few staff members for feedback about my plan for publishing To the Point every other week. My theory is that our communication as a staff, both formal and informal, is strong. Information shared seems to be frequent and accurate. That is a major reason for my staff newsletters, which also help us keep to no more than one short staff meeting a month.

This thinking became apparent as I prepared for a session I am leading at the Wisconsin State Reading Association Convention next week: “How to Build a Literacy Culture.” As I went through artifacts of our success and growth to present to others, I could confirm many indicators of a healthy and thriving school culture beyond communication alone (these characteristics come from Literacy Essentials by Regie Routman).

  • Trusting – We focus on our strengths first and follow through on our tasks before facilitating feedback about areas for growth.
  • Collaborative – Our different school teams work together to guide the school toward goals; instructional coaching is common.
  • Intellectual – We have thirteen shared beliefs about the reading-writing connection and reading to understand.
  • Responsible – The goals for the school are limited, focused, student-driven, and clear, e.g., “A Community of Readers”.
  • Equitable – We have high expectations for our students and provide additional support when necessary.
  • Joyful – Celebration and appreciation are interwoven in our interactions with each other and with the community.

It’s an honor to be able to highlight our collective work for others and share our journey toward success. Sustaining a school culture is an ongoing process that is far from perfect and is sometimes challenging. Yet the results we see with our students make all the difference.

Leadership as Process

It is October, which means it is school learning objective time. Principals are diligently crafting statements that are S.M.A.R.T. “By the end of the school year,…” and then we make a prediction about the future. In April, we revisit these statements and see if our crystal balls were correct.

I must admit that my goals are usually not fully met. I aim too high, at least by educator evaluation standards. These systems are set up to shoot for just above the status quo instead of for the stars. Great for reporting out. Yet I don’t want to lower my expectations.

Setting objectives and goals is a good thing. We should have something tangible to strive for and know that we have a target to hit. My challenge with this annual exercise is how heavily we focus on a product while largely ignoring the process to get there.

Left alone, schools can purchase a resource or adopt a commercial curriculum that is aligned to the standards. But are they also aligned with our specific students’ needs? Do the practices and resources we implement engage our population of kids? Maybe we are marching toward a specific destination, but are we taking the best pathway to get there?

Having a plan and implementing a plan are two different things. Like an effective classroom teacher, we have to be responsive to the climate and the culture of a school. That means we should be aware of our environment, accept our current status, and then move forward together.

For example, when I arrived at my current elementary school, there was some interest in going schoolwide with the Lucy Calkins Units of Study for reading and for writing. Professionally, I find a lot of positive qualities in the program. Also in the periphery was a desire for a more consistent literacy curriculum. Our scores reflected a need for instructional consistency and coherence.

If we have an outcome-focused leadership style, then it makes a lot of sense to purchase a program that promises exactly what is being requested. But that means we are investing in stuff instead of investing in teachers. So we declined. The teacher-leaders and I weren’t saying no to one program or passing the buck on making a hard decision. What we wanted instead was a clear plan to become better as practitioners.

This meant first revisiting our identities as educators. What does it mean for us as teachers and professionals if our lessons are scripted for us? Are we not worthy of the trust and responsibility that is essential for the many decisions we make every day? This led to examining our beliefs about the foundation of literacy: the reading-writing connection. We found unanimity on only two specific areas out of 21 statements. Instead of treating this as a failure, we saw these two areas of agreement as a starting point for success. We nurtured this beginning and started growing ourselves to become the faculty we were meant to be for our students. After two years of work, we found nine areas of agreement on these same statements.


There are no ratings or other evaluation scores attached to these statements. I am not sure how to quantify our growth as a faculty, and I am pretty sure I wouldn’t want to if I knew how. Instead, we changed how we saw ourselves and how we viewed our students as readers, writers, and thinkers. This is not an objective or goal that is suggested by our evaluation program, but maybe it should be.

I get to this point in a post and I feel like we are bragging. We are not. While I believe our teachers are special, there are great educators in every school. The difference, I think, is that we chose to focus more on the process of becoming better and less on the outcomes that were largely out of our hands. This reduced our anxiety with regard to test scores and public perception of our school. Anyone can do this work.

Data-Driven Decision Making: Who’s the decider?

After I shared my previous post, describing my confusion about making sense of certain types of data, the International Literacy Association (ILA) replied with a link to a recent report on this topic.

It’s a short whitepaper/brief titled “Beyond the Numbers: Using Data for Instructional Decision Making”. The principal authors, Vicki Park and Amanda Datnow, make a not-so-provocative claim that may still cause consternation in education:

Rather than data driving the decision-making, student learning goals should drive what data are collected and how they are used.

The reason this philosophy might cause unrest with educators is that data-driven decision making is still a mainstay in schools. Response to Intervention is dependent on quantitative progress monitoring. School leaders too often discount the anecdotal notes and other qualitative information collected by teachers. Sometimes the term “data-informed” replaces “data-driven,” but the approach largely remains aligned with the latter terminology and practice.

Our school is like many others. We get together three times a year, usually after screeners are administered. We create spreadsheets and make informed decisions on behalf of our students. Yet neither students nor their parents are involved in the process. Can we truly be informed if we are not also including the kids themselves in some way?

To be fair to ourselves and to other schools, making decisions about which students need more support, or about how teachers will adjust their instruction, is relatively new to education. As well, our assessments are not as clean as, say, a blood test you might take at the doctor’s office. Data-driven decision making is hard enough for professional educators. There are concerns that bringing in students and their families might only add to the confusion, through no one’s fault.

And yet there are teachers out there who are doing just this: positioning students as the lead assessors and decision-makers in their educational journey. For example, Samantha Mosher, a secondary special education teacher, guides her students in developing their own IEP goals and in using various tools to monitor their own progress. The ownership for the work rests largely on the students’ shoulders. Samantha provides the modeling, support, and supervision to ensure each student’s goals and plan are appropriate.

One outcome of releasing the responsibility for making data-informed decisions to students is that Samantha has become more of a learner herself. As she notes in her blog post:

I was surprised that many students didn’t understand why they got specific accommodations. I expected to have to explain what was possible, but didn’t realize I would have to explain what their accommodations meant.

“Yes, but older students are able to set their own goals and monitor their own progress. My kids are not mature enough yet to manage that responsibility.” I hear you, and I am going to disagree. I can say that because I have seen younger students do this work firsthand. It’s not a completely independent process, but the data-informed decision making is at least co-led by the students.

In my first book on digital portfolios, I profiled the speech and language teacher at my last school, Genesis Cratsenberg. She used Evernote to capture her students reading aloud weekly progress notes to their parents. She would send the text of their reflections, along with the audio, home via email. Parents and students could hear firsthand the growth students were making over time, in the authentic context of a personalized student newsletter. It probably won’t surprise you that once Genesis started this practice, students on her caseload exited her program at a faster rate. (To read an excerpt from my book describing Genesis’s work, click here.)

I hope this post comes across as food for thought and not finger-wagging. Additionally, I don’t believe we should stop with our current approaches to data analysis. Our hands are sometimes tied when it comes to state and federal rules regarding RtI and special education qualification. At the same time, we are free to expand our understanding and our beliefs about what counts as data and who should be at the table when making these types of decisions.

The Data Says…What? (Or: Why we struggle to make sense of literacy assessment results)

Photo by Art Lasovsky on Unsplash

Our instructional leadership team recently analyzed the last two years of writing assessment data. We use a commercially published rubric to score student writing in the fall and in the spring. As I presented the team with the results, now visualized in graphs and tables, we tried to make sense of the information.

It didn’t go as well as planned.

To start, we weren’t evaluating every student’s writing; for the sake of time and efficiency, we scored only half of the high, medium, and low pieces. This gave us a schoolwide evaluation but did not give teachers specific information to use. Also, the rubric changes as students get older. Expectations increase even though the main tenets of writing quality stay the same, so it is hard to compare apples to apples. In addition, the subjective nature of writing, especially in a reader’s response, can cause frustration.
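One way we might have made the fall-to-spring comparison more interpretable is to convert each raw score to a fraction of that grade level’s rubric maximum before aggregating. Here is a minimal sketch in Python, assuming a hypothetical export of a scoring spreadsheet with made-up column names:

```python
import pandas as pd

# Hypothetical export: one row per scored piece, with the student's grade,
# the season it was scored (fall/spring), the raw rubric score, and the
# maximum possible score on that grade level's version of the rubric.
scores = pd.read_csv("writing_scores.csv")  # columns: grade, season, score, rubric_max

# Normalize each raw score against its own grade-level rubric so that
# fall and spring results sit on a comparable 0-1 scale.
scores["pct_of_max"] = scores["score"] / scores["rubric_max"]

# Median normalized score by grade and season; the fall-to-spring change
# is then at least an apples-to-apples comparison within each grade.
summary = (
    scores.groupby(["grade", "season"])["pct_of_max"]
    .median()
    .unstack("season")
)
summary["change"] = summary["spring"] - summary["fall"]
print(summary)
```

Even a normalization like this only softens the problem; it cannot account for the subjectivity of individual raters.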

In the end, we decided to select one area of growth we would focus on as a faculty this year, while maintaining the gains already made.

Anytime I wade into the weeds of literacy assessment, I feel like I come out messier than when I entered. I often have more questions than answers. Problems go unresolved. Yet there have to be ways to evaluate our instructional impact on student literacy learning. It’s important that we validate our work and, more importantly, ensure students are growing as readers and writers.

One assessment, tried and true, is the running record for reading comprehension. This is now a standardized assessment through products such as Fountas & Pinnell. It is time-intensive, however, and even the best teachers struggle to give up instructional time and to manage the other students while administering these assessments. Running records are the mainstay assessment tool for Reading Recovery teachers, who work one-on-one with 1st grade students.

Another method for evaluating student literacy skills at the classroom level is observation. This is not as formal as a running record; teachers simply witness a student’s interactions with a text. Do they frustrate easily? How well are they applying their knowledge of text features to a new book? The information is almost exclusively qualitative, which makes the results challenging to analyze.

One tool for evaluating students as readers and writers that doesn’t get enough attention (in my opinion, anyway) is the student survey. How students feel about literacy and how they see themselves as readers and writers is very telling. The challenge here is that there are a lot of tools but not a lot of validity or reliability behind most of them. Me and My Reading Profile is one example of a survey that is evidence-based.

To summarize, I don’t have an answer here so much as a challenge I think a lot of schools face: how do we really measure literacy success in an educational world that needs to quantify everything? Please share your ideas and experiences in the comments.

Silent Reading vs. Independent Reading: What’s the Difference? (plus digital tools to assess IR)

During a past professional development workshop, the consultant told us at one point to end independent reading in our classrooms. “It doesn’t work.” (Discreet sideways glances at each other.) “Really. Have students read with a partner or facilitate choral reading. Students reading by themselves does not increase reading achievement.”

I think I know what the consultant was trying to convey: having students select books and then read silently without any guidance from the teacher is not an effective practice. Some students will use this time well, but in my experience as a classroom teacher and principal, it is the students who need our guidance the least who do well with silent reading. For students who have not developed a reading habit, or who lack the skills to engage in reading independently for an extended period, it may be a waste of time.

The problem with stating that students should not be reading independently in school is that people confuse silent reading with independent reading (IR). I could see some principals glomming onto this misconception as fodder for restricting teachers from using IR and keeping them following the canned program religiously. The fact is, these two practices are very different. In their excellent resource No More Independent Reading Without Support (Heinemann, 2013), Debbie Miller and Barbara Moss provide a helpful comparison:

Silent Reading

  • Lack of a clear focus – kids grab a book and read (pg. 2)
  • Teachers read silently along with the students (pg. 3)
  • No accountability regarding what students read (pg. 8)

Independent Reading (pg. 16)

  • Classroom time to read
  • Students choose what to read
  • Explicit instruction about what, why, and how readers read
  • Reading a large number of books and variety of texts through the year
  • Access to texts
  • Teacher monitoring, assessing, and support during IR
  • Students talk about what they read

You could really make the case that independent reading is not independent at all: it is silent reading with scaffolds, and independence is the goal. The rest of the book goes into all of the research that supports independent reading, along with ideas and examples for implementing it in classrooms. The authors also cite the Common Core Anchor Standard that addresses independent reading:

CCSS.ELA-LITERACY.CCRA.R.10
Read and comprehend complex literary and informational texts independently and proficiently.

Maybe this information will be helpful, in case you ever have a principal or consultant question your practice. 🙂

Assessing Independent Reading

The challenge, then, is: how do I assess independent reading? Many teachers use a paper-based conferring notebook. If that works for them, that’s great. My opinion is that this is an opportunity to leverage technology to identify trends and patterns in students’ independent reading habits and skills, which can inform future instruction. Below is a list of tools that I have observed teachers using to assess independent reading.

Notability is an iPad application that allows the user to draw, type, and add images to a single document. The teacher can use a stylus (I recommend the Apple Pencil) to handwrite their notes. Each student can be assigned their own folder within the app. In addition, a teacher can record audio and add it to a note, such as a student reading aloud a page from their book. This information can be backed up to Google Drive, Evernote, and other cloud storage options.

In my last school, one of the teachers swore by CC Pensieve. “If you don’t pay for it,” she stated one day, “I’ll pay for it out of my own pocket.” Enough said! Teachers who use the Daily 5 workshop approach would find CC Pensieve familiar. It uses the same tenets of reading and writing to document student conferences and set literacy goals. Students can also be grouped in the software based on specific strategies and needs.

With Google Forms, teachers can set up a digital form to capture any type of information. The responses go to a spreadsheet, which allows the teacher to sort columns in order to drive instruction around students’ reading habits and skills. The quantitative results are also automatically graphed so the teacher can look for classroom trends and patterns. We set up a Google Form in one grade level in our school.
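If the built-in summary charts aren’t enough, the response spreadsheet can also be downloaded and analyzed directly. Here is a minimal sketch in Python, assuming a hypothetical export of a reading-conference form with made-up column names:

```python
import pandas as pd

# Hypothetical export of a reading-conference Google Form: one row per
# conference, with the student's name, the genre of their current book,
# and a 1-4 rating of how well they applied the focus strategy.
responses = pd.read_csv("reading_conferences.csv")  # columns: student, genre, strategy_rating

# Classroom trend: which genres are students choosing?
print(responses["genre"].value_counts())

# Individual pattern: average strategy rating per student, lowest first,
# to flag readers who may need a follow-up conference.
print(
    responses.groupby("student")["strategy_rating"]
    .mean()
    .sort_values()
    .head(5)
)
```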

I’ve written a lot about using Evernote as a teaching tool in the past. It is probably the tool I would use to document classroom formative assessment. Each note can house images, text, audio, and links, similar to Notability. These notes can be shared out as a URL with parents via email so they can see how their child is progressing as a reader. Check out this article I wrote for Middleweb on how a speech teacher used Evernote.

The previous digital tools for assessing independent reading are largely teacher-directed. The next three are more student-led. One of my favorite educational technologies is Kidblog. Classrooms can connect with other classrooms to comment on each other’s posts. Teachers can have students post book reviews, book trailers, and creative multimedia projects from other applications.

Whereas Kidblog is pretty wide open in how it can be used, Biblionasium is a more focused tool. It can serve as an online book club for students. Students can make to-read lists, write reviews and rate books, and recommend titles to friends. Like Kidblog, Biblionasium is a smart way to connect reading with writing in an authentic way.

Goodreads is a social media site for book lovers. Although 13 is the minimum age to join, parents need to provide consent if a child is under 18. Besides rating and reviewing books, Goodreads allows readers to create book groups with discussion boards around specific topics, an option for teachers to promote discussion and digital citizenship. Students can also post their original creative writing on Goodreads by genre. Check out this post I wrote about how to get students started.

What is your current understanding of independent reading? What tools do you find effective in assessing students during this time? Please share in the comments.

How we stopped using Accelerated Reader

This post describes how our school stopped using Accelerated Reader. This was not something we planned; it seemed to happen naturally through our change process, like an animal shedding its skin. The purpose of this post is not to decry Accelerated Reader, although I do know this reading assessment/incentive program is not viewed favorably in some education circles. We ceased using a few other technologies as well, each for different reasons. The following timeline provides a basic outline of the process that led to this outcome.

  1. We developed collective commitments.

The idea of collective commitments comes from the Professional Learning Community literature, specifically Learning by Doing, 3rd edition. Collective commitments are similar to the norms you might find on a team. The difference is that collective commitments are focused on student learning: we commit to certain statements about our work on behalf of kids. They serve as concrete guidelines, drawn from our school’s mission and vision as well as from current thinking we find effective for education.

We started by reading one of four articles relevant to our work; the staff could choose which one to read. After discussing the contents of the articles in small groups and then as a whole group, a smaller team of self-selected faculty began crafting the statements. Staff who did not participate knew they might have to live with the outcomes of this work. Through lots of conversation and wordsmithing, we landed on seven statements that we all felt were important to our future work.


At the next staff meeting, we shared these commitments, answered any questions about their meaning and intent, and then held an anonymous vote via Google Forms. We weren’t looking for unanimity but consensus. In other words, what does the will of the group say? Although a few faculty members could not find a statement or two agreeable, the vast majority of teachers were on board. I shared the results while explaining that these statements were what we would all commit to, regardless of how we might feel about them.

  2. We identified a schoolwide literacy focus.

Using multiple assessments in the fall (STAR, Fountas & Pinnell), we found that our students needed more support in reading, specifically fluency. This meant that students needed to be reading and writing a lot more than they were, and to do so independently. Our instructional leadership team, which is a decision-making body and whose members were selected based on in-house interviews, started making plans to provide professional development for all faculty around the reading-writing connection. (For more information on instructional leadership teams and the reading-writing connection, see Regie Routman’s book Read, Write, Lead).

  3. We investigated the effectiveness of our current programming.

Now that we had collective commitments along with a focus on literacy, I think our lens changed a bit. Maybe I can only speak for myself, but we started to take a more critical look at our current work. What was working and what wasn’t?

Around that time, I discovered a summary report from the What Works Clearinghouse, part of the Institute of Education Sciences within the U.S. Department of Education. This report reviewed the different studies on Accelerated Reader. Using only the research that met their criteria for reliability and validity, the reviewers found mixed to low results for schools that used Accelerated Reader.

I shared this summary report with our leadership team. We had a thoughtful conversation about the information, looking at both the pros and cons of this technology tool. However, we didn’t make any decision to stop using it as a school. I also shared the report with Renaissance Learning, the maker of Accelerated Reader. As you might imagine, they had a more slanted view of the information, in spite of the Clearinghouse’s rigorous approach to evaluating their product.

While we didn’t make a decision at that time based on the research, I think the fact that this report was shared with the faculty and discussed planted the seed for future conversations about the use of this product in our classrooms.

  4. We examined our beliefs about literacy.

The professional development program we selected to address our literacy needs, Regie Routman in Residence: The Reading-Writing Connection, asks educators to examine their beliefs regarding reading and writing instruction. Unlike with our collective commitments, we all had to be in agreement on a literacy statement before we would own it and expect everyone to apply that practice in classrooms. We agreed upon three.

Beliefs Poster

This happened toward the end of the school year. It was a nice celebration of our initial efforts in improving literacy instruction. We will examine these beliefs again at the end of this school year, with the hope of agreeing upon a few more after completing this PD program. These beliefs served to align our collective philosophy about what our students truly need to become successful readers and writers. Momentum for change was on our side, which didn’t bode well for outdated practices.

  5. We started budgeting for next year.

It came as a surprise, at least to me, that money would be a primary factor in deciding not to continue using Accelerated Reader in our school.

With a finite budget and a seemingly infinite number of teacher resources we could utilize in the classroom, I started investigating the use of different technologies currently in the building. I found that only a small minority of teachers were actually using Accelerated Reader. Breaking this usage down by class, we discovered that we were paying around $20 a year per student.
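To make that figure concrete, the per-student cost is simply the annual license fee divided by the number of students actually using the product. A minimal sketch with hypothetical numbers (the actual totals were different):

```python
# Hypothetical figures for illustration only. The point is to divide the
# license fee by the students actually taking quizzes, not by enrollment,
# to reveal the true per-student cost.
annual_license_fee = 3000.00   # hypothetical yearly subscription
active_students = 150          # hypothetical count of students using AR

cost_per_active_student = annual_license_fee / active_students
print(f"${cost_per_active_student:.2f} per student per year")  # -> $20.00
```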

Given our limited school budget, I asked both the teachers on our leadership team and the teachers who used Accelerated Reader whether they felt it was worth the cost. No one thought it was. (To be clear, the teachers who were using Accelerated Reader are good teachers. Just because they had their students taking AR quizzes does not mean they were ineffective; quite the opposite. I think it is worth pointing this out, as I have seen some shaming of teachers who use AR as a way to persuade them to stop using the tool. It’s not effective.)

With this information, we as a leadership team decided to end our subscription to Accelerated Reader. We made this decision within the context of our collective commitments and our literacy beliefs.

Next Steps

This story does not end with our school ceasing to use Accelerated Reader. For example, we realize we now have an assessment gap for our students and their independent reading. Lately, we have been talking about digital tools such as Kidblog and Biblionasium as platforms for students to write book reviews and share their reading lives with others. We have also discussed different approaches for teachers to assess their readers more authentically, such as through conferring.

While there is a feeling of discomfort right now, I also feel a sense of possibility that maybe wasn’t there when Accelerated Reader was present in our building. As Peter Johnston notes in his book Opening Minds, “Uncertainty is the foundation for inquiry and research.” I look forward to where this new turn in instruction might lead us.