Leadership as Process

It is October, which means it is school learning objective time. Principals are diligently crafting statements that are S.M.A.R.T.: "By the end of the school year…" and then we make a prediction about the future. In April, we revisit these statements and see if our crystal balls were correct.

I must admit that my goals are usually not fully met. I aim too high, at least by educator evaluation standards. These systems are set up to shoot for just above the status quo instead of for the stars. Great for reporting out. Yet I don’t want to lower my expectations.

Setting objectives and goals is a good thing. We should have something tangible to strive for, a target to hit. My challenge with this annual exercise is how heavily we focus on the product while largely ignoring the process of getting there.

Left alone, schools can purchase a resource or adopt a commercial curriculum that is aligned to the standards. But are they also aligned with our specific students’ needs? Do the practices and resources we implement engage our population of kids? Maybe we are marching toward a specific destination, but are we taking the best pathway to get there?

Having a plan and implementing a plan are two different things. Like an effective classroom teacher, we have to be responsive to the climate and the culture of a school. That means we should be aware of our environment, accept our current status, and then move forward together.

For example, when I arrived at my current elementary school, there was some interest in going schoolwide with the Lucy Calkins Units of Study for reading and for writing. Professionally, I find a lot of positive qualities in the program. Also in the periphery was a desire for a more consistent literacy curriculum. Our scores reflected a need for instructional consistency and coherence.

If we have an outcome-focused leadership style, then it makes a lot of sense to purchase a program that promises exactly what is being requested. But that means we are investing in stuff instead of investing in teachers. So we declined. The teacher-leaders and I weren’t saying no to one program or passing the buck on making a hard decision. What we wanted instead was a clear plan to become better as practitioners.

This meant first revisiting our identities as educators. What does it mean as a teacher and a professional if the lessons were scripted for us? Are we not worthy of the trust and responsibility that is essential for the many decisions we make every day? This led to examining our beliefs about the foundation of literacy, the reading-writing connection. We found unanimity on only two specific areas out of 21 statements. Instead of treating this as a failure, we saw these two areas of agreement as a starting point for success. We nurtured this beginning and started growing ourselves to become the faculty we were meant to be for our students. After two years of work, we found nine areas of agreement on these same statements.


There are no ratings or other evaluation scores attached to these statements. I am not sure how to quantify our growth as a faculty, and I am pretty sure I wouldn’t want to if I knew how. Instead, we changed how we saw ourselves and how we viewed our students as readers, writers, and thinkers. This is not an objective or goal that is suggested by our evaluation program, but maybe it should be.

I get to this point in a post and I feel like we are bragging. We are not. While I believe our teachers are special, there are great educators in every school. The difference, I think, is that we chose to focus more on the process of becoming better and less on the outcomes that were largely out of our hands. This reduced our anxiety with regard to test scores and public perception of our school. Anyone can do this work.

Data-Driven Decision Making: Who’s the decider?

After I shared out my previous post, describing my confusion about making sense of certain types of data, the International Literacy Association (ILA) replied with a link to a recent report on this topic:

It’s a short whitepaper/brief titled “Beyond the Numbers: Using Data for Instructional Decision Making”. The principal authors, Vicki Park and Amanda Datnow, make a not-so-provocative claim that may still cause consternation in education:

Rather than data driving the decision-making, student learning goals should drive what data are collected and how they are used.

The reason this philosophy might cause unrest with educators is that data-driven decision making is still a mainstay in schools. Response to Intervention is dependent on quantitative-based progress monitoring. School leaders too often discount the anecdotal notes and other qualitative information collected by teachers. Sometimes the term “data-informed” replaces “data-driven”, but the approach largely remains aligned with the latter terminology and practice.

Our school is like many others. We get together three times a year, usually after screeners are administered. We create spreadsheets and make informed decisions on behalf of our students. Yet neither students nor their parents are involved in the process. Can we truly be informed if we are not also including the kids themselves in some way?

To be fair to ourselves and to other schools, making decisions about which students need more support, or about how teachers will adjust their instruction, is relatively new to education. As well, our assessments are not as clean as, say, a blood test you might take at the doctor's office. Data-driven decision making is hard enough for professional educators. There are concerns that bringing in students and their families might only add to the confusion, through no one's fault.

And yet there are teachers out there who are doing just this: positioning students as the lead assessors and decision-makers in their educational journey. For example, Samantha Mosher, a secondary special education teacher, guides her students to develop their own IEP goals as well as how to use various tools to monitor their own progress. The ownership for the work rests largely on the students’ shoulders. Samantha provides the modeling, support, and supervision to ensure each student’s goals and plan are appropriate.

One outcome of releasing the responsibility for making data-informed decisions to students is that Samantha has become more of a learner herself. As she notes in her blog post:

I was surprised that many students didn't understand why they got specific accommodations. I expected to have to explain what was possible, but didn't realize I would have to explain what their accommodations meant.

“Yes, but older students are able to set their own goals and monitor their own progress. My kids are not mature enough yet to manage that responsibility.” I hear you, and I am going to disagree. I can say that because I have seen younger students do this work firsthand. It’s not a completely independent process, but the data-informed decision making is at least co-led by the students.

In my first book on digital portfolios, I profiled the speech and language teacher at my last school, Genesis Cratsenberg. She used Evernote to capture her students reading aloud weekly progress notes to their parents. She would send the text of their reflections, along with the audio, home via email. Parents and students could hear firsthand the growth being made over time in the authentic context of a personalized student newsletter. It probably won't surprise you that once Genesis started this practice, students on her caseload exited her program at a faster rate. (To read an excerpt from my book describing Genesis's work, click here.)

I hope this post comes across as food for thought and not finger-wagging. Additionally, I don’t believe we should stop with our current approaches to data analysis. Our hands are sometimes tied when it comes to state and federal rules regarding RtI and special education qualification. At the same time, we are free to expand our understanding and our beliefs about what counts as data and who should be at the table when making these types of decisions.

The Data Says…What? (Or: Why we struggle to make sense of literacy assessment results)


Our instructional leadership team recently analyzed the last two years of writing assessment data. We use a commercial-based rubric to score student writing in the fall and in the spring. As I presented the team with the results, now visualized in graphs and tables, we tried to make sense of the information.

It didn’t go as well as planned.

To start, we weren't evaluating every student's writing; for the sake of time and efficiency, we score only half of the high, medium, and low pieces. This gave us a schoolwide evaluation but did not give teachers specific information to use. Also, the rubric changes as the students get older: expectations increase even though the main tenets of writing quality stay the same, so it is hard to compare apples to apples. In addition, the subjective nature of writing, especially in a reader's response, can cause frustration.

In the end, we decided to select one area of growth we would focus on as a faculty this year, while maintaining the gains already made.

Anytime I wade into the weeds of literacy assessment, I feel like I come out messier than when I entered. I often have more questions than answers. Problems go unresolved. Yet there have to be ways to evaluate our instructional impact on student literacy learning. It’s important that we validate our work and, more importantly, ensure students are growing as readers and writers.

One assessment, tried and true, is the running record for reading comprehension. This is now a standardized assessment through products such as Fountas & Pinnell. It is time-intensive, however, and even the best teachers struggle to give up instructional time and manage the rest of the class while administering these assessments. Running records are the mainstay assessment tool for Reading Recovery teachers, who work one-on-one with 1st grade students.

Another method for evaluating student literacy skills at the classroom level is observation. This is not as formal as a running record. Teachers can witness a student's interactions with a text. Do they frustrate easily? How well are they applying their knowledge of text features with a new book? The information is almost exclusively qualitative, which makes the results challenging to analyze.

One tool for evaluating students as readers and writers that doesn't get enough attention (in my opinion, anyway) is the student survey. How students feel about literacy and how they see themselves as readers and writers is very telling. The challenge here is that there are a lot of tools but not a lot of validity or reliability behind most of them. One instrument, Me and My Reading Profile, is an example of a survey that is evidence-based.

To summarize, I don’t have an answer here as much as I wanted to bring up a challenge I think a lot of schools face: how do we really measure literacy success in an educational world that needs to quantify everything? Please share your ideas and experiences in the comments.

Silent Reading vs. Independent Reading: What’s the Difference? (plus digital tools to assess IR)

During a past professional development workshop, the consultant informed us at one point to end independent reading in our classrooms. "It doesn't work." (discreet sideways glances at each other) "Really. Have students read with a partner or facilitate choral reading. Students reading by themselves does not increase reading achievement."

I think I know what the consultant was trying to convey: having students select books and then read silently without any guidance from the teacher is not an effective practice. Some students will utilize this time effectively, but in my experience as a classroom teacher and principal, it is the students who need our guidance the least who do well with silent reading. For students who have not developed a reading habit, or who lack the skills to engage in reading independently for an extended period of time, this may be a waste of time.

The problem with stating that students should not be reading independently in school is that people confuse silent reading with independent reading (IR). I could see some principals glomming onto this misconception as fodder for restricting teachers from using IR and keeping them following the canned program religiously. The fact is, these two practices are very different. In their excellent resource No More Independent Reading Without Support (Heinemann, 2013), Debbie Miller and Barbara Moss provide a helpful comparison:

Silent Reading

  • Lack of a clear focus – kids grab a book and read (pg. 2)
  • Teachers read silently along with the students (pg. 3)
  • No accountability regarding what students read (pg. 8)

Independent Reading (pg. 16)

  • Classroom time to read
  • Students choose what to read
  • Explicit instruction about what, why, and how readers read
  • Reading a large number of books and variety of texts through the year
  • Access to texts
  • Teacher monitoring, assessing, and support during IR
  • Students talk about what they read

You could really make the case that independent reading is not independent at all: it is silent reading with scaffolds, and independence is the goal. The rest of the book goes into all of the research that supports independent reading, along with ideas and examples for implementing it in classrooms. The authors also cite the Common Core Anchor Standard that addresses independent reading:

CCSS.ELA-LITERACY.CCRA.R.10
Read and comprehend complex literary and informational texts independently and proficiently.

Maybe this information will be helpful, in case you ever have a principal or consultant question your practice. 🙂

Assessing Independent Reading

The challenge, then, is: how do I assess independent reading? Many teachers use a paper-based conferring notebook. If that works for them, that's great. In my opinion, this is an opportunity to leverage technology to identify trends and patterns in students' independent reading habits and skills, which can inform future instruction. Below is a list of tools that I have observed teachers using to assess independent reading.

Notability

This is an iPad application that allows the user to draw, type, and add images to a single document. The teacher can use a stylus (I recommend the Apple Pencil) to handwrite their notes. Each student can be assigned their own folder within Notability. In addition, a teacher can record audio and add it to a note, such as a student reading aloud a page from their book. This information can be backed up to Google Drive, Evernote, and other cloud storage options.

CC Pensieve

In my last school, one of the teachers swore by this tool. "If you don't pay for it," she stated one day, "I'll pay for it out of my own pocket." Enough said! Teachers who use the Daily 5 workshop approach would find CC Pensieve familiar. It uses the same tenets of reading and writing to document student conferences and set literacy goals. Students can also be grouped in the software based on specific strategies and needs.

Google Forms

Teachers can set up a digital form to capture any type of information. The responses go to a spreadsheet, which allows the teacher to sort columns in order to drive instruction regarding students' reading habits and skills. The quantitative results are also automatically graphed, making classroom trends and patterns easier to spot. We set up a Google Form in one grade level at our school.
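As a rough illustration of the sort/summarize step once form responses land in a spreadsheet, here is a minimal sketch using only Python's standard library. The column names and student data are invented for illustration; they are not from any actual form our school used.

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical CSV export from a reading-conference Google Form.
# All column names and rows are made up for illustration.
EXPORT = """\
Student,Genre,Minutes Read,Needs Follow-Up
Ava,Fantasy,25,No
Ben,Nonfiction,10,Yes
Cal,Fantasy,30,No
Dee,Graphic Novel,15,Yes
"""

def summarize(csv_text):
    """Tally genre choices, flag follow-ups, and average reading minutes."""
    rows = list(csv.DictReader(StringIO(csv_text)))
    genres = Counter(row["Genre"] for row in rows)
    follow_up = [row["Student"] for row in rows if row["Needs Follow-Up"] == "Yes"]
    avg_minutes = sum(int(row["Minutes Read"]) for row in rows) / len(rows)
    return genres, follow_up, avg_minutes

genres, follow_up, avg_minutes = summarize(EXPORT)
print(genres.most_common(1))  # most frequently chosen genre
print(follow_up)              # students flagged for a follow-up conference
print(avg_minutes)            # average independent reading minutes
```

In practice a teacher would do the same thing by sorting and filtering columns in the spreadsheet itself; the point is simply that structured responses make these patterns trivial to surface.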

Evernote

I've written a lot about using Evernote as a teaching tool in the past. It is probably the tool I would use to document classroom formative assessment. Each note can house images, text, audio, and links, similar to Notability. These notes can be shared out as a URL with parents via email so they can see how their child is progressing as a reader. Check out this article I wrote for Middleweb on how a speech teacher used Evernote.

The previous digital tools for assessing independent reading are largely teacher-directed. The next three are more student-led.

Kidblog

One of my favorite educational technologies is Kidblog. Classrooms can connect with other classrooms to comment on each other's posts. Teachers can have students post book reviews, book trailers, and creative multimedia projects from other applications.

Biblionasium

Whereas Kidblog is pretty wide open in how it can be used, Biblionasium is a more focused tool. It can serve as an online book club for students. Students can make to-read lists, write reviews and rate books, and recommend titles to friends. Like Kidblog, Biblionasium is a smart way to connect reading with writing in an authentic way.

Goodreads

This social media site is for book lovers. Although 13 is the minimum age to join, parents need to provide consent if a child is under 18. Besides rating and reviewing books, Goodreads allows readers to create book groups with discussion boards around specific topics – an option for teachers to promote discussion and digital citizenship. Students can also post their original creative writing on Goodreads by genre. Check out this post I wrote about how to get students started.

What is your current understanding of independent reading? What tools do you find effective in assessing students during this time? Please share in the comments.

How we stopped using Accelerated Reader

This post describes how our school stopped using Accelerated Reader. This was not something planned; it seemed to happen naturally through our change process, like an animal shedding its skin. The purpose of this post is not to decry Accelerated Reader, although I do know this reading assessment/incentive program is not viewed favorably in some education circles. We ceased using a few other technologies as well, each for different reasons. The following timeline provides a basic outline of our process that led to this outcome.

  1. We developed collective commitments.

The idea of collective commitments comes from the Professional Learning Community literature, specifically Learning by Doing, 3rd edition. Collective commitments are similar to norms you might find on a team. The difference is collective commitments are focused on student learning. We commit to certain statements about our work on behalf of kids. They serve as concrete guidelines, manifested from our school’s mission and vision, as well as from current thinking we find effective for education.

We started by reading one of four articles relevant to our work. The staff could choose which one to read. After discussing the contents of the articles in small groups and then as a whole group, we started crafting the statements. This was a smaller team of self-selected faculty. Staff who did not participate knew they might have to live with the outcomes of this work. Through lots of conversation and wordsmithing, we landed on seven statements that we all felt were important to our future work.


At the next staff meeting, we shared these commitments, answered any questions about their meaning and intent, and then held an anonymous vote via Google Forms. We weren't looking for unanimity but consensus. In other words, what does the will of the group say? Although a few faculty members could not find a statement or two agreeable, the vast majority of teachers were on board. I shared the results while explaining that these statements were what we would all commit to, regardless of how we might feel about them.

  2. We identified a schoolwide literacy focus.

Using multiple assessments in the fall (STAR, Fountas & Pinnell), we found that our students needed more support in reading, specifically fluency. This meant that students needed to be reading and writing a lot more than they were, and to do so independently. Our instructional leadership team, which is a decision-making body and whose members were selected based on in-house interviews, started making plans to provide professional development for all faculty around the reading-writing connection. (For more information on instructional leadership teams and the reading-writing connection, see Regie Routman’s book Read, Write, Lead).

  3. We investigated the effectiveness of our current programming.

Now that we had collective commitments along with a focus on literacy, I think our lens changed a bit. Maybe I can only speak for myself, but we started to take a more critical look at our current work. What was working and what wasn’t?

Around that time, I discovered a summary report from the What Works Clearinghouse, a part of the Institute of Education Sciences within the U.S. Department of Education. This report described all of the different studies on Accelerated Reader. Using only the research that met their criteria for reliability and validity, they found mixed to low results for schools that used Accelerated Reader.

I shared this summary report with our leadership team. We had a thoughtful conversation about the information, looking at both the pros and cons of this technology tool. However, we didn’t make any decisions to stop using it as a school. I also shared the report with Renaissance Learning, the maker of Accelerated Reader. As you might imagine, they had a more slanted view of this information, in spite of the rigorous approach to evaluating their product.

While we didn’t make a decision at that time based on the research, I think the fact that this report was shared with the faculty and discussed planted the seed for future conversations about the use of this product in our classrooms.

  4. We examined our beliefs about literacy.

The professional development program we selected to address our literacy needs, Regie Routman in Residence: The Reading-Writing Connection, asks educators to examine their beliefs regarding reading and writing instruction. Unlike with our collective commitments, we all had to agree on a literacy statement before we would own it and expect everyone to apply that practice in classrooms. We agreed upon three.

Beliefs Poster

This happened toward the end of the school year. It was a nice celebration of our initial efforts in improving literacy instruction. We will examine these beliefs again at the end of this school year, with the hope of agreeing upon a few more after completing this PD program. These beliefs served to align our collective philosophy about what our students truly need to become successful readers and writers. Momentum for change was on our side, which didn’t bode well for outdated practices.

  5. We started budgeting for next year.

It came as a surprise, at least to me, that money would be a primary factor in deciding not to continue using Accelerated Reader in our school.

With a finite budget and a seemingly infinite number of teacher resources we could utilize in the classroom, I started investigating the use of the different technologies currently in the building. For Accelerated Reader, I found that only a small minority of teachers were actually using the product. This usage was broken down by class. We discovered that we were paying around $20 per student per year.
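To make the budget math concrete, here is a back-of-the-envelope calculation. The enrollment and usage figures below are invented for illustration; only the roughly $20-per-student annual cost comes from the post.

```python
# Hypothetical figures; only the ~$20 per-student license cost is from the post.
enrollment = 300          # students covered by the site license
cost_per_student = 20.00  # approximate annual license cost per student
active_students = 60      # students in classes actually using the product

total_cost = enrollment * cost_per_student
cost_per_active_student = total_cost / active_students

print(total_cost)               # 6000.0
print(cost_per_active_student)  # 100.0
```

When only a fraction of classrooms use a site-licensed product, the effective cost per student actually served can be several times the sticker price, which is the question we put to the teachers.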

Given our limited school budget, I asked the teachers on our leadership team and the teachers who used the product whether they felt it was worth the cost. No one thought it was. (To be clear, the teachers who were using Accelerated Reader are good teachers. Just because they had their students taking AR quizzes does not suggest they were ineffective; quite the opposite. I think it is worth pointing this out, as I have seen some shaming of teachers who use AR as a way to persuade them to stop using the tool. It's not effective.)

With this information, we as a leadership team decided to end our subscription to Accelerated Reader. We made this decision within the context of our collective commitments and our literacy beliefs.

Next Steps

This story does not end with our school ceasing to use Accelerated Reader. For example, we realize we now have an assessment gap for our students and their independent reading. Lately, we have been talking about different digital tools such as Kidblog and Biblionasium as platforms for students to write book reviews and share their reading lives with others. We have also discussed different approaches for teachers to assess their readers more authentically, such as through conferring.

While there is a feeling of discomfort right now, I feel a sense of possibility that maybe wasn't there when Accelerated Reader was present in our building. As Peter Johnston notes in his book Opening Minds, "Uncertainty is the foundation for inquiry and research." I look forward to where this new turn in instruction might lead us.


Better Data Days Are Ahead

We've all been there: we collect data, make beautiful color-coded spreadsheets detailing nearly every data point we could possibly collect on each child. We compare district data to state data, and nationally norm-referenced data against in-class assessments. We highlight each student's projected growth in order to make adequate progress for each child. We look at whole-class data and determine standards to re-teach. We attend collaboration and intervention meetings to discuss students who are receiving services and what progress is being made. We create, update, and review a school data wall. We can name multiple data points on each student in our classes at the snap of a finger.

Face it, we are inundated with data. But are we always really looking at the data for all children and determining the next steps?

Chapter 6, "Supporting Curriculum and Assessment," made me pause and think about how important it is to take that next step with data. Jen dives deep in this chapter with some really important details to consider as literacy leaders in a building. Not only should we be tracking achievement for ALL learners, we should carve out time periodically to review this data and determine next steps. Some prompting questions Jen outlines are as follows:

  • What are the strengths and needs of each student?
  • What students are you concerned about?
  • What students have made the most growth?
  • What observations can you make about your overall literacy data?

Jen suggests having these literacy team meetings each fall, winter, and spring to ensure that no student falls through the cracks. Each person has a crucial role in the process; the teacher reflects on each student, the principal reviews the student’s cumulative folder, the assistant principal listens and takes notes for student placement, and the literacy leader takes notes on students who are still at risk of failure.

As a result of reading this chapter, I have had some really great discussions with teachers and my administration about how we can create a better culture of data REVIEW. I am excited that our staff is ready to take the next steps in data review and that we are clearly beyond the idea of just being great collectors of data. 

This is going to be a great year. Teachers are asking for the next step in our data process and are ready to take it on and make it our own, and make it meaningful. I am confident that as a result, our teachers will feel a better sense of direction and purpose. And once again, the work that goes on behind the scenes will play out better in classroom instruction, in our relationships with our students and families, and will result in increased student achievement.

Coaching Work: Curriculum & Assessment by @danamurphy68 #litleaders

In Chapter Six of Becoming a Literacy Leader, Jennifer Allen outlines the various ways she is able to support teachers with curriculum and assessment in her role as an instructional coach. As anyone in the field of education knows, curriculum and assessment are the backbone of the school system. Curriculum drives our teaching and assessment helps us fine-tune it. I’d go as far as to say supporting curriculum and assessment is one of my top three duties as an instructional coach.

Allen dedicates pages 114 – 116 to explaining how she helps prepare assessment materials during each assessment cycle. I nodded to myself as I read, remembering how I spent an entire morning last year in the windowless copy room making copies of our running record forms for the staff. It certainly wasn’t inspiring work, but I agree with Jennifer that preparing assessment materials is important work. When teachers are freed of the tedious jobs of copying or creating spreadsheets or organizing assessment materials, they are free to concentrate on the hard work of administering and analyzing assessments. If I can remove the ‘busywork’ part of assessment administration for them, I don’t mind spending a morning in a windowless copy room. In this way I can provide the time and space for teachers to think deeply about their assessments. If I can do the busywork, they can do the work that really matters.


While reading Chapter Six, I thought about how I support curriculum and assessment in my school district. I do many of the things Allen wrote about, but what seems most important to me is helping teachers look at student work as formative assessment. On page 110, Allen wrote:

Students should be at the heart of our conversations around curriculum and assessment, and it’s important that we don’t let them define who students are or might become. 

This quote summarizes my driving belief as an instructional coach. It is easy to fall into the trap of believing we (instructional coaches) exist to support the teachers, but the truth is we are ultimately there for the students. In order to keep students at the heart of my work as a coach, I work hard to have student work present during any coaching conversation. This holds true at the end of an assessment cycle as well. It benefits everyone to slow down and take the time to review the assessments (not the scores, the actual assessments). Teachers bring their completed writing prompts or math unit exams or running records, and we use a protocol to talk about the work. There is an abundance of protocols available from NSRF. I also highly recommend the Notice and Note protocol from The Practice of Authentic PLCs by Daniel R. Venables. This is my go-to protocol for looking at student work with a group of teachers.

Teachers are in the classroom, doing the hard work of implementing curriculum and administering assessments. Our job as literacy leaders is to support them by giving them the time and space to reflect on their hard work.