Literacy Leadership: Expecting (and Embracing) Conflict

Our school is currently examining our beliefs about reading instruction. Faculty members respond with either “agree” or “disagree” to over twenty related statements. Examples include: “Leveling books in the classroom library is a good idea,” and “Students need to do lots of independent reading of self-selected texts.”  (These statements come from Regie Routman’s book Read, Write, Lead: Breakthrough Strategies for Schoolwide Literacy Success.)

So far, half the teachers have taken the beliefs survey. Out of the more than twenty statements, we are in complete agreement on only five. My prediction is that this number will shrink once everyone has taken the survey.

[Image: beliefs survey results]

This is not a bad thing.

I’ve come to learn professional conflict can be a source of professional learning. I’m not referring to in-fighting over petty reasons. Instead, I refer to the deeper philosophical debates that should be occurring but are often pushed aside for fear of having a hard yet necessary conversation.

Conflict in the context of our instructional beliefs is the misalignment between our own current values and practices and our colleagues'. This awareness of our current situation is a good thing. Now we have information to act upon, as long as we accept our current reality. To address this misalignment, we need to start engaging in professional conversations around these important topics in safe and productive ways.

Take the topic of reading levels, depicted in the previous image. It’s a constant source of disagreement in elementary schools. You can see we are already pretty divided on this issue. The first question I might ask to start a conversation around reading levels is, “Why do you think the results are the way they are?” By asking wondering questions, we open up the floor to different possibilities. I am not taking sides on levels. I am curious.

Now imagine what the responses might be.

  • A teacher who supports levels as a way to assess student reading progress can point to the fact that younger readers make so much growth in a short amount of time that teachers need a reliable evaluation tool to inform instruction. Likewise, if students are not making growth at the primary level, we need to be responsive and implement a reading intervention to address any deficits.
  • A teacher who does not support levels as a way to assess student reading progress might point to past experiences in which students were treated as a level, such as a classroom library organized solely around a leveling system. Or they might feel that levels for older students are not as helpful as conferring notes, student self-assessments, and performance tasks such as book trailers.

Who is right, and who is wrong? I believe both perspectives make a strong case. This leads to a potential second question that guides the discussion toward a third option. For example, “What if designated reading levels were only helpful at certain grade levels?” or “Might there be a better way to phrase this statement, one that recognizes the benefits of this approach while pointing out its limits?” This line of inquiry may lead to a revision of the statement, such as:

Designated levels can be an accurate way to assess student reading progress at the primary level and inform authentic instruction.

If a faculty can agree on this revision, then we can own it. (By the way, a professional conversation like this can happen during a staff meeting or in professional learning communities.) If the revision is not acceptable to all, it can be brought back to an instructional leadership team for further revision.

The benefits of embracing conflict within structured professional dialogue are many. First, we air out our issues in a safe and productive way. Second, we start to develop a common language. For example, maybe some staff members are unfamiliar with benchmark books as an assessment tool; teachers with this knowledge can explain the concept. Unhealthy conflict is often the product of poor communication and false assumptions. Third, when we agree upon a belief, we own it. There’s no opting out in the building. The faculty is free to call each other out when these beliefs are not translated into practice, yet this rarely needs to happen because we own the belief. Teachers are more empowered to act on it and to seek out support if needed. Finally, a school leader has modeled what it means to have a professional conversation that is productive and doesn’t end in hurt feelings.

What are your thoughts on the role of conflict in leading a literacy initiative and/or a school in general? Please share in the comments.

 

Leadership as Process

It is October, which means it is school learning objective time. Principals are diligently crafting statements that are S.M.A.R.T. “By the end of the school year,…” and then we make a prediction about the future. In April, we revisit these statements and see if our crystal balls were correct.

I must admit that my goals are usually not fully met. I aim too high, at least by educator evaluation standards. These systems are set up to shoot for just above the status quo instead of for the stars. Great for reporting out. Yet I don’t want to lower my expectations.

Setting objectives and goals is a good thing. We should have something tangible to strive for and know that we have a target to hit. My challenge with this annual exercise is how heavily we focus on a product while largely ignoring the process to get there.

Left alone, schools can purchase a resource or adopt a commercial curriculum that is aligned to the standards. But are they also aligned with our specific students’ needs? Do the practices and resources we implement engage our population of kids? Maybe we are marching toward a specific destination, but are we taking the best pathway to get there?

Having a plan and implementing a plan are two different things. Like an effective classroom teacher, we have to be responsive to the climate and the culture of a school. That means we should be aware of our environment, accept our current status, and then move forward together.

For example, when I arrived at my current elementary school, there was some interest in going schoolwide with the Lucy Calkins Units of Study for reading and for writing. Professionally, I find a lot of positive qualities in the program. Also on the periphery was a desire for a more consistent literacy curriculum. Our scores reflected a need for instructional consistency and coherence.

If we have an outcome-focused leadership style, then it makes a lot of sense to purchase a program that promises exactly what is being requested. But that means we are investing in stuff instead of investing in teachers. So we declined. The teacher-leaders and I weren’t saying no to one program or passing the buck on making a hard decision. What we wanted instead was a clear plan to become better as practitioners.

This meant first revisiting our identities as educators. What does it mean as a teacher and a professional if the lessons were scripted for us? Are we not worthy of the trust and responsibility that is essential for the many decisions we make every day? This led to examining our beliefs about the foundation of literacy, the reading-writing connection. We found unanimity on only two specific areas out of 21 statements. Instead of treating this as a failure, we saw these two areas of agreement as a starting point for success. We nurtured this beginning and started growing ourselves to become the faculty we were meant to be for our students. After two years of work, we found nine areas of agreement on these same statements.

[Image: beliefs survey results after two years]

There are no ratings or other evaluation scores attached to these statements. I am not sure how to quantify our growth as a faculty, and I am pretty sure I wouldn’t want to if I knew how. Instead, we changed how we saw ourselves and how we viewed our students as readers, writers, and thinkers. This is not an objective or goal that is suggested by our evaluation program, but maybe it should be.

I get to this point in a post and I feel like we are bragging. We are not. While I believe our teachers are special, there are great educators in every school. The difference, I think, is that we chose to focus more on the process of becoming better and less on the outcomes that were largely out of our hands. This reduced our anxiety with regard to test scores and public perception of our school. Anyone can do this work.

Reading by Example Newsletter, 10-13-18: Data-Informed Instruction

This week’s newsletter focuses on the use of data in the classroom to inform teaching and learning.

  1. What do you do when the data isn’t making any sense? Our instructional leadership team and I encountered this challenge in this post.
  2. One literacy assessment mentioned in the post is Fountas & Pinnell. This reference reminds me of an important blog post the two educators wrote, titled A Level is a Teacher’s Tool, NOT a Child’s Label.
  3. For a more authentic approach to evaluating student writing schoolwide, check out the Educational Leadership article Looking at Student Work for a practical assessment process. (ASCD membership required.)

To read the rest of the newsletter, click here and sign up for free today!

Data-Driven Decision Making: Who’s the decider?

After I shared out my previous post, describing my confusion about making sense of certain types of data, the International Literacy Association (ILA) replied with a link to a recent report on this topic:

It’s a short whitepaper/brief titled “Beyond the Numbers: Using Data for Instructional Decision Making”. The principal authors, Vicki Park and Amanda Datnow, make a not-so-provocative claim that may still cause consternation in education:

Rather than data driving the decision-making, student learning goals should drive what data are collected and how they are used.

The reason this philosophy might cause unrest with educators is that data-driven decision making is still a mainstay in schools. Response to Intervention is dependent on quantitative-based progress monitoring. School leaders too often discount the anecdotal notes and other qualitative information collected by teachers. Sometimes the term “data-informed” replaces “data-driven”, but the approach largely remains aligned with the latter terminology and practice.

Our school is like many others. We get together three times a year, usually after screeners are administered. We create spreadsheets and make informed decisions on behalf of our students. Yet neither students nor their parents are involved in the process. Can we truly be informed if we are not also including the kids themselves in some way?

To be fair to ourselves and to other schools, making decisions regarding which students need more support or how teachers will adjust their instruction is relatively new to education. As well, our assessments are not as clean as, say, a blood test you might take at the doctor’s office. Data-driven decision making is hard enough for professional educators. There are concerns that bringing in students and their families might only add to the confusion, through no one’s fault.

And yet there are teachers out there who are doing just this: positioning students as the lead assessors and decision-makers in their educational journey. For example, Samantha Mosher, a secondary special education teacher, guides her students to develop their own IEP goals and to use various tools to monitor their own progress. The ownership for the work rests largely on the students’ shoulders. Samantha provides the modeling, support, and supervision to ensure each student’s goals and plan are appropriate.

An outcome of releasing the responsibility of making data-informed decisions to students is that Samantha has become more of a learner. As she notes in her blog post:

I was surprised that many students didn’t understand why they got specific accommodations. I expected to have to explain what was possible, but didn’t realize I would have to explain what their accommodations meant.

“Yes, but older students are able to set their own goals and monitor their own progress. My kids are not mature enough yet to manage that responsibility.” I hear you, and I am going to disagree. I can say that because I have seen younger students do this work firsthand. It’s not a completely independent process, but the data-informed decision making is at least co-led by the students.

In my first book on digital portfolios, I profiled the speech and language teacher at my last school, Genesis Cratsenberg. She used Evernote to capture her students reading aloud weekly progress notes to their parents, then sent the text of their reflections along with the audio home via email. Parents and students could hear firsthand the growth students were making over time in the authentic context of a personalized student newsletter. It probably won’t surprise you that once Genesis started this practice, students on her caseload exited her program at a faster rate. (To read an excerpt from my book describing Genesis’s work, click here.)

I hope this post comes across as food for thought and not finger-wagging. Additionally, I don’t believe we should stop with our current approaches to data analysis. Our hands are sometimes tied when it comes to state and federal rules regarding RtI and special education qualification. At the same time, we are free to expand our understanding and our beliefs about what counts as data and who should be at the table when making these types of decisions.

The Data Says…What? (Or: Why we struggle to make sense of literacy assessment results)

Photo by Art Lasovsky on Unsplash

Our instructional leadership team recently analyzed the last two years of writing assessment data. We use a commercially published rubric to score student writing in the fall and in the spring. As I presented the team with the results, now visualized in graphs and tables, we tried to make sense of the information.

It didn’t go as well as planned.

To start, we weren’t evaluating every student’s writing; for the sake of time and efficiency, we scored only half of the high, medium, and low pieces. This was a schoolwide evaluation, but it did not give teachers specific information to use. Also, the rubric changes as students get older. Expectations increase even though the main tenets of writing quality stay the same, so it is hard to compare apples to apples. In addition, the subjective nature of writing, especially in a reader’s response, can cause frustration.

In the end, we decided to select one area of growth we would focus on as a faculty this year, while maintaining the gains already made.

Anytime I wade into the weeds of literacy assessment, I feel like I come out messier than when I entered. I often have more questions than answers. Problems go unresolved. Yet there have to be ways to evaluate our instructional impact on student literacy learning. It’s important that we validate our work and, more importantly, ensure students are growing as readers and writers.

One assessment, tried and true, is the running record for reading comprehension. It is now standardized through products such as Fountas & Pinnell. It is time-intensive, however, and even the best teachers struggle to find the instructional time and to manage the rest of the class while administering these assessments. Running records are the mainstay assessment tool for Reading Recovery teachers, who work one-on-one with 1st grade students.

Another method for evaluating student literacy skills at the classroom level is observation. This is not as formal as a running record. Teachers can witness a student’s interactions with a text: Do they frustrate easily? How well do they apply their knowledge of text features to a new book? The information is almost exclusively qualitative, which makes analyzing the results a challenge.

One tool for evaluating students as readers and writers that doesn’t get enough attention (in my opinion, anyway) is the student survey. How students feel about literacy and how they see themselves as readers and writers is very telling. The challenge here is that there are a lot of tools but not much validity or reliability behind most of them. One survey, Me and My Reading Profile, is an example that is evidence-based.

To summarize, I don’t have an answer here as much as I wanted to bring up a challenge I think a lot of schools face: how do we really measure literacy success in an educational world that needs to quantify everything? Please share your ideas and experiences in the comments.

Read by Example Newsletter, 10-6-18: Work/Life Balance


In this week’s newsletter, we explore the concept of work-life balance.

  1. Do checklists drain you? Consider an “un-checklist”, described in this post, in which you add daily experiences to a list that documents an interesting life.
  2. Commit30 is my favorite planner. My wife introduced it to me. Each month, you commit to one habit in an area you want to improve. (This month is reading widely.) I can integrate work and home instead of always trying to find balance.
  3. The concept of work/life integration vs. balance originates from the research by Dr. Ellen Langer, author of Mindfulness and The Power of Mindful Learning. You can find links to both books on the blog’s Recommended Reading page.
  4. School/literacy leadership can be lonely. To combat isolation, I recommended five applications for creating a sense of connectedness in this post.
  5. The concept of connectedness can be explored in Parker Palmer’s article Thirteen Ways to Look at Community (Center for Courage and Renewal).
  6. Of all the applications, the most important one to me is Twitter. It’s what got me started on becoming a connected educator. Colleagues and I wrote an ASCD Express article on this topic, which includes several “edu-tweeps” to follow…

To read the rest of this newsletter, sign up here for free. Thanks for following!

-Matt

Why Friday Should Be Your New Monday

This morning, a student arriving at school was wearing a shirt with the following phrase on the front:

Got That Friday Feeling

I laughed and then went about my day.

Fridays always seem a bit lighter and looser. For example, jeans replace khakis. These quiet yet clear transitions to the weekend are normal. Yet can they also cause us to not appreciate the present? We are mentally on Saturday time even before Friday begins.

Relatedly, is this why people generally struggle more with Mondays? As I consider this question, the theory makes some sense. Because we prioritize our weekends (as we should), we may also become frustrated with the lack of transition into Monday. All of that paperwork left on our desks isn’t filing itself. It’s as if we are starting behind when we come back from two days off.

So I humbly suggest turning your Fridays into your Mondays. Not all day Friday. Only part, likely the afternoon. By cleaning up loose ends from the current week, we are also preparing for the following week. Here are a few steps I find helpful. Some of these ideas come from or are adapted from The Together Leader: Get Organized for Your Success – and Sanity! by Maia Heyck-Merlin (there is also a teacher version of this resource).

  1. Clear off all of your extra paperwork and scan it (or file it if you must). I use Scannable to create PDFs of documents with my phone. They are saved in Evernote, a digital file organizer that acts as my second brain.
  2. Clean up as many emails as possible from the inbox. I move important conversations that I have responded to into a categorized folder. The rest I delete. Typically I don’t get to “inbox zero”, but then again my email is not my to-do list…
  3. …which happens to be Things, an iOS application. I have it on my MacBook Air, iPad, and iPhone. I add tasks that come up during the day to this app, which syncs across all devices. During my Friday/Monday time, I move any tasks that didn’t get completed to a future date. There is more to Things than just to-dos, such as project management and checklists for regularly scheduled activities.
  4. I journal daily. It helps me get my thoughts out of my head and onto paper so I don’t dwell on them over the weekend. If you have not journaled before, consider Fridays as a good day for that. I follow some general prompts when I need direction:
    • What went well this week? What are you happy about?
    • Where did you come up short? Why do you think that is?
    • How is this week’s work connected to our school goals?
    • What needs to happen next week to sustain the momentum?
  5. Now that my mind is clearer and my priorities are more in order, I can start scheduling for the following week. I add my big rocks, my priorities, first: daily instructional walks, parent/staff meetings, professional learning team time, a weekly touch base with our instructional coach and my assistant, and deadlines for any big projects. I have a print planner as well. I write these out from my digital calendar to confirm the accuracy of dates. (Some people may not need this confirmation. I am not one of those people.)

With my desk cleared and my mind uncluttered, I am better able to enjoy the weekend. There is less weighing on my mind as I spend time with family and friends. To be sure, I cannot turn off my work brain; there are always lingering projects and tasks that will need to be continued when I come back Monday. Yet even when I am not 100% successful in preparing for Monday, the time and effort spent on Friday helps.