Reading by Example Newsletter, 10-13-18: Data-Informed Instruction

This week’s newsletter focuses on the use of data in the classroom to inform teaching and learning.

  1. What do you do when the data isn’t making any sense? Our instructional leadership team and I encountered this challenge in this post.
  2. One literacy assessment mentioned in the post is Fountas & Pinnell. This reference reminds me of an important blog post the two educators wrote, titled A Level is a Teacher’s Tool, NOT a Child’s Label.
  3. For a more authentic approach to evaluating student writing schoolwide, check out the Educational Leadership article Looking at Student Work for a practical assessment process. (ASCD membership required.)

To read the rest of the newsletter, click here and sign up for free today!

Data-Driven Decision Making: Who’s the decider?

After I shared my previous post, describing my confusion about making sense of certain types of data, the International Literacy Association (ILA) replied with a link to a recent report on this topic:

It’s a short whitepaper/brief titled “Beyond the Numbers: Using Data for Instructional Decision Making”. The principal authors, Vicki Park and Amanda Datnow, make a not-so-provocative claim that may still cause consternation in education:

Rather than data driving the decision-making, student learning goals should drive what data are collected and how they are used.

The reason this philosophy might cause unrest among educators is that data-driven decision making is still a mainstay in schools. Response to Intervention depends on quantitative progress monitoring. School leaders too often discount the anecdotal notes and other qualitative information collected by teachers. Sometimes the term “data-informed” replaces “data-driven”, but the approach largely remains aligned with the latter terminology and practice.

Our school is like many others. We get together three times a year, usually after screeners are administered. We create spreadsheets and make informed decisions on behalf of our students. Yet neither students nor their parents are involved in the process. Can we truly be informed if we are not also including the kids themselves in some way?

To be fair to ourselves and to other schools, making decisions about which students need more support or how teachers will adjust their instruction is relatively new to education. Moreover, our assessments are not as clean as, say, a blood test you might take at the doctor’s office. Data-driven decision making is hard enough for professional educators. There are concerns that bringing in students and their families might only add to the confusion, through no one’s fault.

And yet there are teachers out there who are doing just this: positioning students as the lead assessors and decision-makers in their educational journey. For example, Samantha Mosher, a secondary special education teacher, guides her students to develop their own IEP goals and to use various tools to monitor their own progress. The ownership of the work rests largely on the students’ shoulders. Samantha provides the modeling, support, and supervision to ensure each student’s goals and plan are appropriate.

An outcome of releasing the responsibility of making data-informed decisions to students is that Samantha has become more of a learner. As she notes in her blog post:

I was surprised that many students didn’t understand why they got specific accommodations. I expected to have to explain what was possible, but didn’t realize I would have to explain what their accommodations meant.

“Yes, but older students are able to set their own goals and monitor their own progress. My kids are not mature enough yet to manage that responsibility.” I hear you, and I am going to disagree. I can say that because I have seen younger students do this work firsthand. It’s not a completely independent process, but the data-informed decision making is at least co-led by the students.

In my first book on digital portfolios, I profiled the speech and language teacher at my last school, Genesis Cratsenberg. She used Evernote to capture her students reading aloud weekly progress notes to their parents. She would send the text of their reflections along with the audio home via email. Parents and students could hear firsthand the growth being made over time in the authentic context of a personalized student newsletter. It probably won’t surprise you that once Genesis started this practice, students on her caseload exited her program at a faster rate. (To read an excerpt from my book describing Genesis’s work, click here.)

I hope this post comes across as food for thought and not finger-wagging. Additionally, I don’t believe we should stop with our current approaches to data analysis. Our hands are sometimes tied when it comes to state and federal rules regarding RtI and special education qualification. At the same time, we are free to expand our understanding and our beliefs about what counts as data and who should be at the table when making these types of decisions.

Data – Doin’ It for the Kids

I am a self-proclaimed data nerd. I admit that I have played around in Excel more than once, and I create spreadsheets just for fun. What can I say? Manipulating and looking at data is pretty awesome!

Data has been on my mind a lot lately. My school is currently looking at different ways we assess students and collect student data. There is a lot of focus on Collecting The Data and Having The Data to back up various decisions related to students, including whether or not they should receive intervention services.

I highlighted a LOT in the section of this book titled “Applying Responsible Assessment” (p. 311). I kept reading sentences and thinking, “THIS is what I’ve been trying to get across to my administration!!!” The last sentence on page 311 reads, “…standardized tests are big business, with publishers lobbying hard for their adoption.” I am SO skeptical about using ONE textbook or ONE assessment to determine students’ growth or knowledge. In my last post, I wrote about frontloading and how background knowledge plays such a crucial part in students’ learning. Using one test or “curriculum” limits what our students are exposed to and costs tens of thousands of dollars for often-scripted lessons with assessments that don’t always tell us what we need to know.

One of my latest pet projects is trying to get a diverse set of guided reading books to use for benchmarks so that my kiddos have a fairer shot at success.

But back to data.

Ms. Routman wrote on page 312, “Question any assessment that does not ultimately benefit the learner.” How many assessments do I see given, only to flesh out a data wall or provide more “data points” for a progress monitoring form? I LOVE that in the next section, formative assessments are given the spotlight. Anecdotal notes! Conferences! Teacher-constructed quizzes! Gasp – all things that educated professionals know how to do, really well!

But can we be trusted to do that?

Sometimes I get the feeling that my anecdotal data isn’t enough – that my observations are less than acceptable because a number can’t be attached to them. My notes can’t be squeezed onto a sticky note for the Data Wall, something else discussed in this book. One of the challenges for my intervention department this year has been figuring out how best to organize student data: do we enter it into a spreadsheet, do we keep hard copies, do we share it on The Drive?

I come back to the sentence on page 322, “…but the key is the data must ultimately lead to improving learning…” THAT is the statement that I feel should be guiding discussions about student data. After all, the students are why we teach. Sometimes it doesn’t seem that way, particularly when it’s PSSA time, or the “Bigwigs” are coming to visit, or the charter is up for renewal. But it really is all about the kids.

Last week I created a spreadsheet to do Miscue Analyses on benchmark assessments. It figures out the percentage of word-ending miscues due to a missed inflectional ending, and the percentage of times a student self-corrected a meaning-changing miscue. I’m very proud of it, and it’s been very helpful for me in determining what I need to work on in my small groups, as well as in figuring out how far “below grade level” my students actually are. (I’m less concerned about my students missing a few inflectional endings than I am if they are unable to decode long vowel patterns in 6th grade.)
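For anyone who wants to tinker with the same math outside of a spreadsheet, here is a minimal sketch of the two calculations in Python. The tallies and field names are made up for illustration; they are not from my actual workbook.

```python
# Sketch of the two miscue-analysis percentages described above.
# The counts and field names below are hypothetical examples.

def pct(part, whole):
    """Return part/whole as a percentage, guarding against division by zero."""
    return 100 * part / whole if whole else 0.0

# Hypothetical tallies from one benchmark running record
record = {
    "word_ending_miscues": 12,         # all miscues involving a word ending
    "inflectional_ending_miscues": 9,  # subset caused by a missed inflectional ending (-s, -ed, -ing)
    "meaning_changing_miscues": 8,     # miscues that changed the meaning of the sentence
    "self_corrections": 5,             # meaning-changing miscues the student self-corrected
}

inflectional_pct = pct(record["inflectional_ending_miscues"], record["word_ending_miscues"])
self_correction_pct = pct(record["self_corrections"], record["meaning_changing_miscues"])

print(f"Word-ending miscues from missed inflectional endings: {inflectional_pct:.0f}%")
print(f"Meaning-changing miscues self-corrected: {self_correction_pct:.0f}%")
```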

For last year’s book review blog, I titled my data post “Data, Not Just Another 4-Letter Word”. I still feel that way. Data is awesome. It’s so helpful when collected in a meaningful, deliberate way. And just like anything in education, it all comes back to the “why” – we do it for the kids.

Better Data Days Are Ahead

We’ve all been there: we collect data and make beautiful color-coded spreadsheets detailing nearly every data point we could possibly collect on each child. We compare district data to state data, and nationally norm-referenced data against in-class assessments. We highlight each student’s projected growth toward adequate progress. We look at whole-class data and determine standards to re-teach. We attend collaboration and intervention meetings to discuss students who are receiving services and the progress being made. We create, update, and review a school data wall. We can name multiple data points on each student in our classes at the snap of a finger.

Face it: we are inundated with data. But are we really looking at the data for all children and determining the next steps?

Chapter 6, “Supporting Curriculum and Assessment,” made me pause and think about how important it is to take that next step with data. Jen dives deep in this chapter into some really important details to consider as literacy leaders in a building. Not only should we be tracking student achievement for ALL learners, we should also carve out time periodically to review this data and determine next steps. Some prompting questions Jen outlines are as follows:

  • What are the strengths and needs of each student?
  • What students are you concerned about?
  • What students have made the most growth?
  • What observations can you make about your overall literacy data?

Jen suggests having these literacy team meetings each fall, winter, and spring to ensure that no student falls through the cracks. Each person has a crucial role in the process: the teacher reflects on each student, the principal reviews the student’s cumulative folder, the assistant principal listens and takes notes for student placement, and the literacy leader takes notes on students who are still at risk of failure.

As a result of reading this chapter, I have had some really great discussions with teachers and my administration about how we can create a better culture of data REVIEW. I am excited that our staff is ready to take the next steps in data review and that we are clearly beyond the idea of just being great collectors of data. 

This is going to be a great year. Teachers are asking for the next step in our data process and are ready to take it on, make it their own, and make it meaningful. I am confident that as a result, our teachers will feel a better sense of direction and purpose. And once again, the work that goes on behind the scenes will play out in better classroom instruction, in stronger relationships with our students and families, and in increased student achievement.

Coaching Work: Curriculum & Assessment by @danamurphy68 #litleaders

In Chapter Six of Becoming a Literacy Leader, Jennifer Allen outlines the various ways she is able to support teachers with curriculum and assessment in her role as an instructional coach. As anyone in the field of education knows, curriculum and assessment are the backbone of the school system. Curriculum drives our teaching, and assessment helps us fine-tune it. I’d go as far as to say supporting curriculum and assessment is one of my top three duties as an instructional coach.

Allen dedicates pages 114–116 to explaining how she helps prepare assessment materials during each assessment cycle. I nodded to myself as I read, remembering how I spent an entire morning last year in the windowless copy room making copies of our running record forms for the staff. It certainly wasn’t inspiring work, but I agree with Jennifer that preparing assessment materials is important work. When teachers are freed from the tedious jobs of copying, creating spreadsheets, and organizing assessment materials, they can concentrate on the hard work of administering and analyzing assessments. If removing that ‘busywork’ gives teachers the time and space to think deeply about their assessments, I don’t mind spending a morning in a windowless copy room. If I can do the busywork, they can do the work that really matters.

While reading Chapter Six, I thought about how I support curriculum and assessment in my school district. I do many of the things Allen wrote about, but what seems most important to me is helping teachers look at student work as formative assessment. On page 110, Allen wrote:

Students should be at the heart of our conversations around curriculum and assessment, and it’s important that we don’t let them define who students are or might become. 

This quote summarizes my driving belief as an instructional coach. It is easy to fall into the trap of believing we (instructional coaches) exist to support the teachers, but the truth is we are ultimately there for the students. In order to keep students at the heart of my work as a coach, I work hard to have student work present during any coaching conversation. This holds true at the end of an assessment cycle as well. It benefits everyone to slow down and take the time to review the assessments (not the scores, the actual assessments). Teachers bring their completed writing prompts or math unit exams or running records, and we use a protocol to talk about the work. There is an abundance of protocols available from NSRF. I also highly recommend the Notice and Note protocol from The Practice of Authentic PLCs by Daniel R. Venables. This is my go-to protocol for looking at student work with a group of teachers.

Teachers are in the classroom, doing the hard work of implementing curriculum and administering assessments. Our job as literacy leaders is to support them by giving them the time and space to reflect on their hard work.

Learning Management Systems: Who are they for?

A learning management system, or “LMS,” is defined as “a digital learning system” that “manages all of the aspects of the learning process” (Amit K, 2016). A teacher can use an LMS for a variety of classroom functions, including communicating the learning objectives, organizing the learning timelines, telling the learners exactly what they need to learn and when, delivering the content straight to the learners, streamlining communications between instructor(s) and learners, and providing ongoing resources.

An LMS can also help the learner track their own progress, identifying what they have learned already and what they need to learn (Amit K). There are many options for learners to share their representations of their understandings within an LMS, including video, audio, images and text. In addition, discussion boards and assessment tools are available for teachers and students in most systems.

This definition and description of your typical LMS leads to an important question: Who is the learning management system for?

If an LMS is for the teacher, then I think teachers will find the previously listed features to be of great benefit to their practice. For example, no longer do they have to collect papers, lug them home, and grade them by hand. Now, students can submit their work electronically through the LMS. The teacher can assess learning online. The excuse “My dog ate my homework” ceases to exist. Google Classroom, Schoology, and Edmodo fall into this category.

Also, teachers can use the LMS tools to create quizzes that serve as a formative assessment of the lesson presented that day. Data is immediately available regarding who understands the content and who needs further support. This quick turnaround can help a teacher be more responsive to students’ academic needs. There are obvious benefits for a teacher who elects to use an LMS for these reasons.

If, on the other hand, an LMS is for the students, then we have a bit more work to do. With a teacher-centric LMS, not much really changes regarding how a classroom operates. The teacher assigns content and activities, the students complete them, and the teacher assesses. The adage “old wine in new bottles” might apply here.

When students are the focus of integrating an LMS in school, the whole idea of instruction has to shift. We are now exploring concepts such as personalized learning, which “puts students in charge of selecting their projects and setting their pace” (Singer, 2016), and connected learning, which ties together students’ interests, peer networks, and school accomplishments (Ito et al., 2013). In this scenario, it is not the students who need to make a shift but the teachers. Examples of more student-centered LMSs include Epiphany Learning and Project Foundry.

The role that teachers have traditionally filled looks very different in a more student-centered, digitally enhanced learning environment. I don’t believe either focus – on the teacher or on the student – is an ineffective approach to using a learning management system. The benefits in each scenario are promising. Yet we know that the more ownership students have over the learning experience, the greater the likelihood of achievement gains and engagement in school.

References

Amit K, S. (2016). Choosing the Right Learning Management System: Factors and Elements. eLearning Industry. Available: https://elearningindustry.com/choosing-right-learning-management-system-factors-elements

Ito, M., Gutiérrez, K., Livingstone, S., Penuel, B., Rhodes, J., Salen, K., Schor, J., Sefton-Green, J., & Watkins, S.C. (2013). Connected Learning: An Agenda for Research and Design. Digital Media and Learning Research Hub. Whitepaper, available: http://dmlhub.net/publications/connected-learning-agenda-for-research-and-design/

Singer, N., Isaac, M. (2016). Facebook Helps Develop Software That Puts Students in Charge of Their Lesson Plans. The New York Times. Available: http://nyti.ms/2b3LNzv

Data-Driven or Data-Informed? Thoughts on trust and evaluation in education

Data-informed or data-driven? This is a question I have wrestled with as a school administrator for some time. What I have found is that the usefulness of student data to inform instruction and accountability rests on the level of trust that exists within the school walls.

First there is trust in the data itself. Are the results of these assessment tools reliable (do they yield consistent results across administrations and students?) and valid (do they accurately measure the student learning they claim to measure?)? These are good initial inquiries, but they should only be a starting point.

Security of student information should also be a priority when electing to house student data with third parties. Questions I have started asking vendors that develop modern assessment tools include “Where do you house our student data?”, “What do you do with this data beyond allowing us to organize and analyze it?”, and “Who owns the student data?”. In a commentary for The New York Times, Julia Angwin highlights situations in which the algorithms used to make “data-driven decisions” about the probability of recidivism in the criminal justice system too often produced biased results (2016). Could a similar situation happen in education? Relying merely on the output a computer program produces leads one to question the validity and reliability of this type of data-driven decision making.

A second issue regarding trust in schools related to data is how student learning results are being used as a tool to evaluate teachers and principals. All educators are rightfully skeptical when accountability systems ask for student learning results to be counted toward their performance ratings and, in some cases, level of pay and future employment with an organization.

This is not to suggest that student assessment data should be off the table when conversations occur regarding the effectiveness of a teacher and his or her impact on students’ learning. The challenge, though, is ensuring that there is a clear connection between the teacher’s actions and student learning. One model for data-driven decision making “provides a social and technical system to help schools link summative achievement test data with the kinds of formative data that help teachers improve student learning across schools” (Halverson et al., 2007, p. 162). Using a systematic approach like this, in which educators are expected to work together using multiple assessments to make instructional decisions, can simultaneously hold educators collectively accountable while ensuring that students receive better teaching.

Unfortunately, this is not the reality in many schools. Administrators too often adhere to the “data-driven” mentality with a literal and absolute mindset. Specifically, if something cannot be quantified, such as teacher observations and noncognitive information, school leaders may dismiss it as less valuable than what a more quantitative tool might offer. Professional trust can tank in these situations.

That is why it is critical that professional development plans provide training that builds assessment literacy in every teacher. A faculty should be well versed in the differences between formative and summative assessments and between informal and formal measurements, in deciding which data points are more reliable than others, and in how to triangulate data in order to analyze results and make a more informed decision regarding student learning.

Since analytics requires data analysis, institutions will need to invest in effective training to produce skilled analytics staff. Obtaining or developing skilled staff may present the largest barrier and the greatest cost to any academic analytics initiative (Baer & Campbell, 2012).

Building this assessment literacy can result in a level of trust in oneself as a professional to make informed instructional decisions on behalf of kids. If a faculty can ensure that the data they are using are a) valid and reliable, b) used to improve student learning and instructional practice, and c) drawn from multiple forms of data used wisely, then I am all for data-driven decision making as a model for school improvement. Trust will rise and student achievement may follow. If not, an unfortunate outcome might be the data cart coming before the pedagogical horse.

References

Angwin, J. (2016). Make Algorithms Accountable. The New York Times. Available: http://www.nytimes.com/2016/08/01/opinion/make-algorithms-accountable.html?_r=0

Baer, L.L. & Campbell, J. (2012). From Metrics to Analytics, Reporting to Action: Analytics’ Role in Changing the Learning Environment. Educause. Available: https://net.educause.edu/ir/library/pdf/pub72034.pdf

Halverson, R., Gregg, J., Prichett, R., & Thomas, C. (2007). The New Instructional Leadership: Creating Data-Driven Instructional Systems in Schools. Journal of School Leadership, 17, 159–194.

This is a reaction paper I wrote for a graduate course I am currently taking (Technology and School Leadership). Feel free to respond in the comments to extend this thinking.