We’ve all been there: we collect data and make beautiful color-coded spreadsheets detailing nearly every data point we could possibly gather on each child. We compare district data to state data, and nationally norm-referenced data against in-class assessments. We chart students’ projected growth in order to make adequate progress for each child. We look at whole-class data and determine standards to re-teach. We attend collaboration and intervention meetings to discuss students who are receiving services and the progress they are making. We create, update, and review a school data wall. We can name multiple data points for each student in our classes at the snap of a finger.
Face it, we are inundated with data. But are we always really looking at the data for all children and determining the next steps?
Chapter 6, “Supporting Curriculum and Assessment,” made me pause and think about how important it is to take that next step with data. Jen dives deep in this chapter with some really important details to consider as literacy leaders in a building. Not only should we be tracking student achievement for ALL learners, but we should also carve out time periodically to review this data and determine next steps. Some prompting questions Jen outlines are as follows:
What are the strengths and needs of each student?
What students are you concerned about?
What students have made the most growth?
What observations can you make about your overall literacy data?
Jen suggests having these literacy team meetings each fall, winter, and spring to ensure that no student falls through the cracks. Each person has a crucial role in the process; the teacher reflects on each student, the principal reviews the student’s cumulative folder, the assistant principal listens and takes notes for student placement, and the literacy leader takes notes on students who are still at risk of failure.
As a result of reading this chapter, I have had some really great discussions with teachers and my administration about how we can create a better culture of data REVIEW. I am excited that our staff is ready to take the next steps in data review and that we are clearly beyond the idea of just being great collectors of data.
This is going to be a great year. Teachers are asking for the next step in our data process and are ready to take it on, make it their own, and make it meaningful. I am confident that as a result, our teachers will feel a better sense of direction and purpose. And once again, the work that goes on behind the scenes will play out in better classroom instruction, in our relationships with our students and families, and in increased student achievement.
In Chapter Six of Becoming a Literacy Leader, Jennifer Allen outlines the various ways she is able to support teachers with curriculum and assessment in her role as an instructional coach. As anyone in the field of education knows, curriculum and assessment are the backbone of the school system. Curriculum drives our teaching and assessment helps us fine-tune it. I’d go as far as to say supporting curriculum and assessment is one of my top three duties as an instructional coach.
Allen dedicates pages 114–116 to explaining how she helps prepare assessment materials during each assessment cycle. I nodded to myself as I read, remembering how I spent an entire morning last year in the windowless copy room making copies of our running record forms for the staff. It certainly wasn’t inspiring work, but I agree with Jennifer that preparing assessment materials is important work. When teachers are freed of the tedious jobs of copying or creating spreadsheets or organizing assessment materials, they are free to concentrate on the hard work of administering and analyzing assessments. If I can remove the ‘busywork’ part of assessment administration for them, I don’t mind spending a morning in a windowless copy room. In this way I can provide the time and space for teachers to think deeply about their assessments. If I can do the busywork, they can do the work that really matters.
While reading Chapter Six, I thought about how I support curriculum and assessment in my school district. I do many of the things Allen wrote about, but what seems most important to me is helping teachers look at student work as formative assessment. On page 110, Allen wrote:
Students should be at the heart of our conversations around curriculum and assessment, and it’s important that we don’t let them define who students are or might become.
This quote summarizes my driving belief as an instructional coach. It is easy to fall into the trap of believing we (instructional coaches) exist to support the teachers, but the truth is we are ultimately there for the students. In order to keep students at the heart of my work as a coach, I work hard to have student work present during any coaching conversation. This holds true at the end of an assessment cycle as well. It benefits everyone to slow down and take the time to review the assessments (not the scores, the actual assessments). Teachers bring their completed writing prompts or math unit exams or running records, and we use a protocol to talk about the work. An abundance of protocols is available from NSRF. I also highly recommend the Notice and Note protocol from The Practice of Authentic PLCs by Daniel R. Venables. This is my go-to protocol for looking at student work with a group of teachers.
Teachers are in the classroom, doing the hard work of implementing curriculum and administering assessments. Our job as literacy leaders is to support them by giving them the time and space to reflect on their hard work.
A learning management system, or “LMS,” is defined as “a digital learning system” that “manages all of the aspects of the learning process” (Amit K, 2015). A teacher can use an LMS for a variety of classroom functions, including communicating the learning objectives, organizing the learning timelines, telling the learners exactly what they need to learn and when, delivering the content straight to the learners, streamlining communications between instructor(s) and learners, and providing ongoing resources.
An LMS can also help the learner track their own progress, identifying what they have learned already and what they need to learn (Amit K). There are many options for learners to share their representations of their understandings within an LMS, including video, audio, images and text. In addition, discussion boards and assessment tools are available for teachers and students in most systems.
This definition and description of your typical LMS leads to an important question: Who is the learning management system for?
If an LMS is for the teacher, then I think they will find the previously listed features to be of great benefit to their practice. As an example, no longer do they have to collect papers, lug them home and grade them by hand. Now, students can submit their work electronically through the LMS. The teacher can assess learning online. The excuse “My dog ate my homework” ceases to exist. Google Classroom, Schoology and Edmodo fall into this category.
Also, teachers can use the LMS tools to create quizzes that could serve as a formative assessment of the lesson presented that day. Data is immediately available regarding who understands the content and who needs further support. This quick turnaround can help a teacher be more responsive to students’ academic needs. There are obvious benefits for a teacher who elects to use an LMS for these reasons.
If, on the other hand, an LMS is for the students, then we have a bit more work to do. With a teacher-centric LMS, not much really changes regarding how a classroom operates. The teacher assigns content and activities, the students complete them, and the teacher assesses. The adage “old wine in new bottles” might apply here.
With students in mind when integrating an LMS in school, the whole idea of instruction has to shift. We are now exploring concepts such as personalized learning, which “puts students in charge of selecting their projects and setting their pace” (Singer, 2016), and connected learning, which ties together students’ interests, peer networks and school accomplishments (Ito et al, 2013). In this scenario, it is not the students who need to make a shift but the teachers. Examples of more student-centered LMSs include Epiphany Learning and Project Foundry.
The role that teachers have traditionally filled looks very different from what a more student-centered, digitally-enhanced learning environment might resemble. I don’t believe either focus, the teacher or the student, is an ineffective approach for using a learning management system. The benefits in each scenario are promising. Yet we know that the more ownership students have over the learning experience, the greater the likelihood of achievement gains and higher engagement in school.
Data-informed or data-driven? This is a question I have wrestled with as a school administrator for some time. What I have found is that the usefulness of student data to inform instruction and accountability rests on the level of trust that exists within the school walls.
First there is trust in the data itself. Are the results of these assessment tools reliable (consistent across administrations, over time and across students) and valid (accurately measuring the student learning they claim to measure)? These are good initial inquiries, but they should only be a starting point.
Security of student information should also be a priority when electing to house student data with third parties. Some questions I have started asking vendors that develop modern assessment tools include “Where do you house our student data?”, “What do you do with this data beyond allowing us to organize and analyze it?”, and “Who owns the student data?”. In a commentary for The New York Times, Julia Angwin highlights situations in which the algorithms used to make “data-driven decisions” regarding probability of recidivism in the criminal justice system were too often biased in their results (2016). Could a similar situation happen in education? Relying merely on the output that a computer program produces leads one to question the validity and reliability of this type of data-driven decision making.
A second issue regarding trust in schools related to data is how student learning results are being used as a tool to evaluate teachers and principals. All educators are rightfully skeptical when accountability systems ask for student learning results to be counted toward their performance ratings and, in some cases, level of pay and future employment with an organization.
This is not to suggest that student assessment data should be off the table when conversations occur regarding the effectiveness of a teacher and his or her impact on students’ learning. The challenge, though, is ensuring that there is a clear correlation between the teacher’s actions and student learning. One model for data-driven decision making “provides a social and technical system to help schools link summative achievement test data with the kinds of formative data that helps teachers improve student learning across schools” (Halverson et al., p. 162). Using a systematic approach like this, in which educators are expected to work together using multiple assessments to make instructional decisions, can simultaneously hold educators collectively accountable while ensuring that students are receiving better teaching.
Unfortunately, this is not the reality in many schools. Administrators too often adhere to the “data-driven” mentality with a literal and absolute mindset. Specifically, if something cannot be quantified, such as teacher observations and noncognitive information, school leaders may dismiss it as less valuable than what a more quantitative tool might offer. Professional trust can tank in these situations.
That is why it is critical that professional development plans provide training to build assessment literacy with every teacher. A faculty should be well versed in the differences between formative and summative assessments and between informal and formal measurements, in deciding which data points are more reliable than others, and in how to triangulate data in order to analyze results and make a more informed decision regarding student learning.
Since analytics requires data analysis, institutions will need to invest in effective training to produce skilled analytics staff. Obtaining or developing skilled staff may present the largest barrier and the greatest cost to any academic analytics initiative (Baer & Campbell, 2012).
Building this assessment literacy can result in a level of trust in oneself as a professional to make informed instructional decisions on behalf of kids. If a faculty can ensure that the data they are using is a) valid and reliable, b) used to improve student learning and instructional practice, and c) drawn from multiple forms of data used wisely, then I am all for data-driven decision making as a model for school improvement. Trust will rise and student achievement may follow. If not, an unfortunate outcome might be the data cart coming before the pedagogical horse.
Halverson, R., Gregg, J., Prichett, R., & Thomas, C. (2007). The new instructional leadership: Creating data-driven instructional systems in schools. Journal of School Leadership, 17, 159–194.
This is a reaction paper I wrote for a graduate course I am currently taking (Technology and School Leadership). Feel free to respond in the comments to extend this thinking.
I followed this link retweeted by Frederick Hess, contributor to Education Week, to a US News & World Report opinion piece titled More Money, Same Problems. It was written by Gerard Robinson (the source of the tweet) and Benjamin Scafidi. Robinson is a fellow at the American Enterprise Institute, “a conservative think tank” (Source: Wikipedia). Scafidi is a professor of economics at Kennesaw State University.
The authors acknowledge that “public education is important to the economic and social well-being of our nation”. They go on to point out that there are some students who are successful in public education and far too many who are not. You have no argument from me. Robinson and Scafidi also concede that an adequate level of “resources matter to education”.
Their commentary then gets into the problems that they believe plague public education:
– While student school enrollment increased 96% since 1950, public school staffing increased 386%.
– Since 1992, public school national math scores have shown little growth (click to their source).
– Today’s graduation rates are only slightly above what they were in 1970.
Robinson and Scafidi follow up with their ideas for improving student outcomes in public education:
– Better involvement from parents
– State control of failing public schools
– Charter schools (a result of state takeovers)
While I appreciate their passion for providing a better experience for students who do not have access to a high quality public education, I take issue with their ideas for improvement.
First, parent involvement. While it can have an impact on student learning when the involvement is positive, it is often not something we as public educators can control in our settings. My experience tells me that the best public schools focus the majority of their efforts and resources on the limited time that they actually have with students. Dr. John Hattie’s research on what works regarding instruction places family involvement on the lower end of the effective educational approach spectrum. It can be effective, but there is a ceiling.
So what’s on the higher end of the spectrum? Everything that Robinson and Scafidi failed to mention.
In fact, one of the least effective practices for improving student learning outcomes is…charter schools. According to Hattie, charter schools have around the same effect size as ensuring students had appropriate amounts of sleep and altering classroom/school schedules. My time is important, so I will let charter school and school choice proponents wrestle with these findings.
What I do want to point out is that the most effective instructional strategies require generous amounts of school funding. Here’s why: Teaching is one of the most challenging professions. To do it well, educators need consistent and effective training in the areas of curriculum, assessment and instructional strategies. This requires funding and support for job-embedded professional development. Dollars should be allocated for training, time, resources, and opportunities to apply these new skills in a low risk/high success environment. If this sounds like a lot of money for this type of work, please remember that teaching is a profession. I am sure you would agree that our students are worth it.
Citing graduation rates and flatlining test scores might serve to perpetuate the opinion that public education is broken. However, this argument is a generalization of our system as a whole. Yes, there are ineffective schools and there are effective schools. No one would dispute this. Yet each school is an individual learning community. Each has specific strengths and needs, and each should be assessed with valid and reliable measures. To paint public education with a broad brush, using data that is questionable at best (see here and here), is a disservice to the hard work and dedication that all public educators put in every day on behalf of our students.
I won’t argue that public education needs to improve. We do. It is the work that we should be engaging in every day. The least that people outside public education can do is to ensure that they consider multiple perspectives on a position they support and provide valid and reliable evidence to back it up.
By becoming question-askers and problem-solvers, students and teachers work together to construct curriculum from their context and lived experiences.
– Nancy Fichtman Dana
Over 20 teachers recently celebrated their learning as part of their work with an action research course. They presented their findings to over 50 colleagues, friends, and family members at a local convention center. I was really impressed with how teachers saw data as a critical part of their research. Organizing and analyzing student assessment results was viewed as a necessary part of their practice, instead of simply a district expectation.
Equally impressive was how some of the teachers shared data suggesting their interventions did not have an impact on student learning. One teacher, who explored student-driven learning in her middle school, shared survey results that revealed little growth in her students’ dispositions toward school. What the teacher found was that she had not provided her students the necessary amount of ownership during class.
Another teacher did find some positive results from her research on the benefits of reflection during readers workshop. Students wrote in response journals and engaged in authentic literature circles to unpack their thinking about the books they were reading. At the end of the school year, the teacher was starting to observe her students leading their own literature conversations with enthusiasm. This teacher is excited about having some of these same students in 2016-2017, as she is looping up. “I am really looking forward to seeing how these kids grow within the next year.”
A third teacher shared her findings regarding how teaching students to speak and listen can increase their reading comprehension and their love of literacy. One of her data points – student surveys – was not favorable toward this intervention. Yet her other two pieces of data (anecdotal evidence, volume of reading) showed positive gains. Therefore, she made a professional judgment that her students did grow as readers and thinkers. This teacher is also reflecting on the usefulness of this survey for next year.
In these three examples, I couldn’t help but notice some unique outcomes of this action research course:
Teachers were proudly sharing their failures.
The first teacher, who focused on student-driven learning, developed a greater understanding of her practice than would likely have been possible in a more traditional professional learning experience. She learned what not to do. This teacher is stripping away less effective methods in favor of something better. And the reason she is able to do this is because she had a true professional learning community that allowed her to take risks and celebrate her discoveries.
Teachers didn’t want the learning to end.
This goes beyond the teacher who expressed her excitement in looping with her current students next year. Several participants in this action research course have asked if they could take it again. The main reason: They felt like they just found the question they really wanted to explore. It took them most of the school year to find it.
Teachers became more assessment literate.
The term “triangulation” was never referenced with the teacher who focused on conversations to build reading comprehension and engagement. Yet that is what she did when she felt one set of data did not corroborate the other results and her own professional judgment. Almost all of the staff who participated in action research had 3-5 data points to help make an informed conclusion about the impact of their instruction.
I also learned a few things about myself as an administrator:
It is not the professional development I offer for staff that makes the biggest difference – it is the conditions I create that allow teachers to explore their interests and take risks as innovative practitioners.
My role is often alongside the professionals instead of in front of them, even learning with them when possible. For example, we brought in two professors from UW-Madison to lead this course. The best decision I made was recognizing that I was not the expert, and I needed to seek out those who were.
Principals have to be so careful about providing feedback, as we often haven’t built up enough trust, we can make false assumptions about what we are observing, and/or we do not allow teachers to discover better practices on their own terms.
In a world of standards and SMART goals, it is frowned upon when teachers don’t meet the mark regarding student outcomes. The assumption in these situations is that the teacher failed to provide effective instruction. However, the fault in this logic is that learning is not always a linear process. We work with people, dynamic and unpredictable beings who need a more personalized approach for real learning. Facilitating and engaging in action research has helped me realize this.
What happens when student data doesn’t agree with what you think you know, especially about a student’s reading skills and dispositions?
It’s a situation that happens often in schools. We get quantitative results back from a reading screener that don’t seem to jibe with what we see every day in classrooms. For example, a student shows high ability in reading, yet continues to stick with those easy readers and resists challenging himself or herself with more complex literature. Or the flip: A student has trouble passing that next benchmark, but is able to comprehend a book above his or her reading level range.
Here’s the thing: The test tests what it tests. The assessment is not to blame. In fact, blame should be out of the equation when having professional conversations about how to best respond to students who are not experiencing a level of success as expected. The solution is not in the assessment itself, but in differentiating the types of assessments we are using, questioning the types of data we are collecting, and organizing and analyzing the various data points to make sense of what’s actually happening with our students’ learning lives.
Differentiating the Assessments
It’s interesting how reading, a discipline far removed from the world of mathematics, is constantly quantified when attempting to assess readers’ abilities. Words correct per minute, how many comprehension questions answered correctly, and number of pages read are most often referenced when analyzing and discussing student progress. This data is not bad to have, but if it is all we have, then we paint an incomplete picture of our students as readers.
Think about yourself as a reader. What motivates you to read? I doubt you give yourself a quiz or count the number of words you read correctly on a page after completing a book. Lifelong readers are active assessors of their own reading. They use data, but not the type of data that we normally associate with the term. For example, readers will often rate books once they have finished them on Amazon and Goodreads. They also add a short review about the book on these online forums. The audience that technology provides for readers’ responses is a strong motivator. No one requires these independent readers to rate and review these books, but they do it anyway.
There is little reason why these authentic assessments cannot occur in today’s classrooms. One tool for students to rate and review books is Biblionasium (www.biblionasium.com). It’s like Goodreads for kids. Students can keep track of what they’ve read, what they want to read, and find books recommended by other young readers. It’s a safe and fun reading community for kids.
Yes, this is data. The idea that data isn’t always a number still seems to shock too many educators. To help, teacher practitioners should ask smart questions about the information coming at them to make better sense of where their students are in their learning journeys.
Questioning the Data
Data such as reading lists and reading community interactions can be very informative, so long as we are reading the information in the right way.
Asking questions related to our practice can help guide our inquiries. For example, are students self-selecting books on their own more readily over time? Are they relying more on peers and less on the teacher in their book selection? Are the books being read increasing in complexity throughout the year? All of these qualitative measures of reading disposition can directly relate to quantitative reading achievement scores, giving the teacher a more comprehensive look at students’ literacy lives.
Organizing and Analyzing the Data
I recently had our K-5 teachers administer reading motivation surveys with all of our students. The results have been illuminating for me, as I have entered them into spreadsheets.
Our plan is to position this qualitative data side-by-side with our fall screener data. The goal is to find patterns and trends as we compare and contrast these different data points, often called “triangulation” (Landrigan and Mulligan, 2013). Actually, the goal is not triangulation, but responding to the data and making instructional adjustments during the school year. This makes these assessments truly formative and for learning.
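The side-by-side comparison described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of triangulating two data points per student: a fall screener percentile (quantitative) and a coded survey rating (qualitative); the student names, thresholds, and flag wording are all invented for the example, not taken from the book or the author's actual spreadsheets.

```python
# Hypothetical data: fall screener percentiles and reading-motivation
# survey ratings coded 1-4. Names and numbers are illustrative only.
screener = {"Student A": 85, "Student B": 30, "Student C": 72}
survey = {"Student A": 2, "Student B": 4, "Student C": 3}

def flag_for_follow_up(screener, survey, low_percentile=40, low_motivation=2):
    """Return students whose two data points tell different stories."""
    flags = {}
    for student in screener:
        strong_scores = screener[student] >= low_percentile
        motivated = survey[student] > low_motivation
        if strong_scores and not motivated:
            # High achiever who reports disliking reading
            flags[student] = "strong scores, low motivation"
        elif motivated and not strong_scores:
            # Engaged reader whose screener is below benchmark
            flags[student] = "motivated, but screener below benchmark"
    return flags

print(flag_for_follow_up(screener, survey))
# → {'Student A': 'strong scores, low motivation',
#    'Student B': 'motivated, but screener below benchmark'}
```

The point of the sketch is the design choice: the spreadsheet surfaces *disagreements* between data points, which are exactly the students a teacher would want to discuss, rather than computing any single combined score.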
Is the time and energy worth it?
I hope so – I spent the better part of an afternoon at school today entering students’ responses to questions such as “What kind of reader are you?”, “How do you feel about reading with others?”, and “Do you like to read when you have free time?” (Marinak et al., 2015). The collecting and organizing has been informative in itself. While it takes time, by transcribing students’ responses, I am learning so much about their reading lives. I hope that through this process of differentiating, questioning, and organizing and analyzing student reading data, both quantitative and qualitative, we will know our students better and become better teachers for our efforts.
Landrigan, C., & Mulligan, T. (2013). Assessment in Perspective: Focusing on the Reader Behind the Numbers. Portsmouth, NH: Stenhouse.
Marinak, B. A., Malloy, J. B., Gambrell, L. B., & Mazzoni, S. A. (2015). Me and my reading profile: A tool for assessing early reading motivation. The Reading Teacher, 69(1), 51–62.
Attending the NCTE Annual Convention in Minneapolis this year? Join Karen Terlecky, Clare Landrigan, Tammy Mulligan and me as we share our experiences and specific strategies in conducting action research in today’s classrooms. See the following flyer for more information.