Data-Driven or Data-Informed? Thoughts on trust and evaluation in education

Data-informed or data-driven? This is a question I have wrestled with as a school administrator for some time. What I have found is that the usefulness of student data to inform instruction and accountability rests on the level of trust that exists within the school walls.

First there is trust in the data itself. Are the results of these assessment tools reliable (do they produce consistent results across administrations and across students?) and valid (do they accurately measure the student learning they claim to measure?)? These are good initial inquiries, but they should be only a starting point.
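To make the reliability question concrete, here is a minimal sketch of one common way to estimate test-retest reliability: correlating two administrations of the same assessment. The scores and tool choice are hypothetical illustrations, not drawn from any actual assessment discussed here.

```python
# A minimal sketch (hypothetical scores): estimating test-retest
# reliability by correlating two administrations of the same assessment.
from scipy.stats import pearsonr

fall_scores = [12, 18, 15, 20, 9, 14]     # first administration
winter_scores = [13, 17, 16, 21, 10, 13]  # same students, second administration

r, _ = pearsonr(fall_scores, winter_scores)
print(f"Test-retest reliability (Pearson r): {r:.2f}")  # closer to 1.0 = more consistent
```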

Security of student information should also be a priority when electing to house student data with third parties. Questions I have started asking vendors that develop modern assessment tools include “Where do you house our student data?”, “What do you do with this data beyond allowing us to organize and analyze it?”, and “Who owns the student data?”. In a commentary for The New York Times, Julia Angwin highlights situations in which the algorithms used to make “data-driven decisions” regarding the probability of recidivism in the criminal justice system were too often biased in their results (2016). Could a similar situation happen in education? Relying solely on the output of a computer program should lead us to question the validity and reliability of this type of data-driven decision making.

A second issue regarding trust in schools related to data is how student learning results are being used as a tool to evaluate teachers and principals. All educators are rightfully skeptical when accountability systems ask for student learning results to be counted toward their performance ratings and, in some cases, level of pay and future employment with an organization.

This is not to suggest that student assessment data should be off the table when conversations occur regarding the effectiveness of a teacher and his or her impact on students’ learning. The challenge, though, is ensuring that there is a clear link between the teacher’s actions and student learning. One model for data-driven decision making “provides a social and technical system to help schools link summative achievement test data with the kinds of formative data that helps teachers improve student learning across schools” (Halverson et al., 2007, p. 162). A systematic approach like this, in which educators are expected to work together using multiple assessments to make instructional decisions, can hold educators collectively accountable while ensuring that students receive better teaching.

Unfortunately, this is not the reality in many schools. Administrators too often adhere to the “data-driven” mentality with a literal and absolute mindset. If something cannot be easily quantified, such as teacher observations and noncognitive information, school leaders may dismiss it as less valuable than what a more quantitative tool might offer. Professional trust can tank in these situations.

That is why it is critical that professional development plans build assessment literacy with every teacher. A faculty should be well versed in the differences between formative and summative assessments and between informal and formal measurements, in deciding which data points are more reliable than others, and in triangulating multiple sources of data to analyze results and make more informed decisions regarding student learning.

“Since analytics requires data analysis, institutions will need to invest in effective training to produce skilled analytics staff. Obtaining or developing skilled staff may present the largest barrier and the greatest cost to any academic analytics initiative” (Baer & Campbell, 2012).

Building this assessment literacy can result in a level of trust in oneself as a professional to make informed instructional decisions on behalf of kids. If a faculty can ensure that the data they are using is a) valid and reliable, b) used to improve student learning and instructional practice, and c) drawn from multiple forms of evidence used wisely, then I am all for data-driven decision making as a model for school improvement. Trust will rise and student achievement may follow. If not, an unfortunate outcome might be the data cart coming before the pedagogical horse.

References

Angwin, J. (2016). Make Algorithms Accountable. The New York Times. Available: http://www.nytimes.com/2016/08/01/opinion/make-algorithms-accountable.html?_r=0

Baer, L.L. & Campbell, J. (2012). From Metrics to Analytics, Reporting to Action: Analytics’ Role in Changing the Learning Environment. Educause. Available: https://net.educause.edu/ir/library/pdf/pub72034.pdf

Halverson, R., Gregg, J., Prichett, R., & Thomas, C. (2007). The New Instructional Leadership: Creating Data-Driven Instructional Systems in Schools. Journal of School Leadership, 17, 159–194.

This is a reaction paper I wrote for a graduate course I am currently taking (Technology and School Leadership). Feel free to respond in the comments to extend this thinking.

Yes, School Funding Does Matter

The tweet gave me pause when I first read the headline.

I followed this link retweeted by Frederick Hess, contributor to Education Week, to a US News & World Report opinion piece titled More Money, Same Problems. It was written by Gerard Robinson (the source of the tweet) and Benjamin Scafidi. Robinson is a fellow at the American Enterprise Institute, “a conservative think tank” (Source: Wikipedia). Scafidi is a professor of economics at Kennesaw State University.

The authors acknowledge that “public education is important to the economic and social well-being of our nation”. They go on to point out that there are some students who are successful in public education and far too many who are not. You have no argument from me. Robinson and Scafidi also concede that an adequate level of “resources matter to education”.

Their commentary then gets into the problems that they believe plague public education:

– While student school enrollment increased 96% since 1950, public school staffing increased 386%.
– Since 1992, public school national math scores have shown little growth (click to their source).
– Today’s graduation rates are only slightly above what they were in 1970.

Robinson and Scafidi follow up with their ideas for improving student outcomes in public education:

– Better involvement from parents
– State control of failing public schools
– Charter schools (a result of state takeovers)

While I appreciate their passion for providing a better experience for students who do not have access to a high quality public education, I take issue with their ideas for improvement.

First, parent involvement. While it can have an impact on student learning when the involvement is positive, it is often not something we as public educators can control in our settings. My experience tells me that the best public schools focus the majority of their efforts and resources on the limited time that they actually have with students. Dr. John Hattie’s research on what works in instruction places family involvement toward the lower end of the effectiveness spectrum. It can be effective, but there is a ceiling.

So what’s on the higher end of the spectrum? Everything that Robinson and Scafidi failed to mention, including:

– Formative assessment
– Feedback strategies
– Self-assessment
– Vocabulary instruction
– Classroom discussion
– Response to Intervention

In fact, one of the least effective practices for improving student learning outcomes is…charter schools. According to Hattie, charter schools have around the same effect size as ensuring students get appropriate amounts of sleep and altering classroom/school schedules. My time is important, so I will let charter school and school choice proponents wrestle with these findings.

What I do want to point out is that the most effective instructional strategies require generous amounts of school funding. Here’s why: Teaching is one of the most challenging professions. To do it well, educators need consistent and effective training in the areas of curriculum, assessment and instructional strategies. This requires funding and support for job-embedded professional development. Dollars should be allocated for training, time, resources, and opportunities to apply these new skills in a low risk/high success environment. If this sounds like a lot of money for this type of work, please remember that teaching is a profession. I am sure you would agree that our students are worth it.

Citing graduation rates and flatlining test scores might serve to perpetuate the opinion that public education is broken. However, this argument is a generalization of our system as a whole. Yes, there are ineffective schools and there are effective schools. No one would dispute this. Yet each school is an individual learning community. Each has specific strengths and needs, and should be assessed with valid and reliable measures. To paint public education with a broad brush using data that is questionable at best (see here and here) is a disservice to the hard work and dedication that public educators put in every day on behalf of our students.

I won’t argue that public education needs to improve. We do. It is the work that we should be engaging in every day. The least that people outside public education can do is to ensure that they consider multiple perspectives on a position they support and provide valid and reliable evidence to back it up.

Initial Findings After Implementing Digital Student Portfolios in Elementary Classrooms

On Saturday, I shared why I was not at ISTE 2016. That post described our school’s limited progress in embedding technology into instruction in ways that impact student learning. In this post, I share how digital student portfolios may have made a difference.

I attempted a schoolwide action research project this past year around literacy and engagement. We used three strategies to assess growth from fall to spring: instructional walk trends, student engagement surveys, and digital student portfolios. Each data point related to one major component of literacy:

  • Instructional walks: Speaking and listening within daily instruction, including questioning and student discussion
  • Engagement surveys: Reading, specifically self-concept as a reader, the importance of reading, and sharing our reading lives
  • Digital portfolios: Writing, with a focus on guiding students to reflect on their work, offer feedback, and set goals for the future

The instructional walks, brief classroom visits in which I would write down my observations and share them as feedback with the teacher, did show an increase in the frequency of student discussion during instruction but not in higher level questioning. My conclusion was that we need specific and sustained professional development around classroom questioning in order to see positive growth.

The reading engagement survey results were messy. While primary students showed significant growth from fall to spring in how they feel about reading, intermediate student results were stagnant. Some older students regressed. It is worth noting that at the younger ages, there was also significant growth in reading achievement as measured by interim assessments (running records). I didn’t really have any conclusions. The survey itself might not have been intermediate student-friendly. At the younger ages, our assessment system is built so that students see steady progress with benchmark books.

Okay, now for the reason for this post. Before I share any data about student writing and digital portfolios, I want to be clear about a few things:

  • A few teachers forgot to record their spring writing data. I did not include their students in the data set.
  • The results from my first year at the school (2011-2012) used a rubric based on the 6 traits of writing. Last year we used a more condensed rubric, although both rubrics for assessing student writing were a) used by all staff to help ensure interrater reliability (a sketch of one way to check this kind of agreement follows this list) and b) highly correlated with the 6 traits of writing.
  • The results from my first year at the school, in which no portfolio process was used beyond a spring showcase, came from a district-initiated assessment team that scored every paper in teams of two. This year’s data was scored exclusively by the teachers within our own school.
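As referenced above, here is a minimal sketch of one way to check interrater agreement between two scorers rating the same set of papers. The ratings, the score scale, and the use of Cohen’s kappa are my illustration, not our district’s actual procedure.

```python
# A minimal sketch (hypothetical ratings): checking interrater agreement
# between two scorers who rated the same papers on a writing rubric.
from sklearn.metrics import cohen_kappa_score

rater_a = [4, 3, 5, 2, 4, 3, 5]  # hypothetical rubric scores from scorer one
rater_b = [4, 3, 4, 2, 4, 3, 5]  # same papers, scorer two

exact_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
kappa = cohen_kappa_score(rater_a, rater_b)  # agreement corrected for chance
print(f"Exact agreement: {exact_agreement:.0%}, Cohen's kappa: {kappa:.2f}")
```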

With all of this in mind, here are the results of student growth in writing over time from my first year as a principal (no portfolio process in place) and last year (a comprehensive portfolio process in place):

2011-2012: 10% growth from fall to spring

2015-2016: 19% growth from fall to spring
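For context, here is a minimal sketch of how a fall-to-spring growth percentage like the ones above could be computed from average rubric scores. The numbers are hypothetical; this post does not specify the exact formula we used.

```python
# A minimal sketch (hypothetical averages): computing fall-to-spring
# growth as the percent change in the schoolwide average rubric score.
fall_avg = 3.2    # hypothetical schoolwide average rubric score, fall
spring_avg = 3.8  # hypothetical schoolwide average rubric score, spring

growth = (spring_avg - fall_avg) / fall_avg * 100
print(f"Fall-to-spring growth: {growth:.0f}%")  # prints 19% with these numbers
```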

I have the documentation to verify these results. The caveats shared above are some of the reasons why I still hold these results somewhat in question. At the same time, here are some interesting details about this year’s process.

  • All teachers were expected to document student writing at least six times a year in a digital portfolio tool. In addition, each student was expected to reflect on their work by highlighting what they did well, identifying areas of growth, and setting goals for the next time they were asked to upload a piece of writing into their digital portfolio.
  • The digital portfolio tool we used, FreshGrade, was well received by families. Survey results with these families revealed an overwhelmingly positive response to the use of this tool for sharing student learning regularly over the course of the school year. In fact, we didn’t share enough, as multiple parents asked for more postings.
  • The comments left by family members on the students’ work via digital portfolios seemed to motivate the teachers to share more of the students’ work. Staff requested additional trainings for conducting portfolio assessment. They could select the dates to meet and offer the agenda items that we would focus on.

If you have read any of the research on feedback and formative assessment, you will know that many studies suggest educators can double their effectiveness when they focus on formative assessment and provide feedback to students as they learn. It should be noted that our 19% growth is almost double what we achieved in 2011-2012.

One might say, “Your teachers are better writing instructors now than five years ago.” Maybe, in fact probably. But what we measured was growth from fall to spring and compared the results, not longitudinal growth over many years. The teachers can own the impact that their instruction made on our students this school year.

There was no formalized training to improve teachers’ abilities to increase speaking and listening in the classroom. Reading engagement strategies were measured but not addressed during professional development. Only the writing portfolio process, along with the incorporation of digital portfolios to document and share this process, was a focus in our faculty trainings.

Although these results are promising, I am not going to make any big conclusions at this time. First, I alone did the data crunching of these results. Also, we didn’t follow a more formal research process to ensure the validity of our findings. However, I am interested in pursuing partnerships with higher education to ensure that any future results and conclusions meet specific thresholds for reliability.

One final thing to note before I close: Technology was important in this process, but my hypothesis is the digital piece was secondary to the portfolio process itself. Asking the students to become more self-aware of their own learning and more involved in goal-setting through teacher questioning and feedback most likely made the difference. The technology brought in an essential audience, yes, but the work had to be worth sharing.


For more on this topic, explore my digital book Digital Student Portfolios: A Whole School Approach to Connected Learning and Continuous Assessment. It is available for Kindle, Nook, and iBooks. You can join our Google+ Community to discuss the topic of digital portfolios for students with other educators.

If you liked my first book, check out my newest book 5 Myths About Classroom Technology: How do we integrate digital tools to truly enhance learning? (ASCD Arias). 


Three Recommended Technologies for Digital Student Portfolios

Right now I am closing in on finishing Chapter 4 of my upcoming ASCD book Digital Student Portfolios: A Guide for Powerful Formative Assessment (working title).

The first three chapters offer a definition of digital portfolios and explain why they should be utilized in every school. Now I am at the fun part: describing the technologies that can be used for this type of initiative.

Next is a graphic I have “rendered” that summarizes the pros and cons of each of the three recommended technologies for digital portfolios: blogs, dedicated portfolio applications, and websites. It’s a draft. What are your thoughts on this topic? What am I missing, or where might I be misinformed about these tools? Please share in the comments.

[Graphic: pros and cons of blogs, dedicated portfolio applications, and websites for digital student portfolios]

Opting In

Testing season is upon us. In our Title I elementary school in Central Wisconsin, we have had students preview the computerized assessment. The Chromebooks have been configured and the wireless tested. For the next six weeks, all 3rd through 5th grade students will be taking the Forward Exam, our third different standardized test in as many years.

All of our students except one: My son. He will be sitting this one out.

Our reasons are many. As a parent, I don’t believe the test will yield any useful information about his abilities as a learner. As our school’s principal, I want to set an example with regard to my position on this issue. As a person, I find that having a student sit for multiple hours taking an examination that will have no bearing on his school career makes little to no sense. Students at this age cannot advocate for themselves.

This is not a simple or straightforward decision. Our school has been the recipient of $100,000 in state-level grants for the past three years in large part due to our student achievement results. We have taken pride in receiving these awards, in spite of the reality of how we received them. If other families in our school elected to opt out their kids, our school could lose federal funding – 95% of a school’s student body has to take the test to avoid sanctions. As I said, not so simple.
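To put that 95% threshold in concrete terms, here is a minimal sketch of the participation arithmetic. The enrollment figure is hypothetical, and I am assuming the threshold applies to the total number of students expected to test.

```python
# A minimal sketch (hypothetical enrollment): how many opt-outs a school
# can absorb before participation drops below the 95% threshold.
import math

tested_students = 180  # hypothetical number of students in testing grades

# Participation must stay at or above 95%, so at most 5% can opt out.
max_opt_outs = math.floor(tested_students * 0.05)
print(f"Up to {max_opt_outs} students can opt out before sanctions apply")
```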

For these reasons, we are not only opting our son out of this year’s standardized test; we are also opting him into a performance portfolio assessment.

While the rest of the student body is testing, my son and I will be working together to develop an online repository of different artifacts that demonstrate his progress and performance during the school year. Each artifact will be accompanied by a personal reflection about why he included the piece and what knowledge, skill, or disposition it showcases about him as a learner. We are using Google Sites for this process. He can take this digital portfolio with him throughout his school career, adding to it and replacing artifacts when appropriate.

I have no problem with families electing to opt their child(ren) out of the standardized test. It certainly makes a point and, collectively, can lead to some much needed change in education. At the same time, when we express our dissatisfaction with something currently happening, I believe we should also offer some alternatives and creative solutions. Otherwise, we may create a vacuum that gets filled with something pretty similar to the problem we were trying to get rid of in the first place.

If we are opting out our kids of the standardized test, let’s be honest about why with them. When I spoke to my son about this decision, I explained that I believe developing a performance portfolio of his best work from the school year was a better way to showcase his learning than a standardized test. (He responded with, “I’m not sure what you are talking about, so I’ll just go with it.”) I also shared with him that this decision was both taking a position on an important issue and offering a solution to the problem.

Opting out is easy. Coming up with solutions is harder, yes, but it is also an essential part of advocating for equity in public education. Why not be a part of the solution?

How do we separate achievement and effort?

Our family recently stayed at my parents’ place on our way to a short family vacation in Madison. My mother was rummaging around in some of my stuff from my school career and pulled out a college paper I had composed. Written on the title page of my work (a report about The Doors for an elective course in American music) were three words:

Two days late.

I remember this work because I actually enjoyed writing it. We were allowed to choose which musician(s) to research. The professor left a number of positive comments on it with regard to the content and organization of the paper. After rereading it, I thought I had provided some sound conclusions about the influence of The Doors on other artists and on rock and roll in general. It had been saved since the 1990s, when I took the course, if that says anything.

What I don’t remember is that it was two days late. Really – no clue. My best guess is that I probably didn’t organize my time well enough to complete it by the due date. College offers a lot of distractions! 🙂 There were a few grammatical errors that might have been corrected had I been more diligent about a writing schedule, something I try to do now.

At the end of the paper was my grade: C. The biggest factor was the 20% docked from my final score, 10% for each day late. I was given a C for an A paper.
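Here is a minimal sketch of that late-penalty math, reading the deduction as 10 percentage points per day late. The starting score is hypothetical; only the two-day, 20% penalty comes from my actual paper.

```python
# A minimal sketch (hypothetical starting score): how a flat late penalty
# of 10 percentage points per day turns A-level work into a C.
paper_score = 94          # hypothetical A-level score
days_late = 2
penalty = days_late * 10  # 10 percentage points docked per day late

final_score = paper_score - penalty
print(f"Final score: {final_score}%")  # prints 74% -- a C for A-level work
```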

That was back in the ’90s. Education has come so far since then.

In a column written for Education Week, Nancy Flanagan addresses the varied comments left on a teacher’s post. The subject of the post was the failure of her students to complete her assigned activity while she was away. “What an incompetent sub! Give ’em all zeros!” and “Candy and a free day for the five compliant ones!” were a few suggestions.

Being able to communicate a student’s attitude and responsibility apart from their understandings and skills is still an issue today. There isn’t a perfect system. Standards-based grading gets us closer. But with all of the standards teachers need to tackle, the management of this type of reporting can become overwhelming and burdensome.

What have you found more effective in separating achievement from effort? Please share in the comments and start a conversation.


Podcast: Five Commonly Accepted Myths About Education Technology @BAMRadio @ASCD

I joined Dr. Rachael George for a podcast on BAM Radio to discuss my ASCD Arias book 5 Myths About Classroom Technology: How do we integrate digital tools to truly enhance learning? Enjoy!