Do No Harm

When used casually, AR helps students’ reading abilities grow. When used thoughtfully and with proven techniques, it leads to tremendous gains and a lifelong love of reading. – Getting Results with Accelerated Reader, Renaissance Learning

I am currently reading aloud Millions by Frank Cottrell Boyce to my 10-year-old son. It is an interesting “what if” story: the main character and his older brother find a bag of money thrown off a train in England. The problem is that England’s currency is soon transitioning from pounds to the euro. To add a wrinkle to the narrative, the main character’s mother recently passed away. To add another wrinkle, the main character can speak to deceased saints canonized within the Catholic Church. This story is nothing if not interesting and hard to predict.

Reading aloud to my son sometimes leads to conversations about other books. For instance, I asked him about a fantasy series that also seemed to stretch one’s imagination. I thought it was right up his alley. Yet he declined. Pressed to explain why, my son finally admitted that he didn’t want to read that series because he failed an Accelerated Reader quiz after reading the first book. Here is our conversation:

Me: “When did you read the book in that series?”

Son: “Back at my older school.”

Me: “Why did you take a quiz on it?”

Son: “Because we had to take at least one quiz every month.”

Me: “Did you not understand the book?”

Son: “I thought I did. It was hard, but I liked it.”

This is an educational fail. When an assessment such as Accelerated Reader causes a student to not want to read, that should be a cause for concern. To be clear, Accelerated Reader is an assessment tool designed to measure reading comprehension. Yet it is not a valid tool for driving instruction. What Works Clearinghouse, a source for existing research on educational programming, found Accelerated Reader to have “mixed effects on comprehension and no discernible effects on reading fluency for beginning readers.” In other words, a school that implements Accelerated Reader should expect results that are not reliable, with the possibility of no impact on student learning. Consider this as you ponder other approaches to promoting independent reading.

It should also be noted that none of the studies listed examined the long-term effects of using Accelerated Reader on independent reading. That would make for an interesting study.

I realize that it makes simple sense to quiz a student about their comprehension after reading a book. Why not? The problem is, when a student sees the results of said quiz, they appear to attribute their success or failure to their abilities as a reader. Never mind that the text might have been boring and selected only for its point value, that the test questions were poorly written, that the teacher had prescribed the text to be read and tested without any input from the student, or that the test results would be used toward an arbitrary reading goal such as points. Any one of these situations may have skewed the results. In addition, why view not passing an AR quiz as a failure? It might be an opportunity to help the student unpack their reading experience in a constructive way.

What I would say is to take a step back and appreciate independent reading as a whole. What are we trying to do with this practice? Independent reading, as the phrase conveys, is about developing a habit of and love for lifelong, successful reading. This means the appropriate skills, strategies, and dispositions should be developed with and by students. Any assessment that results in a student not wanting to read more interferes with that process and causes more problems than benefits. Medicine has its guiding maxim: “First, do no harm.” That sounds like wisdom education should heed as well.

Suggestion for further reading: My Memory of The Giver by Dylan Teut

I didn’t meet my reading goal (and is that okay?)

2016 has come to a close. Like any year, there were events to celebrate along with a few experiences we may not care to reminisce over. One event that is somewhere in the middle for me is the fact that I didn’t achieve my reading goal.

For the past two years, I have set a goal for the number of books to read from January to December. In 2015 I not only met my goal but surpassed it (50/53). This past year I decided to up the ante – more is better, right? – and set a goal of 60. I ended up reading 55 books this year. Not too shabby, considering my recent move and a new job.


Goodreads, the online community where I along with many other bibliophiles post said goals, seems indifferent to this fact. “Better luck in 2017!” is all the feedback Goodreads offers. I can live with that. The site focused more on all of the books I did read, covers facing out, along with number of pages read and related statistics.

I guess I could have pushed through in December and quickly devoured some titles just to meet my goal. They may not have been what I necessarily wanted to read, though. Also, I could have thrown in a few more books that my wife and I listened to with our kids while driving. But to be honest, I was half listening and didn’t feel like I could count them.

I’m glad that I didn’t get caught up in meeting arbitrary goals. If that had been the case, I may have passed on longer, more complex works of fiction such as All the Light We Cannot See by Anthony Doerr. It’s fiction, yes, but it also helped me deepen my understanding of what it means to live in a nation that does not share your beliefs. If I had worried too much about meeting a reading goal, I might not have reread and reread again Last Stop on Market Street by Matt de la Peña. It still floors me how many ideas and perspectives a reader can glean from such a short text. If I had worried too much about meeting my reading goal, I may have avoided reading reference books about writing, such as Write What Matters by Tom Romano and A Writer’s Guide to Persistence by Jordan Rosenfeld. These are not texts you plow through. Yet I come back to these resources for information and inspiration.

If I were teaching in the classroom again, I think I would adopt a Goodreads approach to independent reading. Students would still be expected to set some type of goal based on the number of books. But that number would not be the focus of independent reading. We would look at different data about their reading lives (a rough sketch of what such a record might look like follows this list), including:

  • Variety of genres explored
  • Complexity of texts from fall to spring
  • Favorite authors, titles and series based on ratings and reviews
  • Classmates whose reading habits influenced their reading lives
  • Books on their to-read list
  • How they feel about reading in general
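For readers who like to tinker with their own tracking, here is a minimal sketch of how such a reading-life record might be represented in Python. The ReadingLog class and its fields are hypothetical, invented to mirror the list above; they are not part of Goodreads or any other tool.

```python
from dataclasses import dataclass, field


@dataclass
class ReadingLog:
    """Hypothetical record of one student's reading life over a school year."""
    student: str
    genres_explored: set = field(default_factory=set)    # variety of genres
    text_levels: list = field(default_factory=list)      # complexity of texts, fall to spring
    favorites: dict = field(default_factory=dict)        # title -> student rating (1-5)
    influencers: list = field(default_factory=list)      # classmates who shaped book choices
    to_read: list = field(default_factory=list)          # books on the to-read list
    feelings_about_reading: str = ""                     # open-ended reflection


# Example: one (made-up) student's log
log = ReadingLog(student="A.")
log.genres_explored.update({"fantasy", "realistic fiction", "poetry"})
log.favorites["Last Stop on Market Street"] = 5
log.to_read.append("Millions")
print(f"{log.student} explored {len(log.genres_explored)} genres this year.")
```

The point of a structure like this is that most of its fields are qualitative; only one of them is a count.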

This data seems a lot more important than the number of books read. I do believe volume in reading is important. But what leads someone to read? We still confuse reading goals, such as the number of books read, with purpose. The purpose of a reading goal is to make a more concerted effort to read more and to read daily. The idea is that through habitual reading, we will discover new titles, authors, and genres that we come to enjoy and find valuable in our lives. I think about how I got hooked on reading: in 3rd grade, our teacher read aloud Tales of a Fourth Grade Nothing by Judy Blume. No reading goal, amount of guided reading, or immersion in a commercial program did that for me.

As teachers take stock with their students during the school year regarding reading goals, I sincerely hope they look beyond mere numbers and work with their students to understand them as readers. Data that only measures quantity and disregards quality tells us very little about who our students are and who they might become as readers.


Suggestion for Further Reading: No AR, No Big Deal by Brandon Blom

Learning Management Systems: Who are they for?

A learning management system, or “LMS,” is defined as “a digital learning system” that “manages all of the aspects of the learning process” (Amit K, 2016). A teacher can use an LMS for a variety of classroom functions, including communicating the learning objectives, organizing the learning timelines, telling the learners exactly what they need to learn and when, delivering the content straight to the learners, streamlining communications between instructor(s) and learners, and providing ongoing resources.

An LMS can also help learners track their own progress, identifying what they have already learned and what they still need to learn (Amit K). Within most systems, there are many options for learners to share representations of their understanding, including video, audio, images, and text. In addition, discussion boards and assessment tools are available for teachers and students.

This definition and description of your typical LMS leads to an important question: Who is the learning management system for?

If an LMS is for the teacher, then I think they will find the previously listed features to be of great benefit to their practice. As an example, no longer do they have to collect papers, lug them home and grade them by hand. Now, students can submit their work electronically through the LMS. The teacher can assess learning online. The excuse “My dog ate my homework” ceases to exist. Google Classroom, Schoology and Edmodo fall into this category.

Also, teachers can use the LMS tools to create quizzes that serve as a formative assessment of the lesson presented that day. Data is immediately available regarding who understands the content and who needs further support. This quick turnaround can help a teacher be more responsive to students’ academic needs. There are obvious benefits for a teacher who elects to use an LMS for these reasons.
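As a rough illustration of that quick turnaround (not a feature of any particular LMS), here is a minimal sketch that sorts exit-quiz scores exported from a gradebook into students who are ready to move on and students who need further support. The names, scores, and cutoff are invented.

```python
# Hypothetical exit-quiz scores (out of 10) exported from an LMS gradebook.
quiz_scores = {"Ava": 9, "Ben": 4, "Cora": 7, "Dev": 10, "Eli": 5}

MASTERY_CUTOFF = 7  # arbitrary threshold chosen for this sketch

ready = [name for name, score in quiz_scores.items() if score >= MASTERY_CUTOFF]
needs_support = [name for name, score in quiz_scores.items() if score < MASTERY_CUTOFF]

print("Ready to move on:", ", ".join(ready))
print("Plan small-group reteaching for:", ", ".join(needs_support))
```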

If, on the other hand, an LMS is for the students, then we have a bit more work to do. With a teacher-centric LMS, not much really changes regarding how a classroom operates. The teacher assigns content and activities, the students complete them, and the teacher assesses. The adage “old wine in new bottles” might apply here.

When we integrate an LMS in school with students in mind, the whole idea of instruction has to shift. We are now exploring concepts such as personalized learning, which “puts students in charge of selecting their projects and setting their pace” (Singer, 2016), and connected learning, which ties together students’ interests, peer networks, and school accomplishments (Ito et al., 2013). In this scenario, it is not the students who need to make a shift but the teachers. Examples of more student-centered LMSs include Epiphany Learning and Project Foundry.

The role that teachers have traditionally filled looks very different in a more student-centered, digitally enhanced learning environment. I don’t believe either focus – the teacher or the student – is an ineffective approach for using a learning management system. The benefits in each scenario are promising. Yet we know that the more ownership students have over the learning experience, the greater the likelihood of achievement gains and higher engagement in school.

References

Amit K, S. (2016). Choosing the Right Learning Management System: Factors and Elements. eLearning Industry. Available: https://elearningindustry.com/choosing-right-learning-management-system-factors-elements

Ito, M., Gutiérrez, K., Livingstone, S., Penuel, B., Rhodes, J., Salen, K., Schor, J., Sefton-Green, J., & Watkins, S.C. (2013). Connected Learning: An Agenda for Research and Design. Digital Media and Learning Research Hub. Whitepaper, available: http://dmlhub.net/publications/connected-learning-agenda-for-research-and-design/

Singer, N., Isaac, M. (2016). Facebook Helps Develop Software That Puts Students in Charge of Their Lesson Plans. The New York Times. Available: http://nyti.ms/2b3LNzv

Data-Driven or Data-Informed? Thoughts on trust and evaluation in education

Data-informed or data-driven? This is a question I have wrestled with as a school administrator for some time. What I have found is that the usefulness of student data to inform instruction and accountability rests on the level of trust that exists within the school walls.

First, there is trust in the data itself. Are the results of these assessment tools reliable (consistent across administrations over time and across students) and valid (actually measuring the student learning they claim to measure)? These are good initial inquiries, but they should only be a starting point.
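To make the reliability question concrete, here is a minimal sketch of one rough check: correlating two administrations of the same assessment for the same students (a test-retest check). The scores are made up, this is not how any particular vendor computes reliability, and it requires Python 3.10+ for statistics.correlation.

```python
from statistics import correlation  # Pearson's r; available in Python 3.10+

# Hypothetical scores for the same ten students on two administrations
# of the same reading assessment, a few weeks apart.
first_administration = [12, 15, 9, 20, 17, 11, 14, 18, 10, 16]
second_administration = [13, 14, 10, 19, 18, 10, 15, 17, 11, 17]

r = correlation(first_administration, second_administration)
print(f"Test-retest correlation: {r:.2f}")
# A value near 1.0 suggests consistent (reliable) results across administrations.
# It says nothing about validity, i.e., whether the test measures what we intend.
```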

Security of student information should also be a priority when electing to house student data with third parties. Questions I have started asking vendors that develop modern assessment tools include “Where do you house our student data?”, “What do you do with this data beyond allowing us to organize and analyze it?”, and “Who owns the student data?”. In a commentary for The New York Times, Julia Angwin highlights situations in which the algorithms used to make “data-driven decisions” regarding the probability of recidivism in the criminal justice system were too often biased in their results (2016). Could a similar situation happen in education? Relying merely on the output that a computer program produces leads one to question the validity and reliability of this type of data-driven decision making.

A second issue regarding trust in schools related to data is how student learning results are being used as a tool to evaluate teachers and principals. All educators are rightfully skeptical when accountability systems ask for student learning results to be counted toward their performance ratings and, in some cases, level of pay and future employment with an organization.

This is not to suggest that student assessment data should be off the table when conversations occur regarding the effectiveness of a teacher and his or her impact on students’ learning. The challenge, though, is ensuring that there is a clear link between the teacher’s actions and student learning. One model for data-driven decision making “provides a social and technical system to help schools link summative achievement test data with the kinds of formative data that helps teachers improve student learning across schools” (Halverson et al., 2007, p. 162). Using a systematic approach like this, in which educators are expected to work together using multiple assessments to make instructional decisions, can hold educators collectively accountable while ensuring that students are receiving better teaching.

Unfortunately, this is not the reality in many schools. Administrators too often adhere to the “data-driven” mentality with a literal and absolute mindset. Specifically, if something cannot be quantified, such as teacher observations and noncognitive information, school leaders may dismiss these results as less valuable than what a more quantitative tool might offer. Professional trust can tank in these situations.

That is why it is critical that professional development plans include training to build assessment literacy with every teacher. A faculty should be well versed in the differences between formative and summative assessments and between informal and formal measurements, in deciding which data points are more reliable than others, and in how to triangulate data to analyze results and make more informed decisions regarding student learning.

Since analytics requires data analysis, institutions will need to invest in effective training to produce skilled analytics staff. Obtaining or developing skilled staff may present the largest barrier and the greatest cost to any academic analytics initiative (Baer & Campbell, 2012).

Building this assessment literacy can result in a level of trust in oneself as a professional to make informed instructional decisions on behalf of kids. If a faculty can ensure that the data they are using is a) valid and reliable, b) used to improve student learning and instructional practice, and c) drawn from multiple forms of evidence used wisely, then I am all for data-driven decision making as a model for school improvement. Trust will rise and student achievement may follow. If not, an unfortunate outcome might be the data cart coming before the pedagogical horse.
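Here is a minimal sketch of what triangulating multiple data points might look like; the measures, the 0-to-1 scale, and the agreement rule are all invented for illustration and are not a prescribed model.

```python
from statistics import median

# Hypothetical: each measure is rescaled to a 0-1 "evidence of learning" value.
measures = {
    "interim assessment (summative)": 0.72,
    "running records (formative)": 0.68,
    "conference notes (informal)": 0.75,
    "student survey (self-report)": 0.41,
}

values = list(measures.values())
spread = max(values) - min(values)

if spread <= 0.15:  # arbitrary agreement threshold for this sketch
    print("The measures corroborate one another; act on the shared picture.")
else:
    # Flag the measure farthest from the median before drawing conclusions.
    med = median(values)
    outlier = max(measures, key=lambda k: abs(measures[k] - med))
    print(f"The measures diverge (spread {spread:.2f}); revisit '{outlier}' first.")
```

The design choice worth noticing is that no single measure decides the question; disagreement triggers a closer look rather than a verdict.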

References

Angwin, J. (2016). Make Algorithms Accountable. The New York Times. Available: http://www.nytimes.com/2016/08/01/opinion/make-algorithms-accountable.html?_r=0

Baer, L.L. & Campbell, J. (2012). From Metrics to Analytics, Reporting to Action: Analytics’ Role in Changing the Learning Environment. Educause. Available: https://net.educause.edu/ir/library/pdf/pub72034.pdf

Halverson, R., Gregg, J., Prichett, R., & Thomas, C. (2007). The New Instructional Leadership: Creating Data-Driven Instructional Systems in Schools. Journal of School Leadership, 17, 159-194.

This is a reaction paper I wrote for a graduate course I am currently taking (Technology and School Leadership). Feel free to respond in the comments to extend this thinking.

Initial Findings After Implementing Digital Student Portfolios in Elementary Classrooms

On Saturday, I shared why I was not at ISTE 2016. That post described our school’s limited progress in embedding technology into instruction in ways that made an impact on student learning. In this post, I share how digital student portfolios may have made a difference.

I attempted a schoolwide action research project this past year around literacy and engagement. We used three strategies to assess growth from fall to spring: instructional walk trends, student engagement surveys, and digital student portfolios. Each data point related to one major component of literacy:

  • Instructional walks: Speaking and listening within daily instruction, including questioning and student discussion
  • Engagement surveys: Reading, specifically self-concept as a reader, the importance of reading, and sharing our reading lives
  • Digital portfolios: Writing, with a focus on guiding students to reflect on their work, offer feedback, and set goals for the future

The instructional walks, brief classroom visits in which I would write down my observations and share them as feedback with the teacher, did show an increase in the frequency of student discussion during instruction but not in higher-level questioning. My conclusion was that specific and sustained professional development around classroom questioning is needed in order to see positive growth.

The reading engagement survey results were messy. While primary students showed significant growth from fall to spring in how they feel about reading, intermediate student results were stagnant. Some older students regressed. It is worth noting that at the younger ages, there was also significant growth in reading achievement as measured by interim assessments (running records). I didn’t really have any conclusions. The survey itself might not have been intermediate-student-friendly. At the younger ages, our assessment system is built so that students see steady progress with benchmark books.

Okay, now for the reason for this post. Before I share any data about student writing and digital portfolios, I want to be clear about a few things:

  • A few teachers forgot to record their spring writing data. I did not include their students in the data set.
  • The results from my first year at the school (2011-2012) used a rubric based on the 6 traits of writing. Last year we used a more condensed rubric, although both rubrics for assessing student writing were a) used by all staff to help ensure interrater reliability and b) highly correlated with the 6 traits of writing.
  • The results from my first year at the school, in which no portfolio process was used beyond a spring showcase, came from a district-initiated assessment team that scored every paper in teams of two. This year’s data was scored exclusively by the teachers within our own school.

With all of this in mind, here are the results of student growth in writing over time from my first year as a principal (no portfolio process in place) and last year (a comprehensive portfolio process in place):

2011-2012: 10% growth from fall to spring

2015-2016: 19% growth from fall to spring
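The post does not spell out how these growth percentages were calculated. As a hedged illustration only, here is a minimal sketch that assumes “growth” means the percentage increase in the average rubric score from fall to spring; the function name and the rubric scores below are made up.

```python
def percent_growth(fall_scores, spring_scores):
    """Percentage increase in the average rubric score from fall to spring."""
    fall_avg = sum(fall_scores) / len(fall_scores)
    spring_avg = sum(spring_scores) / len(spring_scores)
    return (spring_avg - fall_avg) / fall_avg * 100


# Hypothetical 4-point rubric scores for one small class
fall = [2.0, 2.5, 1.5, 3.0, 2.0, 2.5]
spring = [2.4, 2.8, 1.9, 3.2, 2.2, 2.7]

print(f"Growth from fall to spring: {percent_growth(fall, spring):.0f}%")
```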

I have the documentation to verify these results. The caveats shared above are some of the reasons why I still hold these results a bit in question. At the same time, here are some interesting details about this year’s process.

  • All teachers were expected to document student writing at least six times a year in a digital portfolio tool. In addition, each student was expected to reflect on their work by highlighting what they did well, identifying areas of growth, and making goals for the next time they were asked to upload a piece of writing into their digital portfolio.
  • The digital portfolio tool we used, FreshGrade, was well received by families. Survey results with these families revealed an overwhelmingly positive response to the use of this tool for sharing student learning regularly over the course of the school year. In fact, we didn’t share enough, as multiple parents asked for more postings.
  • The comments left by family members on the students’ work via digital portfolios seemed to motivate the teachers to share more of the students’ work. Staff requested additional trainings for conducting portfolio assessment. They could select the dates to meet and offer the agenda items that we would focus on.

If you have read any of the research on feedback and formative assessment, you will know that many studies have shown that educators can double their effectiveness when they focus on formative assessment and on providing feedback for students as they learn. It should be noted that our 19% growth is almost double what we achieved in 2011-2012.

One might say, “Your teachers are better writing instructors now than five years ago.” Maybe, in fact probably. But what we measured was growth from fall to spring and compared the results, not longitudinal growth over many years. The teachers can own the impact that their instruction made on our students this school year.

There was no formalized training to improve teachers’ abilities to increase speaking and listening in the classroom. Reading engagement strategies were measured but not addressed during professional development. Only the writing portfolio process, along with the digital portfolios used to document and share it, was a focus of our faculty trainings.

Although these results are promising, I am not going to make any big conclusions at this time. First, I was the only one who crunched the data. Also, we didn’t follow a more formal research process to ensure the validity of our findings. However, I am interested in pursuing partnerships with higher education to ensure that any results and conclusions found in the future meet specific thresholds for reliability.

One final thing to note before I close: Technology was important in this process, but my hypothesis is the digital piece was secondary to the portfolio process itself. Asking the students to become more self-aware of their own learning and more involved in goal-setting through teacher questioning and feedback most likely made the difference. The technology brought in an essential audience, yes, but the work had to be worth sharing.


For more on this topic, explore my digital book Digital Student Portfolios: A Whole School Approach to Connected Learning and Continuous Assessment. It is available for Kindle, Nook, and iBooks. You can join our Google+ Community to discuss the topic of digital portfolios for students with other educators.

If you liked my first book, check out my newest book 5 Myths About Classroom Technology: How do we integrate digital tools to truly enhance learning? (ASCD Arias). 

 

Action Research: Professional Learning from Within

By becoming question-askers and problem-solvers, students and teachers work together to construct curriculum from their context and lived experiences.

– Nancy Fichtman Dana

Over 20 teachers recently celebrated their learning as part of their work with an action research course. They presented their findings to over 50 colleagues, friends, and family members at a local convention center. I was really impressed with how teachers saw data as a critical part of their research. Organizing and analyzing student assessment results was viewed as a necessary part of their practice, instead of simply a district expectation.

Equally impressive was how some of the teachers shared data that suggested their interventions did not have an impact on student learning. One teacher, who explored student-driven learning in her middle school classroom, shared survey results that revealed little growth in her students’ dispositions toward school. What the teacher found was that she had not provided her students with enough ownership during class.

Another teacher did find some positive results from her research on the benefits of reflection during readers workshop. Students wrote in response journals and engaged in authentic literature circles to unpack their thinking about the books they were reading. At the end of the school year, the teacher was starting to observe her students leading their own literature conversations with enthusiasm. This teacher is excited about having some of these same students in 2016-2017, as she is looping up with them. “I am really looking forward to seeing how these kids grow within the next year.”

A third teacher shared her findings on how teaching students to speak and listen can increase their reading comprehension and their love of literacy. One of her data points – student surveys – was not favorable toward this intervention. Yet her other two pieces of data (anecdotal evidence, volume of reading) showed positive gains. Therefore, she made a professional judgment that her students did grow as readers and thinkers. This teacher is also reflecting on the usefulness of this survey for next year.


In these three examples, I couldn’t help but notice some unique outcomes of this action research course:

  • Teachers were proudly sharing their failures.

With the first teacher, who focused on student-driven learning, she developed a greater understanding of her practice than would probably be possible in a more traditional professional learning experience. She learned what not to do. This teacher is stripping away less effective methods in favor of something better. And she is able to do this because she had a true professional learning community that allowed her to take risks and celebrate her discoveries.

  • Teachers didn’t want the learning to end.

This goes beyond the teacher who expressed her excitement in looping with her current students next year. Several participants in this action research course have asked if they could take it again. The main reason: They felt like they just found the question they really wanted to explore. It took them most of the school year to find it.

  • Teachers became more assessment literate.

The term “triangulation” was never used with the teacher who focused on conversations to build reading comprehension and engagement. Yet that is what she did when she felt one set of data did not corroborate the other results and her own professional judgment. Almost all of the staff who participated in action research had three to five data points to help them make an informed conclusion about the impact of their instruction.

I also learned a few things about myself as an administrator:

  • It is not the professional development I offer for staff that makes the biggest difference – it is the conditions I create that allow teachers to explore their interests and take risks as innovative practitioners.
  • My role often is to the side of the professionals instead of in front of them, even learning with them when possible. For example, we brought in two professors from UW-Madison to lead this course. The best decision I made was recognizing that I was not the expert, and I needed to seek out those who were.
  • Principals have to be so careful about providing feedback, as we often haven’t built up enough trust, we can make false assumptions about what we are observing, and/or we do not allow teachers to discover better practices on their own terms.

In a world of standards and SMART goals, it is frowned upon when teachers don’t meet the mark regarding student outcomes. The assumption in these situations is that the teacher failed to provide effective instruction. However, the fault in this logic is that learning is not always a linear process. We work with people, dynamic and unpredictable beings who need a more personalized approach for real learning. Facilitating and engaging in action research has helped me realize this.

Data Poor

Have you heard of “DRIP”? It stands for “Data Rich, Information Poor.” The phrase conveys the idea that we have all of this data in schools but cannot organize it in a way that gives us the information to make responsive instructional decisions.


But what if this is not the case? What if we are actually data poor? When we consider only quantitative measures during collaboration, such as test scores and interim assessments, we miss out on a lot of information we could glean from more qualitative, formative assessments. These might include surveys, images, audio, journaling, and student discussions.

In this post for MiddleWeb, I profile two teachers in my district who have leveraged technology to better inform their instruction and student learning. The videos and web-based products the students and teachers develop are captured as close to the learning as possible. The results are dynamic, authentic, and minimally processed.

In tomorrow night’s All Things PLC Twitter chat (follow #atplc), we will pose questions to dig more deeply into what data means in the modern classroom. There are too many ways for learners to show what they know to ignore the potential of connected learning and continuous assessment. Join us at 8 P.M. CST for this discussion.