Learning Management Systems: Who are they for?

A learning management system, or “LMS,” is defined as “a digital learning system” that “manages all of the aspects of the learning process” (Amit K, 2016). A teacher can use an LMS for a variety of classroom functions, including communicating the learning objectives, organizing the learning timelines, telling the learners exactly what they need to learn and when, delivering the content straight to the learners, streamlining communications between instructor(s) and learners, and providing ongoing resources.

An LMS can also help learners track their own progress, identifying what they have already learned and what they still need to learn (Amit K, 2016). There are many options for learners to share representations of their understanding within an LMS, including video, audio, images, and text. In addition, discussion boards and assessment tools are available to teachers and students in most systems.

This definition and description of the typical LMS lead to an important question: Who is the learning management system for?

If an LMS is for the teacher, then I think they will find the previously listed features to be of great benefit to their practice. As an example, no longer do they have to collect papers, lug them home and grade them by hand. Now, students can submit their work electronically through the LMS. The teacher can assess learning online. The excuse “My dog ate my homework” ceases to exist. Google Classroom, Schoology and Edmodo fall into this category.

Also, teachers can use LMS tools to create quizzes that serve as a formative assessment of the lesson presented that day. Data is immediately available regarding who understands the content and who needs further support. This quick turnaround can help a teacher be more responsive to students’ academic needs. There are obvious benefits for a teacher who elects to use an LMS for these reasons.

If, on the other hand, an LMS is for the students, then we have a bit more work to do. With a teacher-centric LMS, not much really changes in how a classroom operates. The teacher assigns content and activities, the students complete them, and the teacher assesses. The adage “old wine in new bottles” might apply here.

When an LMS is integrated with students in mind, the whole idea of instruction has to shift. We are now exploring concepts such as personalized learning, which “puts students in charge of selecting their projects and setting their pace” (Singer, 2016), and connected learning, which ties together students’ interests, peer networks, and school accomplishments (Ito et al., 2013). In this scenario, it is not the students who need to make a shift but the teachers. Examples of more student-centered LMSs include Epiphany Learning and Project Foundry.

The role that teachers have traditionally filled looks very different from the role they might play in a more student-centered, digitally enhanced learning environment. I don’t believe either focus – the teacher or the student – makes for an ineffective use of a learning management system. The benefits in each scenario are promising. Yet we know that the more ownership students have over the learning experience, the more likely they are to see greater achievement gains and higher engagement in school.

References

Amit K, S. (2016). Choosing the Right Learning Management System: Factors and Elements. eLearning Industry. Available: https://elearningindustry.com/choosing-right-learning-management-system-factors-elements

Ito, M., Gutiérrez, K., Livingstone, S., Penuel, B., Rhodes, J., Salen, K., Schor, J., Sefton-Green, J., & Watkins, S. C. (2013). Connected Learning: An Agenda for Research and Design. Digital Media and Learning Research Hub. Whitepaper, available: http://dmlhub.net/publications/connected-learning-agenda-for-research-and-design/

Singer, N., Isaac, M. (2016). Facebook Helps Develop Software That Puts Students in Charge of Their Lesson Plans. The New York Times. Available: http://nyti.ms/2b3LNzv

Data-Driven or Data-Informed? Thoughts on trust and evaluation in education

Data-informed or data-driven? This is a question I have wrestled with as a school administrator for some time. What I have found is that the usefulness of student data to inform instruction and accountability rests on the level of trust that exists within the school walls.

First, there is trust in the data itself. Are the results of these assessment tools reliable (producing consistent results across administrations and across students) and valid (accurately measuring the student learning they claim to measure)? These are good initial inquiries, but they should only be a starting point.
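For the reliability piece, one common starting point is test-retest correlation. Here is a minimal sketch with invented scores; real reliability work involves more than this (internal consistency, inter-rater agreement, and so on), but it shows the basic idea of consistency across administrations.

```python
# Minimal sketch: estimating test-retest reliability as the Pearson
# correlation between two administrations of the same assessment.
# All scores below are invented for illustration.

from statistics import correlation  # requires Python 3.10+

fall_scores   = [72, 85, 60, 90, 78, 66]   # the same six students,
winter_scores = [70, 88, 58, 93, 75, 71]   # tested twice

r = correlation(fall_scores, winter_scores)
print(f"Test-retest reliability (Pearson r): {r:.2f}")  # nearer 1.0 = more consistent
```

Validity is harder to reduce to a single number; it requires evidence that the score actually reflects the learning it claims to measure.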

Security of student information should also be a priority when electing to house student data with third parties. Questions I have started asking vendors that develop modern assessment tools include “Where do you house our student data?”, “What do you do with this data beyond allowing us to organize and analyze it?”, and “Who owns the student data?”. In a commentary for The New York Times, Julia Angwin (2016) highlights situations in which the algorithms used to make “data-driven decisions” regarding the probability of recidivism in the criminal justice system too often produced biased results. Could a similar situation happen in education? Relying merely on the output that a computer program produces leads one to question the validity and reliability of this type of data-driven decision making.

A second issue regarding trust in schools related to data is how student learning results are being used as a tool to evaluate teachers and principals. All educators are rightfully skeptical when accountability systems ask for student learning results to be counted toward their performance ratings and, in some cases, level of pay and future employment with an organization.

This is not to suggest that student assessment data should be off the table when conversations occur regarding the effectiveness of a teacher and his or her impact on students’ learning. The challenge, though, is ensuring that there is a clear connection between the teacher’s actions and student learning. One model for data-driven decision making “provides a social and technical system to help schools link summative achievement test data with the kinds of formative data that helps teachers improve student learning across schools” (Halverson et al., 2007, p. 162). Using a systematic approach like this, in which educators are expected to work together using multiple assessments to make instructional decisions, can simultaneously hold educators collectively accountable while ensuring that students receive better teaching.

Unfortunately, this is not the reality in many schools. Administrators too often adhere to the “data-driven” mentality with a literal and absolute mindset. Specifically, when something is not easily quantified, such as teacher observations and noncognitive information, school leaders may dismiss it as less valuable than what a more quantitative tool might offer. Professional trust can tank in these situations.

That is why it is critical that professional development plans provide training to build assessment literacy with every teacher. A faculty should be well versed in the differences between formative and summative assessments and between informal and formal measurements, in deciding which data points are more reliable than others, and in how to triangulate data in order to analyze results and make more informed decisions regarding student learning.

Since analytics requires data analysis, institutions will need to invest in effective training to produce skilled analytics staff. Obtaining or developing skilled staff may present the largest barrier and the greatest cost to any academic analytics initiative (Baer & Campbell, 2012).
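To make the triangulation piece concrete, here is a minimal sketch of comparing multiple measures for each student. The names, scores, common 0-100 scale, and the spread threshold are all hypothetical; the point is that disagreement among measures is a prompt for professional judgment, not a verdict.

```python
# Minimal sketch of triangulating three assessment data points per student.
# Names, scores, and the threshold are hypothetical illustrations.

from statistics import mean

# Each student has three measures placed on a common 0-100 scale:
# a summative test, an average of formative checks, and a teacher rating.
students = {
    "Student A": {"summative": 82, "formative": 78, "teacher_rating": 85},
    "Student B": {"summative": 55, "formative": 74, "teacher_rating": 80},
}

def triangulate(measures, spread_threshold=15):
    """Flag students whose measures disagree enough to warrant a closer look."""
    scores = list(measures.values())
    spread = max(scores) - min(scores)
    status = "measures agree" if spread <= spread_threshold else "measures conflict - investigate"
    return mean(scores), spread, status

for name, measures in students.items():
    avg, spread, status = triangulate(measures)
    print(f"{name}: mean={avg:.0f}, spread={spread}, {status}")
```

In this sketch, Student B’s summative score conflicts with the formative and observational evidence – exactly the kind of case where a single data point, taken literally, would mislead.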

Building this assessment literacy can result in a level of trust in oneself as a professional to make informed instructional decisions on behalf of kids. If a faculty can ensure that the data they are using is a) valid and reliable, b) used to improve student learning and instructional practice, and c) drawn from multiple sources used wisely, then I am all for data-driven decision making as a model for school improvement. Trust will rise and student achievement may follow. If not, an unfortunate outcome might be the data cart coming before the pedagogical horse.

References

Angwin, J. (2016). Make Algorithms Accountable. The New York Times. Available: http://www.nytimes.com/2016/08/01/opinion/make-algorithms-accountable.html?_r=0

Baer, L.L. & Campbell, J. (2012). From Metrics to Analytics, Reporting to Action: Analytics’ Role in Changing the Learning Environment. Educause. Available: https://net.educause.edu/ir/library/pdf/pub72034.pdf

Halverson, R., Gregg, J., Prichett, R., & Thomas, C. (2007). The New Instructional Leadership: Creating Data-Driven Instructional Systems in Schools. Journal of School Leadership, 17, 159-194.

This is a reaction paper I wrote for a graduate course I am currently taking (Technology and School Leadership). Feel free to respond in the comments to extend this thinking.

Yes, School Funding Does Matter

The tweet gave me pause when I first read the headline.

I followed this link, retweeted by Frederick Hess, contributor to Education Week, to a US News & World Report opinion piece titled “More Money, Same Problems.” It was written by Gerard Robinson (the source of the tweet) and Benjamin Scafidi. Robinson is a fellow at the American Enterprise Institute, “a conservative think tank” (source: Wikipedia). Scafidi is a professor of economics at Kennesaw State University.

The authors acknowledge that “public education is important to the economic and social well-being of our nation”. They go on to point out that some students are successful in public education and far too many are not. You have no argument from me. Robinson and Scafidi also concede that, at an adequate level, “resources matter to education”.

Their commentary then gets into the problems that they believe plague public education:

– While student enrollment has increased 96% since 1950, public school staffing has increased 386%.
– Since 1992, public schools’ national math scores have shown little growth.
– Today’s graduation rates are only slightly above what they were in 1970.

Robinson and Scafidi follow up with their ideas for improving student outcomes in public education:

– Better involvement from parents
– State control of failing public schools
– Charter schools (a result of state takeovers)

While I appreciate their passion for providing a better experience for students who do not have access to a high quality public education, I take issue with their ideas for improvement.

First, parent involvement. While it can have an impact on student learning when the involvement is positive, it is often not something we as public educators can control in our settings. My experience tells me that the best public schools focus the majority of their efforts and resources on the limited time that they actually have with students. Dr. John Hattie’s research on what works in instruction places family involvement toward the lower end of the effectiveness spectrum. It can be effective, but there is a ceiling.

So what’s on the higher end of the spectrum? Everything that Robinson and Scafidi failed to mention, including:

– Formative assessment
– Feedback strategies
– Self-assessment
– Vocabulary instruction
– Classroom discussion
– Response to Intervention

In fact, one of the least effective practices for improving student learning outcomes is…charter schools. According to Hattie, charter schools have around the same effect size as ensuring students get appropriate amounts of sleep or altering classroom/school schedules. My time is important, so I will let charter school and school choice proponents wrestle with these findings.

What I do want to point out is that the most effective instructional strategies require generous amounts of school funding. Here’s why: Teaching is one of the most challenging professions. To do it well, educators need consistent and effective training in the areas of curriculum, assessment and instructional strategies. This requires funding and support for job-embedded professional development. Dollars should be allocated for training, time, resources, and opportunities to apply these new skills in a low risk/high success environment. If this sounds like a lot of money for this type of work, please remember that teaching is a profession. I am sure you would agree that our students are worth it.

Citing graduation rates and flatlining test scores might serve to perpetuate the opinion that public education is broken. However, this argument is a generalization of our system as a whole. Yes, there are ineffective schools and there are effective schools. No one would dispute this. Yet each school is an individual learning community. Each has specific strengths and needs, and should be assessed with valid and reliable measures. To paint public education with such a broad brush, using data that is questionable at best, is a disservice to the hard work and dedication that all public educators put in every day on behalf of our students.

I won’t argue that public education needs to improve. We do. It is the work that we should be engaging in every day. The least that people outside public education can do is to ensure that they consider multiple perspectives on a position they support and provide valid and reliable evidence to back it up.

Action Research: Professional Learning from Within

By becoming question-askers and problem-solvers, students and teachers work together to construct curriculum from their context and lived experiences.

– Nancy Fichtman Dana

Over 20 teachers recently celebrated their learning as part of their work with an action research course. They presented their findings to over 50 colleagues, friends, and family members at a local convention center. I was really impressed with how teachers saw data as a critical part of their research. Organizing and analyzing student assessment results was viewed as a necessary part of their practice, instead of simply a district expectation.

Equally impressive was how some of the teachers shared data suggesting their interventions did not have an impact on student learning. One teacher, who explored student-driven learning in her middle school, shared survey results that revealed little growth in her students’ dispositions toward school. What the teacher found was that she had not provided her students with the necessary amount of ownership during class.

Another teacher did find some positive results from her research on the benefits of reflection during readers workshop. Students wrote in response journals and engaged in authentic literature circles to unpack their thinking about the books they were reading. At the end of the school year, the teacher was starting to observe her students leading their own literature conversations with enthusiasm. This teacher is excited about having some of these same students in 2016-2017, as she is looping up with them. “I am really looking forward to seeing how these kids grow within the next year.”

A third teacher shared her findings on whether teaching students to speak and listen increases their reading comprehension and their love for literacy. One of her data points – student surveys – was not favorable toward this intervention. Yet her other two pieces of data (anecdotal evidence, volume of reading) showed positive gains. Therefore, she made a professional judgment that her students did grow as readers and thinkers. This teacher is also reflecting on the usefulness of this survey for next year.


In these three examples, I couldn’t help but notice some unique outcomes of this action research course:

  • Teachers were proudly sharing their failures.

The first teacher, who focused on student-driven learning, developed a greater understanding of her practice than would probably have been possible in a more traditional professional learning experience. She learned what not to do. This teacher is stripping away less effective methods in favor of something better. And the reason she is able to do this is that she had a true professional learning community that allowed her to take risks and celebrate her discoveries.

  • Teachers didn’t want the learning to end.

This goes beyond the teacher who expressed her excitement in looping with her current students next year. Several participants in this action research course have asked if they could take it again. The main reason: They felt like they just found the question they really wanted to explore. It took them most of the school year to find it.

  • Teachers became more assessment literate.

The term “triangulation” was never used with the teacher who focused on conversations to build reading comprehension and engagement. Yet that is what she did when she found that one set of data did not corroborate the other results or her own professional judgment. Almost all of the staff who participated in action research had 3-5 data points to help them make an informed conclusion about the impact of their instruction.

I also learned a few things about myself as an administrator:

  • It is not the professional development I offer for staff that makes the biggest difference – it is the conditions I create that allow teachers to explore their interests and take risks as innovative practitioners.
  • My role is often to the side of the professionals instead of in front of them, even learning with them when possible. For example, we brought in two professors from UW-Madison to lead this course. The best decision I made was recognizing that I was not the expert and needed to seek out those who were.
  • Principals have to be so careful about providing feedback, as we often haven’t built up enough trust, we can make false assumptions about what we are observing, and/or we do not allow teachers to discover better practices on their own terms.

In a world of standards and SMART goals, it is frowned upon when teachers don’t meet the mark regarding student outcomes. The assumption in these situations is that the teacher failed to provide effective instruction. However, the fault in this logic is that learning is not always a linear process. We work with people, dynamic and unpredictable beings who need a more personalized approach for real learning. Facilitating and engaging in action research has helped me realize this.

Action Research and the Art of Knowing Our Students #NCTE15

What happens when student data doesn’t agree with what you think you know, especially about a student’s reading skills and dispositions?

It’s a situation that happens often in schools. We get quantitative results back from a reading screener that don’t seem to jibe with what we see every day in classrooms. For example, a student shows high ability in reading, yet continues to stick with easy readers and resists challenging himself or herself with more complex literature. Or the flip: a student has trouble passing that next benchmark, but is able to comprehend a book above his or her reading level range.

Here’s the thing: The test tests what it tests. The assessment is not to blame. In fact, blame should be out of the equation when having professional conversations about how best to respond to students who are not experiencing the level of success we expect. The solution is not in the assessment itself, but in differentiating the types of assessments we use, questioning the types of data we collect, and organizing and analyzing the various data points to make sense of what’s actually happening in our students’ learning lives.

Differentiating the Assessments

It’s interesting how reading, a discipline far removed from the world of mathematics, is constantly quantified when we attempt to assess readers’ abilities. Words correct per minute, comprehension questions answered correctly, and pages read are the measures most often referenced when analyzing and discussing student progress. This data is not bad to have, but if it is all we have, then we paint an incomplete picture of our students as readers.

Think about yourself as a reader. What motivates you to read? I doubt you give yourself a quiz or count the number of words you read correctly on a page after completing a book. Lifelong readers are active assessors of their own reading. They use data, but not the type of data we normally associate with the term. For example, readers will often rate books on Amazon and Goodreads once they have finished them, and add a short review on these online forums. The audience that technology provides for readers’ responses is a strong motivator. No one requires these independent readers to rate and review these books, but they do it anyway.

There is little reason why these authentic assessments cannot occur in today’s classrooms. One tool for students to rate and review books is Biblionasium (www.biblionasium.com). It’s like Goodreads for kids. Students can keep track of what they’ve read, what they want to read, and find books recommended by other young readers. It’s a safe and fun reading community for kids.

Yes, this is data. That data isn’t always a number still seems like a shocker to too many educators. To help, teacher practitioners should ask smart questions about the information coming at them to make better sense of where their students are in their learning journeys.

Questioning the Data

Data such as reading lists and reading community interactions can be very informative, so long as we are reading the information in the right way.

Asking questions related to our practice can help guide our inquiries. For example, are students self-selecting books more readily over time? Are they relying more on peers and less on the teacher in their book selection? Are the books being read increasing in complexity throughout the year? All of these qualitative measures of reading disposition can be related directly to quantitative reading achievement scores, giving the teacher a more comprehensive look at students’ literacy lives.
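As one concrete illustration, a reading log can answer the complexity question directly. Here is a minimal sketch with invented dates and numeric book levels; any leveling scheme mapped to numbers would work the same way.

```python
# Hypothetical sketch: turning one student's reading log into a trend.
# Dates and numeric book levels are invented for illustration.

from datetime import date

reading_log = [
    (date(2015, 9, 10), 20),   # (date finished, numeric book level)
    (date(2015, 10, 22), 22),
    (date(2015, 12, 5), 21),
    (date(2016, 2, 14), 25),
    (date(2016, 4, 30), 27),
]

# Compare the average level in the first and second halves of the year.
mid = len(reading_log) // 2
early = [level for _, level in reading_log[:mid]]
late = [level for _, level in reading_log[mid:]]

early_avg = sum(early) / len(early)
late_avg = sum(late) / len(late)
print(f"Early-year average level: {early_avg:.1f}")
print(f"Late-year average level:  {late_avg:.1f}")
print("Complexity is trending up" if late_avg > early_avg else "No growth in complexity yet")
```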

Organizing and Analyzing the Data

[Image: Students filling out reading motivation surveys via Google Forms and Chromebooks]

I recently had our K-5 teachers administer reading motivation surveys with all of our students. The results have been illuminating for me, as I have entered them into spreadsheets.

Our plan is to position this qualitative data side-by-side with our fall screener data. The goal is to find patterns and trends as we compare and contrast these different data points, often called “triangulation” (Landrigan and Mulligan, 2013). Actually, the goal is not triangulation, but responding to the data and making instructional adjustments during the school year. This makes these assessments truly formative and for learning.
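For anyone curious what that side-by-side positioning might look like, here is a minimal sketch using pandas. The student IDs, scores, and cut points are all invented; our actual survey and screener data are richer than this.

```python
# Hypothetical sketch: placing motivation-survey results side by side with
# fall screener scores to look for patterns. All values are invented.

import pandas as pd

screener = pd.DataFrame({
    "student": ["A", "B", "C", "D"],
    "screener_pct": [88, 35, 90, 40],   # fall screener percentile
})

survey = pd.DataFrame({
    "student": ["A", "B", "C", "D"],
    "motivation": [2, 4, 4, 2],         # 1 (low) to 5 (high) self-report
})

combined = screener.merge(survey, on="student")

# Simple pattern-finding: label each student high/low on each measure.
combined["achievement"] = combined["screener_pct"].apply(lambda p: "high" if p >= 50 else "low")
combined["engagement"] = combined["motivation"].apply(lambda m: "high" if m >= 3 else "low")

# Students high on one measure and low on the other are exactly the
# cases worth an instructional response.
print(combined[combined["achievement"] != combined["engagement"]])
```

A high reader with low motivation and a struggling reader with high motivation call for very different responses, and neither shows up if we look at only one data source.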

Is the time and energy worth it?

I hope so – I spent the better part of an afternoon at school today entering students’ responses to questions such as “What kind of reader are you?”, “How do you feel about reading with others?”, and “Do you like to read when you have free time?” (Marinak et al., 2015). Collecting and organizing the information has been informative in itself. While it takes time, by transcribing students’ responses I am learning so much about their reading lives. I hope that through this process of differentiating, questioning, and organizing and analyzing student reading data, both quantitative and qualitative, we will know our students better and become better teachers for our efforts.

References

Landrigan, C. & Mulligan, T. (2013). Assessment in Perspective: Focusing on the Reader Behind the Numbers. Portsmouth, NH: Stenhouse.

Marinak, B. A., Malloy, J. B., Gambrell, L. B., & Mazzoni, S. A. (July/August 2015). Me and My Reading Profile: A Tool for Assessing Early Reading Motivation. The Reading Teacher, 69(1), 51-62.


Attending the NCTE Annual Convention in Minneapolis this year? Join Karen Terlecky, Clare Landrigan, Tammy Mulligan and me as we share our experiences and specific strategies in conducting action research in today’s classrooms. See the following flyer for more information.

[Flyer with session details]

Do you want to develop digital portfolios with your students? Join our book club!

The single most important thing you could do tomorrow for little to no money is have every student establish a digital portfolio where they collect their best work as evidence of their skills.

– Dr. Tony Wagner, Expert in Residence, Harvard University

Developing digital portfolios with your students can be a game-changing action in your classroom. Among the benefits: students curate their best work, reflect on their progress over time, and use feedback to set personal goals.

Not sure where to begin? Then join our July Book Club!


Here is how to get started:

  1. Purchase the book on Amazon (link), iBooks (link), or Nook (link). I am offering 10% off this month when purchased directly through me, if you don’t mind the brief lag in response and a PayPal request.
  2. Request access to our Google+ Community (link). This is where our conversations will be housed.
  3. Check out the dates below for a timeline of chapters to be read.

June 29 – July 3:   Chapter 1 – Purposes for Portfolios

July 6 – July 10:    Chapter 2 – Performance Portfolios

July 13 – July 17:   Chapter 3 – Progress Portfolios

July 20 – July 24:  Chapter 4 – From Files to Footprints: Beyond Digital Student Portfolios

In August, we will keep the conversations going informally. It would be a good month to ask final questions and conclude our time together with a celebration of sorts.

What you can expect from me:

  • A thought-provoking question posted each weekday in our Google+ Community throughout the four weeks. Also expect possible follow-up responses from distinguished members of our community and/or me.
  • Full access during these four weeks to me for questions and demonstrations you might request regarding digital tools, processes, and leadership strategies. I will include my personal phone number and offer Google+ Hangouts to chat in real time.
  • An update on what our school is implementing regarding digital portfolios, current tools of choice, and our school’s brand new process for helping students reflect on and respond to their important and lifeworthy work online.

Not bad, right? I am also willing to issue very formal (~ahem~) certificates of participation for this book club, assuming frequent and thoughtful activity in our Google+ Community. This documentation may be used toward professional hours/accreditation within your district or university. Please check with your supervisor before assuming anything.

In closing, I can confidently state that the teachers I’ve observed who have experienced the greatest growth in their students’ knowledge, skills, and dispositions are those who a) highlighted their students’ best work, b) provided time for them to reflect on their progress, and c) gave feedback on their current capacities and allowed for personal goal setting.

If these descriptors sound like the teacher you might want to be in 2015-2016, I highly encourage you to join us for our July 2015 book club. You won’t regret it.


Data Poor

Have you heard of “DRIP”? It stands for “Data Rich, Information Poor”. The purpose of this phrase is to convey the idea that we have all of this data in schools, but cannot organize it in a way that will give us the information to make responsive instructional decisions.


But what if this is not the case? What if we are actually data poor? When we consider only quantitative measures during collaboration, such as test scores and interim assessments, we miss out on a lot of information we can glean from more qualitative, formative assessments. These might include surveys, images, audio, journaling, and student discussions.

In this post for MiddleWeb, I profile two teachers in my district who have leveraged technology to better inform their instruction and student learning. The videos and web-based products the students and teachers develop are captured as close to the learning as possible. The results are dynamic, authentic, and minimally processed.

In tomorrow night’s All Things PLC Twitter chat (follow #atplc), we will pose questions to dig more deeply into what data means in the modern classroom. There are too many ways for learners to show what they know to ignore the potential of connected learning and continuous assessment. Join us at 8 P.M. CST for this discussion.