If you skim back through my past posts here, you might notice that I've given the word "data" a very specific connotation. I even did a search on SFS for the word "data," and lo and behold, a bunch of my posts (and, even more interestingly, a bunch of my comments on other posts) were there… and just the snippets shown in the search results highlight my apprehension, distrust, reservation, and resistance to data.
While I curse under my breath, I have to recognize: that search? That's data.
I'm having to re-evaluate my own resistance.
As I examine the new teacher evaluation system, I'm in general a proponent of what it contains, but anything that mentions that four-letter word always unsettles me a little.
Not long ago I co-presented at a CSTP teacher leadership conference, and one of our points about leadership was that to activate change, you have to recognize that growth and change cannot happen unless someone is upset. By upset, we didn't mean p'd off; we meant having the status quo challenged in a way that unsettles people enough to get them moving.
I guess that is what the d-word embedded in the new evaluation system is doing to me right now… unsettling me enough to allow me to change. Especially since I discovered Flubaroo.
Flubaroo is a simple little script you can add to a Google spreadsheet to make it grade your students' work. (Cue the choir of angels and lights from heaven.)
I have used Flubaroo to grade three vocabulary assessments so far this school year, and it does in about one minute what would easily take an hour to do by hand. More interesting, though, is the data it provides me. It does item analysis to let me know where kids as a group excelled or struggled. More than that, even: it forced me to realize that my perception of my students' performance on my vocabulary assessments was actually wrong.
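If you're curious what "item analysis" actually looks like under the hood, here's a minimal sketch. This is not Flubaroo's real code (Flubaroo runs as a script inside the spreadsheet itself); it's just a hypothetical Python version with made-up answers, to show the idea: for each question, tally what fraction of the class got it right, and flag the items most kids missed.

```python
# Hypothetical sketch of item analysis (not Flubaroo's actual code).
# Given an answer key and each student's responses, report the percent
# of students who answered each item correctly.

answer_key = ["B", "D", "A", "C"]   # made-up 4-item quiz
responses = {                       # made-up student answers
    "Student 1": ["B", "D", "B", "C"],
    "Student 2": ["B", "A", "A", "C"],
    "Student 3": ["C", "D", "A", "C"],
}

for item, correct in enumerate(answer_key):
    num_right = sum(1 for ans in responses.values() if ans[item] == correct)
    pct = 100 * num_right / len(responses)
    flag = "  <-- most kids missed this one" if pct < 70 else ""
    print(f"Item {item + 1}: {pct:.0f}% correct{flag}")
```

A low percentage on a single item points at the question itself (or the teaching behind it) rather than at any one kid, and that is exactly the kind of signal that rearranged my thinking below.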
For years, I assumed that multiple-choice questions were the "easiest" way for my kids to show their vocabulary knowledge. I relegated the cloze section to the end and made it the smallest, assuming that because it demanded a higher degree of reading skill and word knowledge, it was unfair to include too many of these kinds of exercises. After all, with the multiple-choice questions, kids were limited to four options (much easier to eliminate non-answers), but with the cloze, they had to draw from the entire word bank of 20 or 30 vocab words. For some reason, seeing a kid's wrong answer written there in the blank (as opposed to seeing the wrong bubble filled in) carried more psychic weight with me, and so I assumed an incorrect cloze answer signaled a greater failure than an incorrect multiple-choice answer.
So when the data revealed that my kids were rocking the cloze and bombing the multiple choice on three separate assessments, it forced me to rethink my assessments and what they were really assessing.
First, I realized that the way I introduce vocabulary is more about its use in context than about memorizing a list of synonyms and definitions, so it made sense that using words in context was the skill my students excelled at, rather than matching abstract definitions.
Second, I realized that I would have moved forward with my past assumptions had I not set aside my resistance to data.
To me, this is where data has its place in effective teaching: evaluation of student performance and progress. I'm having to eat my past words–to an extent–and accept that data actually can be a very powerful part of my teaching process. In the past, my only data had been the gradebook and my observations. Now, I see that engaging data differently can make a difference in my approach.
I remain skeptical about how data may be misused by ineffective administrators as part of the new evaluation system in our state. However, I'm now more convinced about the value of close examination of classroom data to inform my instruction. My experience also points out a confounding factor when it comes to teachers using data: it matters whether or not a teacher has the tools (literal and figurative) to evaluate student data.
If I cannot change in light of new experience and information, how can I expect my students to? My relationship with data is warming…just a little.
I have also found times when my perceptions of my students were shown to be inaccurate once I looked at their scores on assessments.
That being said, I have far more often seen data misused, misinterpreted, and manipulated. In addition, I've seen decisions based on faulty assessments that were either poorly designed or designed to measure something far different from what they were being used for.
Within education, we have a long way to go in learning how to design assessments and instruction in a way that makes data a reliable and useful tool.
I was talking to a parent this week who works in business, and as we discussed some research into a current educational practice he laughed and said, “Well, you know what they say…there are lies, damn lies, and statistics.”
I need to give Flubaroo a try. I like data, but like Maren, I'm wary. Maybe even a little cynical, at least sometimes.
Take today, for instance. As I was getting ready to head home, I walked past a colleague’s room and said goodbye. She told me she was a little bummed about her students’ math pretest scores.
"Well," I said, "the upside is that in this district they make a big deal out of pre- and post-test scores. Come June, you'll probably be glad your students tanked the pre-test."
Cynical? Yeah.
Honest? Yeah.
I agree. I've always thought item analysis is a great tool for using data to inform instruction. Kristin asked why OSPI didn't offer something similar, but honestly, I think they do: there is item analysis for many subject areas across many years of MSP/HSPE/WASL data at the school level at https://eds.ospi.k12.wa.us/WASLTestItems/ (I know the link says WASL, but it really has a lot of MSP/HSPE data in it!). For years and subject areas without item analysis, there is still strand analysis on the Washington state report card site.
Now that I think about it, this school-level data from OSPI is very useful for teachers in schools like mine, where only one or maybe two teachers teach each class, so school data and classroom data are often the same. In a larger school, with many different teachers for each class, OSPI's school-level data isn't classroom data, but it is still worthwhile for teacher group discussions.
I'm a fan of teachers using assessment data to inform instruction. The problem with large-scale assessment data? It's been misused for so many other purposes, and there's a lot of "noise" in there, since so many variables affect the results. Flubaroo, something that allows item analysis on a teacher's own classroom assessments, sounds like a great tool!
I like data. I think it’s because I tend toward big generalizations that lead me down the wrong path. Last year, I had a student who was gobbling up books. I was so impressed! I lay awake nights thinking of titles I could recommend to him next. It wasn’t until I started looking at his work – data – that I realized he had no idea what he was reading. I’d love to be the kind of teacher who jumps on chairs and recites Whitman, but I’m pretty sure my students wouldn’t learn any concrete academic skills. And I’m not being blithe when I say we can teach those academic skills, measure whether students got them, and also be inspirational.
I am so excited to try Flubaroo. How can we get OSPI to use it on our state assessments? I feel like teachers are investing a lot of energy into wrapping our minds and practices around state assessments, and I don't think they're great assessments or that they're well applied.
While I agree with you about ineffective administrators, I think we can get beyond those reservations. First, we can speak up and make sure our administrative teams are calibrated when they look at data and evidence. Seattle's contract clearly lists what is considered data, so both teachers and administrators can be on the same page, and teachers need to read their contracts and be prepared to show evidence. As with working on our National Boards, teachers are expected to pay attention to student progress in the new evaluation system. I think we will work the kinks out, and that the new system is much better than the old.