By Tom
The Los Angeles Times created a recent stir by publishing the standardized test results of some of their local teachers. The premise, of course, is obvious: teachers aren’t currently working as hard as possible or doing as much as they could to promote student learning. And they won’t improve unless motivated by shame. I couldn’t agree more.
And since I want to do everything I can to improve, I’ve got a plan.
I’ll start by publishing my class results online. Everyone will know exactly how many kids learned in my class and how many kids didn’t.
Then I’ll take it up a notch. In the interest of accountability and transparency, I’ll disaggregate the scores so that the world will know exactly which students I didn’t teach successfully. They’ll notice that some students did well despite my lack of effort and that some did poorly because of it.
Then I’ll upload my gradebook and attendance records. Everyone will see that the same students who did poorly on their state tests also did poorly on their classroom work. The astute readers will note that I even allowed some students to skip some of their school work. Not even turn it in. And they’ll be horrified to see that I didn’t even have everyone come to school every day. In fact, I actually had many of the lowest-performing students miss the most learning time. That should definitely make me improve.
But I’m not taking any chances. I’ll install web cams in my room so that everyone who holds a stake in the success of my students can watch my ineptitude. Newspaper editors can log in and watch me teach poorly. Members of the Business Roundtable can watch me fail to give my students the skills they need. Parents can watch their own children endure my weak lessons. Everyone will watch as my lowest-performing students engage in off-task behavior whenever possible and work hard only when under my direct supervision.
Yes, the new Tom will be totally accountable, totally transparent, and much, much better.
Tom – am I understanding this correctly? Sometimes students don’t do what you expect? Really? You need to develop more control over them. The more you control them, the more they know that you care.
Perhaps if our tests were closely aligned to our curriculum–and measured important things, rather than easily measured things–teachers would be more amenable to sharing test data.
Perhaps if failing students were considered challenges, puzzles to be solved by dedicated teachers, rather than indicators of low and ineffective practice, teachers would be eager to present and analyze test data.
Perhaps if ill-informed real estate agents didn’t carry around state testing scores as a sales tool, or states were able to stick with a single set of standards, benchmarks, and curriculum frameworks for more than a couple of years, teachers would see test data as valuable measures of ongoing school goals.
KIPP schools, BTW, publish data on how many students are accepted to four-year colleges. But they do not present data on how many actually attend or finish. Shallow goals and shallow data make for hollow results.
Kristin makes a great point. In particular, reading comprehension is a very situational skill…and all these tests involve reading comprehension. I have my students participate in some computer diagnostic (reading) assessments each year, and as part of my effort to teach them about study environment and focus, I have them take the tests at different times of the day and with different degrees of distraction, rushing, resources, etc. The kids’ results are all over the chart. I’ve used this in the past to teach about creating a good study environment, but it illustrates the same point that Kristin’s husband discovered.
I am not opposed to data for evaluation of student needs by an individual teacher–teachers gather data every day and use it to evaluate students and adjust instruction.
Jason-
You may want to read this: http://www.springerlink.com/content/e3k16505127401q6/
It looks like the “boat I missed” is sinking. In short, two leading experts on educational evaluation have questioned the use of value-added data for teacher evaluation. Even William Sanders, commenting on the report, admits that value-added data might be useful to identify the absolute best and absolute worst teachers, but it’s unreliable for the folks in between.
The reality as most of us know it is much closer to the experience related by Kristin’s husband. Yes, there are differences between teachers. But you can’t rely on student test data to figure it out.
It’d be nice and simple if you could, but you just can’t.
Ha! This was an excellent read – thank you.
My husband, an elementary teacher, tells an interesting story of their head teacher accidentally having the same group of students take the state-mandated assessment twice. It’s understandable – this is an online test and the young students wouldn’t know they were supposed to take it only once. The head teacher thought he was taking them to a different assessment.
Anyway, the students, who took the SAME state assessment they took a week before, earned completely different scores. Students who performed well the week prior performed poorly, and vice versa. The test hadn’t changed, but the students performed differently because it was a different day in their lives.
Test my students. I’m fine with that. But consider the data as something that will help me teach them what they don’t know, or don’t know well. Don’t publish the data as if it’s an absolutely accurate capture of my skill as a teacher.
When a car has its emissions tested, it’s done in a very controlled environment. They don’t allow you to test emissions on a mountain road, or driving on the beach in Oregon, or laboring up Lombard in San Francisco. They’re that careful with CARS, and yet they think a CHILD can be thrown into a test situation on any day of his tumultuous life and test accurately.
Then they publish the results to shame the teachers who took on the most challenging students. Well done.
It’s not about the data; it’s about how the data are interpreted and used. Data, by their very nature, seem to carry weight and authority, when in reality they are nothing but a tool for manipulation.
The data movement scares me, not because of what the numbers will show, but because of how the numbers will be misused, misconstrued for people (not by people), and twisted in ways that are fundamentally contrary to the original purpose of collecting the data in the first place.
The premise is simple and you totally missed the boat.
It’s not about teachers not working hard or trying; it’s about some teachers just not being very good at their jobs.
I think what the LA Times did was a step or two too far, but I view value-added measures as some of the best information we have about teacher efficacy when it’s available. How it should be used and reported is another can of worms.
I know this is “satire,” but KIPP schools do publish how well each student is doing…who is on track to be awarded the end-of-year trip…so…truthfully, go for it!! Astute parents, not looking for a babysitter or a handout but a real teacher, will understand what the data mean…as for the rest, well, they need to wake up and realize their children have to earn their grades rather than have grades given to them…
So…do it! I dare you!