It seems like a no-brainer. If you want to evaluate my effectiveness as a teacher, you need to look at what I do in my classroom. If you want to evaluate my impact on student learning, you need to look at the work I make my students do and see how that work reflects my students' growth over time.
This is the right way to judge my job performance. But doing it this way takes time, is complicated for my boss (who has to do the same for two dozen other teachers in two dozen other contexts), and requires physical and intellectual investment in practices that can sometimes be uncomfortable and challenging (read: it requires change).
Too often in education, doing the difficult, right thing is avoided in favor of doing the simpler, easier-to-administer thing. When our students do this, we chastise them for cutting corners and missing out on the real value of the work–by doing so they are only cheating themselves, we tell them. When we do this as a system, we are cheating society.
It would be easier to administer if my and my colleagues' job evaluation were based simply on one or two blanket assessments that are totally detached from my students, my classroom, or my context. It'd be one test, one set of data, one easy "solution." And that cutting of corners would mean cheating the system.
As part of a TPEP Regional Implementation Grant district, I find myself on the front edge of the movement toward reforming teacher evaluation. What this new system asks of us is complicated, time-consuming, and challenging, and it demands change and growth. If executed properly, it is not the cutting of corners and the cheating of a system. I believe this more every time I sit in sessions at my ESD to learn about updates from OSPI.
We have the chance to do something really right here. And the phrase that was echoed again and again throughout my most recent info session at the ESD is what solidifies this for me: "Keep it close to the classroom."
This statement, verbatim, was spoken so many times by Michaela Miller, the TPEP Project Coordinator from OSPI, that I briefly considered tallying its use the way I used to tally the "ums" my Comm 101 teacher would use in his lectures. For a different purpose, though. There are so many ways that statewide policy can go badly that it is sometimes hard to recognize that it can go well. The constant advocacy, by Miller and others at OSPI, that teacher evaluation be focused in the classroom, not just in aggregated data on a spreadsheet, is what gives me great faith that we are being handed a system that–when we implement it wisely–will actually do what we want it to: examine teacher practice and student learning. No globally-aggregated but easier-to-administer collection of numbers is capable of painting such a clear and useful picture of my and my colleagues' actual effectiveness.
The idea of "keep it close to the classroom" is most resonant when it comes to the state-mandated use of student growth data as part of our evaluations. During her presentation at ESD 112, Miller shared this slide (see slide #44 of this presentation):
Granted, this image above is a bit out of context (click on it if your browser makes the image too small to read)… here's a little to help frame it: Miller explained OSPI's position that the assessments used to evaluate student growth for the purposes of teacher evaluation need to be centered squarely in the classroom practices of the teacher being evaluated. Those assessments must be connected to the goals the teacher set for those learners in that classroom. The goal needs to come first: where does the teacher as professional want his/her students to go? Then, and only then, can the assessment be selected. The assessments must match the goals for the learners in that classroom, in that context. Then, and only then, can meaningful data be extracted and used to evaluate students' actual growth.
This is the right way to evaluate student learning and growth. This is what we already do in our classrooms, if we're really doing what we're supposed to be. This is the only way that this system will work: it has to force us to examine and evaluate what we already do rather than give us one more thing to do.
Would it be easier for administration and leadership to say, for example, that everyone's student growth data is based on a MAP test or an EOC or a widely administered "standardized" tool? Sure, easier to administer, but too far from the classroom and too rife with countless "what ifs" and variables for such an assessment to be valid–let alone that such blanket measures fail the test of being directly matched to learners' real and immediate needs. I have been in the business just over a decade, but I have never seen a situation where doing something the easier-to-administer way was actually more impactful than doing it the harder, more complicated way. Lasting change will only come from doing the hard work.
Close to the classroom–definitely a phrase worth repeating!
Let's hope the powers that be hear it, and accept it.