Surprised by your summative TPEP score? You shouldn’t be…

It’s that time of year again when school comes to a close and seniors are waiting for graduation. As I think about that final report card, I know that the grades that my students will see will be of little surprise to them. We’ve been communicating all semester long about their progress towards the learning goals and standards. They’ve been assessed throughout the semester and I’ve offered significant feedback to them about their work and skill development. I’ve met with students routinely throughout the year to discuss learning strategies and how to overcome their perceived weaknesses. Now, as the year culminates, students should be pretty clear as to where they stand academically in my class.

So as teachers come to the end of this year’s TPEP (Teacher/Principal Evaluation Project) cycle, do all teachers know how they’ve been assessed? Have they had the opportunity to receive feedback about their teaching throughout the year? Will they be surprised when they see that summative TPEP score on their final evaluation?

For the past three months I’ve been engaged in pre-bargaining contract language to formally transition TPEP from a LOA (Letter of Agreement) into a more permanent place in our CBA (collective bargaining agreement). Part of the pre-bargaining process includes research. I’ve spent quite a bit of time talking to teachers from other districts and looking over contracts from districts across the state. What I’ve learned is that TPEP implementation and the annual process operate differently depending on where a teacher works. My biggest takeaway: teachers and evaluators might be meeting routinely, but districts have distinct operating definitions of what “routine” looks like.

TPEP has been part of our state for the past six years. My district began implementation of the project during the 2013-2014 school year. Our implementation was fairly democratic. A committee of teachers and administrators selected the Danielson Framework. Core principles and beliefs were drafted and a game plan was put into place. At the core of our work was a belief that TPEP was to be a growth model for our teachers; a process by which teachers and administrators are constantly working to refine teaching and learning in and out of the classroom.

As implementation began, we (both teachers and evaluators) quickly found that the Comprehensive model was cumbersome if we wanted to be good stewards of our core beliefs and principles. Because our local union and administration agreed to meet once a month to discuss TPEP-related issues and concerns, teachers asked to make a change to the district TPEP procedure. Beginning in November 2013, teachers on TPEP began meeting once every two weeks with their evaluators. The meetings became a time when teachers could present artifacts and materials to evidence evaluative criteria. Because I chose to be an early adopter, I met with my evaluators once every two weeks from November until April. During that time I was truly challenged. I don’t mean this negatively, whatsoever. I was the one who decided what evidence would be examined and I was the one who began the conversation about how I wanted the evidence scored. This did not mean that I always got my way or that my administrators were pushovers. Instead, I was asked questions and given feedback about my practice in a way that I had not received in the past. If I disagreed with the score, I had an opportunity two weeks later to offer additional evidence. I was able to refine my student growth goals, carefully analyze student success toward those goals, and discuss that success, or lack thereof, with my evaluators. That format, adopted nearly three years ago, remains in place today with some minimal adjustments. It provides teachers with constant feedback. As a result, teachers are encouraged to think differently about their practice. Teachers are now taking risks in engaging learners with new techniques and strategies and seeking assistance from their coach (that’s me!).

Now we are wrapping up our third year on the cycle and transitioning TPEP into our contract. All of our veteran teachers (as well as new teachers) have completed Comprehensive. Although it is no longer feasible for our evaluators to meet once every two weeks with every teacher on Comprehensive, both evaluators in my building set a goal to meet once every three weeks. It doesn’t always happen; after all, parent meetings come up, and teachers or administrators get sick. But I hold firm in my belief that meeting routinely, throughout the school year, is the best way for an evaluator and a teacher to manage this process. Routine meetings offer the opportunity for teachers to talk about their work, show off when things are going well, and ask for help when they aren’t. When the meetings are routine, they become low risk and less stressful, thus leading to genuine conversations about teaching and learning. When the meetings are routine, the final summative assessment at the end of the year isn’t a surprise; instead, it’s confirmation.

But here’s the problem. This isn’t happening everywhere. Teachers in districts across the state tell me that they rarely meet with their evaluator to discuss their practice. Teachers aren’t given the opportunity to routinely reflect and gather feedback about their practice. Danielson (whose model is one of the three approved in the state) points to the fact that routine meetings need to take place in order to see real growth in teaching (Educational Leadership, Vol. 68, No. 4). Many teachers have no idea what final score they will receive until they attend the year-end summative meeting. Quite frankly, this is unacceptable. It is time for teachers to question what “routine” meetings are and to ask that language and practice match intent and goals. A teacher’s summative score should not be a surprise. When teachers feel disconnected from the process and administrators don’t meet with teachers regularly to discuss progress, the entire evaluation process undermines and invalidates the growth-model mindset. What could teaching and learning look like if all teachers benefited from this regular, intentional feedback?

If we ask our students to engage in learning with a growth mindset and we use regular feedback to build reflection for our students, shouldn’t our teacher evaluation system mirror that same practice? I completely understand that TPEP is a lot of work for teachers and evaluators. It’s supposed to be. Accomplished teaching requires constant reflection based on feedback and assessment in order to refine goals and practice. If we expect our teachers to provide feedback to students, shouldn’t we ask the same of our teacher evaluators?

2 thoughts on “Surprised by your summative TPEP score? You shouldn’t be…”

  1. Shari Conditt

    Mark, we’ve looked at your language and taken some ideas from it. Inherently, I like the “natural harvest” idea, but I worry that it can be nebulous for contract language. However, I absolutely recognize the desire to move away from defining exactly what a preponderance of evidence is, as that makes the process feel mechanical.

    I concur: evaluators who are instructional leaders know and practice the feedback/reflection cycle. However, other evaluators create different priorities, which undermines the intent of those who supported and developed the TPEP process. In order to honor the process, we must make this feedback regular.

  2. Mark Gardner

    We’re operating on an MOU (Memo of Understanding, like a Letter of Agreement) that adds in a required “Mid Year Check” conference, where the teacher and evaluator go over the evaluation and establish formative ratings. If the formative rating is P or D, that is an indication that the evaluator has seen a pattern of evidence that is convincing enough that this is the standard mode of operation in the classroom (and no further evidence needs to be gathered). If anything is B, U, or “not yet observed,” then the teacher and evaluator enter into agreements about how to gather artifacts and evidence around that area. This might mean scheduling a series of observations, gathering specific artifacts, etc. Our MOU also goes on the ‘no surprises’ premise, in that any rating established at the mid-year cannot be reduced without there having been a clear pattern of countervailing evidence (and communication with the teacher).

    We’ve really resisted any premise that requires teachers to gather X number of artifacts, because then the focus becomes the X rather than the quality of artifacts…and the model gets reduced to the checking of boxes. We strive for that “natural harvest” or look for “naturally occurring evidence” that emerges as a function of our work, rather than looking for ways to re-digest, re-package, or portfolioize the work we’re already doing for the sake of an evaluation.

    It’s not perfect, and it is time consuming if done well. However, what we give attention to gets cultivated…if we want to cultivate good teaching, we do need to give attention to it.

    All that said, even within the ten buildings in our district there is wide variability in the experience that a teacher has with his/her evaluator, both in terms of the quality of the conversation and the frequency of contact. Our stronger building leaders make the time to keep instructional leadership the core of their work; others struggle to prioritize this work, and this translates to the experience teachers have as the evaluated party. No amount of inter-rater reliability training will equalize the experience between a teacher whose principal intentionally devotes time and energy to evaluation and a teacher with an evaluator for whom it is an afterthought and an unplanned-for burden to be hastily completed at the lowest threshold of the law and contract.
