I am no fan of standardized testing, so one of the greatest strengths of our teacher evaluation system in Washington state is that it empowers teachers to use classroom-based assessments (and data) to illustrate how they are fostering growth in students’ knowledge and skills. Closer to the kid always is the way to go, so using an assessment I designed or chose with my learners in mind will always (for me) trump using a corporate product, no matter how “standards aligned” it might claim to be.
One problem that I am seeing throughout my district and in other districts whose staff I support through WEA is that with all the many moving parts of teaching, student growth examination often ends up falling into the realm of “whatever is easiest to count.” Further, as I engage in deeper conversations with teachers who go this route, the whole student growth process becomes an exercise in compliance and thus a box to be checked.
When this occurs, we’re choosing to waste our own time.
I have worked with teachers for several years now to design and implement student growth goals, and some patterns are starting to emerge:
What teachers are doing that works:
- They choose a cognitively complex, long-term skill they will be teaching and assessing anyway. For example, instead of focusing on the (albeit important) standard about mental addition of 10 and 100, teachers are focusing on how students grow in their ability to communicate their reasoning process using multiple representations… an idea that appears in multiple standards, shows up again and again, and is a skill they explicitly teach.
- They use proficiency rubrics or scales that describe a change in cognitive skill, not an increase in speed, accuracy or quantity. When working with teachers, I try to stress that growth should be about what students can do with what they know rather than just what they know. A proficiency scale that describes increasing levels of cognitive complexity, when used as the yardstick for monitoring growth, is often the missing piece that moves student growth from feeling like a hoop to feeling like it matters.
- They engage their students with the data. This looks different in a kindergarten class than in my Senior English class, but it is possible with all learners. When teachers help a student understand what the long term goals are, what progress toward those goals looks like (other than just “getting more right”), and where he or she is in personal progress toward that goal, the data suddenly means something.
- They choose something they care about. This is hard when administrators arbitrarily assign teachers to certain goals (a practice that I strongly discourage). Teachers who care about kids’ math problem-solving skills or their ability to defend claims with valid arguments will naturally want to spend time developing those skills. It helps make student growth exploration part of the work, not an extra.
When it’s not working, it is often because:
- The skills being measured are actually separate from the skills being taught. The two places I see this tend to be fluency (higher accuracy in shorter time) and memorization of facts (correctly memorizing dates and places in one history unit, then comparison of correct memorization of dates and places in the next unit). With the math fluency example, while it is true that fluency (in terms of accuracy and speed) has value, rarely is it explicitly taught. Typically, the growth in fluency comes from repeated opportunities for practice. Rarely can the teachers I work with directly point to a series of lessons and articulate how those lessons specifically promote higher accuracy in a shorter time. What is taught, though, is a different kind of fluency: the fluency with which a student can apply skills to different contexts. As for memorization, this goes back to the question about what kids do with what they learn. Few teachers get super excited about their students’ memorization skills. Their critical thinking and problem-solving using what they’ve memorized? That’s a different kind of celebration, and growth worthy of exploring.
- The skills assessed tend toward lower cognitive demand. This ties to the memorization example above. Teachers often express to me that they feel dissatisfied when their goals are low on Bloom’s (hovering in recall or identify) or shallow in Webb’s DOK (staying with memorized or regurgitated information). Part of the dissatisfaction is that such levels of engagement with content are ephemeral: unless that knowledge gets used somehow, it gets left behind.
- The data is gathered to send out of the cycle, rather than into the cycle. When a teacher feels like he or she is doing this work “to show the principal,” we’re missing the boat. Ideally during the evaluation process we’re giving our principal a peek into the work we do for kids, as opposed to doing work for the principal…and this is a choice we as teachers make. As we gather assessment data, it should inform us about what instructional moves to make next and it should inform kids about their progress. The purpose should be to enrich teaching and learning, not to build a dog and pony show for the boss. (And guess what, when we use data to enrich teaching and learning, no dogs or ponies are needed… the “show” is in the work we’re already doing.)
- Assessments are designed to gather “data,” then shoehorned inauthentically into the school day. This often seems to happen when a teacher feels compelled to adopt the same goal as his or her PLC (a practice I also have serious issues with). Ultimately, the student growth “data” should be drawn from formative assessments that are already occurring as part of the teaching-and-learning process.
By the time I shifted from the classroom into my full-time mentorship role this school year, I had spent four years in my own teaching practice wrestling with this student growth process. The first year or two, I did every one of the things under the “when it’s not working” list above (and more, essentially working very hard to get in my own way). It wasn’t until two years ago that I found systems I liked, and it wasn’t until last year that I really felt like this examination of student growth had become a practice so powerful for students and so informative for me that I can no longer imagine doing my job without it…regardless of what the law says.
My own journey was a multi-year one. I think some administrators may expect teachers to have it all figured out their first year, but it’s a shift in practice and it requires screwing up a few times before it really becomes a valuable practice. We also need to give ourselves time to figure it out and make it matter to our work, rather than persist in compliance mode.