Educator effectiveness is where it’s at right now in Washington state. Student teachers are currently filming themselves and analyzing student learning for the edTPA (teacher performance assessment). We have a challenging ProTeach evidence-based assessment for teachers trying to earn their professional certificate. Approximately 13% of the teachers in our state are National Board certified. In addition to all of this, we have a new teacher/principal evaluation system that is currently being piloted and will go into effect next school year.
Against the backdrop of all these educator effectiveness programs, last week Chad Aldeman, with an organization named Education Sector, released a report titled “The Evergreen Effect: Washington’s Poor Evaluation System Revealed.” You can read a short summary blog post or the full report. At a time when teachers and administrators across our state are working hard to get a new evaluation system up and running for next year, such a report deserves a closer look.
Mr. Aldeman starts by painting a picture of five elementary schools in Pasco. He describes how students at these schools perform poorly on state tests while their teachers are almost all evaluated as satisfactory. My fellow blogger Tom White wrote more about this. What does Aldeman not mention? These particular schools in Pasco have 50-70% of their students learning English, some of the highest percentages of English language learners in the state. Our state tests are given exclusively in English, so students who do not speak English are clearly at a huge disadvantage. Giving teachers poor evaluations because their English-learning students do not perform well on tests given in English is not going to improve student learning!
Aldeman adds that the Pasco superintendent declined to seek federal school improvement money. Well, the superintendent was faced with a decision. A condition of accepting the federal money was school turnaround: either the principals and half the teachers would need to be fired, or the schools would need to be closed and the students redistributed to other schools. Neither option would improve student learning in these schools, because, again, the low test scores seemed largely to reflect English language learners performing poorly on a test given in English.
Aldeman’s report, “Washington’s Poor Evaluation System Revealed,” carries the rather bizarre heading “Charts you can trust.” One of these charts is a bubbly word-cloud affair; you can see it near the top of this blog post. Aldeman looked at the labels districts use for teacher performance levels across the state and grouped like terms together. The size of each bubble shows the relative frequency with which the label appears. Aldeman says, “The first thing to note is that…about half as many categories describe poor performance as describe good performance. These lists suggest that districts have a tough time even talking about unsatisfactory or ineffective performance.” Charting the frequency of performance-level labels is not an effective way to draw conclusions about district conversations. In addition, Aldeman’s interpretation neglects the fact that many districts used a three-tier system (below competency, meets competency, and exceeds competency), which would directly explain having more positive than negative labels. Districts with a four-tier system may have placed the “passing” cut score between levels 1 and 2, instead of between levels 2 and 3, again resulting in more positive than negative terms. It is also important to note that Aldeman’s critique here targets Washington state’s old evaluation system, which everyone involved is now leaving behind and replacing with something new!
Aldeman does mention our state’s new TPEP evaluation system. He states, “As districts start implementing their new evaluations, they can learn a lesson from other states that have been early adopters of new systems: truly meaningful improvement will require more than just tweaking requirements.” Tweak? TWEAK? Washington state’s new teacher/principal evaluation program is hardly a tweak of our old evaluation model. To characterize the changes as a “tweak” shows a serious misunderstanding of the legislation. The new system is not even an overhaul of the old one: the previous evaluation system has been completely eliminated and replaced with something new.
Aldeman adds, “Merely tweaking old evaluation systems is not sufficient to change a culture that doesn’t value performance.” Referring to the Washington state education community as a culture that doesn’t value performance is a mischaracterization: from the beginning, organizations representing teachers, parents, administrators, school board members, and state agencies have all been heavily involved in evaluation reform.
Even outside of evaluation reform, the actions of these Washington organizations show a collective commitment to educator effectiveness: Washington state is a leader in piloting the edTPA for student teachers; ProTeach is a robust certification program with associated professional development; and Washington ranked second in the nation this year for the number of new National Board certified teachers. Washington state has a strong, multi-faceted educator effectiveness system, and with the new evaluation rollout next year, that system is continuing to improve.
Hi, thanks for reading and engaging with my work. I posted a follow-up piece addressing some (but not all) of your concerns at: http://www.quickanded.com/2013/04/educators-on-the-evergreen-effect.html
Kristin, “Magical unicorn science” perfectly describes the science going on behind that bubble poster. That’s some great terminology!
Those are some interesting thoughts on ProTeach. Prior to ProTeach, we had ProCert, which seemed to receive negative reviews all around. Prior to ProCert, we had nothing but the clock hour system. Do we need a professional certificate program? If we do need a professional certificate program, is ProTeach the right approach? Could we have a more rigorous entry level teaching certificate and then eliminate ProTeach, or replace it with some sort of extended mentoring or teacher residency program? Maybe some ideas here for another blog post.
Heather, thanks for commenting. It’s great to hear firsthand experience from someone who previously taught in Pasco and currently works in a school with a high percentage of ELL students.
I’ve got to disagree about ProTeach. I just watched my husband wade through it, and it’s a lot of busy work.
Your post perfectly highlights one of the biggest, and most damaging, tendencies in EdReform right now. The focus is on data, and that’s good, but when people get so entrenched in data that they’re gathering words and making word bubble posters and then using that as evidence for policy, it’s just magical unicorn science.
The very best thing data has given us is an indisputable picture of inequity and the effects racism and poverty have on a child’s academic performance.
To continue with magical unicorn science and try to turn something very organic and complex, like what happens between a science teacher and 32 kids in Pasco, and how that differs from what happens between a reading teacher and her kids in Seattle, into a 500-word report is more than just inaccurate; it’s damaging.
Student advocacy groups don’t have the bodies or the time to visit schools that represent a broad swath of our districts and populations. They look at the numbers. They look at reports that put Pasco and Seattle in the same context. Then they push for policy.
Sometimes this works. Some great changes are being made to policy (our new, more meaningful evaluation system among them), but too often the numbers seem to indicate that the same strategies will work with every kid, and that’s just not true. Teachers know that.
I would like to see the great movement to serve kids start to include the classroom as one part of a network of settings that creates a child ready for career and college. Where’s the pretty bubble map for that?
Thank you for defending ELL students and pointing out that it takes time and resources for English learners to acquire English and to meet standard on tests given in English. As a National Board Certified Teacher who currently works in a school with 50% ELLs and nearly 90% free or reduced lunch, I see strong evidence of what you stated: our scores are not stellar, but there are complex issues around WHY they are the way they are. Staff dedication is high: we have six NBCTs on staff and two teachers with doctorates. State tests do not show the whole picture of student learning, though they are one measure. Also, I spent the first five years of my teaching career in Pasco. The district is exemplary in proactively seeking training for its teachers, and I was really fortunate to start out there, where I gained a solid base for my teaching.
Tweak!! I had the exact same reaction to that word. I know it isn’t good to let one word choice undermine an entire argument, but man oh man, if his research suggests that the new system is just “tweaking” the old one, it makes me question the veracity of every other claim he offers, even the ones I can see a shadow of merit in.
We’re not just going from unsat/sat to four tiers; each of those tiers is accompanied by detailed performance-level descriptors (47 pages’ worth, in my framework), whereas the previous system in law was, literally, a seven-point bulleted list, with each bullet not even containing a complete clause. Going from seven bulleted points to 47 pages of language describing different levels of teaching performance is hardly a mere “tweak.”
A little investigation into Mr. Aldeman’s institute, Education Sector: it calls itself an “independent think tank,” a category that pretty much includes whatever you want it to. But in trying to divine just where their “independence” leads, I think it is telling that their interim CEO is John Chubb, the founder of EdisonLearning, a company that pioneered the concept of charter schools. We are all judged by the company we keep, so maybe it is unfair to suspect that they would like to see public education look as bad as possible so that the private sector might step in. But I have to agree with drpezz in the comment above.
And I would also ask Mr. Aldeman: what is considered a reasonable level of incompetence? And how much incompetence is allowed in charter schools in general? The studies I have seen suggest that there is plenty to go around in the charter school industry.
And good job dissecting the details of Mr. Aldeman’s examples; reality is usually more complicated than someone simply trying to make a point would like to admit.
He reveals his bias with this line: “systems must recognize and reward high-performers and identify low-performers.” He is obviously pushing a preconceived conclusion and is now looking for evidence to justify it.