This summer I saw something very similar to the chart you see here. A principal was explaining how she gathered and analyzed student data, and how that data drove her administrative decisions.
You can see, if you click on the chart, that Teacher B didn't do so hot. Because a teacher's reputation precedes him, what's going to happen if parents find out about Teacher B's scores? Will they request a class change? Will they complain? Test scores are scary for teachers because they don't tell the whole story, but they tell an important part of the story.
You see that Teacher B's students lost ground in reading and math between the spring of their third grade year and that of their fourth, while Teachers A and C had students who improved.
When I sit down to have my PG&E post-conference and my administrator discusses student growth with me, she's expecting (and hoping) that I will be like Teachers A and C.
We can offer explanations for Teacher B – more kids in poverty, disruptive kids, kids who spent the summer in front of the TV – but no. The demographics of the three classrooms were the same. The difference was that Teacher B, while he had "the most loving classroom in the school," was also teaching at more of a third grade level than a fourth grade level. The principal examined the situation, made a diagnosis, moved the teacher to third grade, and plans to support him in the upcoming year.
I think this is impressive. As teachers we worry about tests and how the data can be disconnected from the human value of what we're doing with our students, but if the data is never examined, maybe we're not going to reach our potential as educators. If our administrators aren't trained to problem-solve once the data reveals weak spots, then someone who might be an outstanding third grade teacher is lost just because he wasn't very good at fourth grade.
Life skills matter, but so do academics. This study was done by a group of economists who were at first skeptical about "value-added data," but once they started looking they found that "the value-added scores consistently identified some teachers as better than others, even if individual teachers’ value-added scores varied from year to year." Teachers who consistently moved their students forward from September to June had a lasting positive impact on pregnancy rates, college completion, and earnings.
How do we continue to make sure that when student data is used, it's used in a way that improves student outcomes and strengthens a teacher's practice? How do we ensure that the data accurately measures a student's growth, and not a parent's tutoring, or a summer's enrichment?
This represents a good use of data by an administrator: the data caught her eye and compelled her to drill a little deeper, which is when she found that he was teaching fourth graders at a third grade level. Hopefully this will lead to a situation in which “the most loving classroom in the school” also has kids who are learning.
Janette, that’s a great question. I didn’t ask for clarification during the presentation, but as someone who has moved from 10th to 9th and back again, and then to 7th, and now to 6th, 7th and 8th, I think that regardless of standards there is a way of scaffolding and approaching students that varies based on their maturity and development.
Last year, for example, my first year back with 7th graders after years teaching high school, a mother emailed me with some great constructive criticism. Basically, what I considered to be high standards for my students she considered inappropriate for 7th graders, and she was right. I was moving too fast and not explaining enough, so I took a step back.
My guess is that the teacher on the graph was simply a better fit for a younger group of students. Maybe there are deep problems with interpreting standards and scaffolding instruction, but I think the principal wanted to see if she could support her teacher rather than simply begin the firing process, and thought third grade would be a good place to start.
I’m having trouble figuring out what it means to “teach at a third grade level,” and how changing to third grade will really help. Does the teacher not know the 4th grade standards? Does he have low expectations of students? If so, wouldn’t they just be lower when he moves down a grade, resulting in teaching ‘at a second grade level’?
I agree with Rob that I really hope there was more going into this than just looking at one year of a bar graph. I don’t think test scores are scary, but I think the misuse of them is terrifying.
For me, the most important part of this data is that it tells a part of the story. Hopefully the principal has been evaluating this teacher regularly and saw evidence that he/she was “teaching at more of a third grade level,” and that this notion was supported by student growth data (when that data became available).
But, as you rightly point out, there are other factors that could explain those scores. So, it is important that this student growth data be the beginning of a conversation and part of the overall picture. But this picture is complex. I fear an overly simplistic use of this data.
I also fear a reactionary and thoughtless use of test data in teacher evaluations, even though I know the vast majority of principals would have a response similar to this principal’s.
I completely agree that data points should be two or three assessments between September and June. The spring-to-spring state assessments aren’t as meaningful, unless (as I believe was the case in the situation represented by the graph) there were other flags of concern or the test data showed this pattern year after year.
My biggest concern with student test data, though, is that we’ve gotten to point W with using data to evaluate a teacher’s impact, but we’re still at point B with our assessments. The MAP test is hampered by its technology requirements: at my school it’s not uncommon for a child’s computer to crash and for him to be moved to an alternate test room to finish his test. The MSP and HSPE are not as meaningful as they could be when it comes to measuring a teacher’s impact.
As an LA teacher, I had 50 minutes a day to prepare students for two major tests. In fact, I had 50 minutes a day to prepare students for three days of testing, one for reading and two for writing.
Math teachers, on the other hand, had 50 minutes a day to prepare students for one test.
Is that taken into account when growth is measured?
I still think it would be more compelling to consider growth data from two points within this teacher’s experience with the students. That said, the data appears to have ultimately been used properly to match the teacher with the grade level where his dispositions and habits better fit the developmental stage of the children.
As it appears student data will play an ever-increasing role in our state over the next few years in high-stakes decisions like teacher placement, non-renewals, and RIFs, the questions you ask in your last paragraph become ever more important.
How do we determine what role the teacher has played in the student data? Answering this question will help teachers better inform their instruction to improve student learning, and will help us avoid human resource decisions based on factors beyond a teacher’s control.