Would Value-Added be More Fair?

by Brian

About a year ago I wrote a post on the idea of using "value-added" as a tool in teacher evaluation. The Seattle Times weighed in recently with an editorial endorsing it and encouraging "retrograde union leaders" to quit opposing attempts to link teacher evaluations to student learning. As a local union leader I cringe at being called retrograde, but I'm getting used to the Times' anti-union bias. I am not opposed to looking at student progress as part of an evaluation system; that makes sense. What I do think is important is that the weight placed on any test score used for evaluative purposes be commensurate with our confidence in the reliability of that test.

In my high school last year, 84% of the students met standard on the Reading HSPE, 91% passed Writing, 42% passed the Math portion, and 43% passed Science. In Reading and Writing our students did significantly better than the state average; in Math and Science we did slightly worse. But look at those numbers. Is it really reasonable to believe that the same students who do so well in Reading and Writing are so terrible in Math and Science? Or that the language arts teachers in this state are far and away better teachers than their colleagues in math and science? Is it possible that the tests might not be fair? Isn't it possible that the bar has been set at the right level for Reading and Writing, and far too high for Math and Science?

So until the powers-that-be address the fairness of the Math and Science tests, and they must, wouldn't it be more fair to use some kind of assessment that could determine whether my students actually knew more in June than they did in September? I love the reading tests given in elementary schools, the ones that tell us, to the nearest tenth, the grade level at which a child is reading. If a fourth grade teacher gets a student who reads at level 2.1 in September and at level 3.8 in June, that teacher has worked wonders! The child has made up almost two grade levels in a single year. That's some good teaching, even if the student is still below grade level.

But in my high school math classroom I can work mightily to remediate a student who didn't know how to add fractions or multiply integers in September, and succeed beyond my wildest expectations, only to be told that he still didn't pass the End-of-Course exam in June. That leads to cognitive dissonance.

So I agree with Kristin. If we're going to test, and we should, let's really measure how much our students are learning every year. There are still a lot of variables outside my control, like home life, parental support, nutrition, and motivation. But I'll take my chances. If you give me a room full of students, measure what they know in September and what they know in June, and decide my fate based on the results, I can live with that. Just don't judge the kids, or me, with tests that are so patently unfair.
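To make the September-to-June idea concrete, here is a minimal sketch of the growth arithmetic I have in mind. The student names and scores below are made up, and a real value-added model would adjust for the outside variables listed above; this only shows the basic idea of judging gains rather than a single pass/fail bar.

```python
# Illustrative sketch only: made-up scores, not any state's actual
# value-added model (real models adjust for student-level factors).

# Hypothetical September and June reading levels for one class.
september = {"Ana": 2.1, "Ben": 3.0, "Cal": 2.6, "Dee": 3.4}
june = {"Ana": 3.8, "Ben": 3.7, "Cal": 3.5, "Dee": 4.1}

# Per-student growth: what each child gained over the year,
# regardless of whether they reached "grade level."
growth = {name: june[name] - september[name] for name in september}

# A simple class-average growth figure a teacher could be judged on.
average_growth = sum(growth.values()) / len(growth)

for name, gain in growth.items():
    print(f"{name}: {gain:+.1f} grade levels")
print(f"Class average growth: {average_growth:+.1f} grade levels")
```

Note that in this made-up roster Ana is still below grade level in June, yet her +1.7 is the biggest gain in the room. A growth number captures the teaching that happened; a pass/fail cut score would miss it entirely.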


3 thoughts on “Would Value-Added be More Fair?”

  1. Mark

    David, I’ve never really thought about language arts from that perspective. Students are constantly practicing reading and writing… and to bolster your point, I can always tell which of my students have a certain history teacher by the pace at which they pick up a certain mode of writing. I credit that teacher, not my own instruction (or perhaps that teacher in harmony with my own instruction), for that rapid development.

  2. David B. Cohen

    I wonder if there’s a difference between math and English teachers on this topic. You might start the year confident that none of your students have been doing any trigonometry in their free time, and to whatever extent they master SohCahToa, you’re the main reason. As I’ve written repeatedly in blogs and articles, the tests barely cover what I’m supposed to teach, and they cover the skills that my students actually practice in most of their classes and even outside of school. Then, try to factor in those who have support classes and those who don’t, and every other significant difference, and I’m really not convinced that any standardized test I’ve ever seen or heard of has any legitimate place in my evaluation. Those tests are mostly designed to look at students at a school, district, or state level. I don’t think there’s persuasive evidence that the sample sizes of test questions or students are going to permit valid inferences regarding my effectiveness.

  3. Mark

    I wonder when the powers that be will realize that beginning-of-course and end-of-course assessments (or some other kind of longitudinal data within a given school year) are the only way to show (1) growth in student skills and learning and (2) an individual teacher’s actual impact on student learning.
