Misusing Data

By Mark

I teach high school English. At our inservice meetings this past week, last spring's HSPE scores were unveiled. Our 10th graders passed the reading HSPE at a rate of 91.7%, above the state average of 85.1%. Bolstering our pride even more, 75.3% of our 474 tested sophomores earned an L4 score, the highest bracket of scores. Out of all 474 students, only six scored L1 ("well below standard"). While we certainly still need to keep finding ways to support those kids who don't yet have skills up to standard, those numbers are pretty good. Data doesn't lie, right?

Something to celebrate, right?

Nope. The data, when read properly, actually proves that we failed. We failed miserably.

My school failed to meet AYP. We are hanging in Step 1 of Improvement because of some struggles that our math department had the last few years (despite having above-state-average scores and making improvements as the state shuffled through three different incarnations of assessment), and though the math folks met their AYP goal this year, we in my department did not.

We missed the mark in one cell: Special Education Reading.

In that cell, we tested 34 students. Eleven earned an L2 score and only three earned an L1. Excluding the one "no score" test (which is what OSPI apparently did in the chart I looked at), this all means our Special Education reading pass rate was 57.6%. The state average in Special Education reading was 51.6%.
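For what it's worth, here's a minimal sketch (in Python) of how that 57.6% seems to fall out of the numbers above, assuming L1 and L2 both count as "not meeting standard" and the single "no score" test is excluded the way OSPI's chart appears to exclude it:

```python
# Minimal sketch of the pass-rate arithmetic, using the counts given above.
# Assumption: L1 and L2 are below standard; the one "no score" is excluded.
tested = 34
no_score = 1
below_standard = 11 + 3            # L2 scores + L1 scores

counted = tested - no_score        # 33 students counted toward the rate
passed = counted - below_standard  # 19 students at L3 or L4

pass_rate = passed / counted
print(f"{pass_rate:.1%}")          # -> 57.6%
```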

The "problem" is that the previous year's Special Education students passed reading at a rate of 73.0%. The year before that: 81.8%.

They were different kids, with different strengths and skill needs. Yet we are forced to measure our "progress," and therefore our "success," by comparing completely different groups of learners against a seemingly arbitrary upward-trajectory "uniform bar" on some PowerPoint graph. To be honest, I don't totally understand it all. Nonetheless, according to the charts on the OSPI website, the data proves it: we failed.

This last year's group of sophomores brought unique challenges compared to the previous year's tested kids. Simply put, they were a different group of kids. We as educators get that…we understand that each grade band seems to have its own identity as it travels up the ranks. The class of 2009, for one, is legendary in our particular district, and you can see the wave of score dips in all the pretty charts where that class, with its unique and increased needs, moved its way up.

But back to the 34 tested students in question–in particular the 14 we apparently left behind. When I go back in time and look at this same cohort's scores in middle school, that same cell had a pass rate on the WASL (yep, it wasn't even the MSP yet) of 38.5%, and likewise our middle schools failed to meet AYP the year those kids were tested in the 7th grade. I know my middle school counterparts, and I know they work very hard…by no means does that red NO in the OSPI School Report Card that year make me think those teachers weren't doing their jobs. Rather, I wonder at the work they must have done to even get those kids to passing at that 38.5%.

We took that cohort of kids (granted, there was probably some change in who constituted that group, though we're not a particularly transient district) and raised their collective scores from 38.5% passing to 57.6% passing.

But, since we're not allowed to compare apples to apples, that growth is immaterial to the powers that be beyond our district. No, we need to make sure we are reading our data properly. Our "failure" of those 14 students…14 out of all 474 tested…is what now defines us. The data does not lie.

So, we begin our year labeled a failure.

8 thoughts on “Misusing Data”

  1. Annette

That’s a really good point, Tom. But at the high school level, many of our SPED students take the HSPE test at their specific grade level. It is based on what their IEP says.
    I am not familiar with middle schools, so I cannot speak to their practices.

  2. Tom

    Maybe it’s just me, but if a high school special education student DOES pass the MSP, why should that student be in special education?

  3. Annette

We too failed AYP in a cell. It is a discouraging way to begin the school year, because when we looked at the trends in our Writing, Reading, and Math scores, they were all positive. However, that just isn’t good enough.

  4. Stephanie

    Kentucky has just this year switched to an assessment system that measures growth in addition to achievement. I can’t tell you how excited teachers are for the opportunity to compare “apples to apples” and to see the impact they made on each and every student.

  5. DrPezz

    We missed making AYP by 5 students (who needed to pass math in the Hispanic cell). Now the progress we need to make is near impossible. We’ll be labeled a failure in perpetuity.

  6. Mark

    The numbers mean nothing, and we know it, but they are way too easy a tool for those outside our classrooms. I am anti-data… that is, this kind of “big” data. However, I’m always collecting both quantitative and qualitative data on my students to track their learning and progress in my own classroom. At the classroom level, maybe even within a PLC or something, that’s useful. However, when we telescope it out to these broad, sweeping numbers, we immediately get disconnected from reality.
It is so frustrating, because these numbers become “reality” to non-educators, and a convenient way to justify their claims that we are not doing our jobs. We are. I know my colleagues, I know my students. This data is an insult to the blood, sweat, and tears invested by students and teachers alike, and is a further insult to the REAL GROWTH that actually did happen when you look at individual students over time.

  7. Kristin

    This system drives me crazy. Washington has been doing this for how long, fifteen years?
I used to think that testing a child in September, January and June would be a good indicator of a teacher’s impact, but experiences last year made me rethink that. Until the test means something to the students – something more than, “these results will affect my teacher’s and school’s score” – the tests aren’t reliable. And that’s for the tests that are given at the right times.
    The school report card system you describe is so ludicrous I feel like I’m in a dreamworld, trying to make sense of nonsense. Comparing the class of 2013 to the class of 2012, and using that comparison to decide if good teaching has happened – who thinks that makes sense? Obviously, people who don’t work with kids and who earn a lot more than those who do.
The Department of Education demands numbers. OSPI shops for some numbers and publishes them on fancy, color-coded pages, and someone feels justified in determining the effectiveness of a teacher or a school, even though the numbers MEAN NOTHING.
    I think the data can reveal things within a class group – if one population is doing very well while another population is barely literate, that information should encourage a school to reallocate resources. Olympia doesn’t seem to look at that information, though.
    While OSPI mucks around taping flies to a sheet of paper and calling them fairies, there are people who are doing the time-intensive work necessary to use data well. Mercer Middle School in Seattle puts in the time required to track the progress of individual children, and can show that one child made this much progress in math, and another made this much progress in reading. As well, the whole school culture has helped kids invest in tracking their own progress, so I think it’s safe to say kids are doing serious work on the tests. Not all buildings have that, and not all buildings invest the resources to track the progress of individual children.
    And anyway, AYP doesn’t reward that individual growth. It’s too much trouble to look for it.
