SB 5895: The D-Word

By Mark

Teacher evaluation is back on the radar. Senate Bill 5895 is due to be heard by the House Education Committee on February 16th (CSTP has produced a summary for quick review, but the whole bill is linked above).

One of the sticking points for me–of which there are often many in any policy–has to do with the provision that at least three of the eight dimensions on which I am to be evaluated must be supported with student growth data.

There's that d-word again. 

Luckily, I found language in the bill clarifying "student growth data":

(below, excerpted from Senate Bill 5895 p. 3, lines 24-35… or subsection "f" of section two. I hope that letter isn't some sort of foreshadowing of my evaluation.)

Student growth data that is relevant to the teacher and subject matter must be a factor in the evaluation process and must be based on multiple measures that can include classroom-based, school-based, district-based, and state-based tools. Student growth data elements may include the teacher's performance as a member of a grade-level, subject matter, or other instructional team within a school when the use of this data is relevant and appropriate. Student growth data elements may also include the teacher's performance as a member of the overall instructional team of a school when use of this data is relevant and appropriate. As used in this subsection, "student growth" means the change in student achievement between two points in time.

I'm no pro at decoding legislative language, but if I'm reading this right, then a key part of my evaluation will be whether I can demonstrate that my students have learned my content or skills over the course of their time with me…and that there is some flexibility in how that data is gathered.

Sounds good and reasonable to me. I particularly like that there is openness about what that assessment of growth can look like–even classroom-based!–thus enabling freedom of interpretation toward what I hope are fair assessments of student learning at all grade levels and in all subject areas.

One could argue that ambiguity and freedom of interpretation can lead to problems. I'd take ambiguity and adaptability to different contexts over the Cult of Sameness any day. 

Here and here are some of my past thoughts about teacher evaluation.

 

15 thoughts on “SB 5895: The D-Word”

  1. Kristin

    I’m with you on reading the language to mean there are different ways to show student growth.
    This allows us to move the conversation away from the “what about teachers of untested subjects?” refrain.
    And I agree with Tom. If a teacher can’t provide evidence that her students have learned the content, she’s not organized enough or skilled enough.

  2. Mark

Drpezz, no worries. I understand your reticence about student data in three categories, but I was much relieved by the explanation given about how data is to be used. It is more about how the teacher him- or herself uses data, not just whether scores go up enough.
    My biggest worry, still, is about administrators. They will not have the time, and let’s face it, some will not have the expertise in classroom instruction. There’s some real training, capacity building, and paradigm shifting needed for some administrators.

  3. drpezz

    Hi Mark,
    Apparently, the legislature took out the single scoring requirement. When I double checked it, I saw that it had disappeared. My bad.
    (I still don’t like the requirement of student growth in three criteria.)

  4. Mark

That’s interesting. Where are you getting that information? Like I mentioned, I’m part of a TPEP regional implementation workgroup and I have seen nothing about OSPI setting percentages–in fact, if anything I’ve seen a remarkable amount of local control in terms of assessment, evaluation, and rubrics. The assignment of the teachers’ summative evaluation (overall score 1-4) is entirely up to the supervising administrator, who uses the rubrics based upon the frameworks.
From what I have been told and what I have read, there is no obligation that quantifiable data be used…if that is the case it would be a district decision. Classroom-based data (point-to-point comparisons of students’ effectiveness in writing an introductory paragraph, for example) can be used to discuss growth even if it is not quantified. It was made very clear to us today in our meeting that the data is not what the evaluator is to look at–what the evaluator is to look at is what the teacher does with the data he/she gathers about the learners in question.
I guess I don’t know what the “percentages” are that you’re referring to (that OSPI would have control over). There isn’t an aspect of the evaluation that involves percentages, per se. Are you talking about the summative scores in each of the eight criteria being weighted differently? Again, the current legislation outlines that local control is involved there: for provisional/probationary employees (and all employees once every four years) all eight criteria are evaluated (the “long form” equivalent), and teachers not in probationary/provisional status may be evaluated primarily on as few as one of the eight criteria. There are details of this in the CSTP brief linked in the post above (see page two: Comprehensive vs. Focused process).

  5. drpezz

The four-point summative rating is the same, but OSPI is being directed to determine the scoring–range-finding, averaging, percentages, etc.–required to achieve the four-point summative rating (even though the models are not the same).

  6. Mark

@drpezz: I didn’t see that; I’ll ask today for clarification. In the most recent info I received, the four-point scale is the same no matter the model, but the language of each of the descriptors in each point of the scale differs based on the framework selected. I’ll see what I can figure out.

  7. Mark

    drpezz–about the single scoring method: are you talking about the three different frameworks? My understanding was that each carries its own scoring rubric. But, I’m sure I’ll learn more tomorrow, as I sit in a TPEP RIG workgroup session from 9am-3pm… any questions anyone would like me to ask?

  8. drpezz

I’m not necessarily opposed to “student growth data” being included; however, I do have two issues with this bill regarding the growth data.
    The mandate of three of the eight criteria having a “substantial” weight applied to them is arbitrary, and student growth data may not fit with numerous criteria.
    While this is better than attaching a state test score, it’s still an arbitrarily parochial decision and does nothing to analyze the results of the pilot districts (which can’t produce results until June when the year’s pilot finishes).
    Plus, having a single scoring method while using three different models is ludicrous. Each model should have its own scoring method because the models are not the same. Now, looking to see if each model and scoring system produces the same summative score is a good idea, but this too has not been considered.

  9. Tom

    I think I’m with you on this one, Mark. It seems to me that if a teacher can’t find evidence that students are learning, then they’re either not looking hard enough, or not teaching hard enough.
Furthermore, as a union member, I welcome the opportunity for the union to get away from its role as representative for teachers in trouble. This bill clarifies and simplifies the process by which a principal can fire an unproductive teacher.

  10. Tamara

If the student growth data were to be akin to the NB portfolio, it would have to be an abbreviated version. Especially if looking at ALL students. Given the time it took in NB just to look at two….

  11. Mark

    At one meeting someone did chime in with the comparison to the NB portfolio. I think that one worry I have is the amount of time necessary to compile data, and what will be acceptable in terms of presentation of that data. We don’t have the time to re-do national boards every year, or to do that scope of data collection and analysis, so there’s got to be some happy medium.

  12. Tamara

If the student growth data can be viewed through a lens like the one used in the NBCPT portfolio (hey! no wheel to reinvent!) I think this new evaluation model has potential to be a tool for genuine professional development. Sure, ambiguity can be scary and a breeding ground for problems. But if everyone vested can approach it from a place of positive intent and keep the focus on using evaluation as a tool to promote growth, I’m with you Mark: it beats that cult of sameness (read satisfactory). A little Pollyanna? Maybe. But we can’t get to the place of positive intent if we remain mired down in suspicion and cynicism (which has been a hard place not to get stuck in lately).

  13. Mark

    Ryan, are you our first troll in a while?
    First off, we try to have reasoned discourse here on this site, even when we do disagree.
    Second, please elaborate, Ryan. If you read it differently, I’m open to hearing your informed interpretation.

  14. Ryan

    “I’m no pro at decoding legislative language, but if I’m reading this right, then a key part of my evaluation will be whether I can demonstrate that my students have learned my content or skills over the course of their time with me…and that there is some flexibility in how that data is gathered.”
    Sucker.
