Reading, Thinking, the Media and the Truth

I teach 9th grade English, so one of my Common Core State Standards reads like this:

Informational Texts: Delineate and evaluate the argument and specific claims in a text, assessing whether the reasoning is valid and the evidence is relevant and sufficient; identify false statements and fallacious reasoning.

I usually focus most on this standard when examining logical fallacies portrayed in advertising as part of my propaganda unit during the teaching of Animal Farm. The kids quickly see the illogical and unsupported claims about toothpastes, beauty products, diet pills and any number of other too-good-to-be-true product pitches. When the validity of the reasoning only takes a moment of critical thought to deconstruct, they get good at it. When claims are presented that "seem" valid at first blush, though, the kids have a hard time decoding the nuance of falsehood behind the presumptive truth.

The route information takes nowadays is more like the game of telephone than ever before: information is stripped, twisted, and de-contextualized until it emerges at the end of the line as a message completely different from the original. Thus, our challenge is not just to help students spot obviously fallacious reasoning, but to help them keep their radar on for the subtle (and, I believe, often intentionally manipulative) misinformation, misguidance, incompleteness, or writerly interpretation that portrays itself as truth and fact.

This was already in my mind when I read this seemingly innocuous passage in an article about teachers:

"Teacher evaluations have been a contentious issue in Washington state, as they have been in other parts of the country, and last year state lawmakers passed a law that for the first time makes firing teachers and principals a potential outcome of a poor evaluation." (Source)

The context of this article is that some teachers in the Seattle school district are "boycotting" MAP testing for a variety of reasons, the thrust being that test scores are now part of the evaluation process (more on this part-truth in a moment!).

My issue with the quote is the latter part: "a law that for the first time makes firing teachers and principals a potential outcome of a poor evaluation." It seems innocuous, but it is simply inaccurate. Firing teachers has always been a potential outcome of a poor evaluation. Otherwise, no wonder the world thinks we have ironclad job security once we reach "tenure" (let's not even start with that word…). The implication is that the only way a teacher would be fired would be for gross misconduct, not for ineffectiveness as an instructor. The fact is, it has always been possible for a teacher's poor performance in the classroom to–with due process–result in his or her termination.

I do not think that this particular article or passage was overtly biased or driven by an agenda. That just means it is one of the first players in the telephone game. The message simply has not yet mutated into its most dangerous form: one unrecognizable when compared to the facts.

The other piece of all this that does concern me is the continuing misunderstanding and misinterpretation about student growth scores and how these impact a teacher's overall summative evaluation under TPEP.

Here's the fact: under the OSPI rules as published on tpep-wa.org, even if a teacher's "Student Growth Impact Rating" is "low," it does not automatically trigger a "basic" or "unsatisfactory" summative score. It does, however, trigger a "Student Growth Inquiry" that requires an administrator to examine, among other things, whether extenuating factors such as student attendance or mobility might have impacted cohort student growth data.

Again: a "low" SGI rating is in no way a direct trigger of a "basic" or "unsatisfactory" summative rating, let alone an automatic firing. The Student Growth Inquiry guarantees a teacher due process in his or her evaluation, and even if the inquiry still reveals "low" growth, there is no automatic penalty. Rather, there is an emphasis on using the inquiry to help the teacher improve. Imagine that.

As I alluded to in my comments under Kristin's first discussion of Seattle's "student growth rating" mess, it is critical that teachers understand a few things:

First, your "Student Growth Impact Rating" is about the growth you create, not about how your students' growth compares to the growth your colleagues achieve with their students. OSPI has set standard ranges for what constitutes "low," "average," and "high" impact. Nowhere in the calculation of those ratings is the relative data of other teachers considered–not even in Student Growth Criterion 8.1, which is about how well the teacher collaborates with colleagues around setting student growth goals.

Second, your "Student Growth Impact Rating" is not just about data. It is about growth, not achievement. Achievement is a student's performance on a single assessment. Growth is a student's change in achievement across at least two points in time (again, OSPI's definitions, not mine; see the .pdf file I linked above). A single state or MAP test is a measure of achievement, not a measure of growth. If such achievement data is used in a teacher's evaluation, I foresee major legal issues.

Last, and in my opinion most significant: there must be a connection between the goals a teacher sets for groups of students and the assessments used to evaluate student growth. Three of the five rubrics used to determine a teacher's growth rating are based upon the quality of the diagnostic and prescriptive goals a teacher sets for his/her students and the quality of the formative and summative assessments chosen to evaluate student progress. This helps ensure that the data ultimately used comes from worthwhile data-gathering instruments (assessments). If a teacher and the students do not know the content of the assessment (a chief complaint in the article about the MAP boycott), then appropriate student-centered goals cannot be set, and the assessment cannot be effectively aligned with the instruction because of the ambiguity of the tested content and skills. The good thing: if a teacher were to get a "low" SGI rating under TPEP because of the use of MAP testing, it would trigger a Student Growth Inquiry, which is charged with examining whether the goals, curriculum, and assessment were aligned. I believe MAP scores would fail that test.

All this started with a single innocuous quote that wasn't intended to misinform. We as teachers need to get in the practice of going straight to the horse's mouth for our information. We need to ignore the media–or at least, if some statement in the media gets a reaction from us, we are obligated to educate ourselves about the facts that were the first utterance in that game of telephone. We're talking about our profession and our livelihood here.

Go to the source–the first utterance. I've linked some above because I'd hope you do the same with my writing, since I'm a player in that game of telephone. I've tried hard to make sure my information matches the first utterance as closely as possible–but I'd rather all teachers go straight to the law, policy, and rules to get the truth unfiltered, in order to have informed discourse about these very important matters.


Two key resources for information about teacher evaluation in Washington State…use these to confirm anything purported as fact. It takes work, but finding the actual truth always does:

3 thoughts on “Reading, Thinking, the Media and the Truth”

  1. Kristin

    This is a great capture and deconstruction (I’m not even sure what I mean by that, but I like it) of the situation.
    The more we react to inaccuracies, like rumors about ed reform agendas or Gates or teacher evaluations, the more time we waste. And time is the one thing a teacher never has enough of.
    I fully support the Garfield teachers, and will be writing a post on the situation in my district. The fact is (and it is accurate) that the MAP test is not aligned with Common Core standards, state standards, or the goals teachers have established for individual students. Therefore, it’s useful as a quick snapshot of where a child is in his ability to read and respond to a whole smorgasbord of items, so that we can see something like, “John needs to increase his vocabulary; John struggles with informative texts,” but it is an unacceptable way to measure whether a child has made progress on the standards.
    As we all learned in Assessment 101 in our TEP programs, an assessment tool needs to be aligned with what was taught.
    A pre-assessment tool can be whatever you want. The MAP is, if anything, a useful pre-assessment tool.

  2. Mark Gardner

    You’re so right, Maren. We have to, have to, have to make sure that any information we hear is actually valid. Whether it is about this evaluation system, school funding, or whether Obamacare is doing what X, Y, and Z political commentators say it is.
    I’ve also found that with students it pays to help them gauge their own reactions to any claim: do they feel like their beliefs are corroborated (“See, I knew it all along!”) or do they feel frightened (“There is no way that can be true!”)? These two reactions demand deeper digging. It makes me think of that silly insurance commercial about “it must be true because it’s on the internet…” and then the woman meets her “French model” boyfriend. It is an odd paradox that in many ways we are simultaneously so trusting (we’ll believe whatever is said or printed as if it were canon law) and so paranoid (don’t trust anyone!) as a culture.

  3. Maren Johnson

    I love this close analysis and going to the source! If, indeed, we can teach our students these skills, then we have achieved something. Modeling these reasoning and critical thinking skills as teachers is clearly vital–it is difficult to teach what we do not practice.
