I have a confession to make. For most of my teaching career, I've drawn lines in the sand, jumped on soapboxes, and in some cases thrown time-out-worthy temper tantrums about data. My students cannot be reduced to numbers. What do you want me to do, count the number of adjectives they use in an essay to show their performance? Reading and writing are both so very complex that they cannot be reduced to a string of numbers.
That's not the confession. The confession is this: I have reduced my students to a series of numbers. Not just numbers: color-coded ones in an Excel spreadsheet. And (deep breath), I like it. It has actually made me a better teacher for them.
It took me walking my talk and being willing to try new things in my classroom–and the new thing I've been playing with over the last two years is the idea of "proficiency scales," which are like scoring rubrics, but actually useful. The idea is simple: describe what performance is present (not what is missing) when a student has emerging proficiency, what is present as that proficiency develops, and so on. This is not a checklist, and it is more than a "never," "sometimes," "usually," "always" continuum of adjectives. Instead, Level 1 of the scale answers "What does it look like when students try the skill for the first time?" Level 2 answers "What does it look like when students practice and make attempts at it?" and so on, up to the top level, which answers "What does it look like when students can apply and synthesize with fluency?"
In other words, it's not a scoring checklist of "includes a topic sentence" or "uses vivid adjectives."
Here's an example of a simple scale around establishing a topic sentence (or claim) when analyzing a text, used with my mainstream ninth grade English class:
Level 4: The claim draws a subjective inference, interpretation, or opinion that is not overtly stated in the text, and is supportable/supported by a pattern of text evidence.
Level 3: The claim states a valid opinion or interpretation of the text based on overtly stated text details.
Level 2: The claim states an objective fact or a conclusion that is stated overtly in the text; evidence can be used to verify this fact.
Level 1: The claim states an opinion or interpretation that is not supportable by the text or reasoning, or states a fact that is inaccurate to the text.
Level 0: The claim does not address the topic or writing prompt, or is missing or unidentifiable.
When I think about how my students typically progress in establishing a claim, most of them enter consistently demonstrating skills at Level 1 or 2, and by the end of the school year they generally operate more fluently at Levels 3 and 4. I also didn't just "pre-test" and "post-test" them. Establishing a claim is something we do daily in my class–verbally and in writing. It is more complex than it might seem: it requires prior schema, critical thinking, the ability to draw inferences, and more. My goal for my students is that they do it consistently and with fluency across a variety of texts and prompts.
Since my evaluation asked me to set a student growth goal and to track and monitor student data, I used a scale like the one above to examine student progress. The scale included proficiency descriptors for the claim, context, evidence, and commentary (essentially, an argumentative or analytical paragraph). It was the work we'd be doing anyway; I just started logging my students' performance more intentionally.
Before long, I had over two dozen "data points" (which in the previous dozen years of my career I just referred to as "paragraphs" or "writing samples" or even "scores") and the simple act of putting those data points together in a string started to show me trends and patterns in students' performance.
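For readers who like to see the mechanics, here is a minimal sketch of that kind of trend-spotting in Python. The student names, score histories, and growth calculation are all hypothetical illustrations, not my actual spreadsheet; the only thing taken from the scale itself is the 0–4 scoring range.

```python
# Hypothetical log of proficiency-scale scores (0-4), one list per student,
# in the order the writing samples were collected.
scores = {
    "Student A": [1, 1, 2, 2, 2, 3, 3, 3, 4],
    "Student B": [2, 2, 2, 2, 3, 3, 3, 4, 4],
}

def trend(points):
    """Rough growth estimate: average of the last three scores
    minus the average of the first three."""
    early = sum(points[:3]) / 3
    late = sum(points[-3:]) / 3
    return round(late - early, 2)

for name, points in scores.items():
    print(name, "growth:", trend(points))
```

Even a crude comparison like this makes the string of "data points" legible at a glance, which is all the color coding in the spreadsheet is really doing.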
This was work my students had done year after year for as long as I have taught English 9. The work didn't change–how I looked at the work did. Now, by intentionally tracking the same skill over time and in different contexts, with different texts, and in various modes, I had in front of me a literal picture of my students' progress.
So what? For one, this data helped me craft activities and assignments where students looked at their own patterns and trends in performance in order to analyze strengths and set goals. Because I have students keep a writing portfolio, all of their past writing is easy to find and reflect upon. In addition, I used this to intentionally group kids for peer review and small-group activities around the skill.
Yes, I had kids reflect and set goals before, and yes I grouped them intentionally before. Now, though, I have one more tool to help me do it well.
When I look at the spreadsheet–which by now has nearly 40 data points–I know I overdid it in terms of "Student Growth Criteria 3 and 6." I could have taught exactly the same way, never opened Excel, and just taken the first paragraph scores of the year and the last paragraph scores of the year, earned my "proficient" and been satisfied.
The value in this is that I spent the whole year better informed about my students' skills because I intentionally gathered data on just this one critical aspect of their learning. My excessive columns in Excel and the pretty color coding do not mean I am "distinguished." Honestly, I could not care less about those labels.
What I do know is this: my students have learned and my teaching has improved.