I was sitting at home the other day, perusing the Spokesman-Review, and I came across an interesting editorial, criticizing Randy Dorn's recent proposal to make the state math test more reasonable. You can read it yourself, but here's the money quote:
"The state will only institute math and science requirements after it’s been demonstrated that a higher percentage can pass. This is like watching high jump practice and then deciding where to place the bar so that most competitors will clear it. When the consistent message is that the state will call off accountability, then it’s impossible to gauge students’ best efforts."
When I read this, I thought to myself, "Isn't that exactly how a high jump competition is supposed to run?" I mean, like most Americans, I only catch snippets of high jumping every four years, so I'm no expert, but that seems like the way I remember it. So I looked it up, and found that:
"In a competition, the bar is initially set at a relatively low height, and is moved upward in set increments … The competitor who clears the highest jump is declared the winner."
That sounds right. You set the bar low and then raise it until only one jumper is left. But unlike a high jump event, education isn't about designating a single winner. Despite the fact that you hear it all the time, high jump competitions are a really bad metaphor for educational standards and assessments. So I decided to find a better one.
I turned first to the medical community. We've all taken eye exams, and we're familiar with the fact that if you can read the fourth line from the bottom, your eyes are fine. If you can't, then you should consider corrective lenses. So if we wanted to improve Vision In America, could we simply raise the standard by lowering the line? Require people to read the third line from the bottom? No, because first of all, it's unnecessary. People who can read the fourth line up can also see and respond to street signs, recognize faces from across a room, and watch a ballgame in a bar and know what the score is. They don't really need better vision. And besides, eye exams measure something intrinsic. The only way to achieve better vision is to get glasses. Or squint. Educational assessments, on the other hand, measure something that we can actually change with more or better education. Raising the standards works, as long as you also do something to raise the capacity of the students who need to meet those standards. So the eye-exam metaphor doesn't work, either.
So it was on to the DMV. That's where you go to take your driver's test. Surely I could learn something about raising educational standards from these people. So what would we do if we wanted to improve Driving In America? Couldn't we just create a tougher driver's test? But when I thought about it, I realized that the problem with most drivers isn't that they lack the capacity to drive well; it's that they don't pay attention to the task of driving. Making the test harder would do nothing to address that. In fact, when I remember my own driver's test, it seemed like the actual driving was the easy part. It was the extra, silly stuff that made it hard. Things like backing around a corner. The only way to make a driver's test harder would be to add on seldom-used tasks like that. Like maybe launching a boat down a ramp. That's the last thing we need in education. Another bad metaphor.
Then it hit me. Swimming! Every spring we take our students down to the local pool to go swimming. Not because it targets any of the state standards, but because it's fun to go swimming in the spring with your friends. And the first thing the lifeguards do, after they explain all the rules, is give the kids a swimming test. They have a rope that separates the shallow end from the deep end. If you want to swim in the deep end, you have to swim across the deep end, under the observation of a lifeguard. That's the test. If you can do it, they'll let you swim there. If you can't, they fish you out with a long pole and make you stay in the shallow end. It's that simple. It's an assessment that no one can argue with. And it's in the best interest of everyone involved to uphold the standard and score the assessment accurately. The perfect metaphor.
That's the way our state math assessments should be. They should figure out exactly what mathematical concepts and knowledge a person living in this country, right now, should possess. And after they teach those things, they should test the students on them. No more talk about raising or lowering the bar. That doesn't help. We have a state math test that only 45% of our high school kids can pass. And the test has math material that people can function just fine without knowing. What we need to do is get a bunch of math teachers and a bunch of people who use math outside of high school all together in a room and let them figure out what the test should look like. And then tell the kids what's on the test. And then teach them.
Thanks, Brian, for at least pretending to agree (I respect pretending; it’s a way to try on a behavior pattern) with what public school policy makers and potential major industrial employers of public school alumni say they consider appropriate background for future employees.
To Tom’s point and question: They use math accomplishment as one metaphor for disciplined academic behavior.
To Brian’s Q about my confidence in state assessments: For policy purposes, state standards denote minimum academic performance expectations for students. Formal assessment construction procedures establish the level of confidence to have in these standards. Each item in the assessment represents a category of possible items. Empirical methods identify items that most reliably discriminate between correct and other responses in the assessment. Separate from these procedures, political processes confirm the utility of the assessment for government purposes.
We also likely agree, Brian, that businesses in part use math accomplishment as an index of the likelihood a job applicant will add to their profit vs. cost. That background includes math (however prescribed by state standards) taught in your classroom as measured by whatever assessment the state requires.
Businesses use math at least as an index of the level of minimum complexity in disciplined academic behavior an alum has handled successfully, much in the same way as do Tier 1 and 2 university admission officers, labor market forecasters, and some government officials who describe their labor pool while trying to recruit businesses into their tax bases.
As teachers know, third parties don’t usually care about the particulars learned as much as the level of alums’ disciplined accomplishments. That’s because they know something about the competitive labor skills their businesses project, and they expect that their employees will hold multiple as yet uncreated (is that a word?) jobs during their working lives.
And, no, Mark, teachers are a link in a longer process than schooling, not necessarily the problem, although it’s insightful of you to mention it.
Business, philanthropy, and government policies accept minimum academic performance by all students as the task public school teachers contract to accomplish.
I’m sure you already knew these things, and asked as a reminder to others. Glad to oblige.
I hope this speaks to your point, Brian.
Except that teachers are the problem, right?
Bob, let’s say we agree. I still want to know how you can be sure that the test in question does what you want it to do. I suspect you have never seen it. What are those minimum skills that major U.S. industries need their employees to have? (I really don’t agree with that part of your argument, but we’re just pretending anyway.) Would it include calculating the volume of a trapezoidal sandbox? Because that was on the test. You seem to have a lot of faith that the people making these policy decisions and writing the tests are the best and the brightest, and that we teachers in the classroom should acquiesce to their superior decision-making ability. It seems like an underlying theme of much of what you say is that we teachers should shut up and teach.
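(For anyone wondering what that item involves, here’s the general shape of the computation, with made-up dimensions since the actual question can’t be disclosed: a sandbox with a trapezoidal cross-section is just a prism, so V = ((b1 + b2) / 2) × depth × length. Parallel sides of 4 and 6 feet, a 2-foot depth, and a 5-foot length give ((4 + 6) / 2) × 2 × 5 = 50 cubic feet. Whether most employees ever need that formula is exactly the question.)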
Brian asked, probably rhetorically, “How can anyone believe that … without questioning where the math bar has been set?”
In the spirit of comity, here’s the line of logic I’ve heard used by employers in major U.S. industries and by U.S. education policy makers. The logic has three parts.
1. Set the math bar at the minimum level that prospective employers expect to require of job applicants in major U.S. industries.
2. Require math and other teachers to adjust their instruction, so all students pass that minimum performance threshold.
3. Reducing that performance level increases the risk that more public school alumni will not be employable much beyond grunt work in those industries.
Users of this logic rely on demonstrations by educators of how to meet these criteria. They expect that all teachers know these ways and can adjust instruction accordingly, but some will not do so adequately.
I’m guessing teachers know this logic and these parts, but some want to assert another position. Yes?
Brian, I once held up the proceedings at a boat launch for over forty-five minutes, getting all manner of “advice” from every fisherman and waterskier in western Washington.
Tracy, the kids who stay in the shallow end don’t really have it so bad; there’s plenty of stuff to do over there. Besides, most of them go home and ask their folks for swim lessons, which is a good thing. As for the kids who test into the deep end (your veterinarians), they can always swim over to the shallow end and drive trucks for a while. The more education you have, the more choices you get.
This week’s EduCarnival is live at http://uncomfortableadventures.blogspot.com/2009/12/educarnival-v2-issue-15.html
You can submit an article to the next issue by emailing me or using the form linked on the page.
I love getting to read posts from people I’m not familiar with, so it’d be awesome if you’d put up a quick note encouraging your readers to submit as well!
Tom, thanks. That’s what I’ve been trying to say.
I especially like launching the boat as part of the driving test. (I had a boat for a while; I never met standard.) That’s exactly what asking a student to calculate the volume of a trapezoidal sandbox is like (I’m not making this up).
How can anyone believe that the same cohort of students that can pass the reading and writing parts of the WASL at an 80-90% rate can only pass the math WASL at 50% without questioning where the math bar has been set? Since it is illegal to disclose anything that is on the WASL, the editorial boards that are defending “high standards” don’t even know what they’re defending.
Tracey, life is long. Every child is gifted. Some just open their packages later than others.
You might be right, Mark. But I’m still leery of tracking or having kids make such life-determining decisions at such a young age. It might be because I encounter so many kids who want to be both a truck driver and a veterinarian. I like that kids have the flexibility to change their “tracks”, if they can manage it.
I think it is okay to have kids decide between technical college, university, or other postsecondary options at age 14. I’m not for tracking into a career, but tracking into a kind of postsecondary option already happens. The decisions a kid makes from 14 on will determine what kind of post-high school options will even be plausible. If a kid wants to go to UW but doesn’t take their freshman year seriously, then they’ve already cancelled that dream, in reality. I’m actually in favor of tracking for a lot of reasons: 1. kids can change tracks through demonstrated effort, 2. it creates classrooms where skills differences are lessened and skills needs can be more accurately and efficiently targeted, 3. you can predict with a great degree of accuracy the future (post-HS) trajectory of a student simply by looking at their transcript at the end of 9th grade (age 14 or 15).
I like how your message sounds much more positive than “sink or swim”. It’s “swim or splash around safely in the knee-high kiddy pool.” But, what happens to those who are stuck in the kiddy pool? Oh, no! I just followed where this logic leads and I’m not sure I like it. It’s the reason other countries track students for different careers, having kids make decisions about which track they’re taking (technical college or university) at the age of 14. Did your metaphor just open a new can of worms?
Coupla Things:
-Kristin, you’re absolutely right. When they first rolled out the WASL, it was supposed to have been regularly reviewed and recalibrated. That doesn’t seem to be happening.
-No, Bob; I have no intention of convening the meeting that I so righteously declared needs to be convened. I’ll designate Brian as my proxy. He’s everything I’m not; he teaches high school, he’s worked outside of school, and he pays attention during meetings.
This is an awesome post, Tom. This perspective makes so much sense…and at times there is so little out there that makes sense, it seems.
Are you announcing, Tom, that you will be calling a meeting of your proposed math standards group and submitting the results for public policy consideration? You never know who might read it and find it useful for adjusting ed policy.
Great job on the high jump theory. High jumping rules are meant to thin the field, but in education we can’t continue to thin the field by telling kids they didn’t meet standards so they’re done and telling schools they didn’t meet standards so they’re going to lose out on much-needed resources.
In the early days, when our state assessment was in its piloting years, they analyzed the test and student scores and got teacher feedback about what was realistic and what wasn’t. That seems to have fallen away; the test has been stagnant for a while and, after years of kids failing to meet standards, they’re trying to start over.
I agree that we need to keep an assessment, and that it’s okay to teach to the assessment if it’s assessing necessary skills. The reading and writing WASLs (Washington’s NCLB swim test) are actually pretty good, except that they’re useless for a teacher who wants to use the results to improve a child’s skills, because we never see the test, and by the time it’s scored the child has moved on. The reading and writing WASLs work hard to prevent a gap in one skill from affecting a child’s communication of another skill. A child’s writing is not assessed on the reading test, and the writing test doesn’t depend on the ability to read.
What I’ve seen from proctoring the math and science tests is that they assess more than math and science, and I’m sure our state test isn’t the only one with that problem. There is a clear disconnect, in Washington State, between what a child needs to learn to be a functioning member of society, what is taught in the district-mandated curriculum, and what is assessed as the state standard. And we shouldn’t be testing “College Readiness” in tenth grade. For reasons I am not going to go into, because they’re pretty obvious, that swim test should be determined by the universities in a child’s senior year.
I liked this! Can I use it in this week’s EduCarnival?