Some "discourse" about all the failing seniors in Washington State wants us to believe (using Washington as a proxy) that schools are continuing to fail.
This Reuters article seems to suggest they aren't, at least in terms of "closing the achievement gap." (Here is the link to the source data.) In the Reuters digestion, though, one key passage stood out:
> The only scores to stagnate were the overall averages for 17-year-olds. While black and Hispanic students improved quite dramatically, the overall averages for the age group barely budged in either reading or math.
>
> Peggy Carr, a federal education analyst, said the flat trendline among older students was actually good news.
>
> More 17-year-olds with shaky academic records are staying in school rather than dropping out, which makes them eligible to take the NAEP exams, she said.
Even though some groups showed significant gains, the overall average was the same. My math knowledge tells me that if gains happened somewhere and the average stayed the same, something else must have pulled it back down: either some group's performance decreased, or the makeup of the group being averaged changed. The second explanation is the one being offered here: the survey sample changed, because kids who otherwise would have dropped out are now part of the pool. Makes sense. That might figure into the "high" number of "failing" seniors on Washington State math assessments. In that first article linked above, Randy Dorn even alludes to the fact that a priority in schools today is to keep kids from dropping out: keeping them in the system longer. This is a good thing, but it does have an effect on our "data."
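The arithmetic here is easy to sketch. With invented numbers (not real NAEP data), here's how a changing test-taker pool can hold the overall average flat even while the students who were tested all along improve:

```python
# Toy illustration (invented numbers, not real NAEP data) of how a
# changing test-taker pool can keep the overall average flat even
# when the continuing group's scores go up.

def weighted_mean(groups):
    """groups: list of (count, mean_score) pairs; returns the pooled mean."""
    total = sum(n for n, _ in groups)
    return sum(n * m for n, m in groups) / total

# Year 1: only students who stayed in school take the exam.
year1 = [(100, 285)]            # 100 traditional test-takers, mean score 285

# Year 2: the traditional group improves by 5 points, but 20
# would-have-been dropouts now stay enrolled and are tested too.
year2 = [(100, 290),            # traditional group: up 5 points
         (20, 260)]             # newly retained students, lower scores

print(weighted_mean(year1))     # 285.0
print(weighted_mean(year2))     # 285.0 -- flat overall, despite real gains
```

The overall average is identical in both years, yet every student tested in year 1 belongs to a group that scored higher in year 2. A flat trendline can hide genuine improvement when the pool itself changes.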
So, wait a minute. Where else might this matter?
Obviously, whenever schools in the U.S. are held up to be "failures" compared to other countries.
Now, compared with thirty years ago, more kids are sticking with their education longer. Instead of feeling defeated by lower skills (and likely other struggles), these kids are staying in school and taking a test that earlier generations might not have. To me, in actually analyzing this data, I see strong evidence that schools are working, and are actually much more successful than even this data suggests.
Different sample demographics mean the data must be analyzed in context.
Different sample demographics are also why so many of us teachers get frustrated when our system is compared with others' and then stamped as failing, usually on the basis of the TIMSS and PISA assessments.
I'm considering assigning myself some homework about the U.S. and some of our oft-compared-to academic "rivals." Using only the web, and my limited-and-sporadically-engaged attention span, I'm going to do some digging. Here are my research questions…obviously they carry certain assumptions, but I will be open-minded if the information I find contradicts those assumptions.
- What age groups are served by the public schools of the country in question, and what percentage of all the resident individuals of that age group are served (and therefore tested) in these public schools?
- What options are present for school-age children (private schools, charter schools, government-funded public schools, vocational or training schools), which are compulsory, and which are subject to the international assessments?
- What proportion of the tested population is classified as one or more of the following: (1) a non-native speaker of the language in which the test is administered, (2) living in poverty, or (3) served by modifications to the learning environment or in non-standard educational settings such as special education services in the United States?
Data cannot simply be accepted in chart form. It must be digested, analyzed, examined, and contextualized. I'm hoping that as I dig for the answers to these questions, I will find someone much smarter than me who has already done this. If I make any progress, I will post in the comments below (add what you know as well)…with links to sources for verification.
Data can be shopped for and gathered to support anything you want to say about public education in the United States. I take it like I do the weather forecast – you’ve got to see enough of it to make an informed choice.
And I think we’re capable of that. The alternative, and one some educators choose, is to discount all data as unreliable and meaningless. Well, some of it is, probably. But to refuse to look means you’re packing a bag full of tank tops for a trip to a city expecting rain for the next week. A rational assessment of available data means you’re prepared for any eventuality.
Another interesting question is whether we're even testing the same skills and knowledge. Personally, I see my own students doing things in fourth grade that we used to do in high school.