My district has two elementary schools, one middle school, and a single high school. Three of the four schools did not make Adequate Yearly Progress (AYP) and are now designated as Schools in Improvement by the federal government. (One of our elementary schools did hit the mark.) This happens when a school does not meet standard in one of the categories the federal government uses to parse the student population; for example, too few students from low-income families pass the test. Out of 2,133 schools in Washington, 1,286 did not make AYP. That's 60%, a supermajority. There are over 1,100 Schools in Improvement in Washington State. Remember Lake Wobegon, where all the children are above average? Evidently we're not there.
But can it really be that bad? I have a 1994 Toyota 4Runner, and the oil pressure gauge says that at highway speeds I have no oil pressure. If that were really true my engine would blow up. So I think it's not my oil pump that's the problem. It must be the gauge, or the sending unit.
So what gauge is the federal government using to determine our progress? The WASL. And how do they determine whether or not AYP is made? They compare the scores last year's fourth graders got with this year's fourth grade scores. Last year's sophomores with this year's. The progress (or lack of progress) of each individual student is irrelevant in the current system. And every year the bar is raised because the No Child Left Behind (NCLB) law, passed in 2002, says that by the 2013-2014 school year every child in the United States will be reading and doing mathematics at the level that their state has deemed proficient. I think it is fair to say that every school in America will be a School in Improvement by 2015.
So are we really that bad? NO!! In August the College Board released the scores for the SAT, and for the seventh consecutive year Washington State's SAT averages are the highest in the nation among states in which more than half of the eligible students took the tests. In my high school, a School in Improvement, our students have consistently scored above the state average on the SAT. Last year three of the candidates for our legislative district's Washington Scholars program were from our high school. One was an alternate, who received the full award when her classmate chose to accept the full ride he was offered to Notre Dame. Another student is attending Yale. In 2008 our Knowledge Bowl team won the State Championship for 2A schools. Last year 17 of our 19 athletic teams made it to the State Tournament, and we had two state champions. Our band is phenomenal; we have 3 alumni in the Husky Band this year. I could go on.
So we did not meet AYP, but my school is not going to blow up. We are a good school, working hard to be even better. What kind of gauge would truly measure the impact we are having on our students' achievements? No Child Left Behind is a noble phrase, but an empty promise. How can we really assure that each child will reach his or her potential?
Can I use this blog as a reference in my college report?
Regards,
Barren
Kristin, you describe convincingly a practical reason for more frequent testing. I agree with your reasoning and recognize such situations. Perhaps this is a topic for a future post. I join with others who hope you will not have to type or deal with cholesterol again. 🙂
Yes, you did address my question adequately. Thank you.
I see your logic, but the problem still remains that you can’t pin a child’s substandard skills on any particular teacher. In areas with high student transiency, and in my district those are the students who tend to fail state assessments, it’s even harder to pin down the cause of a child’s poor skills unless you do it September – June. Or better, September, December and June.
And I am so glad you bring up the lawsuit scenario, contractual obligation, and the disbelief that teachers don’t connect NCLB to our obligations as teachers. I haven’t met a teacher (and I have met and work with bad ones) who doesn’t see his job as bringing a child to standard by June, but I know A LOT of teachers who aren’t really sure about the impact they have on a child’s skills, because the standardized assessments don’t track a child from September to June. In Washington, the state assessment tracks skills at 4th, 7th, and 10th grades. I can do assessments in my own room, of course, but they aren’t recognized by the state.
Imagine your doctor tells you that you have high cholesterol, and have to change your lifestyle to lower it. You make the changes, you try to get those numbers down, and hey! The next test results they give you aren’t measuring your cholesterol levels, they’ve tested the guy behind you! His cholesterol is off the charts! Could you sue the doctor for not lowering your cholesterol? No, but you could sue him for having a useless test, and maybe some parents will. Again, I’m not against standardized testing. I’m for it. I’m against the current way it’s given, and the inaccurate way student scores are used to assign blame to a school.
And if I never have to type the word cholesterol again, I will be a happy person.
Yes, Brian, “adequate.” Thanks for the correction. I use the word academic, because it refers to academic progress or change as measured by state tools.
Glad to read, Mark, that some standardized test data help you and, I assume, other teachers who understand test construction and interpretation. As to “fair,” that’s operationally defined by the state as aggregated student responses to the state test.
Great question, Kristin, about changes in operational definitions of “good” teacher and teaching. I suspect that you and others have thought about this many times as parent and teacher.
Nominal definitions of public school teachers remain the same: those who sign teaching contracts to instruct students in public schools.
The operational definition of a “good” teacher changed with the introduction of formal, measurable expectations that students will meet or exceed minimum state academic performance standards. That means all students of good teachers meet or exceed such standards. Yes, districts may use other criteria as well, but a technical case exists that they do not have the choice to ignore the implied state operational definition of a good teacher if federal money is used by that district.
I find it curious that public school educators have not picked up on that change and its impact on the meaning of teacher contracts. That is, that teachers agree by signing contracts that they shall instruct in ways that make it more likely that all students in each classroom will meet or exceed state measured minimum academic performance standards. It’s hard to imagine any teacher or organization defending the opposite in a court case.
This goes back to Mark’s point about teacher choices. The range and depth of these choices changed with the change of operational definitions of “good” teaching and the implied “shall” instruct in ways that all students at least meet minimum standards. Teachers must work more efficiently, that is, constantly against clock time, etc., to raise the chances that all students will perform adequately on the state test. While these practices have not yet passed into case law, I’m guessing it’s just a matter of time to find the right case to bring against a teacher, district, union, … That’s not a threat, just an observation.
And yes, third parties in and out of education have ways to increase learning efficiency with and without conventionally defined good teachers. Some third party observers and policy makers use these efforts to compare against results of public school instruction.
Did I address your questions “adequately”?
Bob, I’m curious about your comment that State Standards Assessments “… also changed the operational definition of a “good” teacher.”
I am with you on the value of assessments. I am in favor of a test that measures whether or not a student is at standard in 10th grade. I like that it’s in 10th grade, because then you have two years to help a child catch up if he is not at standard. But a few things need to change about the current system. Besides the value-added assessment Brian mentioned, the testing data need to be used more effectively. Right now, it’s slapdash and lazy. How easy, to look at a percentage and then threaten or punish a school, and how meaningless. Who does that hurt? The principal who can’t be bothered to evaluate his teachers? No. The teacher who can’t be bothered to pull her students up to standard? No. The parents who can’t be bothered to encourage their child to succeed? No. The current consequences for not meeting AYP hurt the future students of a school. They don’t improve teaching practices, because no direct connection is drawn between the weak link in a child’s education and the test score. They don’t improve administrative practices, because districts tend to simply move weak administrators around instead of moving them out of the profession.
So what I would like to see, as a teacher, is a standards-based assessment that I give my students in September and again in June. I’d like that data externally analyzed, and I’d like the results attached to my effectiveness as a teacher. As a parent whose child is in public school, I’d like to have my daughter externally assessed in September, then again in June, and I’d like to know what her strengths and weaknesses were so that I could work with her through the summer to strengthen her weak areas. The current assessment system tells me nothing, other than that the group of students attending my daughter’s elementary school tends to be at standard, as a group, for whatever that means.
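A September/June growth report of the kind described here is simple arithmetic. The sketch below illustrates the idea; the student names, scores, and the cut score of 60 are all invented for illustration and are not from any state instrument:

```python
# A minimal sketch of a September-to-June growth report, assuming one
# scale score per student and a hypothetical "at standard" cut score.
september = {"Ana": 38, "Ben": 55, "Cara": 61}  # September scale scores
june = {"Ana": 52, "Ben": 63, "Cara": 74}       # the SAME students in June

STANDARD = 60  # hypothetical cut score for "at standard"

for name in september:
    gain = june[name] - september[name]
    status = "at standard" if june[name] >= STANDARD else "below standard"
    print(f"{name}: +{gain} points, {status} in June")
```

A report like this credits growth even for a student who is still below standard in June, which is exactly what a once-a-year snapshot cannot do.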
I actually do think that our state test for reading and writing was a more than fair and valid assessment of the *minimum* skills a 10th grader should have. I find the disaggregated data and feedback about my learners very useful.
However, I do not think it is reasonable to assess the quality of a school by comparing a single assessment of student group “A” in one year to a single assessment of student group “B” in the next year. Then, to reward or penalize a school based upon this quality assessment is fundamentally flawed logic. Post hoc ergo propter hoc? Causation fallacy? In reality, those being assessed are different learners. If we are looking for teacher contributions to academic performance changes, shouldn’t we be looking longitudinally at the same cohort, not comparing utterly dissimilar groups…especially in districts where there are massive changes in enrollment as families enter or leave the community?
And then there’s this… I don’t recall the exact dates and quantities here, but am estimating… in the year before the reading and writing state assessments became required for graduation, the pass rates were at about 60% in my district. The year the graduation requirement was instituted, the pass rate jumped to 88%. The following year, it only increased to 89%. Clearly, the teachers did better teaching between year 1 and year 2, then somehow failed to adequately serve their students between year 2 and 3. The data cannot be argued against…if we are to believe the AYP argument.
We live in a world of bewildering acronyms, but for the record it’s Adequate, not academic, Yearly Progress. It sounds even worse when it’s not ‘adequate’.
You’re right, Mark, a number of evaluation designs exist. Each has its strengths and trade-offs. None alone satisfies the interests of everyone concerned with how and what people learn in schools. It’s good that you and other teachers know how to conduct independent analyses in addition to preparing students for state assessments.
State standards assessments monitor student learning in the aggregate, thereby accounting for (among other things) learners and teachers having good and not-so-good days. They also changed the operational definition of a “good” teacher. And they provide a relatively reliable index of academic yearly progress, useful for some policy decisions.
Test builders and learning analysts also agree with you that no direct connection necessarily links a school assessment result to any given student. As we all know, that does not negate the need for or the value of state assessments as external validity checks on student and teacher contributions to academic performance changes.
And, yes, we agree that teacher choices of instruction, etc. precede (and, some assume, at least influence) student learning rates. These choices are observable and analyzable.
Does this clarify my view? I think we share it.
Full disclosure: Vince is the principal in my district whose school met AYP. He is my friend. And he knows of what he speaks.
Bob, no. Thank you for your congratulations, but I respectfully decline your condolences. “External validity checks” would be useful to students and parents if they were meaningful; I’m saying they’re not. My wife is an entrepreneur and talks about “value added” to her products. Like Kristin and Mark and Vince are saying, that’s what I want to measure. How much has this child gained this year, due to my effort? Where was she when she walked in the door; where is she in June? Until we ask and answer those questions we’re just looking at faulty gauges. (That’s why we call them idiot lights in cars. :-)
Again with teacher choice, Bob…in your eyes, teachers are clearly the problem with all of education. Maybe the system would run more smoothly if we didn’t have any teachers?
As for AYP, how about the fact that we are not comparing actual yearly progress? That would require the same bodies taking the preassessment as the postassessment. When dealing with small segments of the population, my successes with the class of 2011 have no impact on the class of 2012, who enter with different needs and different aptitudes and deficits. If the class of 2012 made great gains, perhaps I can take some credit. But if fewer in the class of 2012 passed than in the class of 2011, I have to be realistic and consider how much each of these factors might have impacted student performance: my teaching, the entry point of the students (for example, the class of 2011 entered my room with an average RGL, or reading grade level, of 6.8, whereas the class of 2012 entered with an average RGL of 4.4), changes in the assessment instrument…just to name a few. It is certainly possible that for some reason I was a worse teacher, though I have to be realistic and consider that perhaps my students did grow, perhaps even covering as much ground as the previous year’s students, but simply did not arrive at the same endpoint as the previous year because their starting point was already at a deficit.
It would make more sense to assess a STUDENT’S progress to determine student learning as opposed to comparing one student one year to a different student the next year. I suppose I could use statistics like these to make me look like a stellar teacher…I’ll preassess my second period sophomores, which is a weaker group, and then postassess my first period, who entered with stronger core skills. I’ll look like a rock star!!
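The arithmetic behind this point can be sketched in a few lines. Only the 6.8 and 4.4 entry-point RGLs come from the comment above; the exit numbers are hypothetical, chosen to show how the two views of the same data can disagree:

```python
# Two different cohorts, measured AYP-style (this year's endpoint vs.
# last year's endpoint) and growth-style (each cohort vs. its own start).
# Entry RGLs are from the comment above; exit RGLs are invented.
class_2011_entry, class_2011_exit = 6.8, 9.2
class_2012_entry, class_2012_exit = 4.4, 7.0

# AYP-style: compare this year's students to last year's students
ayp_change = class_2012_exit - class_2011_exit  # negative: looks like decline

# Growth-style: compare each cohort to itself
growth_2011 = class_2011_exit - class_2011_entry
growth_2012 = class_2012_exit - class_2012_entry  # slightly MORE growth

print(f"AYP view: {ayp_change:+.1f} grade levels")
print(f"Growth view: 2011 {growth_2011:+.1f}, 2012 {growth_2012:+.1f}")
```

With these numbers the endpoint comparison shows a drop of more than two grade levels, while the class of 2012 actually grew slightly more than the class of 2011; the same teaching looks like failure under one gauge and success under the other.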
Brian, you’ve addressed the biggest problem with AYP.
Good teachers are ready to take a child – wherever his skills are in September – and have him at standard by June. This success is invisible to a system that compares one class to the next. If you’re not going to track individual children from year to year, and take on the record keeping challenge of following kids from school to school and district to district, then you’re not tracking anything but each class’s success rate on the WASL. You’re not tracking the effectiveness of the teachers, or the school, or even the test.
My school has a sizable population of English Language Learners. When they take the WASL in April, as tenth graders, some of them have been speaking English for a few months. By the time they are seniors they are fluent and most are filling out applications for university, but AYP doesn’t take this into account.
Congratulations, Brian, for helping students gain entrance to Tier 1 universities. Condolences for your school not meeting academic yearly progress. Best wishes as you show other teachers how their students, too, may meet or exceed minimum state standards, so your school will not remain on “improvement” status.
You’ve offered a useful example of why external validity checks such as state minimum academic standards are useful to students. From this view, increased learning is a civil rights issue, not a teacher choice, effort or other concern that blocks at least minimum academic yearly progress for all students, no matter how anyone labels them. Yes?
Brian,
I loved your comments. There are so many ways to measure the success of schools. Learning, of course, is at the center! As in your blog, stories about kids and the growth of the same students from one year to the next via the MAP tool are great examples.
We are certainly living in the devilish details of NCLB. I thought it was going to change with a new President and Sec. of Education, but the Race to the Top funds are filled with NCLB principles.
As practitioners we must be stronger voices about our schools (data, stories about kids) that tell the accurate and realistic growth of our students. That being said, it’s vitally important for teachers, grade-level teams, and departments to have common data or assessments that demonstrate learning.
Thanks, Vince
My high school did not meet AYP either…by about three students in one subset cell (special education mathematics). Considering also that the building pass rate for one of this year’s tests was 97%, I anticipate we won’t make AYP this year either…as last year’s group was stellar and this year’s is adequate. One more year after that and we’ll also be in improvement, despite having the highest test scores of traditional public schools in the region.
AYP measuring different cohorts of students is ludicrous.