By Tamara
WELPA, MSP, MAP… What I used to think of as second semester (or even spring) has become, in my mind, "Alphabet Soup Season." It is also when my instructional year is put on hold for seven weeks. For the next four weeks my "teaching" day will consist of nothing but proctoring the annual language proficiency test. It takes four weeks because I have sixty-eight students across seven grade levels to test in four subtests: Reading, Writing, Listening, and Speaking (which is one-on-one with each student). Then in late April (as we all know) instruction crashes to a halt again for the MSP. I don't think losing seven weeks of instruction (more, really, when you factor in MAP) was what the feds had in mind when they crafted the assessment requirements for NCLB.
Like Kristen, I believe assessment plays a critical role in the education process, especially when said assessments (like MAP) give immediate formative feedback. I do have an issue, though, when assessments actually limit student access to instruction. My English Language Learners really can't afford to lose those seven weeks of instruction, especially for tests that don't assess state and district English Language Development Standards, or that are beyond their linguistic ability to access.
There is some talk of extending the assessment exemption for ELLs to three years. (Currently they are exempted from the likes of MAP and MSP/HSPE for their first year in the country, except for the math portion, because, listen for the sarcasm here, math is a "universal" language.) While that would allow students to build a broader base in English, three years is still short of the 7-10 years research (Krashen, Cummins) shows it takes to develop cognitive-academic fluency in a second language. Wisconsin, along with 28 other states, just received a grant to develop new language proficiency tests based on the Common Core. It will be interesting to see how they address the exemption cutoff.
Regardless of what national, state, or local standard is used to develop assessments, I would like to see ELLs assessed in a standardized manner solely on language proficiency for a minimum of three years. That is not to say they should not have content-based assessments; they absolutely should, both formative and summative. In the classroom. As a part of the instructional day. But save the high-stakes assessments for when they have a reasonable base of English to adequately show what they know. There are few things more demoralizing than knowing you are an intelligent person and not having the language to demonstrate that knowledge. Happy Alphabet Soup Season.
Tom, I don't blame you one bit for shaking your ELD specialist off. I cringe every time I walk into a room to pick up kids for WELPA and see them engaged in something meaningful. At least your ELD specialist has the ability to flex her testing schedule to accommodate your instruction. With the number of kids I have to test, I will barely get it done within the window. So I bite my lip and march them off to be assessed while silently raging against policies that put assessment before instruction. It wouldn't kill me so much if that data were remotely usable or could spark the kind of conversations Mark mentioned. Unfortunately, this version of WELPA is utterly disconnected from our English Language Development Standards and from anything that is taught in the classroom. It is nothing more than a checked box, at the cost of four weeks of ELD instruction.
I don't know if you'll applaud me or scorn me, Tamara, but yesterday our ELL specialist entered my room to round up six or seven kids who are nominally ELL (but function just fine) for the annual WELPA tests.
I shook her off because we were doing something important. She left, but assured me during lunch that she’ll be back on Monday.
Like you, I see the need for data, but when collecting data supersedes teaching, it’s time to take a look at our priorities.
It is frustrating. The use of the test data is also frustrating. In the PLCs in my building, we are constantly told to gather data in order to promote discussion… but that doesn't seem to be the philosophy of data use beyond my building: rather than using test data to spark reasoned conversation and strategizing, the data is used only to praise or punish. Case in point: my building, where our reading and writing scores have been high for years, in the high 90s even, knocking on the door of 100%. We experienced a dip in our writing scores last year, yet still remained high achieving (I attribute the dip to the lame writing prompts, personally), along with an even slighter dip in our reading scores.
A normal teacher would use this data to start a conversation: why did this dip happen? Let's look… oh, it turns out that this cohort of students struggled more on their middle school reading and writing assessments than the cohorts who preceded and followed them. Okay, it looks like this particular group might simply struggle a bit more, and though the comparison to their peers shows a drop, when we track this cohort longitudinally, we see growth. Great! The data shows student growth! Unless you are the state, which has now sent out alerts about our dropping test scores. Our school name is listed in red on one report as a school of concern, despite the fact that we are still above the state average, and despite the fact that a close examination of longitudinal data actually reveals growth within the cohort.
Add this to the disruption testing causes in every classroom (including the grade levels not being tested, since at my level testing means a schedule change for everyone to accommodate our 10th graders' test schedule), and to me all this testing is unjustifiable. If we were using the data to learn about our learners and craft a response, not to red-flag, punish, or put schools on special lists, then I'd have less of an issue.