WASL: New and IMPROVED! 25% more for 45% less.

By Travis & Tom 

The plan to replace the WASL was unveiled on the OSPI site yesterday. For many, this headline will bring a smile. But dig deeper, read the press release and overview, and see what you think.

It's hard to get the image of shiny, brightly colored cereal packages emblazoned with cartoon characters out of our heads, each with its own self-indulgent promise of "Tasting great!" or "10% more!" or "New and Improved!" It seems like most of the world's exclamation marks can be found in the cereal aisle. But of course, when you open the new box, pour it into your old bowl, and eat it with your old spoon, it usually tastes a lot like the old cereal.

This was our gut reaction. Is the new WCAP just a repackaging of the WASL, in which case why do it at all? Or is the WCAP a thin whisper of the former WASL, in which case, why bother? Or, as State Superintendent Randy Dorn hopes, is the WCAP the version that brings change and improvement to the state student assessment system?

There are many issues to consider when thinking about the WCAP, the least of which is getting used to a new initialism. To do this, we think it is best to take a step back and first consider the intent of a state assessment. Is it a formative test, used by teachers to decide where their students are during the course of a unit; or is it a summative assessment, like the WASL it's replacing, used to determine where a student stands at a given point in time in relation to a set of standards? Frankly, at this point it looks like either a shallow summative test, less valid than the WASL it's replacing, or a useless, one-size-trying-to-fit-all formative assessment system.

Shallow Summative Test? We're all for "fairer" and "less expensive," but the phrase "less time spent on written responses" concerns us. Writing is a recursive, time-consuming process where the product is worked and reworked to show the skill of the student in carrying a topic through multiple paragraphs connected by transitions, aided with openings and closings, and strengthened with figurative language and ideas. That is why 6-Traits works as a means to assess writing. What will the WCAP writing look like? Will it be a paragraph response? On a computer?

A shortened writing piece would be more efficient to assess, and the results could be returned to the teacher and family to inform everyone that the student can write less than an essay. But would it be valuable? Washington's students need to develop ideas and construct arguments; will a short assessment measure what we value as Washingtonians? We have the same concern over how the WCAP "will contain significantly fewer extended response questions." Can an assessment be valid, or show deep student thinking, when the results can be returned within two weeks? The time it took for the WASL results to be returned was almost too long, but the WASL had extended response questions where the students had the time and space to demonstrate higher-level thinking. In nearly every district mission statement that I have read, there is some incarnation of ". . . working to help students become independent, higher-level problem solvers." Will the WCAP encourage higher-level thinking?

Useless Formative Assessment? A useful formative assessment must provide data to guide instruction as well as be a vehicle for informing and planning with families. The purpose of the assessment must shadow the goal of teaching: to make an impact on student learning. If the results of a formative assessment are not used to plan future instruction, there is no reason to have one. One benefit, as promoted by the OSPI information on the WCAP, is that because of the turnaround time " . . . Teachers can assess a student's strengths and weaknesses and adjust the student's learning plan accordingly." That comes across as sound, and we agree with using assessments FOR learning. However, if the intent of the WCAP is to guide instruction, why is it not administered six times a year, or even three? Do teachers need to change the plan for each student's learning only once each year? "Oh my, what shall I do to help Jimmy with his math until the WCAP comes out? I have no idea where his strengths and weaknesses are." As capable teachers know, you need to constantly do in-class assessments to refine and direct future instruction. Teachers do this. How will the WCAP improve upon this?

Caveat: The WCAP (Washington Comprehensive Assessment Program) has just been introduced to the public, and as such it is not fully realized. It has not been designed or implemented, so it is hard (maybe even delicate) to make judgments. Time will tell. However, we have concerns.

Before you compose a witty comment about how much you abhor the WASL and how anything will be better, know that we agree with you on many facets of your argument. We've had our complaints with the WASL. However, our concern is not over the WASL; it is whether or not its replacement is a better assessment and provides useful data. Is the WCAP a panacea, or can we call it out as the emperor's new clothes? Or perhaps more importantly, what kind of test do we want it to be?

So what is your next step? Start a dialogue with us here. What do you think? Check out the OSPI website for WCAP documents and view the press conference video (it is about 48 minutes long, but worth a view).

Follow Stories from School: Practice meets Policy on Twitter.


26 thoughts on “WASL: New and IMPROVED! 25% more for 45% less.”

  1. Mark Gardner

    Re: http://www.curewashington.org/WASLinfofinal.pdf
    Holy cow, man. I can buy some of what this document is selling, but talk about a great lesson for teaching tone in writing! Or hyperbole. As well as the need for providing evidence to support propositions, and the dangers of excessive ellipses in quoted details…I wonder what crucial information was left out from expert commentary when the authors decided to include ellipses? Makes me a bit more skeptical than I otherwise might be.
    Regardless of the ethics, legality, all that: what the WASL assesses in reading and writing seems reasonable at the 10th grade level. I can see their argument about comparing results from year to year, comparison to ITED, etc. I do take issue with their proposition that the EALRs are neither educational nor academic (one of the propositions for which their evidence is weakest). There has to be some guiding language…what would they suggest other than our EALRs and GLEs?
    Interesting stuff. To me, another example of the good old American pastime of griping and offering no real solution to resolve the gripes other than “change it!”

  2. Travis A. Wittwer

    @ Mark, nice bit of writing, which you wrote, and was read by me. Cheers. The announcement of the WCAP is not even a week old. This is a topic that I will revisit in late February, April, summer, and right before the opening of school when there is certainly going to be a push.

  3. Mark Gardner

    I certainly do hope the WCAP evolves each year. If it didn't, that would mean it's time for me to leave the profession, as my philosophy would be too dissonant from the dominant one. I do believe that higher level critical thinking can be assessed to a degree by multiple choice assessments, if perhaps there is not just one correct answer, but each answer offers different points based on the degree of thinking demonstrated by the one who connects with that response (sound familiar?).
    I do not believe in some transcendental Assessment which, upon its discovery, shall remain in place ad infinitum to reign from a golden throne. Good educators are always refining their craft and their assessments.
    Overall, though, I will dig my heels in against any assessment which claims to be a “writing assessment” but does not require the assessed to actually write and have that writing read by a live human being… which, after all, is the whole purpose of writing.

  4. Travis A. Wittwer

    @Mark Gardner, thanks for the thoughts. I too find the EALRs/GLEs well crafted and a clear focus for all teachers as students move through the grades. As such, I want the assessment to reflect the strength of the standards (EALRs/GLEs). Do you think the WCAP will go through several changes over several years, much like the WASL did?
    In the end, the greatest irony will be if the WCAP changes and items are added to the assessment (probably due to community outcry: "The assessment is too simple to assess critical thinking!") and, in turn, it gets closer and closer each year to what the WASL did.

  5. Mark Gardner

    Bob and Travis: interesting dialogue. I don’t know if this will add anything, other than to dip my oar in the water here. Early on, this line hit me: “I’m looking on your blog for indications of how teachers will adjust instruction to meet revised measured minimum expectations.” (Bob)
    This statement begs the question: is the assessment changing or are our standards changing, or both? And further, if the changed assessment makes people believe that the minimum standards have changed, this is a step backward. The teachers of Washington need to be reassured that the assessment that will be put in place does not represent a wholesale overhaul of the whole standards-and-assessment paradigm, but rather simply a different endpoint by which the same culminating goals and standards can be measured. After all, I can use a yardstick or a tape measure to assess my height, and both can be equally viable to assess the same quality.
    I believe that the state standards are well crafted, relevant, and represent what effective education already aims for. If these form the template for the assessment, I will be satisfied. The way I make my own assessments is often by working backward from the language of the EALRs: if 2.1.5 says “Make inferences based on implicit and explicit information drawn from prior knowledge and text; provide justification for inferences,” I then work backwards from that and develop a question or set of questions which enable the student to demonstrate this. This can be done via multiple choice, but it seems that in the EALR/GLEs, it clearly states when the skill should be measured via multiple choice assessment (i.e., GLE under EALR 2.1.3: “Select, from multiple choices, a sentence that best states the theme or main idea of a story, poem, or selection.”) In short, I don’t teach to the WASL or any other test. I teach to the standards while having faith that doing so will enable my students to excel on the assessment (which I blindly assume is founded in the standards).
    Like Travis, I see the EALRs and corresponding articulated GLEs as clear, concise, reasonable, and highly valuable for me as an educator. To lose this standard language would be a huge detriment to the system as a whole.
    For writing, I will never be convinced that an assessment other than a student-composed multi-paragraph text will be effective.
    If authentic assessment of composition is abandoned or turned over to some computerized conventions tally, we've taken a tremendous step backward. If we are satisfied that a multiple choice assessment is enough (even modeled after AP or Applied Practice style error analysis or composition analysis), we are doing a tremendous disservice to our students by de-emphasizing the act of writing, the act of communicating, in favor of, as Travis states, text deconstruction. That ain't the real world.
    We need to know these kids can communicate authentically. The only way to assess this is to make them write.

  6. Travis A. Wittwer

    @ Bob Heiny (and others), yes. Agreed and shadowed. We are on the same side of the room looking at each other, sipping tea. However, we may not be within arm's length :O)
    Truly, having students take an assessment to determine whether or not they have achieved the minimum state standards is great. In fact, in my classroom we would do minimum (and exceeding-minimum) assessments on a regular basis. Assessments are good for everyone. Here is the kicker:
    In my classroom, I used these assessments both OF learning (what have the students learned) and FOR learning (what is the next goal, specifically, for Susy?). The WCAP, according to the press release and FAQs, states that it will be able to inform teachers of student performance "for" instruction. This would be great. I would love the objective data in addition to my own. I like data. The WCAP states that it will be able to do this because it is (1) more efficient than the WASL, which may be true, as the WCAP has various proposed ways it will do this, and (2) able to return results in a timely manner, which is the goal.
    However, I have not read anything in any of the WCAP documents and FAQs and releases that says HOW it will be able to provide useful data for me, to drive my instruction, in the area of writing.
    Reading, I can understand. It can be a process in the head, not on paper. Math, similarly (more or less, work with me). However, writing (and you could argue science lab write-ups, which are a state standard), I am not so sure.
    So while you and I agree on the Purpose of assessments, the Design of assessments, the Validity of assessments, the Need/They-are-not-evilness of assessments, we are an arm's length apart on this point: I do not think that the WCAP, as it is envisioned and explained now, will be able to provide data to drive my instruction so that I can target specific needs of specific students in the area of writing a whole composition, using the writing process, for a variety of audiences, for a variety of purposes.
    Again, I am not against assessments or against the WCAP working, but I will also not sit by when I can create change to make the WCAP purposeful. I want the WCAP to be useful because we are going to use it, and I want the best for my students, always.
    Is anyone out there in the blogosphere following the WASL –> WCAP situation who has information on proposed methods of assessment for writing? I would welcome that information.

  7. Bob Heiny

    That's an interesting comparison you request, Travis. I'll respond with technically based responses, as employed public policy implementers should (which I am not now).
    I take it that you somehow relate the comparison of authors to assessing minimum academic performance through WCAP. That Q exceeds the point of standardized measured minimum performance likely used with WCAP. (I did not review WA standards to confirm this.)
    As I understand WCAP, it measures performance against state approved standards. All students should meet these minimum standards. Instructional methods exist for this to happen.
    Based in part on teacher certification requirements, policy makers expect that teachers know and will use at least these methods until they develop more effective ways.
    Policy makers further expect that most students will build on tested minimums to learn more in each academic subject. That makes common sense, given the reliance of educators in public schools on spiral curricula.
    On the other hand, if your Q represents a state standard, a test builder can construct a standardized assessment to address it. They use the same protocols as with any other standardized assessment.
    In general, they would empirically identify (through panels of experts, however defined technically, etc.) principles each author used in writing selected works and then rank order those principles against appropriate state approved academic standard(s), empirically identify test items that discriminate among test takers, etc.
    Teachers learned about these processes in their certification required intro to ed and ed psych, as well as test and measurements classes.
    I’m guessing that not all teachers think about instructing their subjects in such mechanistic ways, but policy makers do anyway. And, therein rests a set of unresolved political issues that appear to generate more heat than light that increases measured minimum learning rates.
    Yes?

  8. Travis A. Wittwer

    @Bob Heiny, and for others reading,
    a question for you: who is the better writer? Steinbeck, Dr. Seuss, or Bradbury? One has to be better, as they all have different styles and as such are not the same. And when you choose the order, on what did you base that decision? What qualities of the writing decided it?
    Or can we say they are all equal because they have exceeded the minimum level of writing expertise and as such, it is not necessary (or possible?) to put them in order of greatest writing to least ….

  9. Bob Heiny

    Yes, Travis, we’re thinking about similar ways of standardized assessing academic performance. Thanks for asking.
    I’d offer a refinement to these similarities with the principle, If it exists, we can measure it.
    Assessments do not require free form write-in answers.
    Multiple choice QAs have more power, reliability, and validity measures to assess writing and other school subject learning than do single reader’s interpretations.
    To go a step further, differences between constructive and deconstructive writing appear more rhetorical than real. A writer does both, constantly, but it's measurably more a matter of refining vocabulary and logic than anything else. I know, that doesn't fit teacher self-images, but almost 100 years of objective, empirical experimental data support that conclusion about standardized testing.
    With respect, here’s an example that goes beyond WCAP: Teachers and others who write with PCs know and use electronic tools that illustrate ways I expect future academic test builders to model in order to assess writing without acknowledging academic English rhetorical distinctions.
    Yes?
    Keep up the good work. You and other team members offer a great blog.

  10. Travis A. Wittwer

    @Bob, good points, thank you for clarifying and increasing. I have a question for you and other readers.
    You said: “For example, a test of 100 word analogies (a vocabulary test) can reliably measure the likelihood of someone performing at “A” level in a graduate course. In the same way, a vocabulary test of 100 words can accurately measure the IQ and real world academic performance of a student.”
    I see this as feasible for vocabulary, spelling, or analogies (more or less). In these assessment situations, I am presuming short questions with a series of answer options or a line for a write-in answer. Am I thinking what you are at this point? Maybe the aforementioned assessment question has a short write-in part for the assessment taker to explain their response. Am I still following you?
    Okay, I agree that an assessment like this, 100 spelling or grammar or syntax or vocabulary questions, would produce data that could be used. However, how will the student's ability to write a cohesive, multi-paragraph essay be shown in a short question/answer format? A student could have an assessment where they are asked to choose the simile; or the best opening for a paragraph and explain why that opening is the best; or edit a selection of writing with grammar/syntax/convention mistakes. However, this still does not show the process of writing or the unique quality of putting all of those distinct pieces together. It shows the deconstructive model rather than the constructive model.
    In this way, I could state that I can make a cake as long as I know what eggs are (check), and flour (yes, I spelled that correctly), and sugar (perfect use of a comma), or milk (I do have an extended list going but for style rather than correct sentence construction) ….. But can I make a cake? No.
    What do you think Washington?

  11. Bob Heiny

    I accept your point, Travis, Tom, and other teachers, that you’re looking out for learner interests and that by inference, you accept that test makers are also. Two points:
    1. WCAP measures what every student should know. That’s what minimum learning means. Accordingly, teachers have an implied duty to see that every student scores 100%, at a minimum. I’m guessing that expectation leads to teachers’ squirming: How am I going to do that, because I don’t see the connection between what I do and what assessment processes measure?
    2. A standardized test need only measure indices of what a student can do. These indices intentionally distinguish among discrete student response patterns. Only a few indices are necessary in order to judge the adequacy of meeting a standard, EALR, or other criterion for adequacy.
    For example, a test of 100 word analogies (a vocabulary test) can reliably measure the likelihood of someone performing at “A” level in a graduate course. In the same way, a vocabulary test of 100 words can accurately measure the IQ and real world academic performance of a student. Computer and manual (face-to-face) administration of these tests can provide similar reliability and validity scores.
    I don’t know specifically how the WCAP test was constructed and standardized. A test construction manual will likely be released with those descriptions, if it hasn’t already. Teachers will likely find descriptions of links between EALR and WCAP. I’ve found that knowing these links has helped me adjust instruction so that student learning matches more closely what others expect.
    Respectfully, teachers know these things, at least at an introductory level. Given that fact, that’s why I look for how teachers will adapt instruction to fit the revised expectation.
    Yes?

  12. Travis A. Wittwer

    @Bob Heiny, Yes…
    Here is a phrase from your comment on which I have a question, ” . . . means to assure that every student learns, at a minimum, what the state determines.”
    Question: I agree that students should learn a minimum of what the state determines, hopefully more. However, if what the state determines does not match the assessment because the assessment is too thin to match the minimum expectations of the state, is that an issue?
    Background: In Washington, we have state standards for subjects, and they are called EALRs: Essential Academic Learning Requirements. I happen to love the EALRs. They are simple, straightforward, and provide a cohesive vision of how teachers can direct their instruction. The EALRs with which I am familiar, Reading and Writing, are solid. These Reading and Writing EALRs are solidly demonstrated in the current version of our state test, the WASL.
    My concern is that the new assessment (WCAP) will not work in conjunction with the state standards.
    For example, EALR #3 for writing is: the student understands and uses the steps of the writing process. The writing process is deep, long, and involved. And given EALR #3 for writing and all of its 5 subcategories, my concern is whether state standards that are reasonable but lofty and involved can be demonstrated in an assessment like the one I have read about for the WCAP.
    Again, the WCAP has not been designed or implemented so I am sure that it is possible, but the press release and FAQs imply that the test will be short (too short), possibly computerized, and not have nearly as many extended response questions (higher-order thinking).
    I am open to having the WCAP work. I truly am. At the same time, I feel it is my responsibility to have a discussion with other interested people on this issue. Hopefully through our discussion our understandings will strengthen, and hopefully if there is a problem with the WCAP, it can be avoided by discussing and sharing concerns.
    I am not against a change for the better. I am against a change that does not help our students.

  13. Bob Heiny

    Kudos to those who assembled this revised package. I’m looking on your blog for indications of how teachers will adjust instruction to meet revised measured minimum expectations.
    We all know that the presence of this package highlights the fact that the state, through elected public officials, has the duty to determine what public school students shall learn at a minimum and how to confirm that learning has occurred.
    In turn, as we all know, public school teachers have the primary duty, in exchange for salaries and benefits, to use available means to assure that every student learns, at a minimum, what the state determines.
    Given these facts, it seems appropriate for teachers to discuss how to implement these duties, irrespective of its weaknesses and strengths, teacher personal opinions and preferences, etc.
    Implementation is a matter of teacher resource management, meeting requirements with what we have. To consider it otherwise is to make teachers into politicians.
    Then, who will make sure my relatives learn at least what state elected public officials consider minimum skills and knowledge?
    Yes?

  14. Travis A. Wittwer

    At this point in the discussion I think it is appropriate for me to mention that I do not believe that students should spend their whole school careers testing. Or worse yet, always practicing for the TEST. I do not want huge chunks of my instruction time with students taken up by tests and preparing for the tests (assessments).
    I focused my instruction on student needs and individual goals. I did not give practice assessments in my classroom for months on end, hoping to improve my students' results. I think this is what many parents resent: class time spent test prepping, and they would have a right to be frustrated.
    My students do well on state assessments. Well compared to the state, well compared to similar classrooms. This is due to my students' focus on learning and my attention to the student.
    And I did appreciate the deep data that I could gain from the WASL. For example, two years ago, it was clear from the data that a number of my students had difficulty with two items: non-fiction text (reading) and fluency (writing). With this data I could steer the direction of my instruction. I hope that the WCAP provides useful data, because useless data is useless data regardless of how fast I get it back.
    What do you think? Have a thought or opinion, share it. Pass this discussion on to someone else and bring them into the mix. I would love to hear the thoughts of teachers across this great state.

  15. Tom

    I’m with you, Sandi. The WASL wasn’t perfect, but it did measure, at least to some extent, what students could do in regards to the State Standards. I’m curious to see what Dorn’s plan actually looks like, but I’ll keep my skepticism handy.

  16. Sandi

    My husband called me at school to joyously let me know that Randy Dorn was eliminating the WASL. He was a bit shocked to hear me ask the same questions that Tom & Travis raised. I wonder if it will be “repackaged” to make the haters happy, and how the abbreviated assessment would measure high standards (or not). Personally, I do think most students (I teach 4th) should be able to meet standards. I believe that our students should be able to write at the level required to pass the WASL. I am wondering how they will water the WASL down and what value (and validity) the new one will have. I have always had 2 complaints about the WASL: 1) not getting results back in time to work with the kids you have to remediate, and 2) the requirements for students with special needs and/or ELL students.
    I’m taking a wait and see approach. I don’t normally respond online to things, but considering most of the teachers I work with see absolutely no value in the WASL and think the assessment is “too much to ask of students,” I am glad to see the perspectives here.

  17. Travis A. Wittwer

    @Mark Gardner, I too felt that the WASL reasonably assessed student writing skills. I believe that a 7th grade student should be able to carry an idea through multiple paragraphs, with supporting details, and convey the message in a way that is either understandable or great to read because of the style. However, it was a long day on the writing days. But the goal was reasonable, attainable, and worthwhile.

  18. Travis A. Wittwer

    @Mark Gardner, just like TL before you, you have an angle of optimism about what the WCAP will provide, given certain caveats, of course. I hope it does. I understand what you say when you wrote that if you just keep teaching the wonderful way that you are, the assessment, whatever it is, will reflect that. However, I think it is possible for an assessment to show that students are learning but not provide any useful data for instruction. For example, the students answer the questions correctly each year, so the assessment says that students are learning, and in fact they are, because the teaching is superb and the student learning sound; but the questions are surface questions, so even students who are not making gains are shown to have made gains, just like the students in your class.

  19. Mark Gardner

    I'm not a WASL hater. I do think there was a need for revision toward a less time consuming, less expensive option, but as for what the WASL assessed in Rdg and Wrg at 10th grade, I felt it was more than fair, if not (perhaps) too easy… I can't speak for the math and science.
    I supported Bergeson in the election, but am trying to give Dorn a chance here, as much as I can. He's obviously bowing to the popular (albeit perhaps not fully informed) sentiment against the WASL. If it turns out that this assessment maintains the alignment with state EALRs/GLEs, and it is less costly and time consuming: great, pie-in-the-sky promises fulfilled, everyone is fixed by the WCAP magic wand… the WASL one apparently didn't work (who was it that equated the testing of kids to stepping on the scale when you are trying to lose weight? Changing from one scale to another or weighing yourself with more gusto will not change your weight unless you change your habits… and you can't be mad at the scale if it isn't giving you the number you want… it's not the scale, it's what happens between weigh-ins that matters more, and eventually the scale isn't needed?)
    Maybe I’m already lulled into cynicism, but I figure if I keep doing my job well, aligning toward what I do think are well-outlined and reasonable state standards, whatever test they decide to push on my students will be fine–and in all likelihood, that test will soon inspire ire and likely be replaced as well before too long. I’m impressed the WASL, in its various incarnations, was around for as long as it was. Mind you, I came of age in the CIM/CAM/Every-rural-school-will-be-a-magnet-school era in Oregon where every year I heard something different.
    Clearly, I’m not the one to send to testify in front of the legislature.

  20. Travis A. Wittwer

    @TL, I think the idea of people talking about positive changes, as there were some struggles with the WASL, is a positive way to look at the possible leaping-before-looking situation. Nice spin.

  21. TL

    1 – No state test will ever be the right state test.
    2 – This was Mr. Dorn's attempt to pacify or rally more support from the people who put him in office. He needed to act quickly to show that he did have a plan in mind. However, Karen has a good point about arrogance and/or ignorance. Surely Mr. Dorn realizes the legislature holds the final approval on any test changes?!
    3 – But at least people are willing to talk about (hopefully) positive changes.

  22. Karen

    I wonder if Randy Dorn would have been eligible to run for office if he’d had to meet a standard or two regarding the workings of Washington State Government. It seems to me that he is ready to move ahead without fully realizing that the state Legislature needs to approve his plan. He’s either arrogant or ignorant. Both of which are ‘great’ qualities for a person in charge of the education of our children, don’t you think?

Comments are closed.