by John Jensen, PhD
The debate over high-stakes testing pits the need to assess student progress against the negative effects of doing so. Three recent articles offer a glimpse of it.
In a guest post for Education Week (“Monty Neill: Building a Successful Test Reform Movement”, May 14, 2013), Monty Neill proposes halting or reducing state-level testing, citing as reasons teaching to the test, cost, damage to school climate, time taken from teaching, narrowing of the curriculum, and increased juvenile incarceration.
In the same issue, Michael Petrilli (“Am I Part of the Cure … or the Disease?”, May 14, 2013) maintains that student achievement, not testing, is the point; that even small gains in test-verified reading and math enhance life trajectories; and that teaching quality is what limits better instruction. Acknowledging that testing can generate temptations to cheat, a culture of fear, and a narrowing of the curriculum, he would retain it nonetheless, suggesting as goals improving mediocre schools even a little and systematically teaching the skills that make the most difference.
Deborah Meier (“Problem vs. Solution: A Response”, Education Week, May 16, 2013) regards the testing issue as a distraction from more fundamental problems: a public polarized by a growing gap between rich and poor, and the wealthy steering resources to the schools their own children attend. She holds that a competitive education marketplace produces outcomes woefully wrong for children, and that public education should address problems one at a time in light of the entire spectrum of needs.
So apart from altering the nation’s political makeup, we face two immediate problems: one is improving education, and the other is finding out how well we do it. Both matter. Though a school’s quality may be low, how we test may depress even that.
There are many dogs in the fight about testing. Picture a round table discussion of stakeholders. At the table are a parent, teacher, district administrator, state legislator, and federal official. Each asserts, “I need to know X, and here’s why.” They are arguing over competing priorities when one of them points her thumb over her shoulder.
Seated against a wall is a student. Everyone falls silent as they realize he heard everything they said. Someone addresses him.
“So what do you want?”
“I just want to learn something,” he answers quietly.
The stakeholders try to resume their discussion but find no traction. Their urgency evaporates as they realize how superficial their demands are compared to the substance of the student’s need. The student is the elephant in the room. They look at each other and wonder, “How can we even begin to find a way to resolve this?”
By way of answer, consider a different analogy. Imagine you are on a research team investigating gases rising from the earth in a remote location. Your helicopter malfunctions and sets you down unexpectedly close to the emissions, and disembarking, your team realizes that it is in danger. Everyone must rapidly grab something and move away quickly. Before you are three canisters, one labeled AIR, another WATER, and a third FOOD.
Which do you seize? Your life may depend on your choice, and you recall the rule of three, that in general humans can live 3 minutes without air, 3 days without water, and 3 weeks without food. Knowing that in the toxic air of your surroundings you could be dead in three minutes, you grab the AIR canister first. Only after you have air under control do you pick up anything else. You secure your prime value before even considering a secondary one.
Back in the classroom, we search among the canisters concerned with testing to find the one labeled AIR. What is the most essential factor, the one we wish to establish with certainty, the one we refuse to sell off for the sake of a lesser value, the one to which we add others only if they do not detract from the first?
Finding an answer everyone can accept is, I believe, the path that eventually resolves the dispute over testing. We first agree on our criterion value. I would like to nominate one on the basis of two axioms:
Axiom 1. Students progress through their own effort. Instruction works as it enables students to focus attention and apply effort on tasks that generate learning. The essence of instruction is directing students’ attention and effort.
Axiom 2. Effort is propelled by motivation. Aside from the sheer time available for their effort (jeopardized by countless intrusions, including test-associated tasks), how students apply themselves arises directly from their interest, enthusiasm, ownership, sense of progress, and so on: signals of the motivational state directly preceding effort. If kids are bored and distracted and you want to teach them something, you either alter their motivation or forget about accomplishing anything. If, in a psychological sense, all behavior originates from a state that makes the behavior possible, we settle on students’ inner motivation as the key condition we must enhance.
A common complaint about testing, however, is exactly its effect on motivation. For teachers to appreciate this better, I would like them to experience an activity I often presented in training workshops in the 1970s. It goes like this. I’ll trust your imagination to figure out the lesson involved:
“We’re going to start off with a spelling test for college freshmen,” the consultant announces to open the morning. “We’ll assign you to activities later based on the scores you get. Please take out a blank sheet of paper.”
People groan but cooperate. In a serious tone the consultant then reads the words while people write them:
asinine, braggadocio, accommodate, diarrhea, chauffeur, desiccate, impostor, inoculate, hors d’oeuvres, liquefy, mayonnaise, moccasin, obbligato, narcissistically, rococo, benefited, rarefy, resuscitate, sacrilegious, supersede, titillate, and paraphernalia.
“Please exchange papers,” the consultant says crisply, and then spells each word on the board. Checkers mark off wrong answers on the paper they have, and hand it back to its owner.
“How many got none wrong?” the consultant asks, writing a zero on the board. I’ve never seen zero wrong, but if people miss none, their number is jotted beside the zero. Under it the consultant lists numbers 1-20.
“How many got (number) wrong?” he or she says, going down the column. Everyone raises their hand at some point to acknowledge their number of mistakes. Most scores tend to fall around half wrong with some missing as many as 17 of 20.
People laugh, moan, and remember emotionally how it felt to be measured by their mistakes. The exercise concludes with a discussion of its implication for instruction–how discouraged they remembered feeling when they were in school, how they may have refused to try, how they preferred to be graded down than be humiliated by trying and failing, how disheartened they were at being labeled poor at anything, and so on.
If we wish both to teach and assess in a way that enhances motivation, how can we?
Competency-based instruction offers a clue. You declare it acceptable for students to have different competencies to practice even if they do much work together. You identify a discrete skill or chunk of knowledge you want them to know, tell them exactly the work needed and the signal marking its completion, and check it off when it’s done. Developed this way, their record shows unbroken success. Wherever they are on the continuum, they just work steadily at the next step.
This approach frees students from a peculiar psychic burden. If I have five units of knowledge to acquire and accomplish that, my working memory tells me “I got five.” My score matches my effort. I own the five and take pride in it.
This changes if I am told, “We expected you to get ten but you only got five.”
Only? My success becomes failure for a reason beyond my control, and my effort is devalued. I feel like a failure solely because someone measures me against a standard that does not serve me personally.
Think about yourself. Intuitively, do you mark your knowledge by knowing something or by not-knowing something else? Surely the former. Not-knowing measures are inherently antithetical to students’ natural motivation. While students spontaneously compare themselves to peers, they regard that peer comparison as fair. They are constituted to emulate standards demonstrated by peers, but for this they need only objective information.
For schoolwork, a wall chart serves adequately by cumulatively counting the contents of each student’s growing bank of knowledge. Students can use the differences between them if they wish, but no one drives them to feel bad. (And check me if I’m wrong about this, but do not some teachers still believe that imposing bad feelings on students is their bottom-line motivator? I infer this from observing students who actively fear their teacher.)
Once we acknowledge positive motivation as our preferred long-term resource, we don’t even hint to a student that his effort is of secondary importance. We are clear that if we organize his effort so it’s effective, recognize the effort, and count up its outcome objectively, he is more likely to repeat it. The objective count of his progress on the specified tasks reveals exactly what he has learned. If his motivation and effort-driven success remain our primary values, we have no need to confine him under someone else’s web of meaning.
In my next article, I will show how to arrange effort for optimal motivation while accounting for its results in a way that fulfills stakeholders’ needs for information.
John Jensen is a licensed clinical psychologist and author of the three-volume Practice Makes Permanent series (Rowman and Littlefield). He will send a proof copy of the volumes to anyone on request: firstname.lastname@example.org