Are Robo-Graders the Answer to Student Writing Problems?

Although scantron sheets have been a staple of tests for many years, grading written assignments has always required a human with a red pen. This may change with the growing popularity of robo-graders: computer programs that scan student papers and produce a grade. Such technology was first proposed in the 1960s, but with the advances in software since then, the idea might soon become a widespread reality.

The need for robo-graders is driven by the fact that teachers are loath to assign much essay writing because those kinds of projects take so long to mark. As a result, many high schoolers are graduating without the writing and spelling skills needed to tackle college-level work.

The theory is that teachers would assign more writing if they didn’t have to read it. And the more writing students do, the better at it they’ll become – even if the primary audience for their prose is a string of algorithms.

That sounds logical to Mark Shermis, dean of the College of Education at the University of Akron. He’s helping to supervise a contest, set up by the William and Flora Hewlett Foundation, that promises $100,000 in prize money to programmers who write the best automated grading software.

Versions of the robo-graders are already at work marking writing assessment tests in South Dakota, as well as the written portion of the TOEFL exam, which is used to gauge foreign students’ proficiency in English. It is humans, however, who still read the essay portion of the SAT and ACT tests, and many teachers seem reluctant to turn over this responsibility to a collection of software algorithms.

Thomas Jehn, a writing instructor at Harvard University, finds the idea of robo-graders “horrifying,” and feels that the efforts will actually backfire: knowing that their writing isn’t being read by a human would discourage students from using the alliterative language and metaphors that make essays so rich.

He argues that the best way to teach good writing is to help students wrestle with ideas; misspellings and syntax errors in early drafts should be ignored in favor of talking through the thesis. “Try to find the idea that’s percolating,” he said. “Then start looking for whether the commas are in the right place.” No computer, he said, can do that.

The technology itself, moreover, has its limitations at present. Software currently on the market cannot differentiate between a coherently written essay and a “nonsensical” jumble of clauses that are individually relevant to the topic but make no sense together. Furthermore, a computer can’t successfully cope with formats other than straight prose.

“They hate poetry,” said David Williamson, senior research director at the nonprofit Educational Testing Service, which received a patent in late 2010 for an Automatic Essay Scoring System.