In a paper published this spring, MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) analyzed the videos used in massive open online courses (MOOCs) to answer one simple question: Are they of any value?
Researchers found that, in general, users enjoy the informal, brief video sessions that offer “web-friendly lessons” specifically created for an online audience, writes Adam Conner-Simons for MIT News.
The findings allowed researchers at CSAIL to create LectureScape, “a YouTube for MOOCs,” which presents videos in a more effective format by highlighting the most popular videos, allowing users to search by keyword, and summarizing not only an entire presentation but also its parts. All of this lets users skip particular sections, or review the ones they are having trouble with, without having to watch the entire presentation.
With the debut of the MOOC came worry from professors across the country about the value of these courses. Fearing replacement by a video screen, the philosophy department of San Jose State University wrote a letter, published in The Chronicle of Higher Education, calling MOOCs a “serious compromise of quality of education.”
The arrival of the MOOC came at a time when many colleges were facing a number of problems, including having less money available for professor raises as administrative costs climbed. Colleges have responded by replacing tenure-track professors with low-cost adjunct professors, even as tuition continues to rise.
EdX, an online learning platform started by MIT and Harvard University in 2012, offered MIT’s researchers data on 6.9 million video-watching sessions from 29 institutions and the viewing habits of more than 100,000 users.
In addition to the online videos, edX also offers software to grade student work, from multiple-choice quizzes to full-length essays. The machine-learning system works by correlating human-assigned grades with specific features of student answers, allowing edX’s algorithms to arrive at a grade for new responses, writes Benjamin Winterhalter for The Atlantic.
The computer can even grade essays, taking into account length, vocabulary use and punctuation.
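The idea of predicting a grade from surface features of human-graded examples can be sketched in a few lines. This is a toy illustration, not edX’s actual system: the features (word count, vocabulary variety, punctuation) follow the article’s description, the nearest-neighbor prediction stands in for whatever statistical model edX really uses, and the training grades are invented.

```python
import re
import string

def essay_features(text):
    """Surface features of the kind the article mentions:
    length, vocabulary variety, and punctuation use."""
    words = re.findall(r"[a-z']+", text.lower())
    n_words = len(words)
    vocab_ratio = len(set(words)) / n_words if n_words else 0.0
    n_punct = sum(text.count(c) for c in string.punctuation)
    return (n_words, vocab_ratio, n_punct)

def predict_grade(essay, graded_examples):
    """Assign the grade of the human-graded essay whose features
    are closest to the new essay's (1-nearest-neighbor).

    graded_examples: list of (essay_text, human_grade) pairs.
    """
    target = essay_features(essay)

    def distance(example):
        feats = essay_features(example[0])
        return sum((a - b) ** 2 for a, b in zip(target, feats))

    _, grade = min(graded_examples, key=distance)
    return grade

# Tiny, invented training set standing in for human-graded responses.
training = [
    ("Short answer.", 2),
    ("A longer, more developed answer with varied vocabulary, "
     "clear punctuation, and several supporting points.", 5),
]

print(predict_grade("Brief reply.", training))  # prints 2
```

A production grader would use far richer features and a trained statistical model, but the pipeline is the same shape: humans grade a sample, the machine learns the mapping from features to grades, and new submissions are scored automatically.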
This ability to offer automatic feedback directly affects how well students retain information. According to Piotr Mitros, edX’s chief scientist, “We have study after study suggesting that you learn very little as a result of me talking at you for an hour. Whereas if I convey information to you for five minutes and then assess you on it, and repeat that for an hour, you learn a lot more.”
However, Mitros does concede that this automatic grading will never replace human interaction.
“Human grading and automated grading aren’t in conflict,” he said. “It’s less a question of ‘Will machines grade instead of humans?’ It’s more a question of ‘When do you use machine grading versus when do you use human grading?’”
Mitros even suggested that for certain assignments, human grading will always be the best option:
“They (machines) just aren’t capable of actually weighing the quality of a literary argument,” he said.
He went on to emphasize that nothing will ever take the place of an education gained from human interaction:
“Closeness to teachers,” he said, “really does help student outcomes. If I know somebody’s going to look at it, I’m going to do a better job. Machines are never going to replace the need for the human connection—the idea that I created something and someone cares about it, someone cares about me.”