You might say no to a driverless car or a pilotless plane, but it is time to trust computers to do your marking
Question: A boy climbs slowly to the top of a slide and then slides down. At which point will his kinetic energy be a maximum?
One student wrote as his answer:
When the speed is greatest – which is at the lowest point in the swing cycle.
The response was marked as incorrect, and the feedback given to the student was this:
This question is about a child on a slide, not a swing.
What is remarkable about this feedback is that a computer generated it.
Computers can mark pretty much anything, from open-ended text responses to algebraic equations. So why do we still expect teachers to mark using a model developed before the first industrial revolution, let alone before the second?
First, computers used to be very bad at marking. Pretty much all you could trust them to mark was multiple-choice questions. A sensible student answers multiple-choice questions in reverse: checking the answers for plausibility before reading the question, selecting, verifying and discounting distractors, and often getting the right answer for the wrong reasons. Worse, students may even remember the distractors as the right answers! We certainly wouldn’t want technology to drive what is assessed and how it is assessed. The good news is that you can now throw away your multiple-choice tests, along with your marking.
Second, computers used to be very bad at giving feedback. Two examples should suffice to set that record straight.
Consider this question from the Open University’s PMatch system, available as a plug-in for Moodle:
If the distance between two electrically charged particles is doubled, what happens to the electric force between them?
And this answer:
The force is deceased by a factor of four.
If you were marking, you would probably correct the spelling if you spotted it – unless you were feeling uncharitable, in which case you would simply put an angry red cross by it. Using PMatch, the infinitely patient computer notifies the student politely that he or she has made a spelling mistake and asks him or her to have another go before submitting an answer. So, not only are your students getting feedback, they are practising the correct spelling. In an era when mathematical and scientific literacy are key, such small improvements in performance could be very beneficial.
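For the curious, the core of that check is simple enough to sketch. The Python below is a hypothetical illustration, not PMatch’s actual code (PMatch matches answers against teacher-written patterns): it compares each word of an answer against the keywords a marking rule expects and flags near misses as probable spelling slips rather than wrong ideas.

    import difflib

    # Keywords a marking rule expects to find in a correct answer
    # (a hypothetical rule, based on the question above).
    EXPECTED_KEYWORDS = ["force", "decreased", "factor", "four"]

    def spelling_feedback(answer: str) -> list[str]:
        """Return polite prompts for words that nearly match an expected keyword."""
        prompts = []
        for word in answer.lower().replace(".", "").split():
            if word in EXPECTED_KEYWORDS:
                continue
            # A close-but-inexact match suggests a spelling slip, not a wrong idea.
            close = difflib.get_close_matches(word, EXPECTED_KEYWORDS, n=1, cutoff=0.8)
            if close:
                prompts.append(f"Did you mean '{close[0]}' rather than '{word}'?")
        return prompts

    print(spelling_feedback("The force is deceased by a factor of four"))
    # ["Did you mean 'decreased' rather than 'deceased'?"]

The point is that the near miss triggers a prompt and a second attempt, not a penalty.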
The same process of trial, error, feedback and correction is available from the open-source computer-aided mathematics assessment system Stack, developed for Moodle by Chris Sangwin at Loughborough University. Stack not only knows that the derivative of your answer should be equal to the expression that you were asked to integrate, but also shows you what the derivative of your (wrong) answer would be. Not many maths teachers would go that far!
You are told if you forget your constant of integration, and, in case you think this could lead to sloppiness, you can penalise a student for his or her errant constant while still allowing some credit for finally supplying it.
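Stack’s marking is configured through answer tests rather than hand-written code (the system sits on top of the Maxima computer algebra system), but the underlying check can be sketched in a few lines of Python using the SymPy library. The marks and messages here are illustrative assumptions, not Stack’s own:

    import sympy as sp

    x, C = sp.symbols("x C")
    integrand = sp.cos(x)  # the student was asked to integrate cos(x)

    def mark_integral(student_answer):
        """Differentiate the student's answer and compare it with the integrand."""
        derivative = sp.diff(student_answer, x)
        if sp.simplify(derivative - integrand) != 0:
            # Wrong: show the student what their answer actually differentiates to.
            return 0.0, f"The derivative of your answer is {derivative}, not {integrand}."
        if C not in student_answer.free_symbols:
            # Right antiderivative, missing constant: partial credit, not an angry cross.
            return 0.7, "Correct, but you have forgotten your constant of integration."
        return 1.0, "Correct."

    print(mark_integral(-sp.cos(x)))     # wrong: its derivative is sin(x), not cos(x)
    print(mark_integral(sp.sin(x)))      # partial credit: missing the constant
    print(mark_integral(sp.sin(x) + C))  # full marks

Because the comparison is between the derivative and the integrand, any algebraically equivalent form of the right answer earns the marks.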
Finally, there is a perception that auto-marking will make teachers less aware of the mistakes their learners are making, when quite the reverse is true. To set up and maintain auto-marking systems you need a deep understanding of formative processes, of how and why students make mistakes. Interaction with auto-marking systems will make teachers more acute observers of the kinds of mistakes that students make, which can only improve their teaching.
So let’s not pretend we are climbing into driverless cars or pilotless planes with auto-marking. The teacher is very much present in auto-marking systems, watching the dials, ready to take over and guide us in to a safe landing.
Chris Wheadon is founder of No More Marking Ltd, a company that uses comparative judgement to assess work more accurately than traditional marking techniques