Tuesday, March 11, 2014

AI: Artificial Intelligence or Appallingly Idiotic

Here it is, another article that talks about using computers to machine-score student writing.

I spent more than 20 years teaching English language arts to junior high students, so I know exactly how cumbersome grading writing assignments can be.  I have graded more than my fair share of essays, research papers, short stories, poems, and written exam responses.  It always takes a long time to evaluate student writing -- IF the evaluation is done thoroughly and well.  Feedback on writing should be designed to show students where their strengths and weaknesses are and what they can do in the future to improve their writing skills.  That kind of feedback requires more than just a cursory reading, a grade at the top of the paper, or checks or circles on a rubric.  (And on a side note, the sad thing is that far too many students glance at the feedback and then ignore it.  They pitch their papers or stuff them in a folder or portfolio, never to be looked at again.  So all that meaningful feedback is wasted.)

Writing needs to be evaluated for many things -- format, spelling, punctuation, grammar, usage, structure, clarity of thought, depth of understanding and analysis, quality of information, bias (or lack thereof), level of detail, and style -- and I'm sure I'm leaving things out.  Some of those elements -- grammar, spelling, maybe even structure -- can be evaluated by a computer.  But how can any of those other elements actually be evaluated by a machine?

Machine scoring of writing can scan for things like key words or phrases to attempt to assess a level of detail or analysis, but depth of detail and analysis truly can't be determined without context, and a computer can't evaluate context.
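To show what I mean, here's a tiny sketch in Python -- my own toy code, not any vendor's actual algorithm -- of what keyword scanning boils down to.  The phrase list and sample sentences are invented for illustration:

```python
# A toy illustration (NOT any real scoring product) of keyword-based
# "analysis" scoring: it awards points for target phrases no matter
# how -- or whether -- they are actually used.

TARGET_PHRASES = ["photosynthesis", "chlorophyll", "light energy"]

def keyword_score(essay: str) -> int:
    """Count how many target phrases appear, ignoring context entirely."""
    text = essay.lower()
    return sum(1 for phrase in TARGET_PHRASES if phrase in text)

on_topic = "Photosynthesis uses chlorophyll to capture light energy."
nonsense = "Chlorophyll is my favorite light energy photosynthesis sandwich."

print(keyword_score(on_topic))  # 3
print(keyword_score(nonsense))  # 3 -- same score, zero understanding
```

The sandwich sentence earns full marks.  That's what "assessing analysis" without context gets you.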

Machine scoring of writing can count the number of sentences in a paragraph, the number of words in a sentence, and instances of advanced sentence structures, but it can't necessarily determine whether all those words and sentences strung together actually make sense.
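Here's another invented sketch, in the same spirit as the one above, of what surface-feature scoring amounts to: longer sentences and bigger words earn a higher score, whether or not the sentences mean anything.

```python
# Another toy sketch (again, my own invention for illustration) of
# surface-feature scoring: it rewards sentence length and word length,
# with no notion of meaning.
import re

def surface_score(essay: str) -> float:
    """Score an essay purely on average sentence and word length."""
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = essay.split()
    avg_sentence_len = len(words) / max(len(sentences), 1)
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
    return avg_sentence_len + avg_word_len

coherent = "The experiment failed because the sample was contaminated."
gibberish = "Contaminated experiments metaphorically triangulate because porcupines."

print(round(surface_score(coherent), 1))   # the clear sentence...
print(round(surface_score(gibberish), 1))  # ...loses to the gibberish
```

The gibberish actually outscores the coherent sentence, because its words are longer.  Pad it with a few more syllables and the gap widens -- no comprehension required.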

Machine scoring of writing certainly can't detect things like bias or evaluate style, and those things can have a deep impact on the quality of the writing and of the information it conveys.

A truly meaningful response to writing is going to require human eyes.  Period.  Sure, a teacher can use his or her own actual evaluation in conjunction with machine scoring, but how many teachers are truly disciplined enough to do that?  Far too many are sadly willing to leave the evaluation up to the computer because they read articles like the one above and start to believe that artificial intelligence can get the job done -- at the very least -- adequately.  Companies prey on teachers by promising high-quality feedback and dangling more free time in front of them, all while making money off those teachers and allowing a gross disservice to be done to the students who really need good feedback on their writing.

Machine scoring of student writing isn't the least bit intelligent.  It's ludicrous, lazy, and irresponsible.