On 31/1/23 09:47, Tom Worthington wrote:
> More seriously, AI is already routinely used for checking for plagiarism in 
> student assignments, and analysis of medical scans. Provided the AI has been 
> tested, is at least as good as a human, and there is human oversight, I don't 
> have a problem.

Neither do I as long as there's an appropriately qualified & responsible human, 
the Head of Department or a Medical Specialist in the cases you mention, to 
check the output of the AI system.  Turnitin probably uses this technology, 
but any cases I've seen have been thoroughly reviewed in a meeting between the 
relevant HoD and Tutor before approaching the student.  And medical use of AI 
should always be thoroughly & critically reviewed by a Specialist doctor, 
regardless of whether the results are positive or negative.

> But we have to be careful where the AI encodes biases hidden in human 
> decision making, or masks deliberate discrimination under a cloak of 
> impartial tech.
Not just careful, an appropriately responsible human should _always_ have the 
last word.  And such decision systems should be kept right out of the Courts.

_David Lochrin_
_______________________________________________
Link mailing list
[email protected]
https://mailman.anu.edu.au/mailman/listinfo/link