On 27/1/23 11:01, David wrote:

... I think most people are pretty quick to detect when they're talking to a machine ...

ChatGPT is text based, making it much harder to tell you are communicating with a machine. With audio, it is easier to tell it is a machine. But if it provides a cheap and convenient service, do you mind?

Recently I watched someone put dinner in the oven and start to walk out of the kitchen. I thought it odd they did not set the oven timer. But as they walked they said "Alexa, set an alarm for 30 minutes". Alexa's response was far from human sounding, but would you be willing to pay a human butler to tell you when dinner was ready? Also, given the "uncanny valley", we don't want our machines to be almost, but not quite, like people.

We also have to consider how much responsibility and authority an AI system carries. ...

Organisations are run by rules. The teacher, judge, or police officer
has only limited discretion. Most of the time they are working through a
complex rule base, much as AI does. Humans have more flexibility, but frequently apply out-of-date rules, forget some, or act on conscious, or unconscious, bias.

Does the machine which allows a student an extension of time for their end-of-semester submission ...

There is a danger in using AI to treat the symptoms of a problem, rather than the underlying cause. Applications for student extensions are an example. Rather than automate the process, is it better to improve the teaching and assessment design, so that students rarely have to ask?

Teachers use "scaffolding", with the student doing an assignment a piece at a time. Those who are struggling can be identified, and provided with help, long before the end of semester. https://blog.highereducationwhisperer.com/2013/09/how-what-and-when-of-improving-student.html

At an AI workshop a few years ago I learned to build a TutorBot to respond to student requests for extensions. This used IBM Watson to interpret what the student was asking. But whatever they asked, my Bot answered "No!". ;-)
https://blog.highereducationwhisperer.com/2018/12/chatbot-tutors-for-blended-learning.html
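
For anyone curious, the idea is something like this, in plain Python rather than Watson (the keyword list and the canned "No!" reply are my illustration here, not the workshop code):

  # Toy stand-in for intent classification: keyword matching in
  # place of IBM Watson's natural language understanding.
  EXTENSION_KEYWORDS = ("extension", "extra time", "deadline", "late")

  def classify(message: str) -> str:
      """Return a crude intent label for the student's message."""
      text = message.lower()
      if any(word in text for word in EXTENSION_KEYWORDS):
          return "request_extension"
      return "unknown"

  def tutor_bot(message: str) -> str:
      if classify(message) == "request_extension":
          return "No!"  # whatever the student asked ;-)
      return "Please see your tutor."

  print(tutor_bot("Could I have an extension on the assignment?"))
  # -> No!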

... explain their judgements to some human who ultimately carries
the can?  Or will they not be given power to make those judgements?

Judging by the evidence given to the RoboDebt inquiry, AI would do a
better job of explaining its decisions than humans. AI would say: "The government wanted to get the support of rich people, by persecuting poor people, so that is what we did."

AI could be used to patiently explain the reasons for a decision. Of course the client should be able to appeal to a human, but just explaining why a decision was made would help in a lot of cases.
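
As a sketch of what I mean, here is a hypothetical rule-based decision that records the reason for its outcome, so it can be explained back to the client. The rule and figures are invented, loosely inspired by the income averaging at issue in RoboDebt:

  # Hypothetical decision rule that keeps a human-readable reason,
  # so the system can explain itself and point to the appeal path.
  def assess_debt(reported_income: float, averaged_income: float) -> tuple[bool, str]:
      if averaged_income <= reported_income:
          return False, "No debt raised: averaged income does not exceed what you reported."
      return True, (f"Debt raised because averaged income (${averaged_income:,.0f}) "
                    f"exceeds reported income (${reported_income:,.0f}). "
                    "You may appeal this decision to a human officer.")

  raised, reason = assess_debt(reported_income=25_000, averaged_income=32_000)
  print(reason)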

It seems to me there's rather a divergence in our social licensing here. ...

I am happy to have self-driving cars, when they are safer than human
drivers. Even my decade-old car has automated systems which override
my inputs if it is about to skid, or not going to stop quickly enough. The car can do this better than I can.

But I wouldn't like to try telling a bank manager they're personally
responsible for the autonomous decisions of some AI system.

Have a look at the evidence to previous royal commissions into the
financial sector: they stole money from dead people. Could AI do worse?

More seriously, how often does a bank manager make a decision based purely on their own judgement? The bank manager applies a set of rules, or just enters the details into a system which applies the rules. Also, when was the last time you talked to a bank manager? For me it was about 40 years ago.
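
Those rules amount to something like the sketch below. The thresholds are invented for illustration, and real lending criteria are more elaborate, but they have the same shape; the "manager" just supplies the inputs:

  # Hypothetical lending rules with made-up thresholds.
  def loan_decision(income: float, annual_repayments: float, deposit_fraction: float) -> str:
      if annual_repayments > 0.30 * income:
          return "Declined: repayments over 30% of income"
      if deposit_fraction < 0.20:
          return "Declined: deposit under 20%"
      return "Approved"

  print(loan_decision(income=90_000, annual_repayments=24_000, deposit_fraction=0.25))
  # -> Approved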


--
Tom Worthington http://www.tomw.net.au
_______________________________________________
Link mailing list
[email protected]
https://mailman.anu.edu.au/mailman/listinfo/link
