Interesting and thoughtful discussion, thanks all!

On 29/1/23 3:22 pm, David wrote:
> Delegating human affairs to AI systems on the scale [Tom] suggests is
> simply incompatible with human society in my view.

A while back, I did a segment on the various levels of autonomy, in the
particular context of drones:
http://www.rogerclarke.com/SOS/Drones-E.html#DCA

I recently co-opted that directly into the AI context:
http://www.rogerclarke.com/EC/AITS.html#GT
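
For the impatient, the gist of those levels can be compressed into the
familiar in-the-loop / on-the-loop / out-of-the-loop shorthand.  A minimal
sketch in Python (my compression for this post, not the full taxonomy at
the links above; the names and the ask_human callable are illustrative
only):

from enum import Enum

class Autonomy(Enum):
    # A common three-level shorthand; the linked pages draw finer
    # distinctions than this.
    HUMAN_IN_THE_LOOP = 1      # machine proposes, human approves each act
    HUMAN_ON_THE_LOOP = 2      # machine acts, human monitors and may veto
    HUMAN_OUT_OF_THE_LOOP = 3  # machine decides and acts, no human review

def proceed(level: Autonomy, action: str, ask_human) -> bool:
    """Return True if the action goes ahead at this autonomy level.
    ask_human is any callable taking a prompt and returning a bool."""
    if level is Autonomy.HUMAN_IN_THE_LOOP:
        return ask_human(f"approve: {action}?")   # explicit approval needed
    if level is Autonomy.HUMAN_ON_THE_LOOP:
        return not ask_human(f"veto: {action}?")  # proceeds unless vetoed
    return True                                   # nobody left to ask

The practical point is in the last branch: at the top level there is no
longer a human to ask.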

____________________

On 29/1/23 3:22 pm, David wrote:
> Tom,
> 
> On 29/1/23 10:47, Tom Worthington wrote:
>> On 27/1/23 11:01, David wrote:
>>> ... I think most people are pretty quick to detect when they're talking to 
>>> a machine ... 
>> But if it [ChatGPT] provides a cheap and convenient service, do you mind?  
>> Recently I watched someone put dinner in the oven and start to walk out of 
>> the kitchen.  I thought it odd they did not set the oven timer.  But as they 
>> walked they said "Alexa, set an alarm for 30 minutes".  Alexa's response was 
>> far from human sounding, but would you be willing to pay for a human butler 
>> to tell you when dinner was ready?
> 
> Of course not, I would just go on setting the oven timer!  It's cheaper than 
> a butler or an Alexa too.
> 
> I think most of your counter-examples rely on some claimed utility or 
> efficiency of AI.  But I'd argue that claim masks an overarching 
> complexity, and each such application surrenders a part of humanity's 
> autonomy and accumulated wisdom to machines.  It's certainly not a problem 
> with Siri now.  But suppose AI machines like ChatGPT get better and begin 
> to be used in decision roles which Society traditionally confers on 
> educators, the judiciary, the medical establishment, the Parliament, and 
> so on: in other words, on responsible human agents.  How do you think 
> these AI decisions would evolve over time?
> 
> Who would provide the ongoing training?  Not the human agents who are 
> currently responsible, because they've been dealt out of the decision-making 
> loop in any practical sense.  Would the Russian or American judicial system 
> have a training input to the box which "hears" Court cases here?  Would these 
> AI systems train one another?  And of course "training" can still be 
> subverted by naughty humans...
> 
> How does humanity handle a situation where three AI "judges" I'll call 
> ChatGPT, ArgueGPT, and ChargeGPT, manufactured and pre-trained by three 
> different Corporations, differ in their judgements?  For that matter, suppose 
> the Tesla, Volvo, and Worthington AI-based driving computers differ in their 
> decisions at a relative speed around 200 kph on the Hume Highway, with fatal 
> results to the vehicle occupants?
> 
> Delegating human affairs to AI systems on the scale you suggest is simply 
> incompatible with human society in my view.
> 
>>>> But I wouldn't like to try telling a bank manager they're personally 
>>>> responsible for the autonomous decisions of some AI system.
>> Have a look at the evidence to previous royal commissions into the financial 
>> sector: they stole money from dead people.  Could AI do worse?  More 
>> seriously, how often does a bank manager make a decision based purely on 
>> their own judgement?  The bank manager applies a set of rules, or just 
>> enters the details onto a system which applies the rules.  Also, when was 
>> the last time you talked to a bank manager?  For me it was about 40 years ago.
> Er, no, it's not just a matter of applying rules.  The bank managers, the 
> judiciary, the medical professionals, educators, police, politicians, et 
> cetera have two things the AI system does not: insight and responsibility for 
> their actions.
> 
> I'll finish with a quote from the Wikipedia article on ChatGPT.  What the 
> quote describes as "hallucination" (in a technical sense) I would say 
> represents the difference between a fast correlation processor and an 
> insightful human.
> 
> QUOTE
> ChatGPT suffers from multiple limitations.  OpenAI acknowledged that ChatGPT 
> "sometimes writes plausible-sounding but incorrect or nonsensical 
> answers".[6]  This behavior is common to large language models 
> <https://en.wikipedia.org/wiki/Language_models> and is called hallucination 
> <https://en.wikipedia.org/wiki/Hallucination_(NLP)>.[19]  The reward model 
> of ChatGPT, designed around human oversight, can be over-optimized and thus 
> hinder performance, otherwise known as Goodhart's law 
> <https://en.wikipedia.org/wiki/Goodhart%27s_law>.[20]  ChatGPT has limited 
> knowledge of events that occurred after 2021.  According to the BBC, as of 
> December 2022 ChatGPT is not allowed to "express political opinions or 
> engage in political activism".[21]  Yet, research suggests that ChatGPT 
> exhibits a pro-environmental, left-libertarian orientation when prompted to 
> take a stance on political statements from two established voting advice 
> applications.[22]  In training ChatGPT, human reviewers preferred longer 
> answers, irrespective of actual comprehension or factual content.[6]  
> Training data also suffers from algorithmic bias 
> <https://en.wikipedia.org/wiki/Algorithmic_bias>, which may be revealed 
> when ChatGPT responds to prompts including descriptors of people.  In one 
> instance, ChatGPT generated a rap indicating that women and scientists of 
> color were inferior to white and male scientists.[23][24]
> UNQUOTE
> 
> David Lochrin
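
Two quick observations on the above, with toy code to make them concrete.

On the bank-manager exchange: Tom's claim that the manager "applies a set
of rules" is easy to make concrete.  A hypothetical lending rule-set in
Python (the field names and thresholds are invented for illustration, not
any bank's actual criteria):

RULES = [
    ("sufficient income",  lambda a: a["income"] >= 3 * a["repayment"]),
    ("clean history",      lambda a: not a["prior_default"]),
    ("adequate deposit",   lambda a: a["deposit"] >= 0.2 * a["loan_amount"]),
]

def assess(application: dict):
    """Apply each rule in turn; the 'decision' is just their conjunction."""
    failures = [name for name, rule in RULES if not rule(application)]
    return (len(failures) == 0, failures)

approved, failed = assess({"income": 90_000, "repayment": 24_000,
                           "prior_default": False,
                           "deposit": 60_000, "loan_amount": 400_000})

Whether David's "insight and responsibility" live inside such a rule-set,
or only in the human who chooses when to override it, is exactly the
question at issue.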
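
And on the quoted passage's Goodhart's-law point: if human reviewers
preferred longer answers, a reward model trained on their ratings can end
up scoring length rather than content, and optimizing against that proxy
dutifully produces padding.  A toy demonstration (the proxy reward is
invented for illustration; real reward models are learned networks):

def proxy_reward(answer: str) -> float:
    # Stand-in for a learned reward model that has latched onto length.
    return len(answer.split())

CANDIDATES = [
    "42.",
    "The answer is 42.",
    "After careful and thorough consideration of every relevant factor, "
    "it seems reasonable to suggest that the answer may well be 42.",
]

# Optimizing against the proxy picks the padded answer, not the best one.
print(max(CANDIDATES, key=proxy_reward))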


-- 
Roger Clarke                            mailto:[email protected]
T: +61 2 6288 6916   http://www.xamax.com.au  http://www.rogerclarke.com

Xamax Consultancy Pty Ltd      78 Sidaway St, Chapman ACT 2611 AUSTRALIA

Visiting Professor in the Faculty of Law            University of N.S.W.
Visiting Professor in Computer Science    Australian National University

_______________________________________________
Link mailing list
[email protected]
https://mailman.anu.edu.au/mailman/listinfo/link
