G'day Roger,

There's so much potential discussion in your Paper "A Re-Conception of the 
Field [of AI and Robotics, 2023]" http://www.rogerclarke.com/EC/AITS.html#RAI 
that any comment which covered the ground properly would be too big for a post 
to Link (and it would take me too long to get together!).  You could easily run 
a one-day professional seminar on the subject along the lines of those run by 
Sydney Ideas at SydUniv.

Of one thing I'm confident: public, legal, and legislative awareness of AI 
urgently needs to be improved, and "a re-conception of the Field [of AI and 
Robotics]" seems right on the mark.

Neural networks have been around since the late 1950s, so "AI" isn't anything
new.  However I suspect most people's conception of "AI" is still so vague as 
to be quite misleading, and when this extends to those in the legislature and 
judiciary we have a problem.  The very expression "artificial intelligence" is 
unfortunate because it suggests a computer which works rather like the human 
brain, capable of human judgements and emotions.  This probably isn't helped by 
a statement attributed to Elon Musk I noticed somewhere recently to the effect 
that he couldn't be sure the computers in Tesla cars were not sentient to some 
degree.

So... presentation & emphasis depend on the audience.  But I suggest putting
more initial emphasis on the differences between any AI system (at least for 
the foreseeable future) and the human brain.  The big one of course is 
consciousness and how it arises, a problem for which there's not yet any 
plausible theory as far as I'm aware.  But there's no getting away from the 
fact we're conscious beings, with a complex range of social and intellectual 
responses and a genetic memory of our relations with other humans.

An AI system, by contrast, is a fast, multi-factor correlation engine: if the sensors
show this pattern, do that.  (I think a Linker pointed out some time ago that 
the CAPTCHA challenge commonly employed to demonstrate a person is not a robot 
is used to train the AI computer in self-driving vehicles.)
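To make that "pattern in, action out" point concrete, here's a minimal sketch of a correlation engine in Python.  Everything in it (the sensor tuples, the actions, the nearest-pattern matching rule) is invented for illustration; a real system learns millions of weighted correlations rather than a handful of stored patterns, but the principle is the same: no understanding, just matching.

```python
# Illustrative sketch only: "AI" as a pattern-to-action correlation engine.
# All patterns and actions here are hypothetical examples, not any real
# vehicle's control logic.

def closest_action(sensor_reading, trained_patterns):
    """Return the action whose stored pattern best matches the reading."""
    def distance(a, b):
        # Squared Euclidean distance between two sensor tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best_pattern = min(trained_patterns, key=lambda p: distance(sensor_reading, p))
    return trained_patterns[best_pattern]

# Hypothetical "training" data: sensor patterns mapped to actions.
trained = {
    (1.0, 0.0): "steer left",
    (0.0, 1.0): "steer right",
    (0.5, 0.5): "continue straight",
}

# A new reading is matched to the nearest stored pattern.
print(closest_action((0.9, 0.1), trained))  # -> "steer left"
```

The engine never "knows" what steering is; it only reports which stored pattern a reading most resembles, which is precisely the gap between correlation and comprehension the paragraph above describes.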

In summary, I suggest a general "re-conception" of AI has to begin by 
clarifying why it is in no way like the human brain.  "Re-conception" is a 
strong word, so it needs a strong response IMO.  I think we should drop the term 
"AI" altogether if possible; "autonomous system" is better but probably not 
much clearer to the non-specialist.  Maybe some Linker could suggest a 
colourful phrase which would catch on in the public mind...!

_David Lochrin._


On 31/1/23 19:04, Roger Clarke wrote:
> Thoughts on the criteria to use in deciding which level of device autonomy is 
> applicable to any particular situation are in:
>
> s.2.2 Drone Control (a) Autonomous Control [Drones specifically, 2014]
> http://www.rogerclarke.com/SOS/Drones-E.html#DCA
>
> s.4.1 Artefact Autonomy [AI generally, 2019]
> http://www.rogerclarke.com/EC/AII.html#TAA
> in particular the para. just after the Table
>
> s.5.1 Complementary Artefact Intelligence
> http://www.rogerclarke.com/EC/AII.html#CI
>
> 11. A Re-Conception of the Field [of AI and Robotics, 2023]
> http://www.rogerclarke.com/EC/AITS.html#RAI
> proposing the primary focus be on decision support, incl.:
> -   Complementary Artefact Intelligence
> -   Augmented Intelligence
> -   Complementary Artefact Capability
> -   Augmented Capability
>
> Critical consideration of the above much appreciated!
>
_______________________________________________
Link mailing list
[email protected]
https://mailman.anu.edu.au/mailman/listinfo/link