On 05/01/2024 13:00, Roger Clarke wrote:

> If, and only if, we re-conceive and re-orient can we get the threats back 
> under control, and reap benefits.
>
> What's needed is not 'AI', but 'Complementary Artefact Intelligence'.
>
> And that requires the use of decision-support thinking, envisioning Artefact 
> Intelligence as being *designed to* integrate with Human Intelligence, to 
> produce Augmented Intelligence.
>
> And, while we're at it, we need to build an explicit linkage with robotics - 
> or better still with 'co-botics' - and talk about complementary artefact 
> capability combining with human capability to deliver augmented capability: 
> http://www.rogerclarke.com/EC/AITS.html#F2

Without being in the least disparaging of that argument as a theoretical 
framework, I strongly suspect it would have no traction whatever with the usual 
forces of feral capitalism, the politics of self-interest and the conservative 
Right, even if they were prepared to think constructively about AI in the first 
place.

The major part of the solution is probably legal.  Could the Commonwealth 
legislate under its Human Rights powers to make citizens ultimately 
responsible for all decisions made, advice given, and control *actively* 
exercised over another citizen by non-human agents acting in their name, 
under their control, or owned by them?  Suitable penalties, including 
custodial sentences, would apply as though the responsible citizen had made 
the decision personally.

I think this isn't too far from the situation now.  One way or another we have 
to build the solution on flesh-and-blood human society living on this planet 
with other sentient beings.

David Lochrin
_______________________________________________
Link mailing list
[email protected]
https://mailman.anu.edu.au/mailman/listinfo/link