On 3/10/22 18:44, Roger Clarke wrote:
Discussion here:
http://www.rogerclarke.com/EC/AII.html#TAA (2019)

Just as a comment, Roger's "discussion" states:
Of particular concern are assertions that empirical correlation unguided by theory is enough, and that rational explanation is a luxury that the world needs to learn to live without.  These cavalier claims are published not only by excitable journalists but also by influential academics (Anderson 2008, LaValle et al. 2011, Mayer-Schoenberger & Cukier 2013).
This position has been advocated previously on Link in relation to AI and self-driving cars.  And while lane-keeping and automatic-braking technology isn't usually blessed with the holy oil of "AI", once enabled these systems are clearly, to use Roger's analysis, Level-7 Decision Systems in the heat of an emergency.

The "cavalier claims" above reflect an old argument in the philosophy of brain, mind and free will which holds that a person's actions can only be considered in terms of their external, observable effects, and never in terms of the actor's "intentionality" or "state of mind", because these are hypothetical, unobservable, internal states.  Thus "anger" is interpreted as a propensity to act in a certain way, not as a state of mind.

One of the main proponents of this view, known as philosophical behaviourism, was the British philosopher Gilbert Ryle.

The argument that AI-controlled self-driving cars are OK if they result in some nett reduction of "harm" (take your pick as to how we might assess that!) seems to me quite untenable.  How such an interpretation might fit into human legal frameworks is surely a mystery, given that the notion of personal responsibility goes out the window, and I'd argue the whole behaviourist argument is flawed anyway.

We could discuss this much further but I'll stop now.

David Lochrin
----------------------

_______________________________________________
Link mailing list
[email protected]
https://mailman.anu.edu.au/mailman/listinfo/link