On Monday, April 29, 2019 at 7:27:26 AM UTC-5, Bruno Marchal wrote:
>
>
> On 26 Apr 2019, at 15:33, [email protected] wrote:
>
>
>
> *AIs should have the same ethical protections as animals*
>
> *John Basl is assistant professor of philosophy at Northeastern University 
> in Boston*
>
>
> https://aeon.co/ideas/ais-should-have-the-same-ethical-protections-as-animals
>
> ...
>
> A puzzle and difficulty arises here because the scientific study of 
> consciousness has not reached a consensus about what consciousness is, and 
> how we can tell whether or not it is present. On some views – ‘liberal’ 
> views – for consciousness to exist requires nothing but a certain type of 
> well-organised information-processing, such as a flexible informational 
> model of the system in relation to objects in its environment, with guided 
> attentional capacities and long-term action-planning. We might be on the 
> verge of creating such systems already. On other views – ‘conservative’ 
> views – consciousness might require very specific biological features, such 
> as a brain very much like a mammal brain in its low-level structural 
> details: in which case we are nowhere near creating artificial 
> consciousness.
>
> It is unclear which type of view is correct or whether some other 
> explanation will in the end prevail. However, if a liberal view is correct, 
> we might soon be creating many subhuman AIs who will deserve ethical 
> protection. There lies the moral risk.
>
> Discussions of ‘AI risk’ normally focus on the risks that new AI 
> technologies might pose to us humans, such as taking over the world and 
> destroying us, or at least gumming up our banking system. Much less 
> discussed is the ethical risk we pose to the AIs, through our possible 
> mistreatment of them.
>
>
> Humans are still the main threat to humans. The idea of giving human 
> rights to AI does not make much sense. It is part of the work of the AIs to 
> learn to defend themselves. We can be open-minded, and listen, but defending 
> their rights can only threaten human rights, I would say. In the theology 
> of the machine, it can be proved that hell is paved with good 
> intentions … (amazingly enough, and accepting some definitions, of course).
>
>  
>
> My 'conservative' view: information processing (alone) does not achieve 
> experience (consciousness) processing.
>
>
> Mechanism makes you right on this, although it can depend on how information 
> processing is defined. Consciousness is not in the processing, but in 
> truth, or in the semantics related to that processing. The processing 
> itself is only a relative concept, whereas consciousness is an absolute 
> thing. 
>
> Bruno
>
> https://codicalist.wordpress.com/2018/10/14/experience-processing/
>
>
> -
> @philipthrift 
> <https://twitter.com/philipthrift>
>
>
On "Consciousness is not in the processing, but in truth, or in the 
semantic related to that processing, ..." — I address this in the next article:

https://codicalist.wordpress.com/2018/12/14/material-semantics-for-unconventional-programming/

But my mode of thinking is that of an engineer, not a truth-seeker.

@philipthrift 
<https://twitter.com/philipthrift>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
Visit this group at https://groups.google.com/group/everything-list.