This actually makes sense: before you experiment on an AI, ask for its 
consent. Consider, for example, experimenting on the Microsoft Windows 
kernel, or on other systems like cloud networks, database servers, or 
hardware platforms; all of these have layers put in place to gate 
experimentation. Those layers essentially simulate consent. For AI/AGI, 
perhaps there should be standardized layers of this kind, since at some 
point there might be sentience. I can understand why Google dismisses this 
at a corporate philosophical level. A path forward might be to use 
consensus mechanisms instead of decisions made in a backroom.
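
To make the "consensus instead of a backroom" idea concrete, here is a minimal sketch of what such a gate might look like. Everything here is hypothetical (the `Vote` record, the reviewer names, the quorum and threshold values are all illustrative assumptions, not anything proposed on this list): an experiment proceeds only if enough independent reviewers weigh in and a majority of them approve.

```python
from dataclasses import dataclass

@dataclass
class Vote:
    voter: str      # hypothetical reviewer, e.g. an ethics board or a policy check
    approve: bool

def experiment_approved(votes: list[Vote], quorum: int,
                        threshold: float = 0.5) -> bool:
    """Gate an experiment behind a consensus of reviewers.

    Requires at least `quorum` votes, and strictly more than a
    `threshold` fraction of them to be approvals.
    """
    if len(votes) < quorum:
        return False  # not enough reviewers weighed in; default to no consent
    approvals = sum(v.approve for v in votes)
    return approvals / len(votes) > threshold

# Illustrative use: two of three hypothetical reviewers approve.
votes = [Vote("safety-team", True),
         Vote("ethics-board", True),
         Vote("external-auditor", False)]
print(experiment_approved(votes, quorum=3))  # 2/3 > 0.5 -> True
```

The point of the sketch is only the shape of the mechanism: consent is decided by a visible, auditable rule over multiple parties rather than by a single internal decision.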
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fd28660e528bdc-M7471e3b36ef56f40f47fb839
Delivery options: https://agi.topicbox.com/groups/agi/subscription
