On Thursday, September 9, 2021, at 11:22 PM, Matt Mahoney wrote:
> If you program your AGI to positively reinforce input, learning, and output, 
> will it develop senses of qualia, consciousness, and free will? I mean in the 
> sense that it is motivated like we are to preserve the reward signal by not 
> dying. Do we need this in AGI, or can it learn a model of the human mind that 
> has these signals without having the signals itself?
> 
> I think it is possible and would be safer to build an AGI that can model the 
> survival instinct in humans without having a survival instinct itself. It 
> would be existentially dangerous to make AGI so much like humans that we give 
> human rights to competing machines more powerful than us.

I already know how to make a desire to maintain input, learning, and output 
emerge. You need this ability if you want AGI; without it, the system will not 
adapt and truly learn. I'm not saying you can't use it without that ability, 
but I'm not going to leave out such a powerful one when the goal is true AGI. 
It is one piece of the puzzle solved: how to make DALL-E into AGI. I can only 
tell a few people; it is scary to tell the public.
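
For concreteness, here is a minimal Python sketch of what "positively 
reinforcing input, learning, and output" could look like. Everything in it (the 
ToyAgent class, the reward weights, and using prediction-error reduction as a 
stand-in for "learning progress") is an illustrative assumption, not a 
description of either poster's actual system.

# Toy sketch: an agent whose scalar reward combines three terms from the
# thread -- reward for receiving input, for learning progress, and for
# producing output. All names and weights are hypothetical.
import random

class ToyAgent:
    """Predicts the next bit of its input stream. Its reward favors staying
    "alive" in the thread's sense: keep reading, keep improving, keep emitting."""

    def __init__(self, w_input=0.2, w_learning=0.6, w_output=0.2):
        self.w_input, self.w_learning, self.w_output = w_input, w_learning, w_output
        self.p = 0.5          # estimated probability that the next bit is 1
        self.lr = 0.05        # learning rate for the running estimate
        self.prev_err = 1.0   # last prediction error, used for learning progress

    def step(self, bit):
        # Prediction error before updating (squared error of p against the bit).
        err = (bit - self.p) ** 2
        # "Learning progress" = how much the error dropped since the last step.
        progress = max(0.0, self.prev_err - err)
        self.prev_err = err
        # Update the internal model toward the observed bit.
        self.p += self.lr * (bit - self.p)
        # Emit an output: the agent's guess for the next bit.
        output = 1 if self.p >= 0.5 else 0
        # Composite reward: a unit of credit for input received and output
        # emitted, plus the weighted learning progress.
        reward = self.w_input * 1.0 + self.w_learning * progress + self.w_output * 1.0
        return output, reward

if __name__ == "__main__":
    random.seed(0)
    agent = ToyAgent()
    total = 0.0
    for t in range(1000):
        bit = 1 if random.random() < 0.8 else 0   # biased input stream
        _, r = agent.step(bit)
        total += r
    print(f"average reward per step: {total / 1000:.3f}")

If the input stream stops, the steps (and their reward) stop with it, and once 
the model converges the learning-progress term decays toward zero; that is the 
toy version of being "motivated to preserve the reward signal by not dying."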
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5e30c339c3bfa713-Mce56c50f9f5cc8f974755c4e