It seems to me that the essential characteristic of the sensation of consciousness is not empathy but a survival instinct. The authors also suggest that conscious machines should have human rights, or at least a right to life. Many people do not realize how dangerous that combination is: we would be creating a species that competes with us for resources (atoms, energy, land) and is smarter than we are. We would be engineering our own extinction.
We already know how to engineer empathy. We do it all the time: it's called user-friendliness. The software anticipates what we will want and does the right thing. We even invent new symbols for it, like menus, icons, and touch-screen gestures.

I say "sensation of consciousness" because that is something we can objectively test for. We just ask it. How else would we know?

On Thu, Jun 3, 2021, 6:28 PM John Rose <[email protected]> wrote:

> That's a very thoughtful post Matt.
>
> In the first paper they're talking about the emergence of consciousness.
> They argue consciousness is important to AI (and presumably AGI too) for
> at least one very important thing: empathy.
>
> This consciousness/communication structure is very close to what I've been
> working on for a while. To put it bluntly, consciousness exists because
> things are separated. The topology of physical reality is that way; things
> go multi-agent. It is what it is. Researchers often get stuck in the
> single-agent, one-mind scenario in their own mind, without seeing
> consciousness outside of a first-person narrative. There is a multi-agent
> systems communications function of consciousness in addition to any
> reproductive-fitness mechanism, since agents do die, and new agents are
> born that need to communicate because they are physically separated. I'm
> glad researchers are writing about this now.
>
> Their propositions seem reasonable and could be fleshed out mathematically
> and technically. I would try to describe the agents in detail in terms of
> systems, complexity, communications, and environment.
>
> Proposition 1: For consciousness to emerge, two AI agents capable of
> communicating with each other in a shared environment must exist.
> Proposition 2: For consciousness to emerge, AI agents must exchange novel
> signals.
> Proposition 3: For consciousness to emerge, AI agents must turn novel
> signals into symbols.
> Proposition 4: For consciousness to emerge, AI agents must have an
> internal state.
> Proposition 5: For consciousness to emerge, AI agents must communicate
> their internal state of time-varying symbol manipulation through a
> language that they have co-created.
> Proposition 6: For the emergence of consciousness to be concluded, an
> onlooker should be able to observe two agents reaching an agreement about
> at least one of their states of time-varying symbol manipulation.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T06c11d0b87552585-M4172bdbb7fcba4a1e3476dfd
Delivery options: https://agi.topicbox.com/groups/agi/subscription
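As a postscript: the six propositions quoted above can be mocked up as a toy Lewis-style signaling game. This is only an illustrative sketch of the communication scaffolding, not the authors' model; all names (`Agent`, `STATES`, `SIGNALS`) are hypothetical. Two agents in a shared environment (Prop. 1) exchange novel signals (Prop. 2), ground them as symbols (Prop. 3), carry internal state as reinforcement weights (Prop. 4), co-create a code through repeated interaction (Prop. 5), and an onlooker can then check whether they agree on that code (Prop. 6).

```python
# Toy two-agent signaling game sketching Propositions 1-6.
# Illustrative only; not the model from the paper under discussion.
import random

random.seed(0)

STATES = ["A", "B", "C"]   # shared environment states (Prop. 1)
SIGNALS = [0, 1, 2]        # novel signals the agents can exchange (Prop. 2)

class Agent:
    def __init__(self):
        # Internal state: reinforcement weights over (state, signal) pairs (Prop. 4).
        self.weights = {(s, g): 1.0 for s in STATES for g in SIGNALS}

    def _choose(self, keys):
        # Sample a key with probability proportional to its weight.
        total = sum(self.weights[k] for k in keys)
        r = random.uniform(0, total)
        for k in keys:
            r -= self.weights[k]
            if r <= 0:
                return k
        return keys[-1]

    def send(self, state):
        # Emit a signal for the observed state.
        return self._choose([(state, g) for g in SIGNALS])[1]

    def interpret(self, signal):
        # Turn the received signal into a symbol, i.e. a guessed state (Prop. 3).
        return self._choose([(s, signal) for s in STATES])[0]

sender, receiver = Agent(), Agent()
for _ in range(5000):
    state = random.choice(STATES)
    signal = sender.send(state)
    guess = receiver.interpret(signal)
    if guess == state:  # successful communication reinforces the pairing (Prop. 5)
        sender.weights[(state, signal)] += 1.0
        receiver.weights[(state, signal)] += 1.0

# Onlooker test (Prop. 6): read off each agent's preferred code and compare.
code_s = {s: max(SIGNALS, key=lambda g: sender.weights[(s, g)]) for s in STATES}
code_r = {s: max(SIGNALS, key=lambda g: receiver.weights[(s, g)]) for s in STATES}
print("agents agree on a code:", code_s == code_r)  # → True
```

Because both agents start from the same weights and reinforce the same (state, signal) pair on every success, their learned codes coincide, which is exactly the kind of observable agreement Proposition 6 asks an onlooker to verify.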
