> I don't think you can escape an AGI system having some emotional content.
>
> My assessment is that emotions register the equilibrium of the system.
> Affect indicates whether the system has been recently successful overall
> or struggling overall with goal attainment. There are other dimensions as
> well. For me, Mood is the average of affect over time. Affect and Mood are
> important in memory formation and behavior selection for systems that I
> develop.

You're correct. The bare minimum emotional content of an AGI is the ability
to distinguish good choices from bad ones. Pleasure/pain. Reward/punishment.
Goal/non-goal. This gives the system direction. Even if you choose to limit
all other forms of emotional simulation, this positive/negative duality must
be represented in the system in some way, or the system will fail to behave
intelligently, even if it conceptually understands everything better than us,
because it won't have any criteria by which to choose one option over another.

Knowing something can be labeled "good" isn't enough by itself, either. The
system has to have some sort of *behavioral preference* for good vs. bad,
and behavioral preference is intrinsically emotional in nature.
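To make that concrete, here is a bare-bones sketch in Python. Every name in
it is invented for illustration; the only point is that without some signed
valence function there is nothing for action selection to prefer:

def valence(outcome):
    # Scalar "good vs. bad" signal: positive for reward, negative for cost.
    # Without some function like this, every option looks the same.
    return outcome.get("reward", 0.0) - outcome.get("cost", 0.0)

def choose(actions, predict):
    # Behavioral preference: take the action whose predicted outcome has
    # the highest valence instead of picking arbitrarily.
    return max(actions, key=lambda a: valence(predict(a)))

# Toy usage with a hand-written outcome model (purely illustrative).
actions = ["recharge", "wander"]
model = {"recharge": {"reward": 1.0, "cost": 0.2},
         "wander":   {"reward": 0.1, "cost": 0.3}}
print(choose(actions, lambda a: model[a]))   # -> recharge

Strip the valence function out and choose() degenerates to an arbitrary
pick, no matter how good the world model behind predict() is.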
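And for the affect/mood part, here is one way to read what you describe, as
a sketch and surely not your actual implementation: affect tracks recent
goal attainment, mood is a running (here exponentially weighted) average of
affect, and the magnitude of affect is available as a salience signal for
memory indexing, in the spirit of DAYDREAMER's goal valences:

class AffectState:
    # My reconstruction of the scheme, not anyone's real code: affect =
    # recent success vs. struggle at goal attainment, mood = long-run
    # average of affect.  Both are exponentially weighted moving averages.
    def __init__(self, affect_rate=0.3, mood_rate=0.05):
        self.affect = 0.0
        self.mood = 0.0
        self.affect_rate = affect_rate
        self.mood_rate = mood_rate

    def update(self, goal_succeeded):
        signal = 1.0 if goal_succeeded else -1.0   # +1 attained, -1 failed
        self.affect += self.affect_rate * (signal - self.affect)
        self.mood += self.mood_rate * (self.affect - self.mood)

    def memory_salience(self):
        # Strongly valenced moments get indexed more heavily, so they are
        # easier to retrieve later.
        return abs(self.affect)

state = AffectState()
for succeeded in [True, True, False, True]:
    state.update(succeeded)
print(state.affect, state.mood, state.memory_salience())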
On Tue, Jan 29, 2013 at 1:26 PM, Piaget Modeler <[email protected]> wrote:

> Let's assume Sentience is a continuum, with lesser and greater degrees.
>
> Big Dog is probably aware of its orientation in space, its goals, its
> available operators, and its sensor readings. I don't know whether the
> designers have given it reflective capabilities; I'd have to see the
> architecture diagrams and inspect the code to know this.
>
> Based on the above assumptions, I would say that Big Dog probably has a
> very low level of sentience, if any at all. It may not have self-awareness
> (consciousness) at this time. That does not, however, preclude it from
> gaining more sentience and consciousness at some point in the future with
> a software upgrade.
>
> I would know if Big Dog were sentient if I could converse with it, and if
> it indicated that it was concerned for its own survival and it could
> relate to me what threats existed for its survival.
>
> I think emotion, not necessarily human emotion, but emotion, is a
> computation of the general state and well-being of a system. I think that
> emotions, as studied by many computer scientists, aid in the indexing and
> retrieval of behaviors appropriate to various situations. People like Eric
> T. Mueller studied and modeled emotions as part of his DAYDREAMER system
> and attached emotional valences to goals so that they could be better
> indexed for later retrieval.
>
> I don't think you can escape an AGI system having some emotional content.
>
> My assessment is that emotions register the equilibrium of the system.
> Affect indicates whether the system has been recently successful overall
> or struggling overall with goal attainment. There are other dimensions as
> well. For me, Mood is the average of affect over time. Affect and Mood are
> important in memory formation and behavior selection for systems that I
> develop.
>
> ~PM.
>
> > Date: Tue, 29 Jan 2013 12:03:09 -0500
> > Subject: Re: [agi] Robots and Slavery
> > From: [email protected]
> > To: [email protected]
> >
> > On Tue, Jan 29, 2013 at 3:48 AM, Piaget Modeler
> > <[email protected]> wrote:
> > > The other question is what happens when some warbots (like Big Dog,
> > > or some of the ones with armaments), and the aerial or undersea
> > > drones become sufficiently "sentient" (or intelligent)? Perhaps via
> > > (inadvertent or clandestine) software upgrades. That's the real
> > > "Oh sh*t" moment.
> > >
> > > What then?
> >
> > You still have not answered my question. How would you know if Big Dog
> > was sentient?
> >
> > Do you think that the only way we can solve hard problems like
> > language, vision, robotics, and predicting human behavior is for the
> > algorithm to also be constrained by a model of human emotions? In
> > fact, none of the partial solutions that we have today to these
> > problems need any such constraints.
> >
> > What is so hard about *not* programming a robot to have human
> > emotions? It seems like a much easier problem to me if you don't
> > program it to not want to do what you tell it.
> >
> > -- Matt Mahoney, [email protected]
