Philip Sutton wrote:
> If in the Novamente configuration the dedicated Ethics Unit is focussed
> on GoalNode refinement, it might be worth using another term to
> describe the whole ethical architecture/machinery, which would involve
> aspects of most/all (??) Units plus perhaps even the Mind Operating
> System (??).
>
> Maybe we need to think about an 'ethics system' that is woven into the
> whole Novamente architecture and processes.
My perspective is that, by creating ethics-guided GoalNodes, the Ethics
Unit will *implicitly* create an ethics system that is woven into the
whole Novamente system. But this ethics system will be a kind of emergent
mind-structure, not an engineered structure.

> I wonder if the top of the ethics hierarchy is the commitment of the AGI
> to act 'ethically' - i.e. to have a commitment to modifying its own
> behaviour to benefit non-self (including life, people, other AGIs,
> community, etc.)
>
> This means that an AGI has to be able to perceive self and non-self,
> and to be able to subdivide non-self into elements or layers or whatever
> that deserve focussed empathetic or compassionate consideration.
> Maybe the perceptual and pattern recognition architecture can be
> enhanced to make it easy for a non-trained AGI to grapple with these
> issues.

These properties and enhancements (to the extent that they're necessary)
should follow automatically from the adoption of ethical goals. The
system will tweak (and ultimately profoundly modify) its various other
components in order to enhance its goal-achievement.

> Maybe the experience of biological life, especially highly intelligent
> biological life, is useful here. Young animals, including humans, seem
> to depend on hard-wired instinct to see them through in relation to
> certain key issues before they have experienced enough to rely heavily
> or largely on learned and rational processes.
>
> Another key issue for the ethics system, but this time for more mature
> AGIs, is how the basic system architecture guides or restricts or
> facilitates the AGI's self-modification process. Maybe AGIs need to be
> designed to be social in that they have a really strong desire to:
>
> (a) talk to other advanced sentient beings to kick around ideas for self
> modification before they commit themselves to fundamental change.
> This does not preclude changes that are not approved of by the
> collective, but it might at least make an AGI give any changes careful
> consideration. If this is a good direction to go in, it suggests that
> having more than one AGI around is a good thing.

My feeling is that this is a very high-level thing, which is far more
appropriately TAUGHT through experiential interaction than hard-wired
into the system...

> (b) to spend quite a bit of time/mental effort in contemplation before
> committing to fundamental self-modification.

This should be learned through experience with less fundamental
self-modifications. The system should learn quickly that making hasty
self-modifications fucks it up, and adapt its parameters accordingly.

> (c) maybe AGIs need to have reached a certain age or level of maturity
> before their machinery for fundamental self-modification is turned
> on... and maybe it gets turned on for different aspects of themselves
> at different times in their process of maturation.

Yeah, that's clear. The system will have to gain a lot of experience
modifying small bits of itself (e.g. improving its inference rules, its
activation-spreading functions, etc.) before we allow it to modify large
bits of itself.

-- Ben G

-------
To unsubscribe, change your address, or temporarily deactivate your
subscription, please go to
http://v2.listbox.com/member/?[EMAIL PROTECTED]
