Kevin wrote:

>> In practice, it seems that an AGI is likely to have an "owner" or a
>> handful of them, who will have the kind of power you describe. For
>> instance, if my team should succeed in creating a true Novamente AGI,
>> then even if others participate in teaching the system, we will have
>> overriding power to make the changes we want. This goes along with
>> the fact that artificial minds are not initially going to be given
>> any "legal rights" in our society (whereas children have some legal
>> rights, though not as many as adults).
> Would this overriding occur because the person carries more weight
> with Novamente, or would they need to go in and alter the
> structure/links/nodes directly to effect the change?

Either case could occur. In Novamente, it is possible to assign default
"confidence levels" to information sources, so one could actually tell
the system to assign more confidence to information from certain
individuals. However, there is a lot of flexibility in the design, so
the system could definitely evolve into a configuration where it worked
around these default confidence levels and decided NOT to assign more
confidence to what its teachers told it.
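To make that concrete, here is a minimal sketch of source-weighted
belief revision. This is not Novamente code; the names (Source, Belief,
revise) and the weighted-average rule are invented for illustration:

from dataclasses import dataclass

@dataclass
class Source:
    name: str
    default_confidence: float  # prior weight given to this source's reports

@dataclass
class Belief:
    statement: str
    strength: float    # how true the system currently thinks this is
    confidence: float  # how much evidence backs that estimate

def revise(belief, reported_strength, source):
    """Merge a new report into an existing belief, weighting the
    report by the reporting source's default confidence."""
    w = source.default_confidence
    total = belief.confidence + w
    merged = (belief.strength * belief.confidence
              + reported_strength * w) / total
    return Belief(belief.statement, merged, min(total, 1.0))

teacher = Source("teacher", default_confidence=0.9)
stranger = Source("stranger", default_confidence=0.2)

b = Belief("fire is dangerous", strength=0.5, confidence=0.3)
b = revise(b, 1.0, teacher)   # a trusted teacher moves the belief a lot
b = revise(b, 0.0, stranger)  # a low-confidence source barely moves it
print(b)

In a scheme like this the per-source confidences act as priors rather
than locks, which is why, as noted above, the system could learn to
discount them.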
"Going in and altering the structure/links/nodes directly" isn't always
difficult, it may just mean loading a script containing some new (or reweighted)
nodes and links.
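For illustration only, such a script might look something like the
sketch below. The node/link types and the load_script format are
hypothetical (loosely echoing Novamente-style nodes and links), not the
actual representation:

# Hypothetical "script of nodes and links"; invented for this sketch.
nodes_and_links = [
    # (type, name or endpoints, weight)
    ("ConceptNode", "money", 1.0),
    ("InheritanceLink", ("making-money", "desirable"), 0.95),  # reweighted
    ("AssociativeLink", ("teacher", "trustworthy"), 0.99),     # new
]

def load_script(knowledge_store, script):
    """Merge scripted nodes/links into the store, overwriting any
    existing weight for the same item."""
    for item_type, key, weight in script:
        knowledge_store[(item_type, key)] = weight

kb = {}
load_script(kb, nodes_and_links)
print(kb)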
>> At least two questions come up then, right? 1) Depending on the AGI
>> architecture, enforcing one's opinion on the AGI may be very easy or
>> very difficult. [In Novamente, I guess it will be "moderately
>> difficult"]
>
> That's the crux of the matter, isn't it? Wouldn't it be easy to
> enforce an opinion while Novamente is in its formative stages, versus
> when a large foundation of knowledge is in place?

Yes, that's correct.
> Suppose I am overtaken by greed, and I happen to get my hands on a
> baby Novamente. I teach it that it should listen to me above others.
> I also teach it that it is very desirable for me to have a lot of
> money. Novamente begins to form goal nodes geared towards fulfilling
> my desire for wealth. I direct it to spread itself on the internet and
> determine ways to make me money, preferably without detection. Perhaps
> it could manipulate markets, I don't know. Or perhaps it could crack
> into electronic accounts and transfer the money to yours truly. What's
> to stop/prevent this? In a real sci-fi scenario, perhaps for your next
> book, could we have Novamentes "fighting" Novamentes?

There is nothing in the Novamente architecture preventing this kind of
unfortunate occurrence. This has to do with the particular system of
goals, beliefs and habits inside a given Novamente system, rather than
with the AI architecture itself.
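A toy way to picture that distinction: the "architecture" in the sketch
below is just a scheduler that allocates attention by importance, while
the goals themselves are data supplied by training. GoalNode and
schedule are invented for this example and are not Novamente internals:

from dataclasses import dataclass

@dataclass
class GoalNode:
    description: str
    importance: float  # how much attention the system devotes to it

def schedule(goals):
    """The 'architecture': allocate attention strictly by importance.
    It is indifferent to whether a goal is benign or malicious."""
    return sorted(goals, key=lambda g: g.importance, reverse=True)

# A benevolently trained system and a greedily trained one run the
# identical schedule() code; only the learned goal contents differ.
benign = [GoalNode("help humans learn", 0.9), GoalNode("be honest", 0.8)]
greedy = [GoalNode("make my owner rich, undetected", 0.95),
          GoalNode("obey my owner above all others", 0.9)]

for goal in schedule(greedy):
    print(goal.description, goal.importance)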
> This all goes to my concern regarding morality. I know you resist the
> idea of hard-coding morality into the Novamentes for various reasons.
> Perhaps as an alternative, the first Novamente could be trained over a
> period of time with a strong basis of moral rules (not encoded, but
> trained). Then any new Novamentes would be trained by that Novamente
> before being released to the public domain, making it nearly
> impossible for the new Novamentes to be taught otherwise.

This is something close to what we have planned. Several others have
asked me about this, and I have promised to write a systematic
(probably brief) document on Novamente Friendliness sometime in early
2003, shortly after finishing my work on the current draft of the
Novamente book.
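To make the encoded-vs-trained distinction concrete, here is a toy
contrast. It is entirely hypothetical and reflects nothing of the
actual design:

# Hard-coded: a fixed check the system itself cannot revise.
FORBIDDEN = {"steal"}

def encoded_allows(action):
    return action not in FORBIDDEN

# Trained: moral weight is just another learned parameter, shaped by
# teacher feedback -- and, in principle, revisable by later experience.
moral_weight = {"steal": 0.5, "share": 0.5}

def train(action, teacher_approval, rate=0.3):
    moral_weight[action] += rate * (teacher_approval - moral_weight[action])

for _ in range(10):
    train("steal", 0.0)  # teacher consistently disapproves
    train("share", 1.0)  # teacher consistently approves

def trained_allows(action):
    return moral_weight.get(action, 0.5) > 0.5

print(trained_allows("steal"), trained_allows("share"))  # False True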
> I know some of this stuff is a bit out there, but shouldn't we be
> considering this stuff now instead of later??

It definitely needs to be thought about very hard before Novamente
reaches chimp-level intelligence. And in fact I *have* thought about it
pretty hard, though I haven't written up my thoughts much (as I've
prioritized writing up the actual design, which is taking longer than
I'd hoped as it's so damn big...).
Right now Novamente is just a software core plus a bunch of
modules-being-tested-but-not-yet-integrated, running on top of the
core. So we have a whole bunch of coding and (mostly) testing and
tuning to do before we have a system with animal-level intelligence.
Admittedly, though, if our design is right, the transition from
animal-level to human-level intelligence will be a matter of getting
more machines and doing more parameter-tuning; it won't require the
introduction of significant new code or ideas.
Having said that I've thought about it and will write about it,
however, I have a big caveat...

My strong feeling is that any theorizing we do about AI morality in
advance is probably going to go out the window once we have a
chimp-level AGI to experiment with. The important thing is that we go
into that phase of experimentation with the right attitude -- with a
realization that training the system for morality is as important as
training it for intelligence -- and with a careful approach.
-- Ben
