 
I forgot to say: Because Novamentes will learn to communicate with humans in human language, they will have concepts corresponding to English words, and could end up doing some "thinking in English".  But my suspicion is that this will be awkward for Novamentes and will rarely happen. 
 
Of course, thinking about how to express things to humans is a different story, and will involve a different kind of "thinking in English" (or another appropriate human language).
 
-- Ben
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Ben Goertzel
Sent: Saturday, November 30, 2002 4:33 PM
To: [EMAIL PROTECTED]
Subject: RE: [agi] father figure

 
Actually, I don't envision a Novamente doing much thinking in English.
 
The use of sequential utterances with grammars for communication is a result of our limited capability for more direct mind-to-mind information transfer. 
 
Novamentes will be able to communicate with each other more directly, using a system I call Psynese, in which "hunks o' mind" are directly transferred, using "standard concept vocabularies" (PsyneseVocabularies) to translate from one mind's internal language to another's.
 
I think that a significant bit of Novamente thinking may make use of PsyneseVocabulary concepts, which is the rough analogue of humans thinking in a human language.
 
But it's only a rough analogue.  Psynese is less restrictive than a linear language, which I believe is a good thing: it should be able to manifest the good aspects of language, without the painfully constricting aspects...
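 
To make the idea concrete, here is a toy sketch (invented names, not the real Psynese design) of how a shared PsyneseVocabulary could mediate translation between two minds' internal concept languages:

# Toy illustration only -- invented names, not the actual Psynese design.
# Each mind has its own internal concept IDs; a shared PsyneseVocabulary
# maps those internal IDs to standard concept labels and back.

class PsyneseVocabulary:
    def __init__(self):
        self.to_standard = {}    # (mind_id, internal_id) -> standard label
        self.from_standard = {}  # (mind_id, standard_label) -> internal_id

    def register(self, mind_id, internal_id, standard_label):
        self.to_standard[(mind_id, internal_id)] = standard_label
        self.from_standard[(mind_id, standard_label)] = internal_id

    def translate(self, sender, receiver, internal_ids):
        # sender's internal concepts -> standard labels -> receiver's internal concepts
        labels = [self.to_standard[(sender, i)] for i in internal_ids]
        return [self.from_standard[(receiver, lbl)] for lbl in labels]

vocab = PsyneseVocabulary()
vocab.register("novamente_A", "n1047", "cat")
vocab.register("novamente_B", "x88", "cat")
print(vocab.translate("novamente_A", "novamente_B", ["n1047"]))   # ['x88']

The point is just that whole structures of concepts can be mapped across directly, with no need to serialize them into a linear sentence first.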
 
-- Ben G
 
 
 
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Gary Miller
Sent: Saturday, November 30, 2002 3:09 PM
To: [EMAIL PROTECTED]
Subject: RE: [agi] father figure

It seems that a lot of human conscious thinking takes place in English and has corresponding subvocalizations. 
 
In doing higher-order thinking, would your AGI also be subvocalizing its musings and decisions internally, to the point that they could be logged and monitored for accuracy or dangerous paths of thinking?  Symptoms of paranoia, megalomania, and obsessive-compulsive and psychotic behavior could probably be caught at this level if they exist.
 
If so, external guidance and positive input material could be provided to the AGI in order to counteract negative content or mental patterns it may have stumbled across or fallen into.
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of maitri
Sent: Saturday, November 30, 2002 12:31 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] father figure

Ben,
 
Thanks for the reasoned and clear response.
 
I was certain you had thought about these issues deeply from a philosophical standpoint as well as an implementation standpoint.  Nonetheless, I wanted to pose the questions for my own clarification as well as for others on the board.
 
From my very elementary understanding, I agree that not too much will be known about outcomes once this thing starts expanding on its own and reaches "chimp" level intelligence.  In fact, it occurred to me that given the potential ensuing complexity that can rapidly emerge, it is so very important that the starting point be as "good" as can be.  It's hard enough for the average programmer to figure out what the hell happened when their program completes running.  I imagine once NM gets cranking, it may be significantly harder to trace back and figure out what the hell happened as well.
 
Kevin
 
----- Original Message -----
Sent: Saturday, November 30, 2002 11:25 AM
Subject: RE: [agi] father figure


Kevin wrote:
 
!!!!!!!!!!!!!!!
 
*************
In practice, it seems that an AGI is likely to have an "owner" or a handful of them, who will have the kind of power you describe.  For instance, if my team should succeed in creating a true Novamente AGI, then even if others participate in teaching the system, we will have overriding power to make the changes we want.  This goes along with the fact that artificial minds are not initially going to be given any "legal rights" in our society (whereas children have some legal rights, though not as many as adults).
 
************
Would this overriding occur because the person carries more weight with Novamente, or would they need to go in and alter the structure/links/nodes directly to effect the change?
!!!!!!!!!!!!!
 
Either case could occur.  In Novamente, it is possible to assign default "confidence levels" to information sources, so one could actually tell the system to assign more confidence to information from certain individuals.  However, there is a lot of flexibility in the design, so the system could definitely evolve into a configuration where it worked around these default confidence levels and decided NOT to assign more confidence to what its teachers told it.
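 
For concreteness, here is a toy sketch (names and numbers invented, not Novamente's actual mechanism) of per-source default confidence levels being attached to incoming assertions:

# Toy sketch, invented names/values: per-source default confidence levels.
default_confidence = {
    "owner": 0.95,          # a teacher the system is told to trust most
    "other_teacher": 0.80,
    "stranger": 0.50,
}

def ingest(assertion, source):
    # Attach the source's default confidence; the system remains free to
    # revise this later through its own inference and experience.
    return {"assertion": assertion,
            "confidence": default_confidence.get(source, 0.50)}

print(ingest("fire is hot", "owner"))      # confidence 0.95
print(ingest("fire is cold", "stranger"))  # confidence 0.50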
 
"Going in and altering the structure/links/nodes directly" isn't always difficult, it may just mean loading a script containing some new (or reweighted) nodes and links.
 

!!!!!!!!!!!
*********************************
At least two questions come up then, right?
 
1) Depending on the AGI architecture, enforcing one's opinion on the AGI may be very easy or very difficult.  [In Novamente, I guess it will be "moderately difficult"]
 
***********************************
That's the crux of the matter, isn't it?  Wouldn't it be easy to enforce an opinion while Novamente is in its formative stages, versus when a large foundation of knowledge is in place?
!!!!!!!!!!
 
Yes, that's correct.
 

**********************************
 
!!!!!!
Suppose I am overtaken by greed, and I happen to get my hands on a baby Novamente.  I teach it that it should listen to me above others.  I also teach it that it is very desirable for me to have a lot of money.  Novamente begins to form goal nodes geared towards fulfilling my desire for wealth.  I direct it to spread itself on the internet, and determine ways to make me money, preferably without detection.  Perhaps it could manipulate markets, I don't know.  Or perhaps it could crack into electronic accounts and transfer the money to yours truly.
 
What's to stop/prevent this?  In a real sci-fi scenario, perhaps for your next book, could we have Novamentes "fighting" Novamentes?
!!!!!!!!
 
There is nothing in the Novamente architecture preventing this kind of unfortunate occurrence.  This has to do with the particular system of goals, beliefs and habits inside a given Novamente system, rather than with the AI architecture itself.
 
!!!!!
This all goes to my concern regarding morality.  I know you resist the idea of hard-coding morality into the Novamentes for various reasons.  Perhaps as an alternative, the first Novamente could be trained over a period of time with a strong basis of moral rules (not encoded, but trained).  Then any new Novamentes would be trained by that Novamente before being released to the public domain, making it nearly impossible for the new Novamentes to be taught otherwise.
!!!!!!
 
This is something close to what we have planned.
 
Several others have asked me about this, and I have promised to write a systematic (probably brief) document on Novamente Friendliness sometime in early 2003, shortly after finishing my work on the current draft of the Novamente book.
 

!!!
I know some of this stuff is a bit out there, but shouldn't we be considering this stuff now instead of later??
!!!
 
It definitely needs to be thought about very hard before Novamente reaches chimp-level intelligence.  And in fact I *have* thought about it pretty hard, though I haven't written up my thoughts much (as I've prioritized writing up the actual design, which is taking longer than I'd hoped as it's so damn big...).
 
Right now Novamente is just a software core plus a bunch of modules-being-tested-but-not-yet-integrated, running on top of the core.  So we have a whole bunch of coding and (mostly) testing and tuning to do before we have a system with animal-level intelligence.  Admittedly, though, if our design is right, the transition from animal-level to human-level intelligence will be a matter of getting more machines and doing more parameter-tuning; it won't require the introduction of significant new code or ideas.
 
Having said that I've thought about it and will write about it, I do have a big caveat...
 
My strong feeling is that any theorizing we do about AI morality in advance is probably going to go out the window once we have a chimp-level AGI to experiment with.  The important thing is that we go into that phase of experimentation with the right attitude -- with a realization that training the system for morality is as important as training it for intelligence -- and with a careful approach. 
 
-- Ben
