>However, part of the key to intelligence is **self-tuning**.

>I believe that if an AGI system is built the right way, it can effectively
>tune its own parameters, hence adaptively managing its own complexity.

I agree with Ben here. Isn't one of the core concepts of AGI the ability to 
modify its own behavior and to learn?

This will have to be done with a large amount of self-tuning, as we will not be 
changing parameters by hand for every action; that wouldn't be efficient.  (This 
part does not require actual self-modification of code just yet.)

It's more a matter of finding a way to guide the AGI in changing its 
parameters, checking the changes, and reflecting back over them to see whether 
they are effective for future events.

What is needed at some point is the ability to converse with the AGI at a high 
level and correct its behaviour, saying something like "Don't touch that, 
because it will have a bad effect," and having the AGI do all of the parameter 
changing and the link building, strengthening, and weakening necessary in its 
memory.  It may do this in a very complex way that affects many parts of its 
system, but through repeated reinforcement we should be able to guide the 
overall behaviour, if not all of the parameters directly.
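
To make that loop concrete, here is a minimal sketch in Python (all names and 
the feedback mechanism are hypothetical, just to illustrate the 
change/check/reflect cycle, not a description of any actual AGI codebase):

import random

def self_tune(params, feedback, steps=100, scale=0.1):
    """Hill-climb over a dict of numeric parameters using only a scalar
    feedback signal (e.g. a human's approval, scored as a number)."""
    best = feedback(params)
    for _ in range(steps):
        key = random.choice(list(params))             # pick one parameter
        old = params[key]
        params[key] = old + random.gauss(0.0, scale)  # tentatively change it
        score = feedback(params)                      # check the change
        if score > best:                              # reflect: keep if effective
            best = score
        else:
            params[key] = old                         # otherwise revert
    return params, best

The point is just that the human supplies a coarse reinforcement signal, and 
the system itself decides which of its many parameters to adjust.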

James Ratcliff


Benjamin Goertzel <[EMAIL PROTECTED]> wrote:

> Conclusion:  there is a danger that the complexity that even Ben agrees
> must be present in AGI systems will have a significant impact on our
> efforts to build them.  But the only response to this danger at the
> moment is the bare statement made by people like Ben that "I do not
> think that the danger is significant".  No reason given, no explicit
> attack on any component of the argument I have given, only a statement
> of intuition, even though I have argued that intuition cannot in
> principle be a trustworthy guide here.

But Richard, your argument ALSO depends on intuitions ...

I'll try, though, to more concisely frame the reason I think your argument
is wrong.

I agree that AGI systems contain a lot of complexity in the dynamical-
systems-theory sense.

And I agree that tuning all the parameters of an AGI system externally
is likely to be intractable, due to this complexity.

However, part of the key to intelligence is **self-tuning**.

I believe that if an AGI system is built the right way, it can effectively
tune its own parameters, hence adaptively managing its own complexity.

Now you may say there's a problem here: If AGI component A2 is to
tune the parameters of AGI component A1, and A1 is complex, then
A2 has got to also be complex ... and who's gonna tune its parameters?

So the answer has got to be this: effectively tuning the parameters of an
AGI component of complexity X requires an AGI component of complexity
a bit less than X.  Then one can build a self-tuning AGI system,
if one does the job right.
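
Just to make the shape of that concrete, here is a toy sketch in Python
(the names and structure are hypothetical illustration, not a claim about
how any real system is coded):

class Component:
    def __init__(self, n_params, tuner=None):
        # a tuner, if present, must be strictly simpler than its target
        assert tuner is None or tuner.n_params < n_params
        self.n_params = n_params   # crude proxy for complexity
        self.tuner = tuner         # the simpler component that tunes this one

def build_chain(sizes):
    # sizes strictly decreasing, e.g. [100, 30, 8, 2]: a component of
    # complexity X is tuned by one of complexity a bit less than X,
    # bottoming out in something simple enough to set by hand.
    component = None
    for n in reversed(sizes):      # build from the simplest component upward
        component = Component(n, tuner=component)
    return component               # the full system, tuning chain attached

The chain terminates because the complexities strictly decrease, so the
regress of "who tunes the tuner" is finite.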

Now, I'm not saying that Novamente (for instance) is explicitly built
according to this architecture: it doesn't literally have a chain of N
components wherein component A_(k+1) tunes the parameters of component A_k.

But in many ways, throughout the architecture, it relies on this sort of
fundamental logic.

Obviously it is not the case that every system of complexity X can
be parameter-tuned by a system of complexity less than X.  The question
however is whether an AGI system can be built of such components.
I suggest the answer is yes -- and furthermore suggest that this is
pretty much the ONLY way to do it...

Your intuition is that this is not possible, but you don't have a proof
of this...

And yes, I realize the above argument of mine is conceptual only -- I haven't
given a formal definition of complexity.  There are many, but that would
lead into a mess of math that I don't have time to deal with right now,
in the context of answering an email...

-- Ben G
