On 23 Oct 2006 at 13:26, Ben Goertzel wrote:
> Whereas, my view is that it is precisely the effective combination of 
> probabilistic logic with complex systems science (including the notion of 
> emergence) that will lead to, finally, a coherent and useful theoretical 
> framework for designing and analyzing AGI systems... 

You know my position on 'complex systems science': it has yet to do anything
useful, it's unlikely to ever help in AGI, and it would produce FAI-incompatible
systems even if it could. We don't really care about the global dynamics
of arbitrary distributed systems anyway. What we care about is finding
systems that produce useful behaviour, where 'useful' consists of
a description of what we want plus an explicit or implicit description of
behaviour or outcomes that would be unacceptable. Creating a general
theory of how optimisation pressure is exerted on outcome sets, through
layered systems that implement progressive (mostly specialising)
transforms, covers the same kind of ground but is much more useful
(and hopefully a little easier, though by no means easy).
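The definition of 'useful' above — a positive description of what we want plus a veto on unacceptable outcomes — can be loosely sketched as a filter over candidate behaviours. This is purely illustrative; all the names (`Behaviour`, `wanted`, `unacceptable`) are hypothetical, not anything from Novamente or any actual design:

```python
# Illustrative sketch only: 'useful' = fits the positive description of
# what we want, AND produces no unacceptable outcomes. All names here
# are hypothetical.
from dataclasses import dataclass

@dataclass
class Behaviour:
    outcomes: list  # consequences this behaviour would produce

def useful(behaviour, wanted, unacceptable):
    """True iff the behaviour matches the description of what we want
    and none of its outcomes trip the 'unacceptable' veto."""
    return wanted(behaviour) and not any(
        unacceptable(o) for o in behaviour.outcomes)

# Toy specifications: we want the goal reached, and we veto catastrophe.
wanted = lambda b: "goal_reached" in b.outcomes
unacceptable = lambda o: o == "catastrophe"

ok = useful(Behaviour(["goal_reached"]), wanted, unacceptable)
vetoed = useful(Behaviour(["goal_reached", "catastrophe"]),
                wanted, unacceptable)
```

Here `ok` is true and `vetoed` is false: the second behaviour reaches the goal but is ruled out by the implicit description of unacceptable outcomes, which is the asymmetry the paragraph above is pointing at.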
 
> I am also interested in creating a fundamental theoretical framework for 
> AGI, but am pursuing this on the backburner in parallel with practical work 
> on Novamente (even tho I personally find theoretical work more fun...).

I prefer practical work, but I've accepted that to have a nontrivial chance 
of success theory has to come first, and also that theory about what you
want has to come before theory about how to get it. My single biggest
disagreement with Eliezer is probably that I think it's possible to proceed
with a description of how you will specify what you actually want, rather
than an exact specification of what you want (i.e. that it's possible to
design an AGI that is capable of implementing a range of goal systems,
including the kind of Friendly goal systems that I hope will be invented).
Thus I'm doing AGI design rather than researching Friendliness theory
(though I /would/ be doing that if I were better equipped for it than for
AGI research).

> I find that in working on the theoretical framework it is very helpful
> to proceed in the context of a well-fleshed-out practical design... 

Our positions on experimental work are actually quite close, but still
distinct in some important respects, for example on the likelihood of
being able to extrapolate experimental results on goal system dynamics.
At least these days you accept that any such extrapolation is futile
without a deep and verifiable understanding of the underlying functional
mechanisms. I mostly agree with Eliezer there in saying that if you had
an understanding adequate for extrapolation, the experiments would
(probably) only be useful for additional confirmation; but conversely I do
think experimentation has an important role in developing tractable
algorithms.

Michael Wilson
Director of Research and Development
Bitphase AI Ltd - http://www.bitphase.com


-----
This list is sponsored by AGIRI: http://www.agiri.org/email