I understand that pursuing AGI designs based closely on the human
mind/brain has certain advantages ... but it also has certain obvious
disadvantages, such as intrinsically inefficient usage of the (very
nonbrainlike) compute resources at our disposal ... and the minor
problem that once you're done, all you have is a mere virtual
quasi-human rather than something massively superior ;-)
Anyway, I'd like to reserve the OpenCog list for **specific** discussion
of OpenCog or systems built on top of it, such as OpenCogPrime.
General, conceptual discussions of why you think the whole global
approach of the project is wrong should be restricted to the AGI list
rather than this list.
(By analogy, on a mailing list devoted to the particulars of
evolutionary theory, discussion of evolution vs. creationism would be
out of place; such discussions would be better positioned on a more
general philosophical/theoretical list...)
thx
Ben
On Thu, Jul 31, 2008 at 10:51 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Ben Goertzel wrote:
>
>
>
> Richard,
>
>
>
>     First, as far as I can see there is no explicit attempt to address
>     the complex systems problem (cf
>     http://susaro.com/wp-content/uploads/2008/04/2007_complexsystems_rpwl.pdf).
>     In practice, what that means is that the methodology (not the design
>     per se, but the development methodology) does not make any allowance
>     for the CSP.
>
>
>
> I actually don't think this is true. The CSP is addressed but not by
> that name.
>
> The CSP is addressed implicitly, via the methodology of **interactive
> learning**.
>
> The OCP design is complex and has a lot of free parameters, but the
> methodology of handling parameters is to identify bounds within which
> each parameter must stay to ensure basic system functionality -- and
> then let parameters auto-adapt within those bounds, based on the
> system's experience in the world.
>
> So, the system is intended to be self-tuning, and to effectively explore
> its own parameter-space as it experiences the world.
>
> The design is fixed by human developers in advance, but the design only
> narrows down the region of dynamical-system-space in which the system
> lives. The parameter settings narrow down the region further, and they
> need to be auto-tuned via experience.
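>
> As a very rough sketch of the kind of bounded self-tuning I have in
> mind (Python, with made-up names -- this is an illustration of the
> scheme, not actual OpenCog code):
>
>     class BoundedParameter:
>         """A tunable quantity with designer-fixed hard bounds."""
>         def __init__(self, value, lo, hi, step=0.05):
>             assert lo <= value <= hi
>             self.value, self.lo, self.hi, self.step = value, lo, hi, step
>             self.direction = 1  # which way the last nudge went
>
>         def adapt(self, reward_delta):
>             # If the last nudge helped, keep going the same way;
>             # otherwise reverse. Either way, clamp to the designed
>             # bounds, so basic system functionality is preserved.
>             if reward_delta < 0:
>                 self.direction = -self.direction
>             nudged = self.value + self.direction * self.step
>             self.value = min(self.hi, max(self.lo, nudged))
>
>     # e.g. an attention-decay rate constrained by design to (0, 1):
>     decay = BoundedParameter(value=0.5, lo=0.01, hi=0.99)
>     decay.adapt(reward_delta=+0.2)  # experience says the last change helped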
>
> I understand this is different than your (sketchily) proposed approach
> for addressing what you call the CSP, but still, it is a serious attempt
> to address the problem.
>
> It seems that you think the CSP is more severe than I do, that's all. I
> think we can fix the design and let the parameters auto-adapt. You seem
> to think that the design needs to be arrived at by more of an iterative,
> adaptive process; and I think this overcomplicates things and is
> unnecessary.
>
> Neither of us can prove our perspective correct in any simple way. You
> offer heuristic arguments in favor of your view, I offer heuristic
> arguments in favor of my view. This is early-stage science and to some
> extent we each have to follow our own intuitions.
No, this does not address the complex systems problem, because there is
a very specific challenge, or brick wall, that you do not mention, and
there is also a very specific recommendation, included in the original
CSP paper, that you also gloss over.
First the challenge, or brick wall. The challenge is to find one other
example of a system that has the same superabundance of "complexity
ingredients" that are to be found in intelligent systems (things like
massive amounts of adaptivity, connectivity, sensitivity to initial
conditions, and tangled nonlinearity in the constraints between system
components), which has been made to work by the technique of (a)
choosing a system design that seems like it should give the desired
global properties, then (b) doing some extensive parameter tweaking to
move it to the desired global behavior.
I refer to this as a brick wall because, to my knowledge, nobody has
ever done such a thing. The point of the complex systems problem is
that we know plenty of artificial systems where the above technique does
not appear to work, and on the other hand we know zero examples where
the above technique has been made to work in the case of a system with
as many complexity-causing ingredients as we find in intelligent systems
... and so it would be prudent to assume that this technique might not
work in this case either. Using this technique appears to be a brick
wall: nobody has ever engineered this kind of complex system, using
this technique.
Now, for someone to actually *address* the complex systems problem, that
person would have to say why, exactly, there is reason to believe that
intelligent systems are not going to hit that same brick wall, in spite
of the fact that they appear to have all the ingredients that, in any
other case, would make us immediately suspect that they would indeed hit
the wall.
Faced with that challenge, it is not sufficient to simply wave one's
hands and say "It seems that you think the CSP is more severe than I do,
that's all".
The second point is that you gloss over (in fact, trivialize) the
specific aspects of the proposal made in the CSP paper.
That proposal stated that there was ONE situation where it might be
possible to engineer a system by tweaking parameters, and that was when
the initial design was chosen to be as close as possible to a known
design that, without a shadow of a doubt, did have the desired global
behavior. It is an almost trivially obvious point: if you try to copy
a working design, you stand a better chance that the tweaking of
parameters will get you from your initial best guess to something that
actually does work.
That existing design, of course, is the human cognitive system. Not
necessarily the brain and its specific processors and wiring pattern,
but the cognitive level of the human mind. Nobody is asking the
neuroscientists to deliver a complete circuit diagram of the brain; what
is suggested is that a cognitive-level copy of the human mind would be
quite sufficient.
This is a very, very specific proposal. It directly addresses the CSP
because it says why we would expect this to work, where we would not have
the same confidence in an approach that started with a non-mindlike
design. The reasoning is simple, and crystal-clear: if we start near
to a design that we absolutely know is working now, then we stand a
better chance of reaching that working design than if we start a long
way away in the design space, where we have no reason to believe that
*any* successful design actually exists.
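To make that logic concrete, here is a toy illustration (Python; the
one-dimensional "landscape" is invented purely for the example, and is
not a model of any real design space): blind parameter tweaking reaches
a high-quality design far more reliably when started near a point
already known to work than when started at a random distant point.

    import math, random

    def quality(x):
        # A rugged toy "design space": many local peaks, with the
        # known-working design sitting at x = 0.
        return math.cos(8 * x) - abs(x)

    def tweak(x0, steps=500, step=0.02):
        # Parameter tweaking as hill-climbing: keep a change only if
        # it improves the global quality measure.
        x = x0
        for _ in range(steps):
            cand = x + random.uniform(-step, step)
            if quality(cand) > quality(x):
                x = cand
        return quality(x)

    print("starting near the working design:", tweak(0.05))
    print("starting far away in design space:", tweak(random.uniform(3.0, 10.0)))

The tweaking procedure is identical in both cases; only the starting
point differs.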
Instead of commenting on the logic behind this proposal, you again wave
your hands and say "Neither of us can prove our perspective correct in
any simple way." Later on in your post, down below, you are even more
dismissive, and also completely inaccurate, when you say:
> You have hinted at a different approach, E), which seems to involve
> building a novel sort of software framework for interactively
> exploring the behavior of complex dynamical systems. This is an
> interesting approach too. But of course, it runs into the same
> objections as any other approach. That is, you can't prove
> mathematically or empirically that your novel interactive software
> framework will lead to powerful AGI ... so you just have to try it.
> But how can you convince skeptics that your novel interactive
> software framework is a good one?
>
> You can't.
You focus here on something completely irrelevant, namely my attempt to
use a particular kind of software tool to implement the strategy of
starting close to the human cognitive system. This is such a peculiar
interpretation of what was said in the CSP paper that it almost looks as
though you did not really read it carefully. I believe I was very, very
clear about the core idea of the strategy.
That software tool has nothing to do with the fundamental strategy of
starting close to the design of the human cognitive system in order to
increase the chance that parameter tweaking will actually work.
That strategy has a sound logic behind it, so I can indeed do what you
say that I can't do: I can give specific reasons why the approach would
be expected to succeed. I have a very specific argument to put in front
of a skeptic.
I will not make a detailed response to your other comments below,
because I think what I have written so far is sufficient to make my
point.
Richard Loosemore
>
>
>
>     Second, the crucial question is whether we should believe that the
>     collection of ideas that define this project will actually work, and
>     that question is confronted on the OpenCogPrime:EssentialSynergies
>     page. The summary of what is said on that page is that Ben hopes that
>     a variety of mechanisms will synergize in such a way as to damp down
>     the exponential explosions implicit in the mechanisms taken
>     separately.
>
>
>
> Yes. In the pages after that in the book, I explain in moderate detail
> how I would go about attempting to demonstrate these "essential
> synergies" mathematically. I think this would be a tractable, and
> exciting, programme of mathematical research. However, I have opted to
> spend my time working on getting the OCP system built, rather than on
> mathematically exploring the interesting synergies underlying the
> design. Actually, I would enjoy working on the mathematics more (my
> original background is in math); but my intuition that the system will
> work is strong enough to override that bias in taste on my part.
>
>
>
>
>
>     However, when we look closely at this idea that synergies will
>     resolve the problem, we find that this is stated as a "hope", or an
>     "expectation", but without any data or theoretical analysis to
>     support the idea that synergies will actually occur.
>
>
> Actually, the other sections in that chapter DO provide a theoretical
> analysis in support of the idea -- but it's not a complete theoretical
> analysis ... it's just a bunch of theorem-statements that I haven't
> proven. And that, furthermore, probably will need to be tweaked a bit
> to make them actually true ;-)
>
> Anyway, I think I know how to go about making a theoretical
> justification of the "essential synergies" underlying the design -- but
> that looks like years of hard mathematical work to me. In those years
> we could build out a lot of the system instead.
>
> This comes down to the quotation I've been using as my email signature
> lately:
>
> "Nothing will ever be attempted if all possible objections must be
> first overcome"
> -- Dr Samuel Johnson
>
>
>     I have to say that the phrasing here is equivalent to "we have a
>     hunch that the synergies will save us when we scale up to real world
>     systems". I don't mean to be negative or uncharitable, but if there
>     is no theoretical reason to believe, and if there is no data to
>     support the idea that the synergies will work at the larger scale,
>     there really is only one other statement that one can make, and that
>     is that one has a hunch, or intuition, that things will work as
>     hoped.
>
>
>
> Yes, there is no formal proof that the system will work; nor is there
> empirical data that the system will work.
>
> Please note:
>
> 1)
> for **no** AGI system will there be any empirical data that the system
> will work, before it is built and tested
>
> 2)
> for **no** reasonably complex AGI system will there be any formal proof
> that it will work, anytime soon ... because modern math just isn't
> advanced enough in the right ways to let us prove stuff like this. We
> can barely prove average-case complexity theorems about complex graph
> algorithms, for example -- and proving useful stuff about complex AI
> systems is way, way harder.
>
> 3)
> Only for very simple AGI designs is it going to be possible to cobble
> together a combination of fairly-decent-looking theoretical and
> empirical arguments to make a substantive case that the system is going
> to work on the large scale, before actually trying it. [This point is
> more controversial than points 1 and 2 above, but I think history bears
> it out.]
>
> So, it seems to me, the only things we can do are:
>
> A)
> throw our hands up in despair where AGI is concerned, and work on
> something simpler
>
> B)
> wait for the neuroscientists and cognitive scientists to understand the
> human mind/brain, and then emulate it in computer software
>
> C)
> work on pure mathematics, hoping to eventually be able to prove things
> about interesting AGI systems
>
> D)
> choose a design that seems to make sense, based on not-fully-rigorous
> analysis and deep thinking about all the issues involved, and then
> build it, learn from the experience, and improve the design as you go
>
>
> The idea underlying OCP is to take approach D.
>
> You have hinted at a different approach, E), which seems to involve
> building a novel sort of software framework for interactively exploring
> the behavior of complex dynamical systems. This is an interesting
> approach too. But of course, it runs into the same objections as any
> other approach. That is, you can't prove mathematically or empirically
> that your novel interactive software framework will lead to powerful
> AGI ... so you just have to try it. But how can you convince skeptics
> that your novel interactive software framework is a good one?
> You can't.
> What you provide are qualitative, heuristic arguments. And so far, from
> what I've seen, your qualitative heuristic arguments are less carefully
> formulated than mine are... but I don't doubt you can improve them ...
> but even so, that won't change them into mathematical or empirical
> proofs.
>
>
>
>
>     The design document does admit that some things are expected to
>     appear in an emergent manner. The problem with emergence is that
>     emergence is what happens when a complex system shows some global
>     behavior that one does not (indeed cannot) see ahead of time.
>
>
> There may be significant differences between our respective
> understandings of emergence, hidden within the polysemy of various
> words in your sentence.
>
> Emergence in a complex system does **not** necessarily involve global
> behaviors that in principle cannot be foreseen ahead of time.
>
> For instance, the properties of water would standardly be said to
> emerge from the properties of the component molecules -- yet, it's not
> impossible to predict them.
>
> The properties of a software team emerge from the properties of the
> component programmers -- yet, one can still predict many of these
> properties in a useful statistical way, even if I can't predict the
> details in each case. (This is a more explicitly complex-systems-ish
> example.)
>
> Similarly, for the properties of an immune network: Jerne explained the
> network behavior of the immune system based on analysis of the
> properties of antibodies, plus some fairly simple mathematical and
> conceptual thinking about the network as a whole. He didn't tell us how
> to predict exactly what a particular immune network is going to do at a
> particular point in time, but he did tell us how to predict some
> overall properties of an immune network based on the properties of the
> antibodies going into it. Rob de Boer and Alan Perelson and others
> extended this work in interesting ways.
>
>
>
>     Taken together, then, the policy seems to be that the designers have
>     a hunch that certain things will emerge. However, the very
>     definition of "emergence" is that a hunch will never let you see it
>     coming. If hunches were good enough to allow you to see emergent
>     behaviors purely as a result of looking at the design of a system,
>     the behaviors would not be emergent, they would be predictable.
>
>
> This is a false dichotomy.
>
> You seem to think emergence is more magical than it really is.
>
> The extremely high productivity that we see in some software teams may
> seem to emerge, as if by magic, from the dynamics between the team
> members. But yet, we can still predict some things about how to create
> teams in which this sort of high productivity is more likely to occur.
> We can create a team with the right structure -- an experienced team
> lead, some junior programmers with high energy, some senior programmers
> with deep, appropriate technical experience. We can make sure the team
> members are reasonably compatible by culture and personality. These
> things don't let us make detailed predictions of the emergent team
> dynamics, but they let us predict a lot about it, in a "probably
> approximately correct" way.
>
> Similarly, by creating an AGI design with the right structure, and with
> components that are designed to be intercompatible, one can
> qualitatively predict a lot about the behavior of the overall system --
> even though it's hard to make such predictions truly rigorous.
>
> After that, as noted above, the question is not whether the system as
> designed, and with exact human-tuned parameter values, is going to
> be a human-level intelligence. The question is whether the region of
> dynamical-system-space delimited by the overall system design contains
> systems capable of human-level intelligence, and whether these can be
> found via dynamic automatic parameter adaptation guided by embodied,
> interactive system experience.
>
>
>
>
>
> This is a fundamental contradiction.
>
>
> There is no fundamental contradiction. You manufacture a contradiction
> by defining "emergence" in an extreme way. There may be some members
> of the complex systems research community who define it in your way,
> but there are also many such researchers who define it in my way.
>
>
>
>     It is the complex systems problem. And at the moment I do not see
>     anything here that suggests the problem has been addressed. The
>     designers only hope (like all AI designers who came before them)
>     that their hunches about the design will (miraculously) turn out to
>     be winning hunches.
>
>
> Yes, at the moment, any attempt to make an AGI is going to be based to
> some extent on what you call "hunches."
>
> Similarly, before the modern science of aerodynamics existed, any
> attempt at mechanized flight was based to some extent on "hunches."
>
> The Wright Brothers followed their hunches and built the plane, rather
> than listening to the skeptics and giving up, or devoting their lives
> to developing aerodynamic mathematics instead.
>
> Of course, many others in history have followed their hunches and
> failed -- such as many prior AI researchers, and many flight
> researchers prior to the Wright Brothers.
>
> It is precisely where there is not enough knowledge for definitive
> proofs that the potential for dramatic progress exists ... for those
> whose intuitive understandings happen to be on target ;-)
>
> -- Ben
>
>
>
>
> >
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]
"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson