> I don't feel like getting into an argument about whether my ideas are "nutty" 
> or not.

My comment was probably not well thought-out. This Google Alert I've
got for 'artificial intelligence' returns all kinds of stuff, enough
to make me cynical. It seems to me that if you've written any code at
all, you're ahead of the game!

Am I just imagining an influx of zero-point/free-energy loon types to
AI? Or is that really happening?


On 8/1/08, Bruno Frandemiche <[EMAIL PROTECTED]> wrote:
> hello agiers
> hello ben, i read the whole OCP wikibook (i shall order "The Hidden Pattern")
> great work - "RESPECT"
> hello richard, what, finally, is your inquiry and what are your answers?
> i think that you have a path to a solution, what is it? good day
> bruno
>
>
>
> ----- Original Message ----
> From: Richard Loosemore <[EMAIL PROTECTED]>
> To: agi@v2.listbox.com
> Sent: Friday, 1 August 2008, 19:41:27
> Subject: [agi] Re: OpenCog Prime wikibook and roadmap posted (moderately
> detailed design for an OpenCog-based thinking machine)
>
> Ben Goertzel wrote:
>>
>> Richard,
>>
>> We've been having this same argument for years -- it's not specifically
>> about OpenCogPrime but about the engineering approach to AGI versus your
>> approach...
>>
>> I don't feel it's worthwhile for us to burn any more time repeating the
>> same arguments back and forth.  There are just too many other demands on
>> my time ... and yours as well I presume.
>
> Our discussions, in fact, always reach this point.
>
> First, you make a number of extremely inaccurate statements that
> misrepresent the complex systems problem, as it was described in the
> original paper.
>
> Then, after I sort out all of the confusion, carefully reiterate the
> real issues, and then ask you if you would avoid the distractors and
> give me your response on those real issues, you terminate the discussion
> without giving a response.
>
> And, usually, as you terminate the discussion, you throw out some more
> distortions as a parting shot (see, for example, your irrelevant and
> inaccurate comments below about the disadvantages of pursuing an AGI
> design based on human cognition).
>
> I brought these comments to the OpenCog list because you *specifically*
> requested, on the AGI list, that any discussion of the OpenCogPrime
> project take place over here, rather than there.
>
> The complex systems problem, which you persistently deny and ignore,
> will eventually bring the OpenCogPrime project down into the dust, just
> like it has brought down many another AI project in the past.  That
> seems like something that most of the people who will put their time and
> effort into OCP would prefer to avoid.  I am trying to do those people a
> favor by drawing their attention to the problem, and by trying to get
> the issue discussed in enough detail that solutions can be found.  Or,
> if the problem turns out not to be as serious as I think it is, I would
> like to get enough people thinking about it that we can *discover* that
> it is not as serious as it appears to be.
>
> From that point of view, this discussion of the complex systems problem
> may end up being the most important of all the discussions about OCP
> that ever take place.
>
> Smothering the discussion - in fact, doing something that is tantamount
> to banning discussion of the topic on this list - strikes me as
> irresponsible.
>
> [I also note the quite deplorable attempt to insinuate that this is
> something like an evolution-creationism debate.  Pretty bad, that.]
>
> This is all a great pity.  You and I could have worked together, perhaps
> quite productively, to see that the OpenCog project was immune to these
> problems.  Since the late summer of 2006, however, you have been
> implacably hostile to anything that I have said.
>
> Richard Loosemore
>
>> I understand that pursuing AGI designs based closely on the human
>> mind/brain has certain advantages ... but it also has certain obvious
>> disadvantages, such as intrinsically inefficient usage of the (very
>> nonbrainlike) compute resources at our disposal ... and the minor
>> problem that once you're done, all you have is a mere virtual
>> quasi-human rather than something massively superior ;-)
>>
>> Anyway, I'd like to reserve the OpenCog list for **specific** discussion
>> of OpenCog or systems built on top of it, such as OpenCogPrime.
>>
>> General, conceptual discussions of why you think the whole global
>> approach of the project is wrong should be restricted to the AGI list
>> rather than this list.
>>
>> (By analogy, on a mailing list devoted to the particulars of
>> evolutionary theory, discussion of evolution vs. creationism would be
>> out of place; such discussions would be better positioned on a more
>> general philosophical/theoretical list...)
>>
>> thx
>> Ben
>>
>> On Thu, Jul 31, 2008 at 10:51 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>>
>>
>>    Ben Goertzel wrote:
>>      >
>>      >
>>      >
>>      > Richard,
>>      >
>>      >
>>      >
>>      >    First, as far as I can see there is no explicit attempt to
>>      >    address the complex systems problem (cf
>>      >    http://susaro.com/wp-content/uploads/2008/04/2007_complexsystems_rpwl.pdf).
>>      >    In practice, what that means is that the methodology (not the
>>      >    design per se, but the development methodology) does not make
>>      >    any allowance for the CSP.
>>      >
>>      >
>>      >
>>      > I actually don't think this is true.  The CSP is addressed but
>>      > not by that name.
>>      >
>>      > The CSP is addressed implicitly, via the methodology of
>>      > **interactive learning**.
>>      >
>>      > The OCP design is complex and has a lot of free parameters, but
>>      > the methodology of handling parameters is to identify bounds
>>      > within which each parameter must stay to ensure basic system
>>      > functionality -- and then let parameters auto-adapt within those
>>      > bounds, based on the system's experience in the world.
>>      >
>>      > So, the system is intended to be self-tuning, and to effectively
>>      > explore its own parameter-space as it experiences the world.
>>      >
>>      > The design is fixed by human developers in advance, but the design
>>      > only narrows down the region of dynamical-system-space in which
>>      > the system lives.  The parameter settings narrow down the region
>>      > further, and they need to be auto-tuned via experience.
>>      >
>>      > I understand this is different than your (sketchily) proposed
>>      > approach for addressing what you call the CSP, but still, it is a
>>      > serious attempt to address the problem.
>>      >
>>      > It seems that you think the CSP is more severe than I do, that's
>>      > all.  I think we can fix the design and let the parameters
>>      > auto-adapt.  You seem to think that the design needs to be arrived
>>      > at by more of an iterative, adaptive process; and I think this
>>      > overcomplicates things and is unnecessary.
>>      >
>>      > Neither of us can prove our perspective correct in any simple way.
>>      > You offer heuristic arguments in favor of your view, I offer
>>      > heuristic arguments in favor of my view.  This is early-stage
>>      > science and to some extent we each have to follow our own
>>      > intuitions.
>>
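
As a rough sketch of the bounded, experience-driven parameter tuning Ben
describes above, something like the following captures the idea; the
parameter names, bounds, and reward signal are hypothetical stand-ins
invented for illustration, not taken from the OCP design:

    import random

    # Designer-fixed hard bounds; within them the system is free to self-tune.
    PARAM_BOUNDS = {
        "attention_decay": (0.01, 0.5),
        "inference_confidence_threshold": (0.1, 0.9),
    }

    def clamp(value, lo, hi):
        return max(lo, min(hi, value))

    def auto_tune(params, reward, step=0.05):
        """One step of self-tuning: nudge each parameter at random, keep the
        change only if the reward obtained from experience improves, and
        never let the parameter leave its designer-fixed bounds."""
        for name, (lo, hi) in PARAM_BOUNDS.items():
            candidate = clamp(params[name] + random.uniform(-step, step), lo, hi)
            if reward(dict(params, **{name: candidate})) >= reward(params):
                params[name] = candidate
        return params

    def toy_reward(p):
        # Stand-in for "the system's experience in the world": a made-up
        # peak located inside the bounds.
        return -((p["attention_decay"] - 0.1) ** 2
                 + (p["inference_confidence_threshold"] - 0.7) ** 2)

    params = {name: (lo + hi) / 2.0 for name, (lo, hi) in PARAM_BOUNDS.items()}
    for _ in range(500):
        params = auto_tune(params, toy_reward)
    print(params)  # drifts toward the in-bounds optimum, never outside the bounds

The sketch says nothing, of course, about whether this kind of tuning scales
to a system with thousands of tangled, mutually sensitive parameters, which
is exactly where the disagreement below lies.
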
>>    No, this does not address the complex systems problem, because there is
>>    a very specific challenge, or brick wall, that you do not mention, and
>>    there is also a very specific recommendation, included in the original
>>    CSP paper, that you also gloss over.
>>
>>    First the challenge, or brick wall.  The challenge is to find one other
>>    example of a system that has the same superabundance of "complexity
>>    ingredients" that are to be found in intelligent systems (things like
>>    massive amounts of adaptivity, connectivity, sensitivity to initial
>>    conditions, and tangled nonlinearity in the constraints between system
>>    components), which has been made to work by the technique of (a)
>>    choosing a system design that seems like it should give the desired
>>    global properties, then (b) doing some extensive parameter tweaking to
>>    move it to the desired global behavior.
>>
>>    I refer to this as a brick wall because, to my knowledge, nobody has
>>    ever done such a thing.  The point of the complex systems problem is
>>    that we know plenty of artificial systems where the above technique
>>    does not appear to work, and on the other hand we know zero examples
>>    where the above technique has been made to work in the case of a
>>    system with as many complexity-causing ingredients as we find in
>>    intelligent systems ... and so it would be prudent to assume that
>>    this technique might not
>>    work in this case either.  Using this technique appears to be a brick
>>    wall:  nobody has ever engineered this kind of complex system, using
>>    this technique.
>>
>>    Now, for someone to actually *address* the complex systems problem,
>>    that person would have to say why, exactly, there is reason to believe
>>    that intelligent systems are not going to hit that same brick wall, in
>>    spite of the fact that they appear to have all the ingredients that,
>>    in any other case, would make us immediately suspect that they would
>>    indeed hit the wall.
>>
>>    Faced with that challenge, it is not sufficient to simply wave one's
>>    hands and say "It seems that you think the CSP is more severe than I
>>    do, that's all".
>>
>>    The second point is that you gloss over (in fact, trivialize) the
>>    specific aspects of the proposal made in the CSP paper.
>>
>>    That proposal stated that there was ONE situation where it might be
>>    possible to engineer a system by tweaking parameters, and that was when
>>    the initial design was chosen to be as close as possible to a known
>>    design that, without a shadow of a doubt, did have the desired global
>>    behavior.  It is an almost trivially obvious point:  if you try to copy
>>    a working design, you stand more chance that the tweaking of parameters
>>    will get you from your initial best guess, to something that actually
>>    does work.
>>
>>    That existing design, of course, is the human cognitive system.  Not
>>    necessarily the brain and its specific processors and wiring pattern,
>>    but the cognitive level of the human mind.  Nobody is asking the
>>    neuroscientists to deliver a complete circuit diagram of the brain;
>>    what is suggested is that a cognitive-level copy of the human mind
>>    would be quite sufficient.
>>
>>    This is a very, very specific proposal.  It directly addresses the CSP
>>    because it says why we would expect this to work, where we would not have
>>    the same confidence in an approach that started with a non-mindlike
>>    design.  The reasoning is simple, and crystal-clear:  if we start near
>>    to a design that we absolutely know is working now, then we stand more
>>    chance of reaching that working design than if we start a long way away
>>    in the design space, where we have no reason to believe that *any*
>>    successful design actually exists.
>>
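
The logic Richard spells out here can be caricatured with a toy search: give
a hill-climbing "parameter tweaker" a fixed budget, and compare a start
placed near a configuration already known to work against an arbitrary start
far away in design space.  Everything below (the landscape, the
dimensionality, the budget) is invented purely for illustration:

    import math
    import random

    DIM = 20
    KNOWN_GOOD = [0.5] * DIM          # stand-in for "a design we know works"

    def global_behavior(x):
        # Good behavior only in a narrow basin around KNOWN_GOOD; elsewhere
        # the landscape is nearly flat apart from small, uninformative ripples.
        dist2 = sum((xi - gi) ** 2 for xi, gi in zip(x, KNOWN_GOOD))
        ripples = 0.02 * sum(math.sin(37.0 * xi) for xi in x)
        return 1.0 / (1.0 + dist2) + ripples

    def tweak(start, trials=2000, step=0.05):
        # Blind parameter tweaking: accept a random nudge only if it helps.
        best, best_val = list(start), global_behavior(start)
        for _ in range(trials):
            cand = [xi + random.uniform(-step, step) for xi in best]
            val = global_behavior(cand)
            if val > best_val:
                best, best_val = cand, val
        return best_val

    near = [g + random.uniform(-0.1, 0.1) for g in KNOWN_GOOD]
    far = [random.uniform(-5.0, 5.0) for _ in range(DIM)]
    print("tweaking from near a known-good design:", tweak(near))
    print("tweaking from an arbitrary design:     ", tweak(far))

The first run reliably ends near the top of the basin; the second almost
never finds the basin at all with the same tweaking budget.
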
>>    Instead of commenting on the logic behind this proposal, you again wave
>>    your hands and say "Neither of us can prove our perspective correct in
>>    any simple way."  Later on in your post, down below, you are even more
>>    dismissive, and also completely inaccurate, when you say:
>>
>>      > You have hinted at a different approach, E), which seems to involve
>>      > building a novel sort of software framework for interactively
>>      > exploring the behavior of complex dynamical systems.  This is an
>>      > interesting approach too.  But of course, it runs into the same
>>      > objections as any other approach.  That is, you can't prove
>>      > mathematically or empirically that your novel interactive software
>>      > framework will lead to powerful AGI ... so you just have to try it.
>>      > But how can you convince skeptics that your novel interactive
>>      > software framework is a good one?
>>      >
>>      > You can't.
>>
>>    You focus here on something completely irrelevant, namely my attempt to
>>    use a particular kind of software tool to implement the strategy of
>>    starting close to the human cognitive system.  This is such a peculiar
>>    interpretation of what was said in the CSP paper that it almost looks
>>    as though you did not really read it carefully.  I believe I was
>>    very, very clear about the core idea of the strategy.
>>
>>    That software tool has nothing to do with the fundamental strategy of
>>    starting close to the design of the human cognitive system in order to
>>    increase the chance that parameter tweaking will actually work.
>>
>>    That strategy has a sound logic behind it, so I can indeed do what you
>>    say that I can't do:  I can give specific reasons why the approach
>>    would be expected to succeed.  I have a very specific argument to put
>>    in front of a skeptic.
>>
>>    I will not make a detailed response to your other comments below,
>>    because I think what I have written so far is sufficient to make my
>>    point.
>>
>>
>>
>>    Richard Loosemore
>>
>>
>>
>>      >
>>      >
>>      >
>>      >    Second, the crucial question is whether we should believe that
>>      >    the collection of ideas that define this project will actually
>>      >    work, and that question is confronted on the
>>      >    OpenCogPrime:EssentialSynergies page.  The summary of what is
>>      >    said on that page is that Ben hopes that a variety of
>>      >    mechanisms will synergize in such a way as to damp down the
>>      >    exponential explosions implicit in the mechanisms taken
>>      >    separately.
>>      >
>>      >
>>      >
>>      > Yes.  In the pages after that in the book, I explain in moderate
>>      > detail how I would go about attempting to demonstrate these
>>      > "essential synergies" mathematically.  I think this would be a
>>      > tractable, and exciting, programme of mathematical research.
>>      > However, I have opted to spend my time working on getting the OCP
>>      > system built, rather than on mathematically exploring the
>>      > interesting synergies underlying the design.  Actually, I would
>>      > enjoy working on the mathematics more (my original background is
>>      > in math); but my intuition that the system will work is strong
>>      > enough to override that bias in taste on my part.
>>      >
>>      >
>>      >
>>      >
>>      >
>>      >    However, when we look closely at this idea that synergies will
>>      >    resolve the problem, we find that this is stated as a "hope",
>>      >    or an "expectation", but without any data or theoretical
>>      >    analysis to support the idea that synergies will actually
>>      >    occur.
>>      >
>>      >
>>      > Actually, the other sections in that chapter DO provide a
>>      > theoretical analysis in support of the idea -- but it's not a
>>      > complete theoretical analysis ... it's just a bunch of
>>      > theorem-statements that I haven't proven.  And that, furthermore,
>>      > probably will need to be tweaked a bit to make them actually
>>      > true ;-)
>>      >
>>      > Anyway, I think I know how to go about making a theoretical
>>      > justification of the "essential synergies" underlying the design
>>      > -- but that looks like years of hard mathematical work to me.  In
>>      > those years we could build out a lot of the system instead.
>>      >
>>      > This comes down to the quotation I've been using as my email
>>      > signature lately:
>>      >
>>      > "Nothing will ever be attempted if all possible objections must
>>      > be first overcome"
>>      > -- Dr Samuel Johnson
>>      >
>>      >
>>      >    I have to say that the phrasing here is equivalent to "we have
>>      >    a hunch that the synergies will save us when we scale up to
>>      >    real world systems".  I don't mean to be negative or
>>      >    uncharitable, but if there is no theoretical reason to believe,
>>      >    and if there is no data to support the idea that the synergies
>>      >    will work at the larger scale, there really is only one other
>>      >    statement that one can make, and that is that one has a hunch,
>>      >    or intuition, that things will work as hoped.
>>      >
>>      >
>>      >
>>      > Yes, there is no formal proof that the system will work; nor is
>>      > there empirical data that the system will work.
>>      >
>>      > Please note:
>>      >
>>      > 1)
>>      > for **no** AGI system will there be any empirical data that the
>>      > system will work, before it is built and tested
>>      >
>>      > 2)
>>      > for **no** reasonably complex AGI system will there be any formal
>>      > proof that it will work, anytime soon ... because modern math just
>>      > isn't advanced enough in the right ways to let us prove stuff like
>>      > this.  We can barely prove average-case complexity theorems about
>>      > complex graph algorithms, for example -- and proving useful stuff
>>      > about complex AI systems is way, way harder.
>>      >
>>      > 3)
>>      > Only for very simple AGI designs is it going to be possible to
>>      > cobble together a combination of fairly-decent-looking theoretical
>>      > and empirical arguments to make a substantive case that the system
>>      > is going to work on the large scale, before actually trying it.
>>      > [This point is more controversial than points 1 and 2 above, but I
>>      > think history bears it out.]
>>      >
>>      > So, it seems to me, the only things we can do are:
>>      >
>>      > A)
>>      > throw our hands up in despair where AGI is concerned, and work on
>>      > something simpler
>>      >
>>      > B)
>>      > wait for the neuroscientists and cognitive scientists to
>>      > understand the human mind/brain, and then emulate it in computer
>>      > software
>>      >
>>      > C)
>>      > work on pure mathematics, hoping to eventually be able to prove
>>      > things about interesting AGI systems
>>      >
>>      > D)
>>      > choose a design that seems to make sense, based on
>>      > not-fully-rigorous analysis and deep thinking about all the issues
>>      > involved, and then build it, learn from the experience, and
>>      > improve the design as you go
>>      >
>>      >
>>      > The idea underlying OCP is to take approach D.
>>      >
>>      > You have hinted at a different approach, E), which seems to
>>      > involve building a novel sort of software framework for
>>      > interactively exploring the behavior of complex dynamical systems.
>>      > This is an interesting approach too.  But of course, it runs into
>>      > the same objections as any other approach.  That is, you can't
>>      > prove mathematically or empirically that your novel interactive
>>      > software framework will lead to powerful AGI ... so you just have
>>      > to try it.  But how can you convince skeptics that your novel
>>      > interactive software framework is a good one?
>>      > You can't.
>>      > What you provide are qualitative, heuristic arguments.  And so
>>      > far, from what I've seen, your qualitative heuristic arguments are
>>      > less carefully formulated than mine are... but I don't doubt you
>>      > can improve them ... but even so, that won't change them into
>>      > mathematical or empirical proofs.
>>      >
>>      >
>>      >
>>      >
>>      >    The design document does admit that some things are expected
>>      >    to appear in an emergent manner.  The problem with emergence is
>>      >    that emergence is what happens when a complex system shows some
>>      >    global behavior that one does not (indeed cannot) see ahead of
>>      >    time.
>>      >
>>      >
>>      > There may be significant differences between our respective
>>      > understandings of emergence, hidden within the polysemy of various
>>      > words in your sentence.
>>      >
>>      > Emergence in a complex system does **not** necessarily involve
>>      > global behaviors that in principle cannot be foreseen ahead of
>>      > time.
>>      >
>>      > For instance, the properties of water would standardly be said to
>>      > emerge from the properties of the component molecules -- yet, it's
>>      > not impossible to predict them.
>>      >
>>      > The properties of a software team emerge from the properties of
>>      > the component programmers -- yet, one can still predict many of
>>      > these properties in a useful statistical way, even if I can't
>>      > predict the details in each case.  (This is a more explicitly
>>      > complex-systems-ish example.)
>>      >
>>      > Similarly, for the properties of an immune network: Jerne
>>      > explained the network behavior of the immune system based on
>>      > analysis of the properties of antibodies, plus some fairly simple
>>      > mathematical and conceptual thinking about the network as a whole.
>>      > He didn't tell us how to predict exactly what a particular immune
>>      > network is going to do at a particular point in time, but he did
>>      > tell us how to predict some overall properties of an immune
>>      > network based on the properties of the antibodies going into it.
>>      > Rob de Boer and Alan Perelson and others extended this work in
>>      > interesting ways.
>>      >
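
The style of analysis Ben is pointing at can be caricatured in a few lines:
a toy network in which each antibody clone's growth depends, through a
bell-shaped response, on the stimulation it receives from the rest of the
network.  The equations, parameters, and interaction matrix below are
invented for illustration; they are not Jerne's, de Boer's, or Perelson's
actual models:

    import random

    N = 8                          # toy number of antibody clones
    SOURCE, DECAY, RATE = 1.0, 0.5, 1.0
    THETA1, THETA2 = 1.0, 100.0    # too little or too much stimulation suppresses growth

    # Random affinities: J[i][j] = how strongly clone j stimulates clone i.
    J = [[random.random() if i != j else 0.0 for j in range(N)] for i in range(N)]

    def bell(h):
        # Log-bell-shaped proliferation response, in the spirit of B-model networks.
        return (h / (THETA1 + h)) * (THETA2 / (THETA2 + h))

    def step(x, dt=0.01):
        field = [sum(J[i][j] * x[j] for j in range(N)) for i in range(N)]
        return [xi + dt * (SOURCE - DECAY * xi + RATE * xi * bell(field[i]))
                for i, xi in enumerate(x)]

    x = [1.0] * N
    for _ in range(20000):
        x = step(x)
    print("total clone population:", sum(x))

The point of such toys is the one Ben makes: you cannot read off what any
particular clone will do, but coarse network-level properties (here, that the
total population settles rather than exploding) do follow from the properties
of the components.
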
>>      >
>>      >
>>      >    Taken together, then, the policy seems to be that the designers
>>      >    have a hunch that certain things will emerge.  However, the
>>      >    very definition of "emergence" is that a hunch will never let
>>      >    you see it coming.  If hunches were good enough to allow you to
>>      >    see emergent behaviors purely as a result of looking at the
>>      >    design of a system, the behaviors would not be emergent, they
>>      >    would be predictable.
>>      >
>>      >
>>      > This is a false dichotomy.
>>      >
>>      > You seem to think emergence is more magical than it really is.
>>      >
>>      > The extremely high productivity that we see in some software
>>      > teams may seem to emerge, as if by magic, from the dynamics
>>      > between the team members.  But yet, we can still predict some
>>      > things about how to create teams in which this sort of high
>>      > productivity is more likely to occur.  We can create a team with
>>      > the right structure -- an experienced team lead, some junior
>>      > programmers with high energy, some senior programmers with deep,
>>      > appropriate technical experience.  We can make sure the team
>>      > members are reasonably compatible by culture and personality.
>>      > These things don't let us make detailed prediction of the emergent
>>      > team dynamics, but they let us predict a lot about it, in a
>>      > "probably approximately correct" way.
>>      >
>>      > Similarly, by creating an AGI design with the right structure, and
>>      > with components that are designed to be intercompatible, one can
>>      > qualitatively predict a lot about the behavior of the overall
>>      > system -- even though it's hard to make such predictions truly
>>      > rigorous.
>>      >
>>      > After that, as noted above, the question is not whether the system
>>      > as designed, and with exact human-tuned parameter values, is going
>>      > to be a human-level intelligence.  The question is whether the
>>      > region of dynamical-system-space delimited by the overall system
>>      > design contains systems capable of human-level intelligence, and
>>      > whether these can be found via dynamic automatic parameter
>>      > adaptation guided by embodied, interactive system experience.
>>      >
>>      >
>>      >
>>      >
>>      >
>>      >    This is a fundamental contradiction.
>>      >
>>      >
>>      > There is no fundamental contradiction.  You manufacture a
>>      > contradiction by defining "emergence" in an extreme way.  There
>>      > may be some members of the complex systems research community who
>>      > define it in your way, but there are also many such researchers
>>      > who define it in my way.
>>      >
>>      >
>>      >
>>      >    It is the complex systems problem.  And at the moment I do not
>>      >    see anything here that suggests the problem has been addressed.
>>      >    The designers only hope (like all AI designers who came before
>>      >    them) that their hunches about the design will (miraculously)
>>      >    turn out to be winning hunches.
>>      >
>>      >
>>      > Yes, at the moment, any attempt to make an AGI is going to be
>>      > based to some extent on what you call "hunches."
>>      >
>>      > Similarly, before the modern science of aerodynamics existed, any
>>      > attempt at mechanized flight was based to some extent on
>>      > "hunches."
>>      >
>>      > The Wright Brothers followed their hunches and built the plane,
>>      > rather than listening to the skeptics and giving up, or devoting
>>      > their lives to developing aerodynamic mathematics instead.
>>      >
>>      > Of course, many others in history have followed their hunches and
>>      > failed -- such as many prior AI researchers, and many flight
>>      > researchers prior to the Wright Brothers.
>>      >
>>      > It is precisely where there is not enough knowledge for definitive
>>      > proofs that the potential for dramatic progress exists ... for
>>      > those whose intuitive understandings happen to be on target ;-)
>>      >
>>      > -- Ben
>>      >
>>      >
>>      >
>>      >
>>      > >
>>
>>
>>
>>
>>
>>
>> --
>> Ben Goertzel, PhD
>> CEO, Novamente LLC and Biomind LLC
>> Director of Research, SIAI
>> [EMAIL PROTECTED]
>>
>> "Nothing will ever be attempted if all possible objections must be first
>> overcome " - Dr Samuel Johnson
>>