OpenCog is not based on a single algorithm or method, which is IMO a
qualitative difference from most other approaches.  If we distinguish
approaches via

A) single sort of complex algorithm at the center, with other stuff in a
supporting role

B) self-organizing, subsymbolic system, without complex algorithms
implemented

C) multiple complex algorithms integrated with each other & with a
self-organizing, subsymbolic system

then OpenCog falls into camp C ...

My hypothesis is that the emergent phenomena ensuing from the integration
implicit in C will bypass the problems found with scaling systems in
category A to deal with human-level AGI problems....  And that the use of
complex learning algorithms, as well as subsymbolic stuff, will bypass the
massively difficult tuning required to get B to work on its own...

-- Ben

On Tue, Jun 19, 2012 at 9:54 PM, Steve Richfield
<[email protected]>wrote:

> Ben,
>
> I am in absolute agreement with your very eloquent posting. Just one
> question: Doesn't this also apply equally well to your OpenCog approach to
> AGI? Indeed, THIS (as described in your posting) has been my primary
> objection to the past AGI-related efforts that I have seen.
>
> Again, very well said.
>
> Steve
> ===============
> On Tue, Jun 19, 2012 at 6:31 PM, Ben Goertzel <[email protected]> wrote:
>
>>
>> There's a general fallacy that misleads many AGI people, of the following
>> form ...
>>
>> "
>> -- Capability or method X, if you could do it incredibly (i.e.
>> unrealistically) well, would enable arbitrarily great general intelligence
>> -- Simple versions of X seem to lead to interesting "narrow AI" behaviors
>> THEREFORE...
>> -- By pursuing more and more complex versions of X, we can get high
>> levels (e.g. human-level) of real-world general intelligence
>> "
>>
>> In the case we're discussing here, X = Prediction...
>>
>> In other cases, X = logical reasoning, or pattern recognition, or
>> automated program learning, or simulation, etc. etc.
>>
>> Unfortunately, things just don't work that way ;/ ...
>>
>> ben
>>
>>
>> On Tue, Jun 19, 2012 at 9:21 PM, Steve Richfield <
>> [email protected]> wrote:
>>
>>> Jim,
>>>
>>> I think we are in agreement here. Computing optimal action without a
>>> guiding prediction is NOT easy. I mentioned high speed trading because they
>>> appear to be doing just that, albeit within a narrow domain. I suspect that
>>> our collective failure to grok this area is just one of many things AGI is
>>> going to have to overcome before it can become "serious".
>>>
>>> Steve
>>> =============
>>> On Tue, Jun 19, 2012 at 6:00 PM, Jim Bromer <[email protected]> wrote:
>>>
>>>> Steve,
>>>> High speed trading is something that people are not good at but narrow
>>>> AI can be.  It has to be narrow to keep it efficient.
>>>> Your idea of optimal action in the absence of prediction is a pretty
>>>> wild abstraction, and it would be difficult to implement.  Using
>>>> correlation alone, for example, would tell you a lot about the relations
>>>> of what was obvious and previously identifiable, but little about the
>>>> relations of causation and (ironically) co-occurrence.  Correlation can
>>>> identify perfect co-occurrence but it cannot be relied on -in itself- to
>>>> identify imperfect or conditional co-occurrence.
>>>> So for something like correlation to actually work to reliably identify
>>>> conditional co-occurrences it has to have some way to identify or speculate
>>>> on interactions that may be hidden.  (I think Abram was telling me that
>>>> hidden Markov processes were capable of doing this but here the guiding
>>>> form of a Markov process is a conditional premise of the efficacy of the
>>>> method.)
>>>>
>>>> In order then to generalize this example of effectively using
>>>> correlation just to identify interactive co-occurrence I feel the system
>>>> would have to be capable of dealing with a great many possible
>>>> complications just to identify the few interactive co-occurrences that
>>>> might have impact on a subsystem that is being observed.  First, how do you
>>>> simultaneously watch multiple subsystems?  This is the easy part because
>>>> our ideas about simple observations are inherently absurd.  A simple system
>>>> can easily be a complicated system.  This relativistic view is the first
>>>> clue then to discovering new ideas about simultaneously watching multiple
>>>> complex systems.  Many of the systems that can be observed easily are
>>>> complex systems. So the real problem is how do we discriminate or recognize
>>>> subtle relations that are hidden in the seeming simplicity of an observed
>>>> event.  For example if a cognitive system was deriving insight from a
>>>> camera, the video of an observation event would contain all the complexity
>>>> that could be inferred from what the camera could capture.  So while a
>>>> cognitive system might jump to some real-time conclusions, reanalysis
>>>> of the recording of the event might provide more insight from a more
>>>> sophisticated cognitive basis.
>>>>
>>>> Although this sounds like nothing new, it is still new just because no
>>>> one has made much progress in identifying how representations of complexity
>>>> work.
>>>>
>>>> Jim Bromer
>>>>
>>>>
>>>> On Tue, Jun 19, 2012 at 5:08 PM, Steve Richfield <
>>>> [email protected]> wrote:
>>>>
>>>>> Just a comment to inject into this discussion:
>>>>>
>>>>> Prediction is great when it is possible, but that is rare in our world
>>>>> of imperfect information.
>>>>>
>>>>> Modern economics has brought us the concept of optimal action in the
>>>>> face of imperfect information. This leans on concepts like volatility,
>>>>> where tiny (and hence unpredictable) contributions can have huge effects.
>>>>>
>>>>> I think the emphasis should NOT be prediction, but rather on the
>>>>> computation of optimal action in the ABSENCE of prediction. Of course that
>>>>> is a more complex concept, so it has so far evaded deep discussion here.
>>>>>
>>>>> Note that this is the stock-in-trade of high speed trading software,
>>>>> so people ARE already making this work in the real world.
>>>>>
>>>>> Any thoughts?
>>>>>
>>>>> Steve Richfield
>>>>> ==================
>>>>>
>>>>> On Tue, Jun 19, 2012 at 1:50 PM, Jim Bromer <[email protected]> wrote:
>>>>>
>>>>>> Logan Streondj <[email protected]> wrote:
>>>>>> Inferring is a form of carrying knowledge from one place and applying
>>>>>> it in a different place or time.
>>>>>>
>>>>>>
>>>>>> This is the real question but it does not provide us with an answer.
>>>>>> All the narrow forms of AI do offer solutions to certain kinds of
>>>>>> problems, but is there a general way to work from uncertainty (about
>>>>>> almost every basis to make the determination) toward greater certainty
>>>>>> that would allow us to say that a particular kind of knowledge that
>>>>>> worked in another situation could work in this situation?  If you base
>>>>>> inference on similarities, then the problem is how do you use automation
>>>>>> (in other words a program) to detect similarities without some absolute
>>>>>> method for establishing that some of the aspects of the two similar
>>>>>> events are of a kind?
>>>>>>
>>>>>> Jim Bromer
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Jun 19, 2012 at 3:42 PM, Jim Bromer <[email protected]> wrote:
>>>>>>
>>>>>>> Logan,
>>>>>>> I also cannot predict the kinds of replies that I will get.
>>>>>>> Of course, prediction is something that human beings can do.  I
>>>>>>> mentioned that in my previous message.  However, I do not see this
>>>>>>> group as swirling around the question of what kinds of things human
>>>>>>> beings can do, but around the question of what we can do to make our
>>>>>>> computer programs act smarter.  In that sense, intelligence is not a
>>>>>>> product of prediction,
>>>>>>> prediction is a product of intelligence.
>>>>>>> If anyone has proof that AGI is a product of prediction then he has
>>>>>>> solved the problems that are constantly being discussed in this group.
>>>>>>>
>>>>>>> I think there was another author who popularized probability and
>>>>>>> prediction in the 1970s.
>>>>>>>
>>>>>>> Your definition of inference is interesting, and it is a more
>>>>>>> sophisticated way of understanding the problem than the usual refrain of
>>>>>>> the false promise of probability and prediction.
>>>>>>> Jim Bromer
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Jun 19, 2012 at 3:33 PM, Logan Streondj 
>>>>>>> <[email protected]>wrote:
>>>>>>>
>>>>>>>> I agree with Jim that there is too much focus on "prediction".
>>>>>>>>
>>>>>>>> Though here the obsession might be related to how Ray Kurzweil, a
>>>>>>>> man famous for his accurate predictions, chooses to define
>>>>>>>> intelligence as the ability to predict.
>>>>>>>>
>>>>>>>> If prediction were a normal thing, then Ray wouldn't have gotten
>>>>>>>> famous for it.
>>>>>>>> Amongst us ordinary humans, at least around here in Canada, and on
>>>>>>>> the internet, people rarely if ever mention the word "prediction". And
>>>>>>>> very few, if any, claim to make accurate predictions.
>>>>>>>>
>>>>>>>> Kurzweil mentions prediction as necessary for things like hunting
>>>>>>>> or gathering food.
>>>>>>>> For instance projecting the vector of an animal as it runs away,
>>>>>>>> or inferring that if fruit was gathered in an area at a time of year,
>>>>>>>> it may be available at the same place next year at a similar time.
>>>>>>>>
>>>>>>>> However projecting, inferring, and predicting are 3 different words,
>>>>>>>> with different meanings.
>>>>>>>>
>>>>>>>> Projecting is a form of planning ahead.
>>>>>>>> Inferring is a form of carrying knowledge from one place and
>>>>>>>> applying it in a different place or time.
>>>>>>>> Predicting is a form of prophecy or foretelling based on special
>>>>>>>> knowledge.
>>>>>>>>
>>>>>>>> Most people infer and project, but very few predict.
>>>>>>>>
>>>>>>>> For instance in New Age cultures, many humans have had their
>>>>>>>> careers broken by making false predictions.  Also, no one (but Ray)
>>>>>>>> claims to make predictions based on intelligence; such claims are
>>>>>>>> usually based on things like intuition and telepathic communications.
>>>>>>>>
>>>>>>>> Maybe I'm committing the fallacy of making a distinction without a
>>>>>>>> difference. Perhaps all those words mean the same thing.
>>>>>>>>
>>>>>>>> Though I'd like to state that there are other ways of getting food,
>>>>>>>> that don't relate to predictive ability.
>>>>>>>>  For instance, how does an animal get food? It finds a signature of
>>>>>>>> the food item, for instance a smell, sound or shape, and then moves
>>>>>>>> toward it, until its jaws are clenching it.
>>>>>>>> I guess you could say it "predicted" the food was there, based on
>>>>>>>> the signature.
>>>>>>>> It certainly is possible that following a signature will lead
>>>>>>>> to a non-food item,
>>>>>>>> for instance if the signature is lost, or is being produced by
>>>>>>>> something else, e.g. a carnivorous flower.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> It is also possible that everything can be classified as prediction,
>>>>>>>> for instance when saying the alphabet, we predict what the next
>>>>>>>> letter is, before saying it.
>>>>>>>> This would mean that every hello world program uses prediction, as
>>>>>>>> it loads the hello world string, before printing it to screen.
>>>>>>>> *shrugs*
>>>>>>>>
>>>>>>>> It depends: do we want prediction to lose all meaning, by applying
>>>>>>>> it to just about anything?
>>>>>>>> Or do we want to be specific with what we are saying, and only use
>>>>>>>> words like "prediction" in the way that non-AGI mailing list people use
>>>>>>>> it, as a shorter-term version of prophecy, based on special
>>>>>>>> knowledge.
>>>>>>>>
>>>>>>>> For instance, even though I consider myself intelligent, I can't
>>>>>>>> predict the content of any reply or if there even will be replies,  
>>>>>>>> though
>>>>>>>> I can infer that there may be replies, as this is a mailing list, where
>>>>>>>> people often reply.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Jun 19, 2012 at 1:48 PM, Piaget Modeler <
>>>>>>>> [email protected]> wrote:
>>>>>>>>
>>>>>>>>>  Jim,
>>>>>>>>>
>>>>>>>>> Is this your actual belief or is this disinformation?
>>>>>>>>>
>>>>>>>>> ------------------------------
>>>>>>>>> Date: Tue, 19 Jun 2012 10:40:22 -0400
>>>>>>>>> Subject: [agi] Prediction Did Not Work (except in narrow ai.)
>>>>>>>>> From: [email protected]
>>>>>>>>> To: [email protected]
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> The original idea behind the use of "prediction" in AI was that
>>>>>>>>> the prediction could be compared against the actuality, and that
>>>>>>>>> comparison could be used to test the theory that produced the
>>>>>>>>> prediction.  (Some
>>>>>>>>> author popularized that model for AI but it was proposed by academic
>>>>>>>>> researchers before he did so. Karl Popper used the concept as part of
>>>>>>>>> his model of scientific discovery in the 1930s, but his principles,
>>>>>>>>> which were based on logical positivism, have become more dubious
>>>>>>>>> because logical certainty has become a more dubious principle of
>>>>>>>>> knowledge. And, oh, by the way, Popper did not believe that AI was
>>>>>>>>> possible.)  So, continuing with the
>>>>>>>>> march of the use of "prediction" in AI, AI people could see that our
>>>>>>>>> expectations were like "predictions" and so it did seem that the human
>>>>>>>>> mind did indeed use a method of prediction.  Of course the principle
>>>>>>>>> that a prediction could be compared against an actuality in order to
>>>>>>>>> evaluate the accuracy of a theory only works in narrow AI, and as
>>>>>>>>> narrow AI failed to produce simple AGI, that part of the cherished
>>>>>>>>> notion of "prediction" has been gradually eroded.
>>>>>>>>>
>>>>>>>>> This group uses the term prediction to simply refer to something
>>>>>>>>> that is "known" and as such it is a concept which is pretty shallow,
>>>>>>>>> since its verification as a mental product is thereby based on the
>>>>>>>>> experience that when we know something we act as if we were confident
>>>>>>>>> that it would happen.  The problem with such concepts like "knowing" or
>>>>>>>>> "prediction" is that they -do not- confirm the efficacy of theories
>>>>>>>>> that an AGI program might produce, except in those circumstances which
>>>>>>>>> would be considered narrow AI by this group.
>>>>>>>>>
>>>>>>>>> Let me repeat that.
>>>>>>>>> The problem with such concepts like "knowing" or "prediction" is
>>>>>>>>> that they -DO NOT- confirm the efficacy of theories that an AGI
>>>>>>>>> program
>>>>>>>>> might produce, except in those circumstances which would be considered
>>>>>>>>> narrow AI by this group.
>>>>>>>>>
>>>>>>>>> So sure, when someone points out that the human mind uses
>>>>>>>>> "expectation" and expectation is a little like "prediction", I do
>>>>>>>>> agree.
>>>>>>>>> But here the word prediction is just being used to describe "knowing
>>>>>>>>> something."  There is no principle of confirmation or disconfirmation
>>>>>>>>> of the use of "prediction" that can be used to produce AGI, except for
>>>>>>>>> special cases.  After years and years of the repetition of the word in
>>>>>>>>> these types of discussions there is still no AGI, so that should give
>>>>>>>>> you a hint about how good an idea it was.
>>>>>>>>>
>>>>>>>>> If the use of prediction as a confirming method can only be used
>>>>>>>>> in a limited set of circumstances, then its power in these discussions
>>>>>>>>> has been so diminished that it should not be used as if it were a
>>>>>>>>> magical concept.  Without some efficacy the word should not be used as
>>>>>>>>> a special technical term.  The word should be used in the way it is
>>>>>>>>> usually used.
>>>>>>>>>
>>>>>>>>> As I implied, Popper originally used the concept in a logical model
>>>>>>>>> of scientific theory.  If a theory could be used to predict a
>>>>>>>>> confirming or disconfirming observable event, then the theory could be
>>>>>>>>> disconfirmed by the failure of the event to occur.  (If the event
>>>>>>>>> occurred, it still might be caused by a coincidence.)
>>>>>>>>>
>>>>>>>>> It is coming back to me. (Or else my creative memory is kicking
>>>>>>>>> in.)  The author who popularized the theory of confirmation through
>>>>>>>>> prediction had a model of probability and confirmation by prediction.
>>>>>>>>> That model is inherently contradictory.
>>>>>>>>>
>>>>>>>>> It amazes me that you guys don't get this.
>>>>>>>>>
>>>>>>>>> Jim Bromer
>>>>>>>>>    *AGI* | Archives<https://www.listbox.com/member/archive/303/=now>
>>>>>>>>> <https://www.listbox.com/member/archive/rss/303/19999924-5cfde295>|
>>>>>>>>> Modify <https://www.listbox.com/member/?&;> Your Subscription
>>>>>>>>> <http://www.listbox.com>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Full employment can be had with the stroke of a pen. Simply institute a
>>>>> six hour workday. That will easily create enough new jobs to bring back
>>>>> full employment.
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Full employment can be had with the stroke of a pen. Simply institute a
>>> six hour workday. That will easily create enough new jobs to bring back
>>> full employment.
>>>
>>>
>>>
>>
>>
>>
>> --
>> Ben Goertzel, PhD
>> http://goertzel.org
>>
>> "My humanity is a constant self-overcoming" -- Friedrich Nietzsche
>>
>>
>
>
>
> --
> Full employment can be had with the stroke of a pen. Simply institute a six
> hour workday. That will easily create enough new jobs to bring back full
> employment.
>
>
>
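A quick aside on Jim's correlation point upthread: plain correlation can certify a perfect co-occurrence, yet completely miss a conditional co-occurrence until the hidden condition is surfaced. The sketch below only illustrates that statistical point, not a method anyone here proposed; the names (h for the hidden condition, pearson for the helper) are made up for the example, and nothing beyond the Python standard library is assumed.

```python
import random

random.seed(0)
n = 10_000

# x and the hidden condition h are independent fair coin flips;
# y copies x when h is on and inverts x when h is off.
h = [random.randint(0, 1) for _ in range(n)]
x = [random.randint(0, 1) for _ in range(n)]
y = [xi if hi else 1 - xi for xi, hi in zip(x, h)]

def pearson(a, b):
    """Plain Pearson correlation of two equal-length sequences."""
    m = len(a)
    ma, mb = sum(a) / m, sum(b) / m
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

# Unconditionally, x and y look unrelated ...
r_all = pearson(x, y)

# ... but conditioning on the hidden variable exposes a perfect
# co-occurrence in each regime (positive in one, negative in the other).
on = [i for i in range(n) if h[i]]
off = [i for i in range(n) if not h[i]]
r_on = pearson([x[i] for i in on], [y[i] for i in on])
r_off = pearson([x[i] for i in off], [y[i] for i in off])

print(f"overall r={r_all:+.3f}  given h=1 r={r_on:+.3f}  given h=0 r={r_off:+.3f}")
```

Conditioned on h the co-occurrence is perfect (r = +1 in one regime, -1 in the other), while the unconditional correlation is statistically indistinguishable from zero. Discovering something like h automatically is the hard part Jim is pointing at, and is roughly what a hidden-state model (e.g. the hidden Markov processes Abram mentioned, in the temporal case) has to untangle when the condition isn't observed.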



-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche


