On Tue, Jun 19, 2012 at 9:31 PM, Ben Goertzel <[email protected]> wrote:

>
> There's a general fallacy that misleads many AGI people, of the following
> form ...
>
> "
> -- Capability or method X, if you could do it incredibly (i.e.
> unrealistically) well, would enable arbitrarily great general intelligence
> -- Simple versions of X seem to lead to interesting "narrow AI" behaviors
> THEREFORE...
> -- By pursuing more and more complex versions of X, we can get high levels
> (e.g. human-level) of real-world general intelligence
> "
>
> In the case we're discussing here X = Prediction ..
>
> In other cases, X = logical reasoning, or pattern recognition, or
> automated program learning, or simulation, etc. etc.
>
> Unfortunately, things just don't work that way ;/ ...



Now that I have thought about this a little more, I have to say that I
disagree with it, or rather, I think it is an over-generalization.
We all know that there will be improvements in computer technology, and some
of them will have an impact on AGI.  No one in this group has completely
ruled out the possibility, for example, of a breakthrough in parallelism.  We
have all seen that the history of parallelism has been extremely
disappointing, and most of us are more than cautious about making predictions
that depend on such a breakthrough at this time.

I have come to the conclusion that a general breakthrough is going to have
to be something with broad consequences for computation.  As a result I have
a stronger belief in the potential of a breakthrough in logical
satisfiability (SAT) than I have ever had before.  Since so many other hard
problems reduce to SAT, a dramatically faster SAT procedure would ripple
across computation in general.
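
To make the point concrete, here is a toy sketch in Python (my own
illustration, with made-up clause data, not anything from this thread): a
minimal DPLL-style search over CNF clauses in the usual DIMACS convention,
where the integer v stands for "variable v is true" and -v for its negation.
Modern solvers are enormously more sophisticated, but anything that made this
kind of search dramatically cheaper would matter far beyond logic itself.

# Toy DPLL-style satisfiability check. Clauses are lists of
# nonzero integers: the literal v means "variable v is true",
# -v means "variable v is false" (the usual DIMACS convention).

def dpll(clauses, assignment=()):
    # Simplify every clause under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue                      # clause already satisfied
        reduced = [lit for lit in clause if -lit not in assignment]
        if not reduced:
            return None                   # clause falsified: backtrack
        simplified.append(reduced)
    if not simplified:
        return assignment                 # every clause satisfied
    lit = simplified[0][0]                # branch on an unassigned literal
    return (dpll(simplified, assignment + (lit,))
            or dpll(simplified, assignment + (-lit,)))

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(dpll([[1, 2], [-1, 2], [-2, 3]]))  # prints a satisfying assignment

Graph coloring, planning, hardware verification, and much of combinatorial
search encode into exactly this clause format, which is why a genuine
speedup here would spread so widely.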

While it is true that things don't usually work that way, they do work that
way every once in a while.  We just cannot see how such breakthroughs are
going to take shape before they happen.

Jim