On Tue, Jun 19, 2012 at 9:31 PM, Ben Goertzel <[email protected]> wrote:

>
> There's a general fallacy that misleads many AGI people, of the following
> form ...
>
> "
> -- Capability or method X, if you could do it incredibly (i.e.
> unrealistically) well, would enable arbitrarily great general intelligence
> -- Simple versions of X seem to lead to interesting "narrow AI" behaviors
> THEREFORE...
> -- By pursuing more and more complex versions of X, we can get high
> levels (e.g. human-level) of real-world general intelligence
> "
>
> In the case we're discussing here, X = Prediction ...
>
> In other cases, X = logical reasoning, or pattern recognition, or
> automated program learning, or simulation, etc. etc.
>
> Unfortunately, things just don't work that way ;/ ...
>
> ben
>
>

I mostly agree too, but the thing is that if you want to use any of the
above methods where they are well suited to a problem, then you first have
to describe the complicated circumstances in which they can be employed
adequately.  This cannot be done through abstract representations alone,
and so you end up stuck with different kinds of narrow solutions for
different types of narrow (but adequately described) problems.  However, I
believe that many of these seemingly narrow situations may hide a great
deal of complexity (or at least a great deal of potential complexity) that
we don't fully understand.

I do, however, believe that significant breakthroughs in computing are
still possible.
Jim Bromer


