I read the article quickly (in half an hour) and got most of the gist
of it.  I want to read it again more carefully.

The "Core AGI Hypothesis" sounds correct.  The desired end result,
some approximation of human-level intelligence, seems to be the goal
of all approaches.  But after that the problems start.  Actually
achieving it is the problem!  :)

I think an implied (or stated) intent of the paper is to find some
fundamental processing/solution core that could be shown to be
objectively legitimate and that must be included in any solution,
regardless of which heading (symbolic, emergent, hybrid) it falls
under.  The approaches to AGI vary wildly.  How can one know which is
correct, when no AGI is really working yet?  But there must be some
core assumptions that every approach shares, even when comparing
systems as drastically different as CYC and AIXI.

Mike A


On 6/26/14, Anastasios Tsiolakidis via AGI <[email protected]> wrote:
> G-d, a lot of broken English in that article, let me guess, 13 yo Ukrainian
> or PLN? Certainly the first paragraph on page 2 cannot end in "that
> display".
> AT
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/11943661-d9279dae
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>

