Well, I would say that none of the work done at DeepMind, and none of the
ideas in Demis et al.'s paper, address the questions I raised in this paper:

http://ieeexplore.ieee.org/document/6889662/

(sorry for the paywall ... use sci-hub.cc ...)

So there is no real plan for how to achieve the abstract symbolic reasoning
needed for human-level general intelligence within a purely formal-NN type
of approach.


Obviously, in OpenCog we are taking more of a symbolic-neural approach, so
we don't have these issues with abstraction.

Also, if you look at the recent Markram et al. paper on algebraic topology
and mesoscopic brain structure, there is nothing in the Hassabis et al.
universe that seems to address how such structures would be learned or
would emerge.



But sure, in a big-picture historical sense, the progress happening these
days on "narrow AI verging toward AGI" and on "making complex cognitive
architectures finally do stuff" is super exciting.  We are on the verge of
multiple breakthroughs within the next few years.  Woo hoo!!

-Ben


On Thu, Jul 27, 2017 at 5:55 AM, EdFromNH . <[email protected]> wrote:

> About the above linked Hassabis paper, Ben said, "It's sort of a high
> level inspirational paper... it does lay down pretty clearly what sort of
> thinking and approach Deep Mind is likely to be taking in the next years
> ... there are no big surprises here though as this has been Demis's
> approach, bias and interest all along, right?"
>
> From my knowledge of several articles and videos by, or about, Hassabis,
> I totally agree.  But I am a little less ho-hum than Ben, perhaps because
> I'm not as up on the current state of AGI as Ben is.
>
> Reading Hassabis's paper makes me bullish that we could have powerful, if
> not fully human-level, AGI within 5 years.
>
> Why?  Because all of the unsolved challenges Hassabis discusses seem like
> they could be solved fairly easily if enough engineering and programming
> talent were thrown at them.  I feel like I could relatively easily --
> within a few months -- sketch plausible high-level architectural
> descriptions for solving all of these problems, and presumably people like
> Demis and Ben could do even better.  (Perhaps that is why Ben is so ho-hum
> about the paper.)  With the money that's being thrown into AGI, and the
> much greater ease of doing cognitive-architecture experiments made
> possible by Neural Turing Machines -- which allow programmable, modular
> plug-and-play with pre-designed and pre-trained neural net modules -- the
> world is going to get weird fast.
>
> Tell me why I am wrong.
>
> On Sun, Jul 23, 2017 at 8:29 PM, Ed Pell <[email protected]> wrote:
>
>> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5467749/
>>
>>
>> On 7/23/2017 4:18 PM, Giacomo Spigler wrote:
>>
>>>
>>> An Approximation of the Error Backpropagation
>>> Algorithm in a Predictive Coding Network
>>> with Local Hebbian Synaptic Plasticity
>>>
>>
>>
>>
>>
>
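
P.S.  Re Ed's point about NTM-style plug-and-play: the thing that makes
those architectures composable is content-based addressing over an external
memory -- a controller emits a key, and the read head returns a
similarity-weighted blend of the stored rows.  Just to make that one
mechanism concrete, here is a minimal NumPy sketch (the function name and
toy data are made up for illustration, it is not taken from any DeepMind
code, and a real NTM head adds learned keys, interpolation gates and shift
weightings on top of this):

import numpy as np

def content_read(memory, key, beta=1.0):
    # Cosine similarity between the query key and each memory row.
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)
    # Softmax over rows, sharpened by the "key strength" beta.
    w = np.exp(beta * sims)
    w = w / w.sum()
    # Read vector: attention-weighted combination of memory rows.
    return w @ memory, w

# Toy usage: three stored "module outputs" as memory rows.
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
read, w = content_read(M, key=np.array([0.9, 0.1, 0.0]), beta=5.0)
print(np.round(w, 3), np.round(read, 3))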



-- 
Ben Goertzel, PhD
http://goertzel.org

"I am God! I am nothing, I'm play, I am freedom, I am life. I am the
boundary, I am the peak." -- Alexander Scriabin


