They are using reinforcement learning to train their system.  One
problem with this is that it depends on a reward/punishment signal,
which for them is determined by game scores.  In the real world there
is no game score.  Moreover, in the game world the score is temporally
close to the actions the agent performs, whereas in the real world
rewards and punishments may be delayed by a great deal of time (if
they ever arrive at all).
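To make the delayed-reward point concrete, here is a minimal sketch (not DeepMind's code; the discount factor and the toy reward sequences are made up) of how a standard discounted return weights a reward that arrives long after the action that earned it:

```python
# Discounted return: G = r_0 + gamma*r_1 + gamma^2*r_2 + ...
# With a sparse, delayed reward, the credit assigned back to the
# original action shrinks geometrically with the delay.

def discounted_return(rewards, gamma=0.99):
    """Return G_0, the discounted sum of a reward sequence."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Game-like case: the score arrives one step after the action.
print(discounted_return([0.0, 1.0]))          # 0.99 -- a strong signal

# Real-world-like case: the same reward arrives 500 steps later.
# It now contributes only gamma**500 (about 0.0066) to the
# action's estimated value, making credit assignment very hard.
print(discounted_return([0.0] * 500 + [1.0]))
```

The same unit of reward is worth roughly 150 times less to the learner when it is delayed by 500 steps, which is one way of stating why sparse, delayed real-world feedback is so much harder than a game score.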

Further, Demis says in his talk that they assume humans gain knowledge
from experience.  However, the poverty-of-the-stimulus argument proposed
by Chomsky holds that humans acquire language faster than would be
possible given the limited stimuli they are exposed to.  As such, for at
least some kinds of knowledge of the world, human knowledge acquisition
seems due in no small part to a priori, instinctual knowledge, something
they do not appear to be representing in their system.

Also, even if Demis is up to speed on all the latest findings from the
mind sciences (which he likely is not), he still could not know
definitively how the brain functions, since this remains outside the
scope of human knowledge.  As such, he and his team can only guess at
how the brain does intelligence.

Such are some of the not-so-trivial difficulties they face.


On Wed, Oct 21, 2015 at 12:02 AM, Peter Christiansen <
[email protected]> wrote:

> Thanks Ben
>
> Sent from my iPad
>
> On Oct 20, 2015, at 8:59 PM, "Ben Goertzel" <[email protected]> wrote:
>
>
> Hmmm....  I just watched the video.  There is nothing new there.  Demis
> describes Deep Mind's well-known work on RL and video games, and then
> mentions their (already published) work on Neural Turing Machines...
>
> It's a fine talk presenting good work, but nothing significant seems to be
> mentioned beyond what has already been published and publicized
> previously...
>
> I think this is good stuff, but none of their *results* look anywhere
> close to human-level AGI; and the design details that they've disclosed
> don't come anywhere near to being a comprehensive design for an AGI...
>
> Of course, 100 smart guys working together toward pure & applied AGI, with
> Google's resources at their disposal, is nothing to be sneered at....   But
> let's not overblow what they've achieved so far...
>
> -- Ben
>
> On Wed, Oct 21, 2015 at 11:45 AM, Eric J <[email protected]> wrote:
>
>> OK.... why hasn't everyone been all over this? Kurzweil especially being
>> at Google?
>>
>> p.s. read How To Win Friends and Influence People. Your opening sentence
>> is from How To Alienate People and Look Like a Dick.
>> Maybe you were so excited your fingers lost all tact?
>>
>> - Eric
>>
>> On Tue, Oct 20, 2015 at 6:15 PM, Alan Grimes <[email protected]>
>> wrote:
>>
>>> HEY YOU!!!
>>>
>>> I'M TALKING TO ALL THE COMATOSE NUMBSKULLS ON THIS LIST (nearly
>>> everyone)...
>>>
>>> Do you have any fucking idea what that Deep Mind video that was posted a
>>> few hours ago means???
>>>
>>> They have about 85% of a working AGI right now and they seem to have
>>> enough hardware available to run it at near realtime speeds.
>>>
>>>
>>> What that means is that we are staring down the double barrels of a
>>> hard-takeoff singularity and it could very well be in progress as I type
>>> this!
>>>
>>> All questions regarding the hardware requirements of AGI have been
>>> conclusively demonstrated in that video. As stated, only one or two more
>>> capabilities are required for it to exhibit superhuman cognitive
>>> capacities.
>>>
>>> =\
>>>
>>> I am now issuing a Defcon 2 singularity alert. Within the next three
>>> years either the singularity will happen or World War III will happen.
>>> There are no other alternatives right now. =\
>>>
>>> Trajectory appears to be towards a hard to medium-hard takeoff... As
>>> mentioned, we already seem to have staggering hardware overhang.
>>>
>>>
>>> The singularity is now, motherfuckers... I know you are going to skim
>>> this and hit the snooze button, for the love of doG, don't! This is the
>>> critical point in history that will decide everything!
>>>
>>> Now the goals are:
>>>
>>> 1. don't get gooed.
>>> 2. Try to obtain and spread the goodies.
>>>
>>> --
>>> IQ is a measure of how stupid you feel.
>>>
>>> Powers are not rights.
>>>
>>>
>>>
>>> -------------------------------------------
>>> AGI
>>> Archives: https://www.listbox.com/member/archive/303/=now
>>> RSS Feed:
>>> https://www.listbox.com/member/archive/rss/303/26957758-393ebbfd
>>> Modify Your Subscription: https://www.listbox.com/member/?&;
>>> Powered by Listbox: http://www.listbox.com
>>>
>>
>>
>
>
>
> --
> Ben Goertzel, PhD
> http://goertzel.org
>
> "The reasonable man adapts himself to the world: the unreasonable one
> persists in trying to adapt the world to himself. Therefore all progress
> depends on the unreasonable man." -- George Bernard Shaw
>
>



