Hi Oliver

Reinforcement learning is different to unsupervised learning. We used
reinforcement learning to train agents to play Atari games. We also
published a more recent paper (www.nature.com/articles/nature14236) that
applied the same network to 49 different Atari games (achieving human-level
performance in around half).

Similar neural network architectures can indeed be applied to Go (that was
one of the motivations for our recent ICLR paper). However, training by
reinforcement learning from self-play is perhaps more challenging than for
Atari: our method (DQN) was applied to single-player Atari games, whereas
in Go there is also an opponent. I could not guarantee that DQN would be
stable in this setting.
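[For readers following along: the learning rule underneath DQN is the Q-learning update, which learns purely from a scalar reward signal. A minimal tabular sketch is below; the toy chain environment, state/action names, and hyperparameters are illustrative only, not from either paper. The self-play concern above arises because, with an opponent, the environment each player sees is non-stationary, which this single-agent update does not account for.]

```python
import random

# Toy single-player episodic environment (illustrative, not an Atari game):
# states 0..3 in a chain; action 1 moves right, action 0 moves left.
# Reaching state 3 ends the episode with reward +1; every other step gives 0.
N_STATES, ACTIONS = 4, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            # Q-learning update: bootstrap from the best next-state value
            target = r + (0.0 if done else GAMMA * max(q[s2]))
            q[s][a] += ALPHA * (target - q[s][a])
            s = s2
    return q

q = train()
```

DQN replaces the table `q` with a deep network (plus experience replay and a target network for stability), but the update target is the same.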

Cheers
Dave


On 16 March 2015 at 22:21, Oliver Lewis <ojfle...@yahoo.co.uk> wrote:

> Can you say anything about whether you think their approach to
> unsupervised learning could be applied to networks similar to those you
> trained? Any practical or theoretical constraints we should be aware of?
>
>
> On Monday, 16 March 2015, Aja Huang <ajahu...@gmail.com> wrote:
>
>> Hello Oliver,
>>
>> 2015-03-16 11:58 GMT+00:00 Oliver Lewis <ojfle...@yahoo.co.uk>:
>>>
>>> It's impressive that the same network learned to play seven games with
>>> just a win/lose signal.  It's also interesting that both these teams are in
>>> different parts of Google. I assume they are aware of each other's work,
>>> but maybe Aja can confirm.
>>>
>>
>> The authors are my colleagues at Google DeepMind, as they list DeepMind
>> as their affiliation on the paper. Yes, we are aware of each other's
>> work.
>>
>> Aja
>>
>>
> _______________________________________________
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
