Re: [Computer-go] Facebook Go AI

2015-12-06 Thread Detlef Schmicker



Am 06.12.2015 um 16:24 schrieb Petr Baudis:
> On Sat, Dec 05, 2015 at 02:47:50PM +0100, Detlef Schmicker wrote:
>> I understand the idea that long-term prediction might lead to a
>> different optimum (but it should not lead to one with a higher
>> one step prediction rate: it might result in a stronger player
>> with the same prediction rate...)
> 
> I think the whole idea is that it should improve raw prediction
> rate on unseen samples too.

This would mean we are overfitting, which should show up as a bigger
difference between seen and unseen samples. That is normally checked by
comparing the results on a test database with those on the training
database, and the difference is small?!
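
A minimal sketch of that train/test check (everything here, including the
predict function and the random stand-in data, is a placeholder rather
than anything from a real engine):

import random

def accuracy(predict_move, positions):
    # positions: list of (board, expert_move) pairs
    hits = sum(1 for board, expert in positions if predict_move(board) == expert)
    return hits / len(positions)

# stand-in model and data, only to make the sketch runnable
random.seed(0)
predict   = lambda board: random.randrange(361)
train_set = [(None, random.randrange(361)) for _ in range(1000)]
test_set  = [(None, random.randrange(361)) for _ in range(1000)]

gap = accuracy(predict, train_set) - accuracy(predict, test_set)
print("train/test accuracy gap:", gap)   # a large positive gap means overfitting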


> The motivation of the increased supervision is
> to improve the hidden representation in the network, making it
> more suitable for longer-term tactical predictions and therefore
> "stronger" and better encompassing the board situation.  This
> should result in better one-move predictions in a situation where
> the followup is also important.
> 
> It sounds rather reasonable to me...?
> 

Yes, it sounds reasonable, but that does not always help in computer go :)

Detlef

Re: [Computer-go] Facebook Go AI

2015-12-06 Thread Petr Baudis
On Sat, Dec 05, 2015 at 02:47:50PM +0100, Detlef Schmicker wrote:
> I understand the idea that long-term prediction might lead to a
> different optimum (but it should not lead to one with a higher one
> step prediction rate: it might result in a stronger player with the
> same prediction rate...)

I think the whole idea is that it should improve raw prediction rate
on unseen samples too.  The motivation of the increased supervision is
to improve the hidden representation in the network, making it more
suitable for longer-term tactical predictions and therefore "stronger"
and better encompassing the board situation.  This should result in
better one-move predictions in a situation where the followup is also
important.

It sounds rather reasonable to me...?
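
For concreteness, here is a rough sketch of what that increased
supervision can look like: one shared convolutional trunk with one
prediction head per future move, trained with a summed cross-entropy
loss. It is written in PyTorch purely as an illustration; the actual
darkforest networks are much larger and all the layer sizes and plane
counts here are invented.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiStepPolicy(nn.Module):
    # Shared trunk, one 1x1-conv head per predicted step (k=3: next move,
    # opponent reply, our counter-move), each producing 19*19 move logits.
    def __init__(self, in_planes=25, channels=64, k=3):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_planes, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.heads = nn.ModuleList([nn.Conv2d(channels, 1, 1) for _ in range(k)])

    def forward(self, planes):
        h = self.trunk(planes)
        return [head(h).flatten(1) for head in self.heads]   # k x (batch, 361)

def multi_step_loss(logits_per_step, targets_per_step):
    # Summing the per-step cross-entropies is the extra supervision;
    # only the first head is needed at play time.
    return sum(F.cross_entropy(l, t)
               for l, t in zip(logits_per_step, targets_per_step))

# toy usage with random data
model = MultiStepPolicy()
planes = torch.randn(8, 25, 19, 19)
targets = [torch.randint(0, 361, (8,)) for _ in range(3)]
multi_step_loss(model(planes), targets).backward()

The point of the shared trunk is exactly what Petr describes: the extra
heads force the hidden representation to carry information about the
follow-up, even though only the one-move head is used when playing.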

-- 
Petr Baudis
If you have good ideas, good data and fast computers,
you can do almost anything. -- Geoffrey Hinton

Re: [Computer-go] Facebook Go AI

2015-12-06 Thread Stefan Kaitschick
> I understand the idea that long-term prediction might lead to a
> different optimum (but it should not lead to one with a higher one
> step prediction rate: it might result in a stronger player with the
> same prediction rate...), and might increase training speed, but hard
> facts would be great before spending a GPU-month on this :)
>
>
>
> Detlef
>

I wouldn't be too sure that this cannot improve the 1-step prediction rate.
In a way, multi-step prediction is like peeking around the corner. :-)

Stefan

Re: [Computer-go] Facebook Go AI

2015-11-30 Thread Petr Baudis
On Tue, Nov 24, 2015 at 01:39:00PM +0100, Petr Baudis wrote:
> On Mon, Nov 23, 2015 at 10:00:27PM -0800, David Fotland wrote:
> > 1 kyu on KGS with no search is pretty impressive.
> 
> But it doesn't correlate very well with the reported results against
> Pachi, it seems to me.
> 
> ("Pachi 10k" should correspond to ~5s thinking time on 8-thread FX8350.)
> 
> > Perhaps Darkforest2 is too slow.
> 
> Darkfores2 should be just different parameters when training the
> network, according to the paper if I understood it right.

  Never mind, darkfores2 is now on KGS and got a 3d rank, which totally
fits the reported results. Wow.

  Very impressive - computers can now play "just by intuition" better
than most people who spend many years studying and playing the game.
(No one can call this "brute force".)

-- 
Petr Baudis
If you have good ideas, good data and fast computers,
you can do almost anything. -- Geoffrey Hinton

Re: [Computer-go] Facebook Go AI.

2015-11-24 Thread Ingo Althöfer
Hello Yuandong,

thanks for your posting, and welcome to the computer-go
mailing list.
I wish you and your team good luck with your further
work on darkfores***. Please keep playing on KGS.

Ingo.




Re: [Computer-go] Facebook Go AI.

2015-11-24 Thread Yuandong Tian
Hi all,

I am the first author of the Facebook Go AI paper. Thanks for your interest!
This is the first time I have posted a message here, so please forgive me if
I mess anything up.

1. The estimation of 1d-2d is based on the win rate of free games in the
last 3 months (since darkforest launched in August). See Table 6 in the paper.
For ranked games, its rank is definitely lower, since people tend to play
more seriously. It seems that now darkforest is 1k and darkfores1 is 1d.

2. Here is the Pachi 10k command line for no pondering.
pachi -t =1 threads=8,pondering=0

For pondering, it is simply
pachi -t =1 threads=8

In both cases, all the spatial patterns are properly loaded. See the
following GTP response:
W>> protocol_version
Random seed: 1448000132
Loaded spatial dictionary of 1064482 patterns.
Loaded 3021829 pattern-probability pairs.

3. We use pachi version 11.99 as shown in the following GTP response:
W>> version
W<< = 11.99 (Genjo-devel): If you believe you have won but I am still
playing, please help me understand by capturing all dead stones. Anyone can
send me 'winrate' in private chat to get my assessment of the position.
Have a nice game!

4. Darkfores2 is still a pure DCNN model; no search is involved.

Thanks! If you have any comments, please let me know.


Yuandong Tian
Research Scientist,
Facebook Artificial Intelligence Research (FAIR)
Website:
https://research.facebook.com/researchers/1517678171821436/yuandong-tian/
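
For anyone who wants to script matches against Pachi the same way, a
minimal GTP driver over a pipe looks roughly like the sketch below. The
Pachi arguments are copied from the message above; the Python class and
the final genmove call are only an illustration, not the harness used
for the paper.

import subprocess

class GTPEngine:
    # Minimal GTP client: send one command, then read lines until the
    # blank line that terminates every GTP response.
    def __init__(self, cmd):
        self.proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                                     stdout=subprocess.PIPE, text=True)

    def send(self, command):
        self.proc.stdin.write(command + "\n")
        self.proc.stdin.flush()
        lines = []
        while True:
            line = self.proc.stdout.readline()
            if line.strip() == "":
                break
            lines.append(line.rstrip())
        return "\n".join(lines)          # e.g. "= 11.99 (Genjo-devel): ..."

engine = GTPEngine(["pachi", "-t", "=1", "threads=8,pondering=0"])
print(engine.send("protocol_version"))
print(engine.send("version"))
print(engine.send("genmove b"))

Pachi prints its diagnostics (random seed, pattern dictionary sizes) on
stderr, so only the GTP responses come back through this stdout pipe.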

Re: [Computer-go] Facebook Go AI

2015-11-24 Thread Ingo Althöfer
Perhaps bots in the style of Darkforest would be good
candidates to win the Handicap-29 prize...

http://www.althofer.de/handicap-29-prize.html

Ingo.
 
***
Sent: Tuesday, 24 November 2015, 07:00
From: "David Fotland"

1 kyu on KGS with no search is pretty impressive.  Perhaps Darkforest2 is too 
slow.
David

Re: [Computer-go] Facebook Go AI.

2015-11-24 Thread Hiroshi Yamashita

Hi,

Thank you for the paper.
Predicting not only the next move, but also the opponent's move and the
next counter-move, is very interesting.

I have two questions.

darkforest : standard features, 1-step prediction, KGS dataset
darkfores1 : extended features, 3-step prediction, GoGoD dataset
darkfores2 : based on darkfores1, with a fine-tuned learning rate

1. How did you tune the learning rate in darkfores2?

2. darkfores1 is stronger than darkforest. Is that because of the 3-step
prediction or because of the GoGoD dataset? Do you have a result using
"standard features, 1-step prediction on GoGoD"?

Regards,
Hiroshi Yamashita
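
Hiroshi's second question is essentially a request for an ablation. A tiny
sketch of how such a grid could be organized (train_and_eval and every name
below are hypothetical, not anything from the darkforest code):

from itertools import product

features = ["standard", "extended"]
steps    = [1, 3]                # 1-step vs. 3-step prediction targets
datasets = ["KGS", "GoGoD"]

for feat, k, data in product(features, steps, datasets):
    run = f"{feat}-{k}step-{data}"
    # accuracy = train_and_eval(features=feat, predict_steps=k, dataset=data)
    print("would train and evaluate:", run)

Separating the effect of GoGoD from the effect of 3-step prediction only
needs the two runs Hiroshi names, but the full grid also answers the
feature question in the same sweep.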


Re: [Computer-go] Facebook Go AI

2015-11-24 Thread Petr Baudis
On Mon, Nov 23, 2015 at 10:00:27PM -0800, David Fotland wrote:
> 1 kyu on KGS with no search is pretty impressive.

But it doesn't correlate very well with the reported results against
Pachi, it seems to me.

("Pachi 10k" should correspond to ~5s thinking time on 8-thread FX8350.)

> Perhaps Darkforest2 is too slow.

Darkfores2 should be just different parameters when training the
network, according to the paper if I understood it right.

Petr Baudis

Re: [Computer-go] Facebook Go AI

2015-11-24 Thread Hideki Kato
That can happen if the bot has a big (and strange) weak point such as
ladders.  See the attached record.

Hideki

-- 
Hideki Kato 

zife-darkfores1.sgf
Description: x-go-sgf

Re: [Computer-go] Facebook Go AI

2015-11-24 Thread Igor Polyakov
If you train your neural network on pro games, pros never play out
ladders that end in capture, so whenever a ladder does get played out,
the running group is safe. That is not always the case in reality, but
you would need to specifically read out the ladder to check.
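
That reading step is exactly what a pure pattern matcher skips. As an
illustration of how little code an explicit (if crude) ladder check needs,
here is a sketch; the board representation and the simplifications (the
chasing stones are never counter-ataried, no ko, no capture of ladder
breakers) are mine, and this is not how any of the engines discussed here
actually read ladders.

NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def group_and_liberties(board, pos, size=19):
    # board: dict mapping (x, y) -> 'b' or 'w'; empty points are absent
    color = board[pos]
    stones, libs, stack = set(), set(), [pos]
    while stack:
        x, y = stack.pop()
        if (x, y) in stones:
            continue
        stones.add((x, y))
        for dx, dy in NEIGHBORS:
            q = (x + dx, y + dy)
            if not (0 <= q[0] < size and 0 <= q[1] < size):
                continue
            if q not in board:
                libs.add(q)
            elif board[q] == color:
                stack.append(q)
    return stones, libs

def ladder_escapes(board, pos, depth=0):
    # Crude read: the group at pos runs on its single liberty, the chaser
    # answers on one of the two liberties of the extension, and we recurse.
    if depth > 60:
        return True                      # give up, call it an escape
    color = board[pos]
    chaser = 'w' if color == 'b' else 'b'
    _, libs = group_and_liberties(board, pos)
    if len(libs) != 1:
        return len(libs) > 1             # 0 liberties: captured; 2+: not a ladder
    run = next(iter(libs))
    after_run = dict(board)
    after_run[run] = color               # defender extends
    _, libs = group_and_liberties(after_run, run)
    if len(libs) >= 3:
        return True                      # broke free
    if len(libs) <= 1:
        return False                     # the extension was (self-)atari
    for atari in libs:                   # chaser tries both continuations
        after_atari = dict(after_run)
        after_atari[atari] = chaser
        if not ladder_escapes(after_atari, run, depth + 1):
            return False                 # one continuation captures the runner
    return True

A real reader also has to notice when one of the chasing stones itself
runs out of liberties, which is precisely the situation the DCNN never
sees in pro games.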



Re: [Computer-go] Facebook Go AI

2015-11-23 Thread Rémi Coulom

It is darkforest, indeed:

Title: Better Computer Go Player with Neural Network and Long-term 
Prediction


Authors: Yuandong Tian, Yan Zhu

Abstract:
Competing with top human players in the ancient game of Go has been a 
long-term goal of artificial intelligence. Go's high branching factor 
makes traditional search techniques ineffective, even on leading-edge 
hardware, and Go's evaluation function could change drastically with one 
stone change. Recent works [Maddison et al. (2015); Clark & Storkey 
(2015)] show that search is not strictly necessary for machine Go 
players. A pure pattern-matching approach, based on a Deep Convolutional 
Neural Network (DCNN) that predicts the next move, can perform as well 
as Monte Carlo Tree Search (MCTS)-based open source Go engines such as 
Pachi [Baudis & Gailly (2012)] if its search budget is limited. We 
extend this idea in our bot named darkforest, which relies on a DCNN 
designed for long-term predictions. Darkforest substantially improves 
the win rate for pattern-matching approaches against MCTS-based 
approaches, even with looser search budgets. Against human players, 
darkforest achieves a stable 1d-2d level on KGS Go Server, estimated 
from free games against human players. This substantially improves the 
estimated rankings reported in Clark & Storkey (2015), where DCNN-based 
bots are estimated at 4k-5k level based on performance against other 
machine players. Adding MCTS to darkforest creates a much stronger 
player: with only 1000 rollouts, darkforest+MCTS beats pure darkforest 
90% of the time; with 5000 rollouts, our best model plus MCTS beats 
Pachi with 10,000 rollouts 95.5% of the time.


http://arxiv.org/abs/1511.06410

Rémi


Re: [Computer-go] Facebook Go AI

2015-11-23 Thread David Fotland
1 kyu on KGS with no search is pretty impressive.  Perhaps Darkforest2 is too
slow.

David


Re: [Computer-go] Facebook Go AI

2015-11-23 Thread Petr Baudis
The numbers look pretty impressive! So this DNN is as strong as
a full-fledged MCTS engine with non-trivial thinking time. The increased
supervision is a nice idea, but even barring that this seems like quite
a boost to the previously published results?  Surprising that this is
just thanks to relatively simple tweaks to representations and removing
features... (Or is there anything important I missed?)

I'm not sure what's the implementation difference between darkfores1 and
darkfores2, it's a bit light on detail given how huge the winrate delta
is, isn't it? ("we fine-tuned the learning rate")  Hopefully peer review
will help.

Do I understand it right that in the tree, they sort moves by their
probability estimate, keep only moves whose probabilities sum up to
0.8, prune the rest and use just plain UCT with no priors afterwards?
The result with +MCTS isn't at all convincing - it just shows that
MCTS helps strength, which isn't so surprising, but the extra thinking
time spent corresponds to about 10k->150k playouts increase in Pachi,
which may not be a good trade for +27/4.5/1.2% winrate increase.
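
That reading of the scheme would amount to something like the sketch
below: keep the highest-probability moves until their DCNN probabilities
cover 0.8 of the mass, then run vanilla UCT over only those children.
The code is a paraphrase of Petr's description, not the paper's actual
implementation, and all the move probabilities are made up.

import math

class Node:
    def __init__(self, move=None):
        self.move, self.children = move, []
        self.visits, self.wins = 0, 0.0

def prune_by_probability_mass(move_probs, mass=0.8):
    # keep the highest-probability moves until their probabilities reach `mass`
    ordered = sorted(move_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for move, p in ordered:
        kept.append(move)
        total += p
        if total >= mass:
            break
    return kept

def uct_select(parent, c=1.4):
    # plain UCT over the pruned children, no priors
    log_n = math.log(max(parent.visits, 1))
    def score(child):
        if child.visits == 0:
            return float("inf")          # visit unexplored children first
        return child.wins / child.visits + c * math.sqrt(log_n / child.visits)
    return max(parent.children, key=score)

# toy usage with made-up DCNN output
probs = {"D4": 0.45, "Q16": 0.25, "C3": 0.12, "K10": 0.08, "A1": 0.10}
root = Node()
root.children = [Node(m) for m in prune_by_probability_mass(probs)]  # D4, Q16, C3
best = uct_select(root)   # with no visits yet this just picks the first child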

On Mon, Nov 23, 2015 at 09:54:37AM +0100, Rémi Coulom wrote:
> It is darkforest, indeed:
> 
> Title: Better Computer Go Player with Neural Network and Long-term
> Prediction
> 
> Authors: Yuandong Tian, Yan Zhu
> 
> Abstract:
> Competing with top human players in the ancient game of Go has been a
> long-term goal of artificial intelligence. Go's high branching factor makes
> traditional search techniques ineffective, even on leading-edge hardware,
> and Go's evaluation function could change drastically with one stone change.
> Recent works [Maddison et al. (2015); Clark & Storkey (2015)] show that
> search is not strictly necessary for machine Go players. A pure
> pattern-matching approach, based on a Deep Convolutional Neural Network
> (DCNN) that predicts the next move, can perform as well as Monte Carlo Tree
> Search (MCTS)-based open source Go engines such as Pachi [Baudis & Gailly
> (2012)] if its search budget is limited. We extend this idea in our bot
> named darkforest, which relies on a DCNN designed for long-term predictions.
> Darkforest substantially improves the win rate for pattern-matching
> approaches against MCTS-based approaches, even with looser search budgets.
> Against human players, darkforest achieves a stable 1d-2d level on KGS Go
> Server, estimated from free games against human players. This substantially
> improves the estimated rankings reported in Clark & Storkey (2015), where
> DCNN-based bots are estimated at 4k-5k level based on performance against
> other machine players. Adding MCTS to darkforest creates a much stronger
> player: with only 1000 rollouts, darkforest+MCTS beats pure darkforest 90%
> of the time; with 5000 rollouts, our best model plus MCTS beats Pachi with
> 10,000 rollouts 95.5% of the time.
> 
> http://arxiv.org/abs/1511.06410

-- 
Petr Baudis
If you have good ideas, good data and fast computers,
you can do almost anything. -- Geoffrey Hinton

Re: [Computer-go] Facebook Go AI

2015-11-23 Thread Andy
So the KGS bots darkforest and darkfores1 play with only the DCNN, no MCTS
search added? I wish they would put darkfores2 with MCTS on KGS; why not
put your strongest bot out there?





Re: [Computer-go] Facebook Go AI

2015-11-23 Thread Andy
As of about an hour ago darkforest and darkfores1 have started playing
rated games on KGS!



Re: [Computer-go] Facebook Go AI

2015-11-05 Thread Hiroshi Yamashita

Hi,

I tried darkforest against Aya.

darkforest is 0-3 (0 wins, 3 losses) against a 2d bot (AyaMC2).
darkforest is 1-4 against a 1k bot (AyaBot4).
darkfores1 is 1-3 against a 1k bot (AyaBot4).

It looks like darkforest is 1k or 2k.
It plays very quickly, and plays ko very well. But sometimes
it fails at ladders. Maybe it is pure DCNN without MC search?
darkforest is ver 1.0, darkfores1 is ver 1.1, a bit more recent.

Regards,
Hiroshi Yamashita
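
For what such small samples are worth, the record above can be turned into
a rough Elo gap. The conversion below and the roughly 100 Elo per KGS rank
rule of thumb are back-of-the-envelope assumptions, and five games give
enormous error bars.

import math

def elo_gap(wins, losses):
    # maximum-likelihood Elo difference implied by a win/loss record
    p = wins / (wins + losses)
    p = min(max(p, 0.01), 0.99)          # clamp so the log stays defined
    return 400 * math.log10(p / (1 - p))

print(round(elo_gap(1, 4)))              # vs. the 1k bot: about -240 Elo
print(round(elo_gap(0, 3)))              # vs. the 2d bot: clamped, far below

At about 100 Elo per rank that would put darkforest a couple of ranks below
the 1k bot, somewhat more pessimistic than the 1k-2k guess, but with five
games the error bars easily cover both.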

- Original Message - 
From: "Nick Wedd" <mapr...@gmail.com>

To: <computer-go@computer-go.org>
Sent: Wednesday, November 04, 2015 4:32 AM
Subject: Re: [Computer-go] Facebook Go AI



I think this Facebook AI may be the program playing on KGS as darkforest
and darkfores1.

Nick



Re: [Computer-go] Facebook Go AI

2015-11-04 Thread Tobias Pfeiffer
Thanks for that observation Nick!

For those that don't want to look for themselves:

https://www.gokgs.com/gameArchives.jsp?user=darkforest
https://www.gokgs.com/gameArchives.jsp?user=darkfores1

From a quick look it seems like it is winning most of its games, even
against 1d/2d players, but it also loses some. I also found a rather
curious account where it often won by a large margin against a 10k,
but then also lost 2 times against the same player (sometimes by a lot)
- probably a couple of slip-ups.
Tobi

On 03.11.2015 20:32, Nick Wedd wrote:
> I think this Facebook AI may be the program playing on KGS as
> darkforest and darkfores1.
>
> Nick
>

-- 
www.pragtob.info


Re: [Computer-go] Facebook Go AI

2015-11-04 Thread Nick Wedd
It loses games to kyu players because it does not mark their stones as dead
at the game end. Some kyu players mark them for it, but others are happy to
accept an undeserved win.

As long as it does not mark dead stones, it will not be assigned KGS "rated
bot" status, to prevent dishonest players from using it to boost their own
ratings.

Nick



-- 
Nick Wedd  mapr...@gmail.com

Re: [Computer-go] Facebook Go AI

2015-11-03 Thread Nick Wedd
I think this Facebook AI may be the program playing on KGS as darkforest
and darkfores1.

Nick

On 3 November 2015 at 14:28, Petr Baudis  wrote:

>   Hi!
>
>   Facebook is working on a Go AI too, now:
>
> https://www.facebook.com/Engineering/videos/10153621562717200/
> https://code.facebook.com/posts/1478523512478471
>
> http://www.wired.com/2015/11/facebook-is-aiming-its-ai-at-go-the-game-no-computer-can-crack/
>
> The way it's presented triggers my hype alerts, but nevertheless:
> does anyone know any details about this?  Most interestingly, how
> strong is it?
>
> --
> Petr Baudis
> If you have good ideas, good data and fast computers,
> you can do almost anything. -- Geoffrey Hinton




-- 
Nick Wedd  mapr...@gmail.com

Re: [Computer-go] Facebook Go AI

2015-11-03 Thread David Doshay
This looks like GnuGo at level 1. Note things like filling at Q15, which GnuGo 
would not do on level 10 or higher.

Cheers,
David G Doshay

ddos...@mac.com






Re: [Computer-go] Facebook Go AI

2015-11-03 Thread Rémi Coulom

Can a strong player look at the video and give impressions about the game?

On 11/03/2015 03:28 PM, Petr Baudis wrote:

   Hi!

   Facebook is working on a Go AI too, now:

https://www.facebook.com/Engineering/videos/10153621562717200/
https://code.facebook.com/posts/1478523512478471

http://www.wired.com/2015/11/facebook-is-aiming-its-ai-at-go-the-game-no-computer-can-crack/

The way it's presented triggers my hype alerts, but nevertheless:
does anyone know any details about this?  Most interestingly, how
strong is it?




Re: [Computer-go] Facebook Go AI

2015-11-03 Thread Marc Landgraf
Then again, GnuGo donked that game pretty badly.
Showing one game where GnuGo just throws away the entire top before move
50 is not really telling about the overall strength, imho. GnuGo repeats
the failure by suiciding the top right as well.
What is shown afterwards is hard to evaluate, considering the score difference.
The fact that the FB bot prefers to play random moves in the center instead
of removing the possible ko in the lower left later is weird, but may be
due to its gigantic lead at this point.

Another interesting thing to note is that the values shown on the right do
not always correspond to the played moves. E.g. at move S17 (killing the
top right), S16 had actually been given a higher score than the played S17.

2015-11-03 17:22 GMT+01:00 Aja Huang :

> Yes I checked the game. The agent looks pretty strong. It crushed GnuGo
> easily.
>
> Aja
>
> On Tue, Nov 3, 2015 at 4:20 PM, Rémi Coulom  wrote:
>
>> Can a strong player look at the video and give impressions about the game?
>>