Re: [Computer-go] CGOS again

2015-11-23 Thread Rémi Coulom
BTW, CrazyStone-0002 (on 19x19, not 9x9) is running on my desktop PC 
(i7-5930K). At first, it was running with 4 cores. But then it lost its 
first game to Aya. I was so impressed by the strength of Aya! So I made 
it use 6 threads instead. It has been using 6 threads since then.


I can't run it permanently, but if strong programs want to try it, just 
ask, and I will let it play during one night.


Rémi

On 11/20/2015 02:23 PM, Hiroshi Yamashita wrote:

Hi Remi,

running "mm 1 1" instead of just "mm". It just estimates the 
maximum-likelihood value of drawelo by itself. Or just set drawelo to 20 


I tried. CrazyStone-0002's Elo changed a bit with each setting.
Choosing between them looks difficult, so I set drawelo to 20.

Setting                 Program            Elo
drawelo 0.01, mm        CrazyStone-0002   2713
drawelo 0.01, mm 1 1    CrazyStone-0002   2767
drawelo 20,   mm        CrazyStone-0002   2746
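
For reference, a minimal sketch of how drawelo enters a bayeselo-style
rating model, as the usual formulas are described (sign conventions and
options in the actual tool may differ); it shows why a near-zero drawelo
and drawelo 20 can lead to slightly different maximum-likelihood ratings:

def result_probabilities(elo_a, elo_b, elo_draw=20.0, elo_advantage=0.0):
    """P(A wins), P(draw), P(B wins) for player A against player B."""
    f = lambda x: 1.0 / (1.0 + 10.0 ** (x / 400.0))
    p_a = f(elo_b - elo_a - elo_advantage + elo_draw)
    p_b = f(elo_a - elo_b + elo_advantage + elo_draw)
    return p_a, 1.0 - p_a - p_b, p_b

# With drawelo near 0 almost no probability mass goes to draws, so the
# fitted ratings shift slightly compared with drawelo = 20.
for d in (0.01, 20.0):
    print(d, result_probabilities(2750, 2700, elo_draw=d))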


I'll connect Crazy Stone to 19x19 too.


Thanks!

Hiroshi Yamashita

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



Re: [Computer-go] Facebook Go AI

2015-11-23 Thread Rémi Coulom

It is darkforest, indeed:

Title: Better Computer Go Player with Neural Network and Long-term 
Prediction


Authors: Yuandong Tian, Yan Zhu

Abstract:
Competing with top human players in the ancient game of Go has been a 
long-term goal of artificial intelligence. Go's high branching factor 
makes traditional search techniques ineffective, even on leading-edge 
hardware, and Go's evaluation function could change drastically with one 
stone change. Recent works [Maddison et al. (2015); Clark & Storkey 
(2015)] show that search is not strictly necessary for machine Go 
players. A pure pattern-matching approach, based on a Deep Convolutional 
Neural Network (DCNN) that predicts the next move, can perform as well 
as Monte Carlo Tree Search (MCTS)-based open source Go engines such as 
Pachi [Baudis & Gailly (2012)] if its search budget is limited. We 
extend this idea in our bot named darkforest, which relies on a DCNN 
designed for long-term predictions. Darkforest substantially improves 
the win rate for pattern-matching approaches against MCTS-based 
approaches, even with looser search budgets. Against human players, 
darkforest achieves a stable 1d-2d level on KGS Go Server, estimated 
from free games against human players. This substantially improves the 
estimated rankings reported in Clark & Storkey (2015), where DCNN-based 
bots are estimated at 4k-5k level based on performance against other 
machine players. Adding MCTS to darkforest creates a much stronger 
player: with only 1000 rollouts, darkforest+MCTS beats pure darkforest 
90% of the time; with 5000 rollouts, our best model plus MCTS beats 
Pachi with 10,000 rollouts 95.5% of the time.


http://arxiv.org/abs/1511.06410
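
As a rough illustration only (not the paper's actual architecture,
framework, or feature set; layer counts and plane counts here are made
up): the pattern-matching part is a convolutional network mapping a stack
of board feature planes to a probability for each of the 361 points of a
19x19 board. The paper's network is additionally trained for long-term
prediction of several future moves, which this sketch omits.

import torch
import torch.nn as nn

class MovePredictor(nn.Module):
    """Toy policy DCNN: feature planes in, one probability per point out."""
    def __init__(self, in_planes=25, width=92, extra_layers=5):
        super().__init__()
        layers = [nn.Conv2d(in_planes, width, 5, padding=2), nn.ReLU()]
        for _ in range(extra_layers):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers.append(nn.Conv2d(width, 1, 1))         # one logit per point
        self.net = nn.Sequential(*layers)

    def forward(self, planes):                        # (batch, in_planes, 19, 19)
        logits = self.net(planes).flatten(1)          # (batch, 361)
        return logits.softmax(dim=1)                  # move probabilities

probs = MovePredictor()(torch.zeros(1, 25, 19, 19))
print(probs.shape)                                    # torch.Size([1, 361])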

Rémi

On 11/03/2015 08:32 PM, Nick Wedd wrote:
I think this Facebook AI may be the program playing on KGS as 
darkforest and darkfores1.


Nick

On 3 November 2015 at 14:28, Petr Baudis wrote:


  Hi!

  Facebook is working on a Go AI too, now:

https://www.facebook.com/Engineering/videos/10153621562717200/
https://code.facebook.com/posts/1478523512478471

http://www.wired.com/2015/11/facebook-is-aiming-its-ai-at-go-the-game-no-computer-can-crack/

The way it's presented triggers my hype alerts, but nevertheless:
does anyone know any details about this?  Most interestingly, how
strong is it?

--
Petr Baudis
If you have good ideas, good data and fast computers,
you can do almost anything. -- Geoffrey Hinton
___
Computer-go mailing list
Computer-go@computer-go.org 
http://computer-go.org/mailman/listinfo/computer-go




--
Nick Wedd mapr...@gmail.com 


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



Re: [Computer-go] Facebook Go AI

2015-11-23 Thread David Fotland
1 kyu on KGS with no search is pretty impressive.  Perhaps Darkforest2 is too 
slow.

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of Andy
Sent: Monday, November 23, 2015 9:48 AM
To: computer-go
Subject: Re: [Computer-go] Facebook Go AI

 

As of about an hour ago darkforest and darkfores1 have started playing rated 
games on KGS!

 

 

2015-11-23 11:28 GMT-06:00 Andy :

So the KGS bots darkforest and darkfores1 play with only DCNN, no MCTS search 
added? I wish they would put darkfores2 with MCTS on KGS, why not put your 
strongest bot out there?

 

 

 

 

2015-11-23 10:38 GMT-06:00 Petr Baudis :

The numbers look pretty impressive! So this DNN is as strong as
a full-fledged MCTS engine with non-trivial thinking time. The increased
supervision is a nice idea, but even barring that this seems like quite
a boost to the previously published results?  Surprising that this is
just thanks to relatively simple tweaks to representations and removing
features... (Or is there anything important I missed?)

I'm not sure what's the implementation difference between darkfores1 and
darkfores2, it's a bit light on detail given how huge the winrate delta
is, isn't it? ("we fine-tuned the learning rate")  Hopefully peer review
will help.

Do I understand it right that in the tree, they sort moves by their
probability estimate, keep only moves whose probability sum up to
0.8, prune the rest and use just plain UCT with no priors afterwards?
The result with +MCTS isn't at all convincing - it just shows that
MCTS helps strength, which isn't so surprising, but the extra thinking
time spent corresponds to about 10k->150k playouts increase in Pachi,
which may not be a good trade for +27/4.5/1.2% winrate increase.


On Mon, Nov 23, 2015 at 09:54:37AM +0100, Rémi Coulom wrote:
> It is darkforest, indeed:
>
> Title: Better Computer Go Player with Neural Network and Long-term
> Prediction
>
> Authors: Yuandong Tian, Yan Zhu
>
> Abstract:
> Competing with top human players in the ancient game of Go has been a
> long-term goal of artificial intelligence. Go's high branching factor makes
> traditional search techniques ineffective, even on leading-edge hardware,
> and Go's evaluation function could change drastically with one stone change.
> Recent works [Maddison et al. (2015); Clark & Storkey (2015)] show that
> search is not strictly necessary for machine Go players. A pure
> pattern-matching approach, based on a Deep Convolutional Neural Network
> (DCNN) that predicts the next move, can perform as well as Monte Carlo Tree
> Search (MCTS)-based open source Go engines such as Pachi [Baudis & Gailly
> (2012)] if its search budget is limited. We extend this idea in our bot
> named darkforest, which relies on a DCNN designed for long-term predictions.
> Darkforest substantially improves the win rate for pattern-matching
> approaches against MCTS-based approaches, even with looser search budgets.
> Against human players, darkforest achieves a stable 1d-2d level on KGS Go
> Server, estimated from free games against human players. This substantially
> improves the estimated rankings reported in Clark & Storkey (2015), where
> DCNN-based bots are estimated at 4k-5k level based on performance against
> other machine players. Adding MCTS to darkforest creates a much stronger
> player: with only 1000 rollouts, darkforest+MCTS beats pure darkforest 90%
> of the time; with 5000 rollouts, our best model plus MCTS beats Pachi with
> 10,000 rollouts 95.5% of the time.
>
> http://arxiv.org/abs/1511.06410

--
Petr Baudis
If you have good ideas, good data and fast computers,
you can do almost anything. -- Geoffrey Hinton
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

 

 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] Looking for game record of World Computer Weiqi Championship in Beijing

2015-11-23 Thread Horace Ho
I am looking for the game records between Lian Xiao and DolBaram in Nov
2015.

3rd game is here: http://gokifu.com/s/2hlx.w

How about the first two games?

Thanks
horace
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Looking for game record of World Computer Weiqi Championship in Beijing

2015-11-23 Thread Hiroshi Yamashita

Hi,

I made SGF from web page.

4H game http://www.yss-aya.com/lian_dol_4h.sgf
5H game http://www.yss-aya.com/lian_dol_5h.sgf

http://computer-go.org/pipermail/computer-go/2015-November/008236.html
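
In case the file structure is useful to anyone transcribing games the same
way, a tiny sketch of an SGF header for a 4-stone handicap game like the
ones above (the handicap placement is the standard 4H star points; komi
and the other fields are placeholders, not taken from the actual record):

corner_stars = ["dd", "pd", "dp", "pp"]   # the four 19x19 star points for 4H
sgf = ("(;GM[1]FF[4]SZ[19]HA[4]"
       + "AB" + "".join("[%s]" % p for p in corner_stars)
       + "PW[Lian Xiao]PB[DolBaram]KM[0.5])")
# the moves would be appended before the closing ')' as ;W[xy];B[xy]...
print(sgf)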

Regards,
Hiroshi Yamashita

- Original Message - 
From: "Horace Ho" 

To: "computer-go" 
Sent: Tuesday, November 24, 2015 3:09 PM
Subject: [Computer-go] Looking for game record of World Computer 
Weiqi Championship in Beijing


I am looking for the game records between Lian Xiao and DolBaram in Nov
2015.

3rd game is here: http://gokifu.com/s/2hlx.w

How about the first two games?

Thanks
horace
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Looking for game record of World Computer Weiqi Championship in Beijing

2015-11-23 Thread Horace Ho
Thank you!

I uploaded the games:

4H http://gokifu.com/s/2huz
5H http://gokifu.com/s/2hv0


On Tue, Nov 24, 2015 at 2:27 PM, Hiroshi Yamashita  wrote:

> Hi,
>
> I made SGF from web page.
>
> 4H game http://www.yss-aya.com/lian_dol_4h.sgf
> 5H game http://www.yss-aya.com/lian_dol_5h.sgf
>
> http://computer-go.org/pipermail/computer-go/2015-November/008236.html
>
> Regards,
> Hiroshi Yamashita
>
> - Original Message - From: "Horace Ho" 
> To: "computer-go" 
> Sent: Tuesday, November 24, 2015 3:09 PM
> Subject: [Computer-go] Looking for game record of World Computer
> Weiqi Championship in Beijing
>
>
>
> I am looking for the game records between Lian Xiao and DolBaram in Nov
> 2015.
>
> 3rd game is here: http://gokifu.com/s/2hlx.w
>
> How about the first two games?
>
> Thanks
> horace
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CGOS again

2015-11-23 Thread Hiroshi Yamashita

Hi Remi,

I had relaxed a bit because I thought you were using an Atom 1.86GHz.
Aya486m_4c is running on an i7-980X 3.3GHz with 4 cores.

Recently I changed the learning method for progressive widening
from MM to LFR, and accuracy improved from 38.8% to 40.7%.
It gives +70 Elo in self-play. So I gained a bit of confidence, but
CS is still a big wall. I have another idea for the UEC Cup, though :-)
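
For readers unfamiliar with the idea, a minimal sketch of progressive
widening driven by a learned move predictor (the Node fields, the
predictor call, and the widening schedule are all illustrative
assumptions, not Aya's actual code):

from dataclasses import dataclass

@dataclass
class Node:                          # hypothetical search-node interface
    state: object
    legal_moves: list
    visits: int = 0

def candidate_moves(node, prob, c=1.0, alpha=0.5):
    """Moves the search may expand at this node.

    prob(state, move) is the learned move predictor (trained with MM,
    LFR, or anything else); the allowed set grows slowly with visits.
    """
    ranked = sorted(node.legal_moves,
                    key=lambda m: prob(node.state, m), reverse=True)
    k = max(1, int(c * node.visits ** alpha))     # widening schedule
    return ranked[:k]

# toy usage: uniform predictor, 16 visits -> the top 4 ranked moves
node = Node(state=None, legal_moves=list(range(361)), visits=16)
print(len(candidate_moves(node, prob=lambda s, m: 1.0 / 361)))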

Hiroshi Yamashita

- Original Message - 
From: "Rémi Coulom" 

To: 
Sent: Tuesday, November 24, 2015 4:17 AM
Subject: Re: [Computer-go] CGOS again


BTW, CrazyStone-0002 (on 19x19, not 9x9) is running on my desktop PC (i7-5930K). At first, it was running with 4 
cores. But then it lost its first game to Aya. I was so impressed by the strength of Aya! So I made it use 6 threads 
instead. It has been using 6 threads since then.


I can't run it permanently, but if strong programs want to try it, just ask, 
and I will let it play during one night.

Rémi


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Facebook Go AI

2015-11-23 Thread Petr Baudis
The numbers look pretty impressive! So this DNN is as strong as
a full-fledged MCTS engine with non-trivial thinking time. The increased
supervision is a nice idea, but even barring that this seems like quite
a boost to the previously published results?  Surprising that this is
just thanks to relatively simple tweaks to representations and removing
features... (Or is there anything important I missed?)

I'm not sure what the implementation difference between darkfores1 and
darkfores2 is; the paper is a bit light on detail given how huge the
winrate delta is, isn't it? ("we fine-tuned the learning rate")
Hopefully peer review will help.

Do I understand it right that in the tree, they sort moves by their
probability estimate, keep only moves whose probability sum up to
0.8, prune the rest and use just plain UCT with no priors afterwards?
The result with +MCTS isn't at all convincing - it just shows that
MCTS helps strength, which isn't so surprising, but the extra thinking
time spent corresponds to about 10k->150k playouts increase in Pachi,
which may not be a good trade for +27/4.5/1.2% winrate increase.
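
If that reading is right, the selection would look roughly like the
sketch below (helper names are hypothetical; this is just the rule as
described above, not the paper's code):

import math

def keep_top_moves(move_probs, mass=0.8):
    """move_probs: dict move -> DCNN probability. Keep the most probable
    moves until their probabilities sum to `mass`, prune the rest."""
    kept, total = [], 0.0
    for move, p in sorted(move_probs.items(), key=lambda kv: kv[1],
                          reverse=True):
        kept.append(move)
        total += p
        if total >= mass:
            break
    return kept

def uct_value(child_wins, child_visits, parent_visits, c=1.4):
    """Plain UCT score with no prior term."""
    if child_visits == 0:
        return float("inf")
    return (child_wins / child_visits
            + c * math.sqrt(math.log(parent_visits) / child_visits))

print(keep_top_moves({"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.1}))  # ['a', 'b', 'c']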

On Mon, Nov 23, 2015 at 09:54:37AM +0100, Rémi Coulom wrote:
> It is darkforest, indeed:
> 
> Title: Better Computer Go Player with Neural Network and Long-term
> Prediction
> 
> Authors: Yuandong Tian, Yan Zhu
> 
> Abstract:
> Competing with top human players in the ancient game of Go has been a
> long-term goal of artificial intelligence. Go's high branching factor makes
> traditional search techniques ineffective, even on leading-edge hardware,
> and Go's evaluation function could change drastically with one stone change.
> Recent works [Maddison et al. (2015); Clark & Storkey (2015)] show that
> search is not strictly necessary for machine Go players. A pure
> pattern-matching approach, based on a Deep Convolutional Neural Network
> (DCNN) that predicts the next move, can perform as well as Monte Carlo Tree
> Search (MCTS)-based open source Go engines such as Pachi [Baudis & Gailly
> (2012)] if its search budget is limited. We extend this idea in our bot
> named darkforest, which relies on a DCNN designed for long-term predictions.
> Darkforest substantially improves the win rate for pattern-matching
> approaches against MCTS-based approaches, even with looser search budgets.
> Against human players, darkforest achieves a stable 1d-2d level on KGS Go
> Server, estimated from free games against human players. This substantially
> improves the estimated rankings reported in Clark & Storkey (2015), where
> DCNN-based bots are estimated at 4k-5k level based on performance against
> other machine players. Adding MCTS to darkforest creates a much stronger
> player: with only 1000 rollouts, darkforest+MCTS beats pure darkforest 90%
> of the time; with 5000 rollouts, our best model plus MCTS beats Pachi with
> 10,000 rollouts 95.5% of the time.
> 
> http://arxiv.org/abs/1511.06410

-- 
Petr Baudis
If you have good ideas, good data and fast computers,
you can do almost anything. -- Geoffrey Hinton
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Facebook Go AI

2015-11-23 Thread Andy
So the KGS bots darkforest and darkfores1 play with only the DCNN, no MCTS
search added? I wish they would put darkfores2 with MCTS on KGS; why not
put your strongest bot out there?




2015-11-23 10:38 GMT-06:00 Petr Baudis :

> The numbers look pretty impressive! So this DNN is as strong as
> a full-fledged MCTS engine with non-trivial thinking time. The increased
> supervision is a nice idea, but even barring that this seems like quite
> a boost to the previously published results?  Surprising that this is
> just thanks to relatively simple tweaks to representations and removing
> features... (Or is there anything important I missed?)
>
> I'm not sure what's the implementation difference between darkfores1 and
> darkfores2, it's a bit light on detail given how huge the winrate delta
> is, isn't it? ("we fine-tuned the learning rate")  Hopefully peer review
> will help.
>
> Do I understand it right that in the tree, they sort moves by their
> probability estimate, keep only moves whose probability sum up to
> 0.8, prune the rest and use just plain UCT with no priors afterwards?
> The result with +MCTS isn't at all convincing - it just shows that
> MCTS helps strength, which isn't so surprising, but the extra thinking
> time spent corresponds to about 10k->150k playouts increase in Pachi,
> which may not be a good trade for +27/4.5/1.2% winrate increase.
>
> On Mon, Nov 23, 2015 at 09:54:37AM +0100, Rémi Coulom wrote:
> > It is darkforest, indeed:
> >
> > Title: Better Computer Go Player with Neural Network and Long-term
> > Prediction
> >
> > Authors: Yuandong Tian, Yan Zhu
> >
> > Abstract:
> > Competing with top human players in the ancient game of Go has been a
> > long-term goal of artificial intelligence. Go's high branching factor
> makes
> > traditional search techniques ineffective, even on leading-edge hardware,
> > and Go's evaluation function could change drastically with one stone
> change.
> > Recent works [Maddison et al. (2015); Clark & Storkey (2015)] show that
> > search is not strictly necessary for machine Go players. A pure
> > pattern-matching approach, based on a Deep Convolutional Neural Network
> > (DCNN) that predicts the next move, can perform as well as Monte Carlo
> Tree
> > Search (MCTS)-based open source Go engines such as Pachi [Baudis & Gailly
> > (2012)] if its search budget is limited. We extend this idea in our bot
> > named darkforest, which relies on a DCNN designed for long-term
> predictions.
> > Darkforest substantially improves the win rate for pattern-matching
> > approaches against MCTS-based approaches, even with looser search
> budgets.
> > Against human players, darkforest achieves a stable 1d-2d level on KGS Go
> > Server, estimated from free games against human players. This
> substantially
> > improves the estimated rankings reported in Clark & Storkey (2015), where
> > DCNN-based bots are estimated at 4k-5k level based on performance against
> > other machine players. Adding MCTS to darkforest creates a much stronger
> > player: with only 1000 rollouts, darkforest+MCTS beats pure darkforest
> 90%
> > of the time; with 5000 rollouts, our best model plus MCTS beats Pachi
> with
> > 10,000 rollouts 95.5% of the time.
> >
> > http://arxiv.org/abs/1511.06410
>
> --
> Petr Baudis
> If you have good ideas, good data and fast computers,
> you can do almost anything. -- Geoffrey Hinton
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Facebook Go AI

2015-11-23 Thread Andy
As of about an hour ago darkforest and darkfores1 have started playing
rated games on KGS!


2015-11-23 11:28 GMT-06:00 Andy :

> So the KGS bots darkforest and darkfores1 play with only DCNN, no MCTS
> search added? I wish they would put darkfores2 with MCTS on KGS, why not
> put your strongest bot out there?
>
>
>
>
> 2015-11-23 10:38 GMT-06:00 Petr Baudis :
>
>> The numbers look pretty impressive! So this DNN is as strong as
>> a full-fledged MCTS engine with non-trivial thinking time. The increased
>> supervision is a nice idea, but even barring that this seems like quite
>> a boost to the previously published results?  Surprising that this is
>> just thanks to relatively simple tweaks to representations and removing
>> features... (Or is there anything important I missed?)
>>
>> I'm not sure what's the implementation difference between darkfores1 and
>> darkfores2, it's a bit light on detail given how huge the winrate delta
>> is, isn't it? ("we fine-tuned the learning rate")  Hopefully peer review
>> will help.
>>
>> Do I understand it right that in the tree, they sort moves by their
>> probability estimate, keep only moves whose probability sum up to
>> 0.8, prune the rest and use just plain UCT with no priors afterwards?
>> The result with +MCTS isn't at all convincing - it just shows that
>> MCTS helps strength, which isn't so surprising, but the extra thinking
>> time spent corresponds to about 10k->150k playouts increase in Pachi,
>> which may not be a good trade for +27/4.5/1.2% winrate increase.
>>
>> On Mon, Nov 23, 2015 at 09:54:37AM +0100, Rémi Coulom wrote:
>> > It is darkforest, indeed:
>> >
>> > Title: Better Computer Go Player with Neural Network and Long-term
>> > Prediction
>> >
>> > Authors: Yuandong Tian, Yan Zhu
>> >
>> > Abstract:
>> > Competing with top human players in the ancient game of Go has been a
>> > long-term goal of artificial intelligence. Go's high branching factor
>> makes
>> > traditional search techniques ineffective, even on leading-edge
>> hardware,
>> > and Go's evaluation function could change drastically with one stone
>> change.
>> > Recent works [Maddison et al. (2015); Clark & Storkey (2015)] show that
>> > search is not strictly necessary for machine Go players. A pure
>> > pattern-matching approach, based on a Deep Convolutional Neural Network
>> > (DCNN) that predicts the next move, can perform as well as Monte Carlo
>> Tree
>> > Search (MCTS)-based open source Go engines such as Pachi [Baudis &
>> Gailly
>> > (2012)] if its search budget is limited. We extend this idea in our bot
>> > named darkforest, which relies on a DCNN designed for long-term
>> predictions.
>> > Darkforest substantially improves the win rate for pattern-matching
>> > approaches against MCTS-based approaches, even with looser search
>> budgets.
>> > Against human players, darkforest achieves a stable 1d-2d level on KGS
>> Go
>> > Server, estimated from free games against human players. This
>> substantially
>> > improves the estimated rankings reported in Clark & Storkey (2015),
>> where
>> > DCNN-based bots are estimated at 4k-5k level based on performance
>> against
>> > other machine players. Adding MCTS to darkforest creates a much stronger
>> > player: with only 1000 rollouts, darkforest+MCTS beats pure darkforest
>> 90%
>> > of the time; with 5000 rollouts, our best model plus MCTS beats Pachi
>> with
>> > 10,000 rollouts 95.5% of the time.
>> >
>> > http://arxiv.org/abs/1511.06410
>>
>> --
>> Petr Baudis
>> If you have good ideas, good data and fast computers,
>> you can do almost anything. -- Geoffrey Hinton
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>
>
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go