Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-02-02 Thread Robert Jasiek

On 02.02.2016 17:29, Jim O'Flaherty wrote:

AI Software Engineers: Robert, please stop asking our AI for explanations.
We don't want to distract it with limited human understanding. And we don't
want the Herculean task of coding up that extremely frail and error prone
bridge.


Currently I do not ask a specific AI engine for explanations. If an AI 
program's only goal is playing strongly, then - while it is playing 
or preparing to play - it should not be disturbed with extra tasks.


Explanations can come from AI programs, their programmers, researchers 
providing the theory applied in those programs, or researchers analysing 
the programs' code, data structures or outputs.


I do not expect everybody to be interested in explanations, but I ask 
those who are. It must be possible to study the theory behind playing 
programs, their data structures or outputs and find connections to 
explanatory theory - just as it must be possible to use explanatory 
theory to improve "brute force" programs.


Herculean task? Likely. The research in explanatory theory is, too.

Error prone? I disagree. Errors are not created by the volume of a task 
but by carelessness or a missing study of semantic conflicts.


--
robert jasiek
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] What hardware to use to train the DNN

2016-02-02 Thread Peter Jin
Hi David,

I've used a GTX 970 for training deep convnets without issue. Depending on
your budget, a GTX 980 Ti or TITAN X would be even better (we use some
TITAN X's in our lab). The main thing about using smaller GPUs for training
these networks is that, depending on the implementation of the neural net
code, you may have to tune your mini-batch size to fit in memory. But this
shouldn't be a problem if you are using lower-memory convolutions such as
some of the ones in cuDNN.
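
To make the mini-batch point concrete, here is a rough back-of-the-envelope
sketch (my own illustration; the layer widths and the memory budget are
made-up example numbers, not a real configuration):

BOARD = 19
BYTES_PER_FLOAT = 4            # float32 activations
FILTERS = [128] * 12           # filters per conv layer; padding keeps 19x19

def activation_bytes(batch):
    """Very rough activation memory for forward + backward (factor 2 for
    gradients). Ignores weights, cuDNN workspace and framework overhead."""
    per_position = sum(f * BOARD * BOARD for f in FILTERS)
    return 2 * batch * per_position * BYTES_PER_FLOAT

budget = 3 * 1024**3           # leave headroom on a ~4 GB card like the 970
batch = 256
while activation_bytes(batch) > budget and batch > 1:
    batch //= 2                # halve the mini-batch until it fits
print(batch, activation_bytes(batch) // 2**20, "MiB")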

If you're willing to wait a bit, the first Nvidia Pascal chips are rumored to
be released as early as April. They are supposed to have full support for
half-precision floating point, which in theory gives a 2x speedup over
equivalent single-precision performance.

Regards,
Peter

On Tue, Feb 2, 2016 at 10:25 AM, David Fotland 
wrote:

> Detlef, Hiroshi, Hideki, and others,
>
> I have caffelib integrated with Many Faces so I can evaluate a DNN.  Thank
> you very much Detlef for sample code to set up the input layer.  Building
> caffe on windows is painful.  If anyone else is doing it and gets stuck I
> might be able to help.
>
> What hardware are you using to train networks?  I don’t have a
> cuda-capable GPU yet, so I'm going to buy a new box.  I'd like some
> advice.  Caffe is not well supported on Windows, so I plan to use a Linux
> box for training, but continue to use Windows for testing and development.
> For competitions I could use either windows or linux.
>
> Thanks in advance,
>
> David
>
> > -Original Message-
> > From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> > Of Hiroshi Yamashita
> > Sent: Monday, February 01, 2016 11:26 PM
> > To: computer-go@computer-go.org
> > Subject: *SPAM* Re: [Computer-go] DCNN can solve semeai?
> >
> > Hi Detlef,
> >
> > My study heavily depends on your information. Especially Oakfoam code,
> > lenet.prototxt and generate_sample_data_leveldb.py were helpful. Thanks!
> >
> > > Quite interesting that you do not reach the prediction rate 57% from
> > > the facebook paper by far too! I have the same experience with the
> >
> > I'm trying 12 layers 256 filters, but it is around 49.8%.
> > I think 57% is maybe from KGS games.
> >
> > > Did you strip the games before 1800AD, as mentioned in the FB paper? I
> > > did not do it and was thinking my training is not ok, but as you have
> > > the same result probably this is the only difference?!
> >
> > I also did not use games before 1800AD. And I don't use handicap games.
> > Training positions are 15693570 from 76000 games.
> > Test positions are   445693 from  2156 games.
> > All games are shuffled in advance. Each position is randomly rotated.
> > And memorizing 24000 positions, then shuffle and store to LevelDB.
> > At first I did not shuffle games. Then accuracy dropped every 61000
> > iterations (one epoch, 256 mini-batch).
> > http://www.yss-aya.com/20160108.png
> > It means the DCNN easily learns the difference between 1800AD games and
> > 2015AD games. I was surprised by the DCNN's ability. And maybe 1800AD
> > games are also not good for training?
> >
> > Regards,
> > Hiroshi Yamashita
> >
> > - Original Message -
> > From: "Detlef Schmicker" 
> > To: 
> > Sent: Tuesday, February 02, 2016 3:15 PM
> > Subject: Re: [Computer-go] DCNN can solve semeai?
> >
> > > Thanks a lot for sharing this.
> > >
> > > Quite interesting that you do not reach the prediction rate 57% from
> > > the facebook paper by far too! I have the same experience with the
> > > GoGoD database. My numbers are nearly the same as yours, 49% :) my net
> > > is quite similar, but I use 7,5,5,3,3, with 12 layers in total.
> > >
> > > Did you strip the games before 1800AD, as mentioned in the FB paper? I
> > > did not do it and was thinking my training is not ok, but as you have
> > > the same result probably this is the only difference?!
> > >
> > > Best regards,
> > >
> > > Detlef
> >
> > ___
> > Computer-go mailing list
> > Computer-go@computer-go.org
> > http://computer-go.org/mailman/listinfo/computer-go
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] *****SPAM***** Re: What hardware to use to train the DNN

2016-02-02 Thread Detlef Schmicker

Hi,

this is a very difficult question:

I get 100 batches (each of 64 positions) within 1 minute for the big
facebook DCNN (384 filters in each of the nine 3x3 layers, with two
128-filter 5x5 and two 128-filter 7x7 layers before that).

What facebook calls an epoch is 400 of these 1-minute blocks (400 * 100
batches of 64 positions).

Now if you have a look at fig. 5 of the facebook paper you see it is
difficult to say where to stop training.

My experience at the moment is that my training is not improving
after 20 of these facebook epochs, but I am fighting with this at the
moment and still am not sure if figure 5 corresponds to the KGS or GoGoD
database, which makes a huge difference for me.
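
For concreteness, a back-of-the-envelope reading of the numbers above (my
arithmetic; I read "400 of this" as 400 of the 1-minute 100-batch blocks,
since the original mail is slightly garbled there, so treat these numbers
as illustrative only):

batches_per_minute = 100
batch_size = 64
blocks_per_fb_epoch = 400

positions_per_minute = batches_per_minute * batch_size          # 6,400
fb_epoch_minutes = blocks_per_fb_epoch                          # ~6.7 hours
fb_epoch_positions = fb_epoch_minutes * positions_per_minute    # 2,560,000

print(f"{fb_epoch_positions:,} positions per epoch; 20 epochs ~ "
      f"{20 * fb_epoch_minutes / 60 / 24:.1f} days of GPU time")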


Sorry for the not-so-good answer, but that is my state :(


I trained the same net with 128 filters in all layers within about two
weeks in December, but was not happy with the result (49%, but after I
read Hiroshi's post I am not sure if it was not ok anyway :)

At the moment I am preparing a net with an additional winrate (value) output:

I was fighting with komi representation last year; now I will try to
support 6.5, 7.5 and 0.5 komi (>90% of KGS games 6d+) using 6 input layers
(3 for black moving and 3 for white moving). Before, I tried flexible komi
support, but this was not successful enough :( And Google only
supports 7.5 anyway :)
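
A minimal sketch of what such a komi encoding could look like as constant
input planes (a guess from the description above; the plane layout and the
names are assumptions, not the actual implementation):

import numpy as np

KOMIS = (0.5, 6.5, 7.5)        # the three supported komi values

def komi_planes(komi, black_to_move, board=19):
    """Six constant 19x19 planes: a one-hot over the three komi values,
    planes 0-2 used when black moves, planes 3-5 when white moves."""
    planes = np.zeros((6, board, board), dtype=np.float32)
    idx = KOMIS.index(komi)
    planes[idx if black_to_move else 3 + idx] = 1.0
    return planes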

If somebody else is working on this, I would love to share here!


Detlef

On 02.02.2016 at 19:38, David Fotland wrote:
> How long does it take to train one of your nets?  Is it safe to
> assume that training time is roughly proportional to the number of
> neurons in the net?
> 
> Thanks,
> 
> David
> 
>> -Original Message- From: Computer-go
>> [mailto:computer-go-boun...@computer-go.org] On Behalf Of Detlef
>> Schmicker Sent: Tuesday, February 02, 2016 10:35 AM To:
>> computer-go@computer-go.org Subject: *SPAM* Re:
>> [Computer-go] What hardware to use to train the DNN
>> 
>> Hi David,
>> 
>> I use Ubuntu 14.04 LTS with a NVIDIA GTX970 Graphic card (and
>> i7-4970k, but this is not important for training I think) and
>> installed CUDNN v4 (important, at least a factor 4 in training
>> speed).
>> 
>> This Ubuntu version is officially supported by Cuda and I only
>> had minor problems when an Ubuntu update updated the graphics
>> driver: I had to reinstall cuda twice in the last year (a little
>> ugly, as the graphic driver did not work after the update and you
>> had to boot into command line mode).
>> 
>> Detlef
>> 
>> On 02.02.2016 at 19:25, David Fotland wrote:
>>> Detlef, Hiroshi, Hideki, and others,
>>> 
>>> I have caffelib integrated with Many Faces so I can evaluate
>>> a DNN. Thank you very much Detlef for sample code to set up
>>> the input layer. Building caffe on windows is painful.  If
>>> anyone else is doing it and gets stuck I might be able to
>>> help.
>>> 
>>> What hardware are you using to train networks?  I don't have
>>> a cuda-capable GPU yet, so I'm going to buy a new box.  I'd
>>> like some advice.  Caffe is not well supported on Windows, so
>>> I plan to use a Linux box for training, but continue to use
>>> Windows for testing and development.  For competitions I
>>> could use either windows or linux.
>>> 
>>> Thanks in advance,
>>> 
>>> David
>>> 
>>>> -Original Message- From: Computer-go 
>>>> [mailto:computer-go-boun...@computer-go.org] On Behalf Of
>>>> Hiroshi Yamashita Sent: Monday, February 01, 2016 11:26 PM
>>>> To: computer-go@computer-go.org Subject: *SPAM*
>>>> Re: [Computer-go] DCNN can solve semeai?
>>>> 
>>>> Hi Detlef,
>>>> 
>>>> My study heavily depends on your information. Especially
>>>> Oakfoam code, lenet.prototxt and
>>>> generate_sample_data_leveldb.py were helpful. Thanks!
>>>> 
>>>>> Quite interesting that you do not reach the prediction
>>>>> rate 57% from the facebook paper by far too! I have the
>>>>> same experience with the
>>>> 
>>>> I'm trying 12 layers 256 filters, but it is around 49.8%. I
>>>> think 57% is maybe from KGS games.
>>>> 
>>>>> Did you strip the games before 1800AD, as mentioned in
>>>>> the FB paper? I did not do it and was thinking my
>>>>> training is not ok, but as you have the same result
>>>>> probably this is the only difference?!
>>>> 
>>>> I also did not use games before 1800AD. And I don't use
>>>> handicap games. Training positions are 15693570 from 76000 games.
>>>> Test positions are   445693 from  2156 games. All games are
>>>> shuffled in advance. Each position is randomly rotated. And
>>>> memorizing 24000 positions, then shuffle and store to
>>>> LevelDB. At first I did not shuffle games. Then accuracy
>>>> dropped every 61000 iterations (one epoch, 256 mini-batch).
>>>> http://www.yss-aya.com/20160108.png It means the DCNN
>>>> easily learns the difference between 1800AD games and 2015AD
>>>> games. I was surprised by the DCNN's ability. And maybe 1800AD
>>>> games are also not good for training?
>>>> 
>>>> Regards, Hiroshi Yamashita
>>>> 
>>>> - Original Message - From: "Detlef Schmicker" 
>>>> 

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-02-02 Thread Olivier Teytaud
>
>> If AlphaGo had lost at least one game, I'd understand how people could have
>> an upper bound on its level, but with 5-0 (except for Blitz) it's hard to
>> have an upper bound on its level. After all, AlphaGo might just have played
>> well enough to crush Fan Hui, and a weak move while the position is
>> still in favor of AlphaGo is not really a weak move (at least from a
>> game-theoretic point of view...).
>>
>
> I just want to point out that according to Myungwan Kim 9p (video referenced
> in this thread), in the first game AlphaGo made some mistakes early in the
> game and was behind during nearly the whole game, so some of its moves
> should be weak from a game-theoretic point of view.
>

Thanks, this point is interesting - that's really an argument limiting the
strength of AlphaGo.

On the other hand, they have super strong people on the team (at the pro
level, maybe? if Aja has pro level...),
and one of the guys said he is "quietly confident", which suggests they
have strong reasons for believing they have a big chance :-)

Good luck AlphaGo :-) I'm grateful because, since this happened, many more
doors are open for people
working with these tools, even if they don't touch games, and this is
really useful for the world :-)



-- 
=
"I will never sign a document with logos in black & white." A. Einstein
Olivier Teytaud, olivier.teyt...@inria.fr, http://www.slideshare.net/teytaud
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] What hardware to use to train the DNN

2016-02-02 Thread Detlef Schmicker

Hi David,

I use Ubuntu 14.04 LTS with a NVIDIA GTX970 Graphic card (and
i7-4970k, but this is not important for training I think) and
installed CUDNN v4 (important, at least a factor 4 in training speed).

This Ubuntu version is officially supported by Cuda and I only had
minor problems when an Ubuntu update updated the graphics driver: I
had to reinstall cuda twice in the last year (a little ugly, as the
graphic driver did not work after the update and you had to boot into
command line mode).

Detlef

On 02.02.2016 at 19:25, David Fotland wrote:
> Detlef, Hiroshi, Hideki, and others,
> 
> I have caffelib integrated with Many Faces so I can evaluate a DNN.
> Thank you very much Detlef for sample code to set up the input
> layer.  Building caffe on windows is painful.  If anyone else is
> doing it and gets stuck I might be able to help.
> 
> What hardware are you using to train networks?  I don’t have a
> cuda-capable GPU yet, so I'm going to buy a new box.  I'd like some
> advice.  Caffe is not well supported on Windows, so I plan to use a
> Linux box for training, but continue to use Windows for testing and
> development.  For competitions I could use either windows or
> linux.
> 
> Thanks in advance,
> 
> David
> 
>> -Original Message- From: Computer-go
>> [mailto:computer-go-boun...@computer-go.org] On Behalf Of Hiroshi
>> Yamashita Sent: Monday, February 01, 2016 11:26 PM To:
>> computer-go@computer-go.org Subject: *SPAM* Re:
>> [Computer-go] DCNN can solve semeai?
>> 
>> Hi Detlef,
>> 
>> My study heavily depends on your information. Especially Oakfoam
>> code, lenet.prototxt and generate_sample_data_leveldb.py were
>> helpful. Thanks!
>> 
>>> Quite interesting that you do not reach the prediction rate 57%
>>> from the facebook paper by far too! I have the same experience
>>> with the
>> 
>> I'm trying 12 layers 256 filters, but it is around 49.8%. I think
>> 57% is maybe from KGS games.
>> 
>>> Did you strip the games before 1800AD, as mentioned in the FB
>>> paper? I did not do it and was thinking my training is not ok,
>>> but as you have the same result probably this is the only
>>> difference?!
>> 
>> I also did not use games before 1800AD. And I don't use handicap games. 
>> Training positions are 15693570 from 76000 games. Test
>> positions are   445693 from  2156 games. All games are shuffled
>> in advance. Each position is randomly rotated. And memorizing
>> 24000 positions, then shuffle and store to LevelDB. At first I
>> did not shuffle games. Then accuracy dropped every 61000 iterations
>> (one epoch, 256 mini-batch). http://www.yss-aya.com/20160108.png 
>> It means the DCNN easily learns the difference between 1800AD games and
>> 2015AD games. I was surprised by the DCNN's ability. And maybe 1800AD
>> games are also not good for training?
>> 
>> Regards, Hiroshi Yamashita
>> 
>> - Original Message - From: "Detlef Schmicker"
>>  To:  Sent: Tuesday,
>> February 02, 2016 3:15 PM Subject: Re: [Computer-go] DCNN can
>> solve semeai?
>> 
>>> Thanks a lot for sharing this.
>>> 
>>> Quite interesting that you do not reach the prediction rate 57%
>>> from the facebook paper by far too! I have the same experience
>>> with the GoGoD database. My numbers are nearly the same as
>>> yours, 49% :) my net is quite similar, but I use 7,5,5,3,3,
>>> with 12 layers in total.
>>> 
>>> Did you strip the games before 1800AD, as mentioned in the FB
>>> paper? I did not do it and was thinking my training is not ok,
>>> but as you have the same result probably this is the only
>>> difference?!
>>> 
>>> Best regards,
>>> 
>>> Detlef
>> 
>> ___ Computer-go
>> mailing list Computer-go@computer-go.org 
>> http://computer-go.org/mailman/listinfo/computer-go
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-02-02 Thread Xavier Combelle
2016-02-01 12:24 GMT+01:00 Olivier Teytaud :

> If AlphaGo had lost at least one game, I'd understand how people could have
> an upper bound on its level, but with 5-0 (except for Blitz) it's hard to
> have an upper bound on its level. After all, AlphaGo might just have played
> well enough to crush Fan Hui, and a weak move while the position is
> still in favor of AlphaGo is not really a weak move (at least from a
> game-theoretic point of view...).
>

I just want to point out that according to Myungwan Kim 9p (video referenced in
this thread), in the first game AlphaGo made some mistakes early in the game
and was behind during nearly the whole game, so some of its moves should be
weak from a game-theoretic point of view.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] *****SPAM***** Re: Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-02-02 Thread David Fotland
Robert, please consider some of this as the difference between math and 
engineering.  Math desires rigor.  Engineering desires working solutions.  When 
an engineering solution is being described, you shouldn't expect the same level 
of rigor as in a mathematical proof.  Often all we can say is something like, 
"I tried a bunch of things, and this one worked best".  Both have value.

-David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Robert Jasiek
> Sent: Tuesday, February 02, 2016 3:11 AM
> To: computer-go@computer-go.org
> Subject: *SPAM* Re: [Computer-go] Mastering the Game of Go with
> Deep Neural Networks and Tree Search
> 
> On 02.02.2016 11:49, Petr Baudis wrote:
> > you seem to come off as perhaps a little too aggressive in your recent
> > few emails...
> 
> If I were not aggressively critical about inappropriate ambiguity, it
> would continue for further decades. Papers containing mathematical
> contents must clarify when something whose use or annotation looks
> mathematical is not a definition / well-defined term but intentionally
> ambiguous. This clarity is a fundamental of mathematical, informatical
> or scientific research. Without clarity, progress is delayed. Every
> professor at university will confirm this to you.
> 
> >The question was about the practical implementation of an MC
> > simulation, which does *not* require formal definitions of all
> > concepts used in the description, or any proofs.  It's just a
> > heuristic, and it can be arbitrarily complicated, making a tradeoff
> > between speed and accuracy.
> 
> Fine, provided it is clearly stated that it is an ambiguous heuristic
> and not an [unambiguous] definition / term. References / links (possibly
> iterative) hiding ambiguity without declaring it are inappropriate.
> 
> --
> robert jasiek
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] *****SPAM***** Re: Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-02-02 Thread David Fotland
Amazon uses deep neural nets in many, many areas.  There is some overlap with 
the kind of nets used in AlphaGo.  I passed a link to the paper on to one of 
our researchers and he found it very interesting.  DNN works very well when 
there is a lot of labelled data to learn from.  It can be useful to examine a 
problem area from the point of view: where can I get the most labelled data?

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of "Ingo Althöfer"
> Sent: Tuesday, February 02, 2016 12:31 AM
> To: computer-go@computer-go.org
> Subject: *SPAM* Re: [Computer-go] Mastering the Game of Go with
> Deep Neural Networks and Tree Search
> 
> Hi George,
> 
> welcome, and thanks for your valuable hint on the Google-whitepaper.
> 
> Do/did you have/see any cross-relations between your research and
> computer Go?
> 
> Cheers, Ingo.
> 
> 
> Sent: Tuesday, 02 February 2016 at 05:14 From: "George Dahl"
>  To: computer-go 
> Subject: Re: [Computer-go] Mastering the Game of Go with Deep Neural
> Networks and Tree Search
> 
> If anything, the other great DCNN applications predate the application
> of these methods to Go. Deep neural nets (convnets and other types) have
> been successfully applied in computer vision, robotics, speech
> recognition, machine translation, natural language processing, and hosts
> of other areas. The first paragraph of the TensorFlow whitepaper
> (http://download.tensorflow.org/paper/whitepaper2015.pdf) even mentions
> dozens at Alphabet specifically.
> 
> Of course the future will hold even more exciting applications, but
> these techniques have been proven in many important problems long before
> they had success in Go and they are used by many different companies and
> research groups. Many example applications from the literature or at
> various companies used models trained on a single machine with GPUs.
> 
> On Mon, Feb 1, 2016 at 12:00 PM, Hideki Kato wrote:
> Ingo Althöfer wrote:
> >Hi Hideki,
> >
> >first of all congrats to the nice performance of Zen over the weekend!
> >
> >> Ingo and all,
> >> Why you care AlphaGo and DCNN so much?
> >
> >I can speak only for myself. DCNNs may be not only applied to achieve
> >better playing strength. One may use them to create playing styles, or
> >bots for go variants.
> >
> >One of my favorites is robot frisbee go.
>http://www.althofer.de/robot-play/frisbee-robot-go.jpg
> >Perhaps one can teach robots with DCNN to throw the disks better.
> >
> >And my expectation is: During 2016 we will see many more fantastic
> >applications of DCNN, not only in Go. (Olivier had made a similar
> >remark already.)
> 
Agree, but one criticism.  If such great DCNN applications all need huge
machine power like AlphaGo (for execution, not training), then the
technology is hard to apply to many areas, cars and robots, for
example.  Are DCNN chips the only way to reduce computational cost?  I
don't foresee other possibilities.
Much more economical methods should be developed anyway.
#Our brain consumes less than 100 watts.
> 
> Hideki
> 
> >Ingo.
> >
> >PS. Dietmar Wolz, my partner in space trajectory design, just told me
>that in his company they started with deep learning...
> >___
> >Computer-go mailing list
>Computer-go@computer-go.org
>http://computer-go.org/mailman/listinfo/computer-go
> --
> Hideki Kato 
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mathematics in the world

2016-02-02 Thread Robert Jasiek

On 02.02.2016 13:05, "Ingo Althöfer" wrote:

when a student starts
studying Mathematics (s)he learns in the first two semesters that
everything has to be defined watertight. Later, in particular
when (s)he comes close to doing own research, (s)he has to make
compromises - otherwise (s)he will never make much progress.


When I studied maths and theoretical informatics at FU Berlin (and a bit 
at TU Berlin) (until quitting because of studying too much go, of 
course), during all semesters with every paper, lecture, homework or 
professor, everything had to be well-defined, assumptions complete and 
mandatory proofs accurate.


As a hobby go theory / go rules theory researcher, I can afford the 
luxury of choosing formality (see Cycle Law), semi-formality (see Ko) or 
informality (in informal texts) because I need not pass university 
degrees with the work. My luxury of laziness / convenience when I use 
semi-formal style (as typical in the theory parts of my go theory 
papers) indeed has the advantages of being understood more easily from 
the go player's (also my own) perspective and allowing my faster 
research progress. If I had had to use formal style for every text, I 
might have finished only half of the papers.


If we can believe Penrose (The Road to Reality) and Smolin (The Trouble 
with Physics), the world of mathematical physics is split into guesswork 
(string theory without valid mathematical foundation) and accurate 
maths. Progress might not be made because too many have lost themselves 
in the black hole of ambiguous string theory. Computer go theory seems 
to be similar to physics.


--
robert jasiek
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] DCNN can solve semeai?

2016-02-02 Thread Hiroshi Yamashita

Hi Michael,

It's an interesting idea.
Maybe I could collect many LD positions from pro games.
So an LD and semeai solver will be needed. Or just use LD
problems with sequences.
Without difficult features, DCNN may find the answer.

Regards,
Hiroshi Yamashita

- Original Message - 
From: "Michael Sué" 

To: 
Sent: Wednesday, February 03, 2016 4:39 AM
Subject: Re: [Computer-go] DCNN can solve semeai?



Hi,

I would expect this to happen if the system is trained on normal games only. But I think the system should see actual 
life-and-death (LD) sequences (from some collection) to be able to learn about them and not soak this knowledge up from a whole 
game where most of the moves are "noise" compared to what you ask it to do later.

So the training data could be half and half normal games and LD sequences.

- Michael.

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go 


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] What hardware to use to train the DNN

2016-02-02 Thread David Fotland
Detlef, Hiroshi, Hideki, and others,

I have caffelib integrated with Many Faces so I can evaluate a DNN.  Thank you 
very much Detlef for sample code to set up the input layer.  Building caffe on 
windows is painful.  If anyone else is doing it and gets stuck I might be able 
to help.

What hardware are you using to train networks?  I don’t have a cuda-capable GPU 
yet, so I'm going to buy a new box.  I'd like some advice.  Caffe is not well 
supported on Windows, so I plan to use a Linux box for training, but continue 
to use Windows for testing and development.  For competitions I could use 
either windows or linux.

Thanks in advance,

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Hiroshi Yamashita
> Sent: Monday, February 01, 2016 11:26 PM
> To: computer-go@computer-go.org
> Subject: *SPAM* Re: [Computer-go] DCNN can solve semeai?
> 
> Hi Detlef,
> 
> My study heavily depends on your information. Especially Oakfoam code,
> lenet.prototxt and generate_sample_data_leveldb.py were helpful. Thanks!
> 
> > Quite interesting that you do not reach the prediction rate 57% from
> > the facebook paper by far too! I have the same experience with the
> 
> I'm trying 12 layers 256 filters, but it is around 49.8%.
> I think 57% is maybe from KGS games.
> 
> > Did you strip the games before 1800AD, as mentioned in the FB paper? I
> > did not do it and was thinking my training is not ok, but as you have
> > the same result probably this is the only difference?!
> 
> I also did not use games before 1800AD. And I don't use handicap games.
> Training positions are 15693570 from 76000 games.
> Test positions are   445693 from  2156 games.
> All games are shuffled in advance. Each position is randomly rotated.
> And memorizing 24000 positions, then shuffle and store to LevelDB.
> At first I did not shuffle games. Then accuracy dropped every 61000
> iterations (one epoch, 256 mini-batch).
> http://www.yss-aya.com/20160108.png
> It means the DCNN easily learns the difference between 1800AD games and
> 2015AD games. I was surprised by the DCNN's ability. And maybe 1800AD
> games are also not good for training?
> 
> Regards,
> Hiroshi Yamashita
> 
> - Original Message -
> From: "Detlef Schmicker" 
> To: 
> Sent: Tuesday, February 02, 2016 3:15 PM
> Subject: Re: [Computer-go] DCNN can solve semeai?
> 
> > Thanks a lot for sharing this.
> >
> > Quite interesting that you do not reach the prediction rate 57% from
> > the facebook paper by far too! I have the same experience with the
> > GoGoD database. My numbers are nearly the same as yours, 49% :) my net
> > is quite similar, but I use 7,5,5,3,3, with 12 layers in total.
> >
> > Did you strip the games before 1800AD, as mentioned in the FB paper? I
> > did not do it and was thinking my training is not ok, but as you have
> > the same result probably this is the only difference?!
> >
> > Best regards,
> >
> > Detlef
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] What hardware to use to train the DNN

2016-02-02 Thread Hiroshi Yamashita

Hi David,

I use a GTS 450 and a GTX 980, with Caffe on Ubuntu 14.04.
Caffe's install is difficult, so I recommend using Ubuntu 14.04.

Time for predicting a position:

                Detlef44%   Detlef54%   CUDA cores   Clock
GTS 450         17.2 ms     21   ms          192      783 MHz
GTX 980          5.1 ms     10.1 ms        2,048     1126 MHz
GTX 980 cuDNN    6.4 ms      5.9 ms        2,048     1126 MHz
GTX 670          7.9 ms        -           1,344      915 MHz

Learning time:

                MNIST (GPU)   Aya's 1 iteration (mini-batch=256)
GTS 450         306 sec        9720 sec
GTX 980         169 sec           -
GTX 980 cuDNN    24 sec         726 sec

GTS 450 is not so slow for predicting a position.
But GTX 980's learning speed is 13 times faster than GTS 450's.
And cuDNN, a library provided by NVIDIA, is very effective.
cuDNN does not work on GTS 450.
And caffe's performance page is nice:
http://caffe.berkeleyvision.org/performance_hardware.html
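
For anyone who wants to reproduce such per-position timings, a minimal
pycaffe sketch (the file names are placeholders, not the nets measured
above, and the input blob is assumed to be named "data"):

import time
import numpy as np
import caffe

caffe.set_mode_gpu()
net = caffe.Net("model.prototxt", "weights.caffemodel", caffe.TEST)

# one random input position with the net's expected shape
shape = net.blobs["data"].data.shape
net.blobs["data"].data[...] = np.random.rand(*shape).astype(np.float32)

net.forward()                          # warm up once before timing
t0 = time.time()
for _ in range(100):
    net.forward()
print((time.time() - t0) / 100 * 1000, "ms per forward pass")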

Regards,
Hiroshi Yamashita


- Original Message - 
From: "David Fotland" 

To: 
Sent: Wednesday, February 03, 2016 3:25 AM
Subject: [Computer-go] What hardware to use to train the DNN



Detlef, Hiroshi, Hideki, and others,

I have caffelib integrated with Many Faces so I can evaluate a DNN.  Thank you very much Detlef for sample code to set up the 
input layer.  Building caffe on windows is painful.  If anyone else is doing it and gets stuck I might be able to help.


What hardware are you using to train networks?  I don’t have a cuda-capable GPU yet, so I'm going to buy a new box.  I'd like 
some advice.  Caffe is not well supported on Windows, so I plan to use a Linux box for training, but continue to use Windows for 
testing and development.  For competitions I could use either windows or linux.


Thanks in advance,

David 


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] DCNN can solve semeai?

2016-02-02 Thread Michael Sué

Hi,

I would expect this to happen if the system is trained on normal games 
only. But I think the system should see actual life-and-death (LD) 
sequences (from some collection) to be able to learn about them and not 
soak this knowledge up from a whole game where most of the moves are 
"noise" compared to what you ask it to do later.

So the training data could be half and half normal games and LD sequences.
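
A minimal sketch of such a 50/50 mix (an illustration only; game_positions
and ld_positions are assumed to be pre-built lists of training examples):

import random

def mixed_batches(game_positions, ld_positions, batch_size=256):
    """Yield mini-batches drawn half from normal game positions and half
    from life-and-death problem positions."""
    half = batch_size // 2
    while True:
        batch = (random.sample(game_positions, half) +
                 random.sample(ld_positions, half))
        random.shuffle(batch)          # avoid ordering artifacts in the batch
        yield batch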

- Michael.

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-02-02 Thread Marc Landgraf
What? You have mixed up things.

http://www.europeangodatabase.eu/EGD/Player_Card.php?=17374016

2016-02-02 20:21 GMT+01:00 Olivier Teytaud :
>>> If AlphaGo had lost at least one game, I'd understand how people could have
>>> an upper bound on its level, but with 5-0 (except for Blitz) it's hard to
>>> have an upper bound on its level. After all, AlphaGo might just have played
>>> well enough to crush Fan Hui, and a weak move while the position is
>>> still in favor of AlphaGo is not really a weak move (at least from a
>>> game-theoretic point of view...).
>>
>>
>> I just want to point out that according to Myungwan Kim 9p (video referenced
>> in this thread), in the first game AlphaGo made some mistakes early in the
>> game and was behind during nearly the whole game, so some of its moves should
>> be weak from a game-theoretic point of view.
>
>
> Thanks, this point is interesting - that's really an argument limiting the
> strength of AlphaGo.
>
> On the other hand, they have super strong people on the team (at the pro
> level, maybe? if Aja has pro level...),
> and one of the guys said he is "quietly confident", which suggests they have
> strong reasons for believing they have a big chance :-)
>
> Good luck AlphaGo :-) I'm grateful because, since this happened, many more
> doors are open for people
> working with these tools, even if they don't touch games, and this is really
> useful for the world :-)
>
>
>
> --
> =
> "I will never sign a document with logos in black & white." A. Einstein
> Olivier Teytaud, olivier.teyt...@inria.fr, http://www.slideshare.net/teytaud
>
>
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-02-02 Thread Robert Jasiek

On 02.02.2016 20:21, Olivier Teytaud wrote:

On the other hand, they have super strong people in the team (at the pro
level, maybe ? if Aja has pro level...)


Ca. 5d amateur in the team is enough, regardless of whether Myungwan Kim 
thinks that only a 9p can understand. Not so. Kim's above-5d-amateur 
comments were related to reading or by-heart knowledge of the latest 
nadare variations (before the post-joseki aji mistakes, which can be 
detected by a 5d, or even below), but reading / joseki is not AlphaGo's 
weakness.


--
robert jasiek
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] guess AlphaGo's CGOS rating

2016-02-02 Thread Hiroshi Yamashita

Hi,

I made CGOS rating include AlphaGo.

                 CGOS      Remi's pro rating (same as paper)

Ke Jie   4642?   3620  Strongest human
Lee Sedol4538?   3516
AlphaGo  4162?   3140  Distributed, 1202CPUs, 176GPUs
Fan Hui  3942?   2920
AlphaGo  3912?   2890  48CPUs, 8GPUs, one machine
Zen 24core   3800???   KGS 7d, 24 cores, one machine.
Zen-10.8-2c  3485  
CrazyStone-0002  3313  i7-5930K, 6 threads

Zen-10.8-1c  3299  DCNN version
Zen-1c-2.8G  3072
AlphaGo RL   2780? Reinforcement learning, no search
Aya786l_10k  2718  KGS 2d
darkfores2   2657? KGS 3d, no search
AlphaGo SL   2539?   1517  DCNN,   no search
darkfores1   2490? KGS 2d, no search
pachi11_Pat_100k 2478
darkforest   2271? KGS 1d, no search
DCNN_Detlef542179  Detlef's 54%, no search
pachi11_Pat_10k  2104
DCNN-Detlef  1903  Detlef's 44%, no search
Gnugo-3.7.10-a1  1800  KGS 5k

? is guess. ??? is my anticipation.

This result is based on AlphaGo(RL) beating Pachi 100k with an 85% winrate.
darkforest is also based on its winrate against Pachi 100k.
The pro vs. computer comparison comes maybe only from the 8-2 result AlphaGo vs
Fan Hui. So the pros are maybe stronger (or weaker).
Watching this table, Zen 7d could maybe play a nice game against the one-machine
AlphaGo.
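
For reference, the conversion behind such estimates: under the logistic Elo
model an 85% winrate corresponds to roughly a 300-point gap. A small sketch:

import math

def elo_gap(winrate):
    """Elo difference implied by an expected winrate."""
    return 400 * math.log10(winrate / (1 - winrate))

print(elo_gap(0.85))   # ~301: AlphaGo(RL) vs Pachi 100k at 85% winrate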

Pro rating is from Remi's site.
http://www.goratings.org/
CGOS rating is from BayesElo.
http://www.yss-aya.com/cgos/19x19/bayes.html

Regards,
Hiroshi Yamashita

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] What hardware to use to train the DNN

2016-02-02 Thread Hideki Kato
Since Zen's engine is improved solely by Yamato, I have no detailed 
knowledge, but I believe Yamato has used one Mac Pro so far 
(Linux and Windows).
#He has implemented the DCNN by himself, not using tools.

Hideki
 
David Fotland: <0a0301d15de7$1180d760$34828620$@smart-games.com>: 
>Detlef, Hiroshi, Hideki, and others,
>
>I have caffelib integrated with Many Faces so I can evaluate a DNN.  Thank you 
>very much Detlef for sample code to set up the input layer.  Building caffe on 
>windows is painful.  If anyone else is doing it and gets stuck I might be able 
>to help.
>
>What hardware are you using to train networks?  I don't have a cuda-capable 
>GPU yet, so I'm going to buy a new box.  I'd like some advice.  Caffe is not 
>well supported on Windows, so I plan to use a Linux box for training, but 
>continue to use Windows for testing and development.  For competitions I 
>could use either windows or linux.
>
>Thanks in advance,
>
>David
>
>> -Original Message-
>> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
>> Of Hiroshi Yamashita
>> Sent: Monday, February 01, 2016 11:26 PM
>> To: computer-go@computer-go.org
>> Subject: *SPAM* Re: [Computer-go] DCNN can solve semeai?
>> 
>> Hi Detlef,
>> 
>> My study heavily depends on your information. Especially Oakfoam code,
>> lenet.prototxt and generate_sample_data_leveldb.py were helpful. Thanks!
>> 
>> > Quite interesting that you do not reach the prediction rate 57% from
>> > the facebook paper by far too! I have the same experience with the
>> 
>> I'm trying 12 layers 256 filters, but it is around 49.8%.
>> I think 57% is maybe from KGS games.
>> 
>> > Did you strip the games before 1800AD, as mentioned in the FB paper? I
>> > did not do it and was thinking my training is not ok, but as you have
>> > the same result probably this is the only difference?!
>> 
>> I also did not use games before 1800AD. And I don't use handicap games.
>> Training positions are 15693570 from 76000 games.
>> Test positions are   445693 from  2156 games.
>> All games are shuffled in advance. Each position is randomly rotated.
>> And memorizing 24000 positions, then shuffle and store to LevelDB.
>> At first I did not shuffle games. Then accuracy dropped every 61000
>> iterations (one epoch, 256 mini-batch).
>> http://www.yss-aya.com/20160108.png
>> It means the DCNN easily learns the difference between 1800AD games and
>> 2015AD games. I was surprised by the DCNN's ability. And maybe 1800AD
>> games are also not good for training?
>> 
>> Regards,
>> Hiroshi Yamashita
>> 
>> - Original Message -
>> From: "Detlef Schmicker" 
>> To: 
>> Sent: Tuesday, February 02, 2016 3:15 PM
>> Subject: Re: [Computer-go] DCNN can solve semeai?
>> 
>> > Thanks a lot for sharing this.
>> >
>> > Quite interesting that you do not reach the prediction rate 57% from
>> > the facebook paper by far too! I have the same experience with the
>> > GoGoD database. My numbers are nearly the same as yours, 49% :) my net
>> > is quite similar, but I use 7,5,5,3,3, with 12 layers in total.
>> >
>> > Did you strip the games before 1800AD, as mentioned in the FB paper? I
>> > did not do it and was thinking my training is not ok, but as you have
>> > the same result probably this is the only difference?!
>> >
>> > Best regards,
>> >
>> > Detlef
>> 
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>
>___
>Computer-go mailing list
>Computer-go@computer-go.org
>http://computer-go.org/mailman/listinfo/computer-go
-- 
Hideki Kato 
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-02-02 Thread Igor Polyakov
I think it would be an awesome commercial product for strong Go players. 
Maybe even just showing the continuations and the score estimates 
for different lines would give the player enough reasoning to 
understand why one move is better than the other.


On 2016-02-02 8:29, Jim O'Flaherty wrote:


And to meta this awesome short story...

AI Software Engineers: Robert, please stop asking our AI for 
explanations. We don't want to distract it with limited human 
understanding. And we don't want the Herculean task of coding up that 
extremely frail and error prone bridge.


On Feb 1, 2016 3:03 PM, "Rainer Rosenthal" wrote:


~~
Robert: "Hey, AI, you should provide explanations!"
AI: "Why?"
~~

Cheers,
Rainer

Date: Mon, 1 Feb 2016 08:15:12 -0600
From: "Jim O'Flaherty" 
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Mastering the Game of Go with Deep Neural
Networks and Tree Search
Content-Type: text/plain; charset="utf-8"

Robert,

I'm not seeing the ROI in attempting to map human
idiosyncratic linguistic
systems to/into a Go engine.


___
Computer-go mailing list
Computer-go@computer-go.org 
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] guess AlphaGo's CGOS rating

2016-02-02 Thread Igor Polyakov
But AlphaGo on a single machine is stronger than Fan Hui, since it won 8-2 in 
all matches combined; as far as I understand, they used 
this version on one machine.


On 2016-02-02 13:33, Hiroshi Yamashita wrote:

Hi,

I made CGOS rating include AlphaGo.

                 CGOS      Remi's pro rating (same as paper)

Ke Jie   4642?   3620  Strongest human
Lee Sedol4538?   3516
AlphaGo  4162?   3140  Distributed, 1202CPUs, 176GPUs
Fan Hui  3942?   2920
AlphaGo  3912?   2890  48CPUs, 8GPUs, one machine
Zen 24core   3800???   KGS 7d, 24 cores, one machine.
Zen-10.8-2c  3485
CrazyStone-0002  3313  i7-5930K, 6 threads
Zen-10.8-1c  3299  DCNN version
Zen-1c-2.8G  3072
AlphaGo RL   2780? Reinforcement learning, no search
Aya786l_10k  2718  KGS 2d
darkfores2   2657? KGS 3d, no search
AlphaGo SL   2539?   1517  DCNN,   no search
darkfores1   2490? KGS 2d, no search
pachi11_Pat_100k 2478
darkforest   2271? KGS 1d, no search
DCNN_Detlef542179  Detlef's 54%, no search
pachi11_Pat_10k  2104
DCNN-Detlef  1903  Detlef's 44%, no search
Gnugo-3.7.10-a1  1800  KGS 5k

? is guess. ??? is my anticipation.

This result is based on AlphaGo(RL) beating Pachi 100k with an 85% winrate.
darkforest is also based on its winrate against Pachi 100k.
The pro vs. computer comparison comes maybe only from the 8-2 result AlphaGo vs
Fan Hui. So the pros are maybe stronger (or weaker).
Watching this table, Zen 7d could maybe play a nice game against the one-machine
AlphaGo.

Pro rating is from Remi's site.
http://www.goratings.org/
CGOS rating is from BayesElo.
http://www.yss-aya.com/cgos/19x19/bayes.html

Regards,
Hiroshi Yamashita

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] guess AlphaGo's CGOS rating

2016-02-02 Thread Hideki Kato
>But AlphaGo on a single machine is stronger than Fan Hui, since it won 8-2 in 
>all matches combined; as far as I understand, they used 
>this version on one machine.

No.  AlphaGo Distributed was used for the 10 games.

Hideki

>On 2016-02-02 13:33, Hiroshi Yamashita wrote:
>> Hi,
>>
>> I made CGOS rating include AlphaGo.
>>
>>                  CGOS      Remi's pro rating (same as paper)
>>
>> Ke Jie   4642?   3620  Strongest human
>> Lee Sedol4538?   3516
>> AlphaGo  4162?   3140  Distributed, 1202CPUs, 176GPUs
>> Fan Hui  3942?   2920
>> AlphaGo  3912?   2890  48CPUs, 8GPUs, one machine
>> Zen 24core   3800???   KGS 7d, 24 cores, one machine.
>> Zen-10.8-2c  3485
>> CrazyStone-0002  3313  i7-5930K, 6 threads
>> Zen-10.8-1c  3299  DCNN version
>> Zen-1c-2.8G  3072
>> AlphaGo RL   2780? Reinforcement learning, no search
>> Aya786l_10k  2718  KGS 2d
>> darkfores2   2657? KGS 3d, no search
>> AlphaGo SL   2539?   1517  DCNN,   no search
>> darkfores1   2490? KGS 2d, no search
>> pachi11_Pat_100k 2478
>> darkforest   2271? KGS 1d, no search
>> DCNN_Detlef542179  Detlef's 54%, no search
>> pachi11_Pat_10k  2104
>> DCNN-Detlef  1903  Detlef's 44%, no search
>> Gnugo-3.7.10-a1  1800  KGS 5k
>>
>> ? is guess. ??? is my anticipation.
>>
>> This result is based on AlphaGo(RL) beating Pachi 100k with an 85% winrate.
>> darkforest is also based on its winrate against Pachi 100k.
>> The pro vs. computer comparison comes maybe only from the 8-2 result AlphaGo vs
>> Fan Hui. So the pros are maybe stronger (or weaker).
>> Watching this table, Zen 7d could maybe play a nice game against the one-machine
>> AlphaGo.
>>
>> Pro rating is from Remi's site.
>> http://www.goratings.org/
>> CGOS rating is from BayesElo.
>> http://www.yss-aya.com/cgos/19x19/bayes.html
>>
>> Regards,
>> Hiroshi Yamashita
>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>
>___
>Computer-go mailing list
>Computer-go@computer-go.org
>http://computer-go.org/mailman/listinfo/computer-go
-- 
Hideki Kato 
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] DCNN can solve semeai?

2016-02-02 Thread Andy
The Deepmind paper has a short section on the rollout policy they use; it
looks like they made some improvements for their rollouts, so maybe they
are better at handling semeai than previous methods. The response and
non-response patterns sound similar to earlier work, but they also include
liberty counts. I don't remember those being included in classic MoGo 3x3
patterns.

They also talk about caching moves from the search tree. So if the tree has
already found the correct answers to sente moves in a semeai, they could be
applied in the rollouts as well.

Has anyone tried techniques similar to these?





On Tue, Feb 2, 2016 at 2:02 PM, Hiroshi Yamashita  wrote:

> Hi Michael,
>
> It's an interesting idea.
> Maybe I could collect many LD positions from pro games.
> So an LD and semeai solver will be needed. Or just use LD
> problems with sequences.
> Without difficult features, DCNN may find the answer.
>
> Regards,
> Hiroshi Yamashita
>
> - Original Message - From: "Michael Sué" 
> To: 
> Sent: Wednesday, February 03, 2016 4:39 AM
> Subject: Re: [Computer-go] DCNN can solve semeai?
>
>
> Hi,
>>
>> I would expect this to happen if the system is trained on normal games
>> only. But I think the system should see actual life-and-death (LD)
>> sequences (from some collection) to be able to learn about them and not
>> soak this knowledge up from a whole game where most of the moves are
>> "noise" compared to what you ask it to do later.
>> So the training data could be half and half normal games and LD sequences.
>>
>> - Michael.
>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-02-02 Thread Robert Jasiek

On 02.02.2016 19:07, David Fotland wrote:

consider some of this as the difference between math and engineering.  Math 
desires rigor.
Engineering desires working solutions.  When an engineering solution is being 
described,
you shouldn't expect the same level of rigor as in a mathematical proof.  Often 
all we can
say is something like, "I tried a bunch of things, and this one worked best".  
Both have value.


Of course. This is perfectly fine. - I have criticised something else: 
the hiding of ambiguity of things portrayed as maths when statements of 
the kind "this is a heuristic / engineering / first guess" are easily 
possible. Research papers should be honest. (They may hide secret 
details, but this is another topic.)


--
robert jasiek
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] *****SPAM***** Re: What hardware to use to train the DNN

2016-02-02 Thread David Fotland
How long does it take to train one of your nets?  Is it safe to assume that 
training time is roughly proportional to the number of neurons in the net?

Thanks,

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Detlef Schmicker
> Sent: Tuesday, February 02, 2016 10:35 AM
> To: computer-go@computer-go.org
> Subject: *SPAM* Re: [Computer-go] What hardware to use to train
> the DNN
> 
> 
> Hi David,
> 
> I use Ubuntu 14.04 LTS with a NVIDIA GTX970 Graphic card (and i7-4970k,
> but this is not important for training I think) and installed CUDNN v4
> (important, at least a factor 4 in training speed).
> 
> This Ubuntu version is officially supported by Cuda and I only had
> minor problems when an Ubuntu update updated the graphics driver: I had
> to reinstall cuda twice in the last year (a little ugly, as the graphic
> driver did not work after the update and you had to boot into command
> line mode).
> 
> Detlef
> 
> On 02.02.2016 at 19:25, David Fotland wrote:
> > Detlef, Hiroshi, Hideki, and others,
> >
> > I have caffelib integrated with Many Faces so I can evaluate a DNN.
> > Thank you very much Detlef for sample code to set up the input layer.
> > Building caffe on windows is painful.  If anyone else is doing it and
> > gets stuck I might be able to help.
> >
> > What hardware are you using to train networks?  I don't have a
> > cuda-capable GPU yet, so I'm going to buy a new box.  I'd like some
> > advice.  Caffe is not well supported on Windows, so I plan to use a
> > Linux box for training, but continue to use Windows for testing and
> > development.  For competitions I could use either windows or linux.
> >
> > Thanks in advance,
> >
> > David
> >
> >> -Original Message- From: Computer-go
> >> [mailto:computer-go-boun...@computer-go.org] On Behalf Of Hiroshi
> >> Yamashita Sent: Monday, February 01, 2016 11:26 PM To:
> >> computer-go@computer-go.org Subject: *SPAM* Re:
> >> [Computer-go] DCNN can solve semeai?
> >>
> >> Hi Detlef,
> >>
> >> My study heavily depends on your information. Especially Oakfoam
> >> code, lenet.prototxt and generate_sample_data_leveldb.py was helpful.
> >> Thanks!
> >>
> >>> Quite interesting that you do not reach the prediction rate 57% from
> >>> the facebook paper by far too! I have the same experience with the
> >>
> >> I'm trying 12 layers 256 filters, but it is around 49.8%. I think 57%
> >> is maybe from KGS games.
> >>
> >>> Did you strip the games before 1800AD, as mentioned in the FB paper?
> >>> I did not do it and was thinking my training is not ok, but as you
> >>> have the same result probably this is the only difference?!
> >>
> >> I also did not use games before 1800AD. And I don't use handicap games.
> >> Training positions are 15693570 from 76000 games. Test
> >> positions are   445693 from  2156 games. All games are shuffled
> >> in advance. Each position is randomly rotated. And memorizing
> >> 24000 positions, then shuffle and store to LevelDB. At first I
> >> did not shuffle games. Then accuracy dropped every 61000
> >> iterations (one epoch, 256 mini-batch). http://www.yss-aya.com/20160108.png
> >> It means the DCNN easily learns the difference between 1800AD games and
> >> 2015AD games. I was surprised by the DCNN's ability. And maybe 1800AD
> >> games are also not good for training?
> >>
> >> Regards, Hiroshi Yamashita
> >>
> >> - Original Message - From: "Detlef Schmicker"
> >>  To:  Sent: Tuesday,
> >> February 02, 2016 3:15 PM Subject: Re: [Computer-go] DCNN can solve
> >> semeai?
> >>
> >>> Thanks a lot for sharing this.
> >>>
> >>> Quite interesting that you do not reach the prediction rate 57% from
> >>> the facebook paper by far too! I have the same experience with the
> >>> GoGoD database. My numbers are nearly the same as yours, 49% :) my
> >>> net is quite similar, but I use 7,5,5,3,3,
> >>> with 12 layers in total.
> >>>
> >>> Did you strip the games before 1800AD, as mentioned in the FB paper?
> >>> I did not do it and was thinking my training is not ok, but as you
> >>> have the same result probably this is the only difference?!
> >>>
> >>> Best regards,
> >>>
> >>> Detlef
> >>
> >> ___ Computer-go mailing
> >> list Computer-go@computer-go.org
> >> http://computer-go.org/mailman/listinfo/computer-go
> >
> > ___ Computer-go mailing
> > list Computer-go@computer-go.org
> > http://computer-go.org/mailman/listinfo/computer-go
> >

Re: [Computer-go] *****SPAM***** Re: What hardware to use to train the DNN

2016-02-02 Thread Petri Pitkanen
At least in digital filters, time increases non-linearly - you can think of a NN
as a non-linear FIR filter. And the multilayer structure should make this harder,
if you think about it. So some tricks to speed it up might be necessary. I don't
know about NNs, but with digital filters one trick was to train the first part of
the filter and only adapt all weights after it has achieved something; that may
not be applicable here.

PP

2016-02-02 20:38 GMT+02:00 David Fotland :

> How long does it take to train one of your nets?  Is it safe to assume
> that training time is roughly proportional to the number of neurons in the
> net?
>
> Thanks,
>
> David
>
> > -Original Message-
> > From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> > Of Detlef Schmicker
> > Sent: Tuesday, February 02, 2016 10:35 AM
> > To: computer-go@computer-go.org
> > Subject: *SPAM* Re: [Computer-go] What hardware to use to train
> > the DNN
> >
> >
> > Hi David,
> >
> > I use Ubuntu 14.04 LTS with a NVIDIA GTX970 Graphic card (and i7-4970k,
> > but this is not important for training I think) and installed CUDNN v4
> > (important, at least a factor 4 in training speed).
> >
> > This Ubuntu version is officially supported by Cuda and I only had
> > minor problems when an Ubuntu update updated the graphics driver: I had
> > to reinstall cuda twice in the last year (a little ugly, as the graphic
> > driver did not work after the update and you had to boot into command
> > line mode).
> >
> > Detlef
> >
> > On 02.02.2016 at 19:25, David Fotland wrote:
> > > Detlef, Hiroshi, Hideki, and others,
> > >
> > > I have caffelib integrated with Many Faces so I can evaluate a DNN.
> > > Thank you very much Detlef for sample code to set up the input layer.
> > > Building caffe on windows is painful.  If anyone else is doing it and
> > > gets stuck I might be able to help.
> > >
> > > What hardware are you using to train networks?  I don't have a
> > > cuda-capable GPU yet, so I'm going to buy a new box.  I'd like some
> > > advice.  Caffe is not well supported on Windows, so I plan to use a
> > > Linux box for training, but continue to use Windows for testing and
> > > development.  For competitions I could use either windows or linux.
> > >
> > > Thanks in advance,
> > >
> > > David
> > >
> > >> -Original Message- From: Computer-go
> > >> [mailto:computer-go-boun...@computer-go.org] On Behalf Of Hiroshi
> > >> Yamashita Sent: Monday, February 01, 2016 11:26 PM To:
> > >> computer-go@computer-go.org Subject: *SPAM* Re:
> > >> [Computer-go] DCNN can solve semeai?
> > >>
> > >> Hi Detlef,
> > >>
> > >> My study heavily depends on your information. Especially Oakfoam
> > >> code, lenet.prototxt and generate_sample_data_leveldb.py was helpful.
> > >> Thanks!
> > >>
> > >>> Quite interesting that you do not reach the prediction rate 57% from
> > >>> the facebook paper by far too! I have the same experience with the
> > >>
> > >> I'm trying 12 layers with 256 filters, but it is around 49.8%. I think
> > >> the 57% is maybe from KGS games.
> > >>
> > >>> Did you strip the games before 1800AD, as mentioned in the FB paper?
> > >>> I did not do it and was thinking my training is not ok, but as you
> > >>> have the same result probably this is the only difference?!
> > >>
> > >> I also did not use games before 1800AD, and I don't use handicap
> > >> games. Training positions are 15693570 from 76000 games; test
> > >> positions are 445693 from 2156 games. All games are shuffled in
> > >> advance, and each position is randomly rotated. I buffer 24000
> > >> positions, then shuffle them and store them to LevelDB. At first I
> > >> did not shuffle games; then accuracy dropped at every 61000th
> > >> iteration (one epoch, 256 mini-batch).
> > >> http://www.yss-aya.com/20160108.png
> > >> It means the DCNN easily learns the difference between 1800AD games
> > >> and 2015AD games. I was surprised by the DCNN's ability. And maybe
> > >> the 1800AD games are also not good for training?
> > >>
> > >> Regards, Hiroshi Yamashita
> > >>
> > >> - Original Message - From: "Detlef Schmicker"
> > >>  To:  Sent: Tuesday,
> > >> February 02, 2016 3:15 PM Subject: Re: [Computer-go] DCNN can solve
> > >> semeai?
> > >>
> > >>> Thanks a lot for sharing this.
> > >>>
> > >>> Quite interesting that you do not reach the prediction rate 57% from
> > >>> the facebook paper by far too! I have the same experience with the
> > >>> GoGoD database. My numbers are nearly the same as yours 49% :) my
> > >>> net is quite similar, but I use 7,5,5,3,3,
> > >>> with 12 layers in total.
> > >>>
> > >>> Did you strip the games before 1800AD, as mentioned in the FB paper?
> > >>> I did not do it and was thinking my training is not ok, but as you
> > >>> have the same result probably this is the only difference?!
> > >>>
> > >>> Best regards,
> > >>>
> > >>> Detlef
> > >>
> > >> 
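
The random rotation step Hiroshi describes above maps each position
through one of the eight board symmetries; a minimal numpy sketch
(array and function names invented for illustration):

import numpy as np

def random_symmetry(planes, rng):
    # planes: (channels, 19, 19) input planes for one position.
    # Caveat: the training label (the move coordinate) must be
    # mapped through the same rotation/reflection.
    k = int(rng.integers(4))                  # 0..3 quarter turns
    planes = np.rot90(planes, k, axes=(1, 2))
    if rng.integers(2):                       # optional left-right flip
        planes = planes[:, :, ::-1]
    return planes

rng = np.random.default_rng(0)
pos = np.zeros((2, 19, 19))                   # black/white stone planes
pos[0, 3, 15] = 1.0                           # a black stone
aug = random_symmetry(pos, rng)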

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-02-02 Thread Robert Jasiek
On 01.02.2016 23:01, Brian Cloutier wrote:
> I had to search a lot of papers on MCTS which mentioned "terminal
> states" before finding one which defined them. [...] they defined it
> as a position where there are no more legal moves.

On 01.02.2016 23:15, Brian Sheppard wrote:
> You play until neither player wishes to make a move. The players
> are willing to move on any point that is not self-atari, and they
> are willing to make self-atari plays if capture would result in a
> Nakade (http://senseis.xmp.net/?Nakade)
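
For concreteness, a minimal sketch of that playout rule on a toy board
(all names invented for illustration; the nakade test is deliberately
a stub, since it is exactly the term questioned below):

EMPTY, BLACK, WHITE = 0, 1, 2

def neighbors(p, size):
    x, y = p
    for q in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
        if 0 <= q[0] < size and 0 <= q[1] < size:
            yield q

def liberties(board, p, size):
    # Flood-fill the group at p and collect its liberties.
    color, group, libs, stack = board[p], {p}, set(), [p]
    while stack:
        for n in neighbors(stack.pop(), size):
            if board[n] == EMPTY:
                libs.add(n)
            elif board[n] == color and n not in group:
                group.add(n)
                stack.append(n)
    return libs

def is_self_atari(board, move, color, size):
    # Would our own group have <= 1 liberty after the move?
    # (Captures are ignored for brevity; a real engine removes
    # captured opponent stones before counting.)
    board[move] = color
    n = len(liberties(board, move, size))
    board[move] = EMPTY
    return n <= 1

def capture_would_result_in_nakade(board, move, color, size):
    # Deliberate stub: this is the undefined part.
    return False

def playout_move_acceptable(board, move, color, size):
    if not is_self_atari(board, move, color, size):
        return True
    return capture_would_result_in_nakade(board, move, color, size)

A real engine additionally needs ko handling and an eye-filling
prohibition; the point of the sketch is only where the nakade
predicate sits.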


Defining "terminal state" as no more legal moves is probably 
inappropriate. The phrase "willing to move" is undefined, unless they 
exactly define it as "to make self-atari plays iff capture would result 
in a Nakade". This requires a proof that this is the only exception. 
Where is that proof? It also requires a definition of nakade. Where is 
that definition?


In my book Capturing Races 1, I have outlined a definition of 
"[semeai-]eye" and, in Life and Death Problems 1, of "nakade". Such are 
more complicated by far than naive descriptions online suggest. In 
particular, such outlined definitions depend on the still undefined 
"essential [string]", "seki" [sic, undefined as a strategic object 
because the Japanese 2003 Rules' definition does not distinguish good 
from bad strategy!] and "lake" [connected part of the potential 
eyespace..., which in turn is still undefined as a strategic object]. 
They also depend on "ko", but at least this I have defined:
http://home.snafu.de/jasiek/ko.pdf
Needless to say, determining the objects that are essential, seki,
lake, ko is a hard task in itself.


So where is the mathematically strict "definition" of nakade? Has 
anybody proceeded beyond my definition attempts? I suspect the standard 
problem of research again: definition by reference to a different paper 
with an ambiguous description. If ambiguous terms are presumed for 
pragmatic reasons, this must be stated! My mentioned terms are ambiguous 
but less so than every other attempt - or where are the better attempts?


--
robert jasiek
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-02-02 Thread Ingo Althöfer
Hi George,

welcome, and thanks for your valuable hint on the Google-whitepaper.

Do/did you have/see any cross-relations between your research and
computer Go?
 
Cheers, Ingo.
 

Sent: Tuesday, 02 February 2016 at 05:14
From: "George Dahl" 
To: computer-go 
Subject: Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks 
and Tree Search

If anything, the other great DCNN applications predate the application of these 
methods to Go. Deep neural nets (convnets and other types) have been 
successfully applied in computer vision, robotics, speech recognition, machine 
translation, natural language processing, and hosts of other areas. The first 
paragraph of the TensorFlow whitepaper 
(http://download.tensorflow.org/paper/whitepaper2015.pdf) even mentions dozens 
of applications at Alphabet specifically.
 
Of course the future will hold even more exciting applications, but these 
techniques have been proven in many important problems long before they had 
success in Go and they are used by many different companies and research 
groups. Many example applications from the literature or at various companies 
used models trained on a single machine with GPUs.
 
On Mon, Feb 1, 2016 at 12:00 PM, Hideki Kato wrote:

Ingo Althofer:
>Hi Hideki,
>
>first of all congrats to the nice performance of Zen over the weekend!
>
>> Ingo and all,
>> Why you care AlphaGo and DCNN so much?
>
>I can speak only for myself. DCNNs may be not only applied to
>achieve better playing strength. One may use them to create
>playing styles, or bots for go variants.
>
>One of my favorites is robot frisbee go.
>http://www.althofer.de/robot-play/frisbee-robot-go.jpg
>Perhaps one can teach robots with DCNN to throw the disks better.
>
>And my expectation is: During 2016 we will see many more fantastic
>applications of DCNN, not only in Go. (Olivier had made a similar
>remark already.)

Agreed, but one criticism: if such great DCNN applications all
need huge machine power like AlphaGo (upon execution, not
training), then the technology is hard to apply in many areas,
autos and robots for example.  Are DCNN chips the only way to
reduce the computational cost?  I don't foresee other possibilities.
Much more economical methods should be developed anyway.
#Our brain consumes less than 100 watts.

Hideki

>Ingo.
>
>PS. Dietmar Wolz, my partner in space trajectory design, just told me
>that in his company they started with deep learning...
>___
>Computer-go mailing list
>Computer-go@computer-go.org
>http://computer-go.org/mailman/listinfo/computer-go
--
Hideki Kato 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-02-02 Thread Jim O'Flaherty
And to meta this awesome short story...

AI Software Engineers: Robert, please stop asking our AI for explanations.
We don't want to distract it with limited human understanding. And we don't
want the Herculean task of coding up that extremely frail and error prone
bridge.
On Feb 1, 2016 3:03 PM, "Rainer Rosenthal"  wrote:

> ~~
> Robert: "Hey, AI, you should provide explanations!"
> AI: "Why?"
> ~~
>
> Cheers,
> Rainer
>
>> Date: Mon, 1 Feb 2016 08:15:12 -0600
>> From: "Jim O'Flaherty" 
>> To: computer-go@computer-go.org
>> Subject: Re: [Computer-go] Mastering the Game of Go with Deep Neural
>> Networks and Tree Search
>> Message-ID:
>> <
>> cakx5gkjc7j0uq_pmxyumyfre7r+7ydltigbna5oo7kvnzq7...@mail.gmail.com>
>> Content-Type: text/plain; charset="utf-8"
>>
>> Robert,
>>
>> I'm not seeing the ROI in attempting to map human idiosyncratic linguistic
>> systems to/into a Go engine.
>>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-02-02 Thread Olivier Teytaud
> Without clarity, progress is delayed. Every professor at university will
> confirm this to you.
>

IMHO, Petr has contributed enough to academic research
not to need a discussion with a professor at university
to learn how to do/clarify research :-)

-- 
=
"I will never sign a document with logos in black & white." A. Einstein
Olivier Teytaud, olivier.teyt...@inria.fr, http://www.slideshare.net/teytaud
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mathematics in the world

2016-02-02 Thread Robert Jasiek

On 02.02.2016 13:05, "Ingo Althöfer" wrote:

> For research in general it is good to have waves:


Research is faster if informalism and formalism progress simultaneously 
(by different people or in different papers).


--
robert jasiek
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mathematics in the world

2016-02-02 Thread Ingo Althöfer
Hi Robert,

we met for the first time at the EGC 2000 in Berlin-Strausberg.
I know your special ways of arguing - and think that you
are an enrichment both for the go world and for the computer go scene.

But ...
> Without clarity, progress is delayed. Every 
> professor at university will confirm this to you.

as a professor of Mathematics (for 22 years now) I severely
question this point of view. Of course, when a student starts
studying Mathematics, (s)he learns in the first two semesters that
everything has to be defined watertight. Later, in particular
when (s)he comes close to doing her/his own research, one has to make
compromises - otherwise one will never make much progress.

For research in general it is good to have waves:
moving forward in informal thoughts and handwaving proofs (and maybe 
even with a glass of beer in the hand) - then having another phase where
precision and clarity are on the agenda. And back to informal mode ...
Also a team of mathematicians will be most successful when they
have handwavers, dreamdancers, bean-counters, and formalists.

Accept that the world is multi-faceted, even our small
computer go community.

Cheers (without a beer at hand right now),
Ingo.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-02-02 Thread Robert Jasiek

On 02.02.2016 11:49, Petr Baudis wrote:

> you seem to come off as perhaps a little too
> aggressive in your recent few emails...


If I were not aggressively critical about inappropriate ambiguity, it 
would continue for further decades. Papers containing mathematical 
contents must clarify when something whose use or annotation looks 
mathematical is not a definition / well-defined term but intentionally 
ambiguous. This clarity is a fundamental of mathematical, informatical 
or scientific research. Without clarity, progress is delayed. Every 
professor at university will confirm this to you.



>    The question was about the practical implementation of an MC
> simulation, which does *not* require formal definitions of all concepts
> used in the description, or any proofs.  It's just a heuristic, and it
> can be arbitrarily complicated, making a tradeoff between speed and
> accuracy.


Fine, provided it is clearly stated that it is an ambiguous heuristic 
and not an [unambiguous] definition / term. References / links (possibly 
iterative) hiding ambiguity without declaring it are inappropriate.


--
robert jasiek
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] mini_batch size in prediction

2016-02-02 Thread Hiroshi Yamashita

Hi,

This is not about training time, but about the mini_batch size for prediction.

Time needed for one batch, and time per position:

mini_batch   one batch      one position   Memory required (Caffe's log)
         1   0.002330 sec   2.33 ms           4435968 (4.2MB)
         2   0.002440 sec   1.22 ms
         4   0.002608 sec   0.65 ms
         5   0.002717 sec   0.54 ms          22179840 (21.2MB)
         6   0.003915 sec   0.65 ms          26615808
         7   0.004107 sec   0.58 ms          31051776
         8   0.004141 sec   0.51 ms          35487744
        16   0.007400 sec   0.46 ms          70975488
        32   0.012268 sec   0.38 ms         141950976
        64   0.023358 sec   0.36 ms         283901952
       128   0.044951 sec   0.35 ms         567803904
       256   0.088478 sec   0.34 ms        1135607808
       512   0.175728 sec   0.34 ms        2271215616
      1024   0.352346 sec   0.34 ms        4542431232
      2048   Err, out of memory            9084862464 (8.5GB)


One-batch time barely changes up to mini_batch = 5, then becomes slower.
Above mini_batch = 32 the time per position stays the same, and
mini_batch = 2048 does not work because of out of memory. (The memory
numbers from Caffe's log are exactly linear: 4435968 bytes times
mini_batch.) I don't know how learning speed changes with mini-batch.
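
For reference, a minimal sketch of how such a measurement can be made
with pycaffe (the file names and the 'data' blob name here are
assumptions for illustration, not taken from any program mentioned in
this thread):

import time
import numpy as np
import caffe

caffe.set_mode_gpu()
# model.prototxt / model.caffemodel are placeholder names.
net = caffe.Net('model.prototxt', 'model.caffemodel', caffe.TEST)

for batch in (1, 2, 4, 8, 16, 32, 64, 128, 256):
    # Resize the input blob: 2 channels (black/white stones), 19x19 board.
    net.blobs['data'].reshape(batch, 2, 19, 19)
    net.blobs['data'].data[...] = np.random.rand(batch, 2, 19, 19)
    net.forward()                              # warm-up
    t0 = time.time()
    for _ in range(10):
        net.forward()
    dt = (time.time() - t0) / 10
    print('batch %4d: %.6f sec/batch, %.3f ms/position'
          % (batch, dt, 1000.0 * dt / batch))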

I think this result depends on the GPU and the DCNN size.
And the one-batch speed maybe depends on the CPU and GPU memory bus speed?

The DCNN has 12 layers: 5x5_128, then 3x3_128 x11.
Input is two channels, black and white stones.
Training is 15.6 million positions, GoGoD, accuracy 41.5%,
batch_size=256, 70 iterations (11.5 epochs), 106 hours.
The *.caffemodel size is 5.6MB. GTX 980 4GB.

Regards,
Hiroshi Yamashita

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go