Re: [Computer-go] What hardware to use to train the DNN

2016-03-09 Thread James Guo
Will the GeForce 940M run cuDNN?


  From: Hiroshi Yamashita 
 To: computer-go@computer-go.org 
 Sent: Wednesday, February 3, 2016 2:55 AM
 Subject: Re: [Computer-go] What hardware to use to train the DNN
   
Hi David,

I use a GTS 450 and a GTX 980, with Caffe on ubuntu 14.04.
Caffe is difficult to install, so I recommend ubuntu 14.04.

time for predicting a position

               Detlef44%   Detlef54%   CUDA cores   clock
GTS 450        17.2 ms     21   ms         192      783 MHz
GTX 980         5.1 ms     10.1 ms       2,048     1126 MHz
GTX 980 cuDNN   6.4 ms      5.9 ms       2,048     1126 MHz
GTX 670         7.9 ms        -          1,344      915 MHz

Learning time

               MNIST GPU   Aya's 1 iteration (mini-batch=256)
GTS 450         306 sec     9720 sec
GTX 980         169 sec        -
GTX 980 cuDNN    24 sec      726 sec

The GTS 450 is not so slow for predicting a position.
But the GTX 980's learning speed is 13 times faster than the GTS 450's.
And cuDNN, a library provided by NVIDIA, is very effective.
cuDNN does not work on the GTS 450.
Caffe's hardware page is also nice:
http://caffe.berkeleyvision.org/performance_hardware.html

Regards,
Hiroshi Yamashita


- Original Message - 
From: "David Fotland" 
To: 
Sent: Wednesday, February 03, 2016 3:25 AM
Subject: [Computer-go] What hardware to use to train the DNN


> Detlef, Hiroshi, Hideki, and others,
>
> I have caffelib integrated with Many Faces so I can evaluate a DNN.  Thank 
> you very much Detlef for sample code to set up the 
> input layer.  Building caffe on windows is painful.  If anyone else is doing 
> it and gets stuck I might be able to help.
>
> What hardware are you using to train networks?  I don’t have a cuda-capable 
> GPU yet, so I'm going to buy a new box.  I'd like 
> some advice.  Caffe is not well supported on Windows, so I plan to use a 
> Linux box for training, but continue to use Windows for 
> testing and development.  For competitions I could use either windows or 
> linux.
>
> Thanks in advance,
>
> David 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


Re: [Computer-go] What hardware to use to train the DNN

2016-02-06 Thread Michael Sué
As I understand it, for C/C++ VS Code is just an editor, not a compiler
or debugger.


But as VS 2015 can already work with LLDB and gdb, it is only a question
of time and free licensing of more .NET parts, I assume.


- Michael.


Re: [Computer-go] What hardware to use to train the DNN

2016-02-06 Thread David Fotland
Thanks, this is really interesting.  I still need something that works on 
Windows, and I use Many Faces to visualize what's going on, so I'll stick with 
windows for development.  I might use this for debugging on linux though.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Detlef Schmicker
> Sent: Saturday, February 06, 2016 1:04 AM
> To: computer-go@computer-go.org
> Subject: Re: [Computer-go] What hardware to use to train the DNN
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> Hi David,
> 
> I am not happy with my IDE on linux either. You might give Visual Studio
> Code on linux a try:
> 
> https://www.visualstudio.com/de-de/products/code-vs.aspx
> 
> It seems to be free...
> 
> Detlef
> 
> Am 05.02.2016 um 07:13 schrieb David Fotland:
> > I'll do training on Linux for performance, and because it is so much
> > easier to build than on Windows.  I need something I can ship to my
> > windows customers, that is light weight enough to play well without a
> > GPU.
> >
> >
> >
> > All of my testing and evaluation machines and tools are on Windows, so
> > I can't easily measure strength and progress on linux.  I'm also not
> > eager to learn a new IDE.  I like Visual Studio.
> >
> >
> >
> > David
> >
> >
> >
> > From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
> > Behalf Of Petri Pitkanen Sent: Thursday, February 04, 2016 9:12 PM
> > To: computer-go Subject: Re: [Computer-go] What hardware to use to
> > train the DNN
> >
> >
> >
> > Well, David is making a product. Making a product is the 'trooper'
> > solution unless you are making a very specific product for a very
> > narrow target group willing to pay thousands for a single license.
> >
> > Petri
> >
> >
> >
> > 2016-02-04 23:50 GMT+02:00 uurtamo . :
> >
> > David,
> >
> >
> >
> > You're a trooper for doing this in windows. :)
> >
> >
> >
> > The OS overhead is generally lighter if you use unix; even the most
> > modern windows versions have a few layers of slowdown. Unix (for
> > better or worse) will give you closer, easier access to the hardware,
> > and closer, easier access to halting your machine if you are deep in
> > the guts. ;)
> >
> >
> >
> > s.
> >
> >
> >
> >
> >
> > On Tue, Feb 2, 2016 at 10:25 AM, David Fotland
> >  wrote:
> >
> > Detlef, Hiroshi, Hideki, and others,
> >
> > I have caffelib integrated with Many Faces so I can evaluate a DNN.
> > Thank you very much Detlef for sample code to set up the input layer.
> > Building caffe on windows is painful.  If anyone else is doing it and
> > gets stuck I might be able to help.
> >
> > What hardware are you using to train networks?  I don t have a
> > cuda-capable GPU yet, so I'm going to buy a new box.  I'd like some
> > advice.  Caffe is not well supported on Windows, so I plan to use a
> > Linux box for training, but continue to use Windows for testing and
> > development.  For competitions I could use either windows or linux.
> >
> > Thanks in advance,
> >
> > David
> >
> >> -Original Message- From: Computer-go
> >> [mailto:computer-go-boun...@computer-go.org] On Behalf Of Hiroshi
> >> Yamashita Sent: Monday, February 01, 2016 11:26 PM To:
> >> computer-go@computer-go.org Subject: *SPAM* Re:
> >> [Computer-go] DCNN can solve semeai?
> >>
> >> Hi Detlef,
> >>
> >> My study heavily depends on your information. Especially Oakfoam
> >> code, lenet.prototxt and generate_sample_data_leveldb.py was helpful.
> >> Thanks!
> >>
> >>> Quite interesting that you do not reach the prediction rate 57% from
> >>> the facebook paper by far too! I have the same experience with the
> >>
> >> I'm trying 12 layers 256 filters, but it is around 49.8%. I think 57%
> >> is maybe from KGS games.
> >>
> >>> Did you strip the games before 1800AD, as mentioned in the FB paper?
> >>> I did not do it and was thinking my training is not ok, but as you
> >>> have the same result probably this is the only difference?!
> >>
> >> I also did not use games from before 1800AD. And don't use handicap games.
> >> Training positions are 15693570 from 76000 games. Test
> >> positions are   445693 from  2156 games. All games are shuffled
> >> in 

Re: [Computer-go] What hardware to use to train the DNN

2016-02-06 Thread David Fotland
I'm not using it.  Many Faces is written in C (GUI in C++ with MFC).  I ported 
caffe to windows and I'm calling caffelib directly from mfgo.  I'm not training 
a net yet, so I haven't decided what to do.  Most likely I will create the 
input database using C++ code in Many Faces, and train using caffe.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Richard Lorentz
> Sent: Saturday, February 06, 2016 6:39 AM
> To: computer-go@computer-go.org
> Subject: Re: [Computer-go] What hardware to use to train the DNN
> 
> Thought I'd ask you this off line. Are you using Code:Blocks and finding
> it's crashing a lot recently? (That's my experience.)
> 
> -Richard
> 
> 
> On 02/06/2016 01:04 AM, Detlef Schmicker wrote:
> > I am not happy with my IDE on linux either.
> 

Re: [Computer-go] What hardware to use to train the DNN

2016-02-06 Thread Richard Lorentz
Thought I'd ask you this off line. Are you using Code:Blocks and finding 
it's crashing a lot recently? (That's my experience.)


-Richard


On 02/06/2016 01:04 AM, Detlef Schmicker wrote:

I am not happy with my IDE on linux either.



Re: [Computer-go] What hardware to use to train the DNN

2016-02-06 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi David,

I am not happy with my IDE on linux either. You might give Visual Studio
Code on linux a try:

https://www.visualstudio.com/de-de/products/code-vs.aspx

It seems to be free...

Detlef

Am 05.02.2016 um 07:13 schrieb David Fotland:
> I’ll do training on Linux for performance, and because it is so
> much easier to build than on Windows.  I need something I can ship
> to my windows customers, that is light weight enough to play well
> without a GPU.
> 
> 
> 
> All of my testing and evaluation machines and tools are on Windows,
> so I can’t easily measure strength and progress on linux.  I’m also
> not eager to learn a new IDE.  I like Visual Studio.
> 
> 
> 
> David
> 
> 
> 
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
> Behalf Of Petri Pitkanen Sent: Thursday, February 04, 2016 9:12 PM 
> To: computer-go Subject: Re: [Computer-go] What hardware to use to
> train the DNN
> 
> 
> 
> Well, David is making a product. Making a product is the 'trooper'
> solution unless you are making a very specific product for a very
> narrow target group willing to pay thousands for a single license.
> 
> Petri
> 
> 
> 
> 2016-02-04 23:50 GMT+02:00 uurtamo . :
> 
> David,
> 
> 
> 
> You're a trooper for doing this in windows. :)
> 
> 
> 
> The OS overhead is generally lighter if you use unix; even the most
> modern windows versions have a few layers of slowdown. Unix (for
> better or worse) will give you closer, easier access to the
> hardware, and closer, easier access to halting your machine if you
> are deep in the guts. ;)
> 
> 
> 
> s.
> 
> 
> 
> 
> 
> On Tue, Feb 2, 2016 at 10:25 AM, David Fotland
>  wrote:
> 
> Detlef, Hiroshi, Hideki, and others,
> 
> I have caffelib integrated with Many Faces so I can evaluate a DNN.
> Thank you very much Detlef for sample code to set up the input
> layer.  Building caffe on windows is painful.  If anyone else is
> doing it and gets stuck I might be able to help.
> 
> What hardware are you using to train networks?  I don’t have a
> cuda-capable GPU yet, so I'm going to buy a new box.  I'd like some
> advice.  Caffe is not well supported on Windows, so I plan to use a
> Linux box for training, but continue to use Windows for testing and
> development.  For competitions I could use either windows or
> linux.
> 
> Thanks in advance,
> 
> David
> 
>> -Original Message- From: Computer-go
>> [mailto:computer-go-boun...@computer-go.org] On Behalf Of Hiroshi
>> Yamashita Sent: Monday, February 01, 2016 11:26 PM To:
>> computer-go@computer-go.org Subject: *SPAM* Re:
>> [Computer-go] DCNN can solve semeai?
>> 
>> Hi Detlef,
>> 
>> My study heavily depends on your information. Especially Oakfoam
>> code, lenet.prototxt and generate_sample_data_leveldb.py was
>> helpful. Thanks!
>> 
>>> Quite interesting that you do not reach the prediction rate 57%
>>> from the facebook paper by far too! I have the same experience
>>> with the
>> 
>> I'm trying 12 layers 256 filters, but it is around 49.8%. I think
>> 57% is maybe from KGS games.
>> 
>>> Did you strip the games before 1800AD, as mentioned in the FB
>>> paper? I did not do it and was thinking my training is not ok,
>>> but as you have the same result probably this is the only
>>> difference?!
>> 
>> I also did not use games from before 1800AD, and don't use handicap
>> games. Training positions are 15693570 from 76000 games. Test
>> positions are   445693 from  2156 games. All games are shuffled
>> in advance. Each position is randomly rotated. I memorize
>> 24000 positions, then shuffle and store them to LevelDB. At first I
>> did not shuffle games; then accuracy dropped every 61000 iterations
>> (one epoch, mini-batch 256). http://www.yss-aya.com/20160108.png
>> It means the DCNN easily learns the difference between 1800AD games
>> and 2015AD games. I was surprised by the DCNN's ability. And maybe
>> 1800AD games are also not good for training?
>> 
>> Regards, Hiroshi Yamashita
>> 
>> - Original Message - From: "Detlef Schmicker"
>>  To:  Sent: Tuesday,
>> February 02, 2016 3:15 PM Subject: Re: [Computer-go] DCNN can
>> solve semeai?
>> 
>>> Thanks a lot for sharing this.
>>> 
>>> Quite interesting that you do not reach the prediction rate 57%
>>> from the facebook paper by far too! I have the same experience
>>> with the GoGoD database. My numbers are nearly the same as
>>> yours 4

Re: [Computer-go] What hardware to use to train the DNN

2016-02-04 Thread David Fotland
I’ll do training on Linux for performance, and because it is so much easier to 
build than on Windows.  I need something I can ship to my windows customers, 
that is light weight enough to play well without a GPU.

 

All of my testing and evaluation machines and tools are on Windows, so I can’t 
easily measure strength and progress on linux.  I’m also not eager to learn a 
new IDE.  I like Visual Studio.

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Petri Pitkanen
Sent: Thursday, February 04, 2016 9:12 PM
To: computer-go
Subject: Re: [Computer-go] What hardware to use to train the DNN

 

Well, David is making a product. Making a product is the 'trooper' solution 
unless you are making a very specific product for a very narrow target group 
willing to pay thousands for a single license.

Petri

 

2016-02-04 23:50 GMT+02:00 uurtamo . :

David,

 

You're a trooper for doing this in windows. :)

 

The OS overhead is generally lighter if you use unix; even the most modern 
windows versions have a few layers of slowdown. Unix (for better or worse) will 
give you closer, easier access to the hardware, and closer, easier access to 
halting your machine if you are deep in the guts. ;)

 

s.

 

 

On Tue, Feb 2, 2016 at 10:25 AM, David Fotland  wrote:

Detlef, Hiroshi, Hideki, and others,

I have caffelib integrated with Many Faces so I can evaluate a DNN.  Thank you 
very much Detlef for sample code to set up the input layer.  Building caffe on 
windows is painful.  If anyone else is doing it and gets stuck I might be able 
to help.

What hardware are you using to train networks?  I don’t have a cuda-capable GPU 
yet, so I'm going to buy a new box.  I'd like some advice.  Caffe is not well 
supported on Windows, so I plan to use a Linux box for training, but continue 
to use Windows for testing and development.  For competitions I could use 
either windows or linux.

Thanks in advance,

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Hiroshi Yamashita
> Sent: Monday, February 01, 2016 11:26 PM
> To: computer-go@computer-go.org
> Subject: *SPAM* Re: [Computer-go] DCNN can solve semeai?
>
> Hi Detlef,
>
> My study heavily depends on your information. Especially Oakfoam code,
> lenet.prototxt and generate_sample_data_leveldb.py was helpful. Thanks!
>
> > Quite interesting that you do not reach the prediction rate 57% from
> > the facebook paper by far too! I have the same experience with the
>
> I'm trying 12 layers 256 filters, but it is around 49.8%.
> I think 57% is maybe from KGS games.
>
> > Did you strip the games before 1800AD, as mentioned in the FB paper? I
> > did not do it and was thinking my training is not ok, but as you have
> > the same result probably this is the only difference?!
>
> I also did not use games from before 1800AD. And don't use handicap games.
> Training positions are 15693570 from 76000 games.
> Test positions are   445693 from  2156 games.
> All games are shuffled in advance. Each position is randomly rotated.
> I memorize 24000 positions, then shuffle and store them to LevelDB.
> At first I did not shuffle games; then accuracy dropped every 61000
> iterations (one epoch, mini-batch 256).
> http://www.yss-aya.com/20160108.png
> It means the DCNN easily learns the difference between 1800AD and 2015AD
> games. I was surprised by the DCNN's ability. And maybe 1800AD games are
> also not good for training?
>
> Regards,
> Hiroshi Yamashita
>
> - Original Message -
> From: "Detlef Schmicker" 
> To: 
> Sent: Tuesday, February 02, 2016 3:15 PM
> Subject: Re: [Computer-go] DCNN can solve semeai?
>
> > Thanks a lot for sharing this.
> >
> > Quite interesting that you do not reach the prediction rate 57% from
> > the facebook paper by far too! I have the same experience with the
> > GoGoD database. My numbers are nearly the same as yours 49% :) my net
> > is quite similar, but I use 7,5,5,3,3, with 12 layers in total.
> >
> > Did you strip the games before 1800AD, as mentioned in the FB paper? I
> > did not do it and was thinking my training is not ok, but as you have
> > the same result probably this is the only difference?!
> >
> > Best regards,
> >
> > Detlef
>

 



Re: [Computer-go] What hardware to use to train the DNN

2016-02-04 Thread Petri Pitkanen
Well, David is making a product. Making a product is the 'trooper' solution
unless you are making a very specific product for a very narrow target group
willing to pay thousands for a single license.

Petri

2016-02-04 23:50 GMT+02:00 uurtamo . :

> David,
>
> You're a trooper for doing this in windows. :)
>
> The OS overhead is generally lighter if you use unix; even the most modern
> windows versions have a few layers of slowdown. Unix (for better or worse)
> will give you closer, easier access to the hardware, and closer, easier
> access to halting your machine if you are deep in the guts. ;)
>
> s.
>
>
> On Tue, Feb 2, 2016 at 10:25 AM, David Fotland 
> wrote:
>
>> Detlef, Hiroshi, Hideki, and others,
>>
>> I have caffelib integrated with Many Faces so I can evaluate a DNN.
>> Thank you very much Detlef for sample code to set up the input layer.
>> Building caffe on windows is painful.  If anyone else is doing it and gets
>> stuck I might be able to help.
>>
>> What hardware are you using to train networks?  I don’t have a
>> cuda-capable GPU yet, so I'm going to buy a new box.  I'd like some
>> advice.  Caffe is not well supported on Windows, so I plan to use a Linux
>> box for training, but continue to use Windows for testing and development.
>> For competitions I could use either windows or linux.
>>
>> Thanks in advance,
>>
>> David
>>
>> > -Original Message-
>> > From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
>> Behalf
>> > Of Hiroshi Yamashita
>> > Sent: Monday, February 01, 2016 11:26 PM
>> > To: computer-go@computer-go.org
>> > Subject: *SPAM* Re: [Computer-go] DCNN can solve semeai?
>> >
>> > Hi Detlef,
>> >
>> > My study heavily depends on your information. Especially Oakfoam code,
>> > lenet.prototxt and generate_sample_data_leveldb.py was helpful. Thanks!
>> >
>> > > Quite interesting that you do not reach the prediction rate 57% from
>> > > the facebook paper by far too! I have the same experience with the
>> >
>> > I'm trying 12 layers 256 filters, but it is around 49.8%.
>> > I think 57% is maybe from KGS games.
>> >
>> > > Did you strip the games before 1800AD, as mentioned in the FB paper? I
>> > > did not do it and was thinking my training is not ok, but as you have
>> > > the same result probably this is the only difference?!
>> >
>> > I also did not use before 1800AD. And don't use handicap games.
>> > Training positions are 15693570 from 76000 games.
>> > Test positions are   445693 from  2156 games.
>> > All games are shuffled in advance. Each position is randomly rotated.
>> > And memorizing 24000 positions, then shuffle and store to LevelDB.
>> > At first I did not shuffle games. Then accuracy is down each 61000
>> > iteration (one epoch, 256 mini-batch).
>> > http://www.yss-aya.com/20160108.png
>> > It means DCNN understands easily the difference 1800AD games and  2015AD
>> > games. I was surprised DCNN's ability. And maybe 1800AD games  are also
>> > not good for training?
>> >
>> > Regards,
>> > Hiroshi Yamashita
>> >
>> > - Original Message -
>> > From: "Detlef Schmicker" 
>> > To: 
>> > Sent: Tuesday, February 02, 2016 3:15 PM
>> > Subject: Re: [Computer-go] DCNN can solve semeai?
>> >
>> > > Thanks a lot for sharing this.
>> > >
>> > > Quite interesting that you do not reach the prediction rate 57% from
>> > > the facebook paper by far too! I have the same experience with the
>> > > GoGoD database. My numbers are nearly the same as yours 49% :) my net
>> > > is quite similar, but I use 7,5,5,3,3, with 12 layers in total.
>> > >
>> > > Did you strip the games before 1800AD, as mentioned in the FB paper? I
>> > > did not do it and was thinking my training is not ok, but as you have
>> > > the same result probably this is the only difference?!
>> > >
>> > > Best regards,
>> > >
>> > > Detlef
>> >
>
>
>

Re: [Computer-go] What hardware to use to train the DNN

2016-02-04 Thread uurtamo .
David,

You're a trooper for doing this in windows. :)

The OS overhead is generally lighter if you use unix; even the most modern
windows versions have a few layers of slowdown. Unix (for better or worse)
will give you closer, easier access to the hardware, and closer, easier
access to halting your machine if you are deep in the guts. ;)

s.


On Tue, Feb 2, 2016 at 10:25 AM, David Fotland 
wrote:

> Detlef, Hiroshi, Hideki, and others,
>
> I have caffelib integrated with Many Faces so I can evaluate a DNN.  Thank
> you very much Detlef for sample code to set up the input layer.  Building
> caffe on windows is painful.  If anyone else is doing it and gets stuck I
> might be able to help.
>
> What hardware are you using to train networks?  I don’t have a
> cuda-capable GPU yet, so I'm going to buy a new box.  I'd like some
> advice.  Caffe is not well supported on Windows, so I plan to use a Linux
> box for training, but continue to use Windows for testing and development.
> For competitions I could use either windows or linux.
>
> Thanks in advance,
>
> David
>
> > -Original Message-
> > From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> > Of Hiroshi Yamashita
> > Sent: Monday, February 01, 2016 11:26 PM
> > To: computer-go@computer-go.org
> > Subject: *SPAM* Re: [Computer-go] DCNN can solve semeai?
> >
> > Hi Detlef,
> >
> > My study heavily depends on your information. Especially Oakfoam code,
> > lenet.prototxt and generate_sample_data_leveldb.py was helpful. Thanks!
> >
> > > Quite interesting that you do not reach the prediction rate 57% from
> > > the facebook paper by far too! I have the same experience with the
> >
> > I'm trying 12 layers 256 filters, but it is around 49.8%.
> > I think 57% is maybe from KGS games.
> >
> > > Did you strip the games before 1800AD, as mentioned in the FB paper? I
> > > did not do it and was thinking my training is not ok, but as you have
> > > the same result probably this is the only difference?!
> >
> > I also did not use before 1800AD. And don't use handicap games.
> > Training positions are 15693570 from 76000 games.
> > Test positions are   445693 from  2156 games.
> > All games are shuffled in advance. Each position is randomly rotated.
> > And memorizing 24000 positions, then shuffle and store to LevelDB.
> > At first I did not shuffle games. Then accuracy is down each 61000
> > iteration (one epoch, 256 mini-batch).
> > http://www.yss-aya.com/20160108.png
> > It means DCNN understands easily the difference 1800AD games and  2015AD
> > games. I was surprised DCNN's ability. And maybe 1800AD games  are also
> > not good for training?
> >
> > Regards,
> > Hiroshi Yamashita
> >
> > - Original Message -
> > From: "Detlef Schmicker" 
> > To: 
> > Sent: Tuesday, February 02, 2016 3:15 PM
> > Subject: Re: [Computer-go] DCNN can solve semeai?
> >
> > > Thanks a lot for sharing this.
> > >
> > > Quite interesting that you do not reach the prediction rate 57% from
> > > the facebook paper by far too! I have the same experience with the
> > > GoGoD database. My numbers are nearly the same as yours 49% :) my net
> > > is quite similar, but I use 7,5,5,3,3, with 12 layers in total.
> > >
> > > Did you strip the games before 1800AD, as mentioned in the FB paper? I
> > > did not do it and was thinking my training is not ok, but as you have
> > > the same result probably this is the only difference?!
> > >
> > > Best regards,
> > >
> > > Detlef
> >

Re: [Computer-go] What hardware to use to train the DNN

2016-02-02 Thread Hideki Kato
Since Zen's engine is improved solely by Yamato, I don't know the 
details, but I believe Yamato has used one Mac Pro so far 
(Linux and Windows).
#He has implemented the DCNN by himself, not using tools.

Hideki
 
David Fotland: <0a0301d15de7$1180d760$34828620$@smart-games.com>: 
>Detlef, Hiroshi, Hideki, and others,
>
>I have caffelib integrated with Many Faces so I can evaluate a DNN.  Thank you 
>very much Detlef for sample code to set up the input layer.  Building caffe on 
>windows is painful.  If anyone else is doing it and gets stuck I might be able 
>to help.
>
>What hardware are you using to train networks?  I don’t have a cuda-capable 
>GPU yet, so I'm going to buy a new box.  I'd like some advice.  Caffe is not 
>well supported on Windows, so I plan to use a Linux box for training, but 
>continue to use Windows for testing and development.  For competitions I could 
>use either windows or linux.
>
>Thanks in advance,
>
>David
>
>> -Original Message-
>> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
>> Of Hiroshi Yamashita
>> Sent: Monday, February 01, 2016 11:26 PM
>> To: computer-go@computer-go.org
>> Subject: *SPAM* Re: [Computer-go] DCNN can solve semeai?
>> 
>> Hi Detlef,
>> 
>> My study heavily depends on your information. Especially Oakfoam code,
>> lenet.prototxt and generate_sample_data_leveldb.py was helpful. Thanks!
>> 
>> > Quite interesting that you do not reach the prediction rate 57% from
>> > the facebook paper by far too! I have the same experience with the
>> 
>> I'm trying 12 layers 256 filters, but it is around 49.8%.
>> I think 57% is maybe from KGS games.
>> 
>> > Did you strip the games before 1800AD, as mentioned in the FB paper? I
>> > did not do it and was thinking my training is not ok, but as you have
>> > the same result probably this is the only difference?!
>> 
>> I also did not use games from before 1800AD. And don't use handicap games.
>> Training positions are 15693570 from 76000 games.
>> Test positions are   445693 from  2156 games.
>> All games are shuffled in advance. Each position is randomly rotated.
>> I memorize 24000 positions, then shuffle and store them to LevelDB.
>> At first I did not shuffle games; then accuracy dropped every 61000
>> iterations (one epoch, mini-batch 256).
>> http://www.yss-aya.com/20160108.png
>> It means the DCNN easily learns the difference between 1800AD and 2015AD
>> games. I was surprised by the DCNN's ability. And maybe 1800AD games are
>> also not good for training?
>> 
>> Regards,
>> Hiroshi Yamashita
>> 
>> - Original Message -
>> From: "Detlef Schmicker" 
>> To: 
>> Sent: Tuesday, February 02, 2016 3:15 PM
>> Subject: Re: [Computer-go] DCNN can solve semeai?
>> 
>> > Thanks a lot for sharing this.
>> >
>> > Quite interesting that you do not reach the prediction rate 57% from
>> > the facebook paper by far too! I have the same experience with the
>> > GoGoD database. My numbers are nearly the same as yours 49% :) my net
>> > is quite similar, but I use 7,5,5,3,3, with 12 layers in total.
>> >
>> > Did you strip the games before 1800AD, as mentioned in the FB paper? I
>> > did not do it and was thinking my training is not ok, but as you have
>> > the same result probably this is the only difference?!
>> >
>> > Best regards,
>> >
>> > Detlef
-- 
Hideki Kato 

Re: [Computer-go] What hardware to use to train the DNN

2016-02-02 Thread Hiroshi Yamashita

Hi David,

I use a GTS 450 and a GTX 980, with Caffe on ubuntu 14.04.
Caffe is difficult to install, so I recommend ubuntu 14.04.

time for predicting a position

               Detlef44%   Detlef54%   CUDA cores   clock
GTS 450        17.2 ms     21   ms         192      783 MHz
GTX 980         5.1 ms     10.1 ms       2,048     1126 MHz
GTX 980 cuDNN   6.4 ms      5.9 ms       2,048     1126 MHz
GTX 670         7.9 ms        -          1,344      915 MHz

Learning time

               MNIST GPU   Aya's 1 iteration (mini-batch=256)
GTS 450         306 sec     9720 sec
GTX 980         169 sec        -
GTX 980 cuDNN    24 sec      726 sec

The GTS 450 is not so slow for predicting a position.
But the GTX 980's learning speed is 13 times faster than the GTS 450's.
And cuDNN, a library provided by NVIDIA, is very effective.
cuDNN does not work on the GTS 450.
Caffe's hardware page is also nice:
http://caffe.berkeleyvision.org/performance_hardware.html

Regards,
Hiroshi Yamashita


- Original Message - 
From: "David Fotland" 

To: 
Sent: Wednesday, February 03, 2016 3:25 AM
Subject: [Computer-go] What hardware to use to train the DNN



Detlef, Hiroshi, Hideki, and others,

I have caffelib integrated with Many Faces so I can evaluate a DNN.  Thank you very much Detlef for sample code to set up the 
input layer.  Building caffe on windows is painful.  If anyone else is doing it and gets stuck I might be able to help.


What hardware are you using to train networks?  I don’t have a cuda-capable GPU yet, so I'm going to buy a new box.  I'd like 
some advice.  Caffe is not well supported on Windows, so I plan to use a Linux box for training, but continue to use Windows for 
testing and development.  For competitions I could use either windows or linux.


Thanks in advance,

David 



Re: [Computer-go] What hardware to use to train the DNN

2016-02-02 Thread Peter Jin
Hi David,

I've used a GTX 970 for training deep convnets without issue. Depending on
your budget, a GTX 980 Ti or TITAN X would be even better (we use some
TITAN Xs in our lab). The main thing about using smaller GPUs for training
these networks is that, depending on the implementation of the neural-net
code, you may have to tune your mini-batch size to fit in memory. But this
shouldn't be a problem if you are using lower-memory convolution
implementations, such as some of the ones in cuDNN.
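To see why the mini-batch size matters on a 4 GB card, a back-of-the-envelope sketch helps. The layer shape below (12 conv layers, 256 filters, 19x19 board, batch 256) follows the nets discussed in this thread; everything else (the function, the doubling for gradients) is an illustrative assumption, not a measurement:

```python
def activation_bytes(layers=12, filters=256, board=19, batch=256,
                     bytes_per_float=4, store_gradients=True):
    """Rough memory for the feature maps of one mini-batch (weights excluded)."""
    per_position = layers * filters * board * board * bytes_per_float
    total = per_position * batch
    if store_gradients:          # backprop keeps a gradient per activation
        total *= 2
    return total

gib = activation_bytes() / 2**30
print(f"~{gib:.1f} GiB of activations for a 256-position mini-batch")
```

On this estimate a 256-position batch already needs on the order of 2 GiB for activations alone, so halving or quartering the batch is the usual fix when a net does not fit.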

If you're willing to wait a bit, the first Nvidia Pascal chips are rumored
to ship as early as April. They are supposed to have full support for
half-precision floating point, which in theory gives a 2x speedup over
equivalent single-precision performance.

Regards,
Peter

On Tue, Feb 2, 2016 at 10:25 AM, David Fotland 
wrote:

> Detlef, Hiroshi, Hideki, and others,
>
> I have caffelib integrated with Many Faces so I can evaluate a DNN.  Thank
> you very much Detlef for sample code to set up the input layer.  Building
> caffe on windows is painful.  If anyone else is doing it and gets stuck I
> might be able to help.
>
> What hardware are you using to train networks?  I don’t have a
> cuda-capable GPU yet, so I'm going to buy a new box.  I'd like some
> advice.  Caffe is not well supported on Windows, so I plan to use a Linux
> box for training, but continue to use Windows for testing and development.
> For competitions I could use either windows or linux.
>
> Thanks in advance,
>
> David
>
> > -Original Message-
> > From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> > Of Hiroshi Yamashita
> > Sent: Monday, February 01, 2016 11:26 PM
> > To: computer-go@computer-go.org
> > Subject: *SPAM* Re: [Computer-go] DCNN can solve semeai?
> >
> > Hi Detlef,
> >
> > My study heavily depends on your information. Especially the Oakfoam code,
> > lenet.prototxt, and generate_sample_data_leveldb.py were helpful. Thanks!
> >
> > > Quite interesting that you also fall well short of the 57% prediction
> > > rate from the Facebook paper! I have the same experience with the
> >
> > I'm trying 12 layers with 256 filters, but it is around 49.8%.
> > I think the 57% is maybe from KGS games.
> >
> > > Did you strip the games before 1800 AD, as mentioned in the FB paper? I
> > > did not do it and was thinking my training was not ok, but since you have
> > > the same result, this is probably the only difference?!
> >
> > I also did not use games from before 1800 AD, and I don't use handicap games.
> > Training positions are 15,693,570 from 76,000 games.
> > Test positions are 445,693 from 2,156 games.
> > All games are shuffled in advance, and each position is randomly rotated.
> > 24,000 positions are memorized, then shuffled and stored to LevelDB.
> > At first I did not shuffle games; then accuracy dropped every 61,000
> > iterations (one epoch, 256 mini-batch).
> > http://www.yss-aya.com/20160108.png
> > It means the DCNN easily understands the difference between 1800 AD games
> > and 2015 AD games. I was surprised at the DCNN's ability. And maybe 1800 AD
> > games are also not good for training?
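The random rotation described above is usually implemented as a uniform draw from the eight symmetries of the board (four rotations, each with an optional reflection). A minimal sketch, assuming a NumPy board array; the function name is hypothetical, not from Aya's code:

```python
import random

import numpy as np

def random_symmetry(board, k=None, flip=None):
    """Apply one of the 8 dihedral symmetries of a square Go board.

    `board` is any 2-D array (e.g. 19x19 of empty/black/white codes).
    If k/flip are None they are drawn uniformly, giving each of the
    eight symmetries probability 1/8.
    """
    if k is None:
        k = random.randrange(4)       # number of 90-degree rotations
    if flip is None:
        flip = random.random() < 0.5  # optional left-right reflection
    out = np.rot90(board, k)
    return np.fliplr(out) if flip else out
```

Note that the move label has to be transformed with the same symmetry, so in practice one applies the identical (k, flip) pair to a one-hot 19x19 label plane alongside the input planes.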
> >
> > Regards,
> > Hiroshi Yamashita
> >
> > - Original Message -
> > From: "Detlef Schmicker" 
> > To: 
> > Sent: Tuesday, February 02, 2016 3:15 PM
> > Subject: Re: [Computer-go] DCNN can solve semeai?
> >
> > > Thanks a lot for sharing this.
> > >
> > > Quite interesting that you also fall well short of the 57% prediction
> > > rate from the Facebook paper! I have the same experience with the
> > > GoGoD database. My numbers are nearly the same as yours, 49% :) My net
> > > is quite similar, but I use 7,5,5,3,3, with 12 layers in total.
> > >
> > > Did you strip the games before 1800 AD, as mentioned in the FB paper? I
> > > did not do it and was thinking my training was not ok, but since you have
> > > the same result, this is probably the only difference?!
> > >
> > > Best regards,
> > >
> > > Detlef
> >
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] What hardware to use to train the DNN

2016-02-02 Thread Detlef Schmicker

Hi David,

I use Ubuntu 14.04 LTS with an NVIDIA GTX 970 graphics card (and an
i7-4970k, but this is not important for training, I think) and
installed cuDNN v4 (important: at least a factor of 4 in training speed).

This Ubuntu version is officially supported by CUDA, and I only had
minor problems when an Ubuntu update replaced the graphics driver:
twice in the last year I had to reinstall CUDA (a little ugly, as the
graphics driver did not work after the update and I had to boot into
command-line mode).

Detlef

On 02.02.2016 at 19:25, David Fotland wrote:
> Detlef, Hiroshi, Hideki, and others,
> 
> I have caffelib integrated with Many Faces so I can evaluate a DNN.
> Thank you very much Detlef for sample code to set up the input
> layer.  Building caffe on windows is painful.  If anyone else is
> doing it and gets stuck I might be able to help.
> 
> What hardware are you using to train networks?  I don’t have a
> cuda-capable GPU yet, so I'm going to buy a new box.  I'd like some
> advice.  Caffe is not well supported on Windows, so I plan to use a
> Linux box for training, but continue to use Windows for testing and
> development.  For competitions I could use either windows or
> linux.
> 
> Thanks in advance,
> 
> David
> 
>> -Original Message- From: Computer-go
>> [mailto:computer-go-boun...@computer-go.org] On Behalf Of Hiroshi
>> Yamashita Sent: Monday, February 01, 2016 11:26 PM To:
>> computer-go@computer-go.org Subject: *SPAM* Re:
>> [Computer-go] DCNN can solve semeai?
>> 
>> Hi Detlef,
>> 
>> My study heavily depends on your information. Especially the Oakfoam
>> code, lenet.prototxt, and generate_sample_data_leveldb.py were
>> helpful. Thanks!
>> 
>>> Quite interesting that you also fall well short of the 57%
>>> prediction rate from the Facebook paper! I have the same experience
>>> with the
>> 
>> I'm trying 12 layers with 256 filters, but it is around 49.8%. I think
>> the 57% is maybe from KGS games.
>> 
>>> Did you strip the games before 1800 AD, as mentioned in the FB
>>> paper? I did not do it and was thinking my training was not ok,
>>> but since you have the same result, this is probably the only
>>> difference?!
>> 
>> I also did not use games from before 1800 AD, and I don't use handicap
>> games. Training positions are 15,693,570 from 76,000 games. Test
>> positions are 445,693 from 2,156 games. All games are shuffled in
>> advance, and each position is randomly rotated. 24,000 positions are
>> memorized, then shuffled and stored to LevelDB. At first I did not
>> shuffle games; then accuracy dropped every 61,000 iterations (one
>> epoch, 256 mini-batch). http://www.yss-aya.com/20160108.png
>> It means the DCNN easily understands the difference between 1800 AD
>> games and 2015 AD games. I was surprised at the DCNN's ability. And
>> maybe 1800 AD games are also not good for training?
>> 
>> Regards, Hiroshi Yamashita
>> 
>> - Original Message - From: "Detlef Schmicker"
>>  To:  Sent: Tuesday,
>> February 02, 2016 3:15 PM Subject: Re: [Computer-go] DCNN can
>> solve semeai?
>> 
>>> Thanks a lot for sharing this.
>>> 
>>> Quite interesting that you also fall well short of the 57%
>>> prediction rate from the Facebook paper! I have the same experience
>>> with the GoGoD database. My numbers are nearly the same as
>>> yours, 49% :) My net is quite similar, but I use 7,5,5,3,3,
>>> with 12 layers in total.
>>> 
>>> Did you strip the games before 1800 AD, as mentioned in the FB
>>> paper? I did not do it and was thinking my training was not ok,
>>> but since you have the same result, this is probably the only
>>> difference?!
>>> 
>>> Best regards,
>>> 
>>> Detlef
>> 
> 
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] What hardware to use to train the DNN

2016-02-02 Thread David Fotland
Detlef, Hiroshi, Hideki, and others,

I have caffelib integrated with Many Faces so I can evaluate a DNN.  Thank you 
very much Detlef for sample code to set up the input layer.  Building caffe on 
windows is painful.  If anyone else is doing it and gets stuck I might be able 
to help.

What hardware are you using to train networks?  I don’t have a cuda-capable GPU 
yet, so I'm going to buy a new box.  I'd like some advice.  Caffe is not well 
supported on Windows, so I plan to use a Linux box for training, but continue 
to use Windows for testing and development.  For competitions I could use 
either windows or linux.

Thanks in advance,

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Hiroshi Yamashita
> Sent: Monday, February 01, 2016 11:26 PM
> To: computer-go@computer-go.org
> Subject: *SPAM* Re: [Computer-go] DCNN can solve semeai?
> 
> Hi Detlef,
> 
> My study heavily depends on your information. Especially the Oakfoam code,
> lenet.prototxt, and generate_sample_data_leveldb.py were helpful. Thanks!
> 
> > Quite interesting that you also fall well short of the 57% prediction
> > rate from the Facebook paper! I have the same experience with the
> 
> I'm trying 12 layers with 256 filters, but it is around 49.8%.
> I think the 57% is maybe from KGS games.
> 
> > Did you strip the games before 1800 AD, as mentioned in the FB paper? I
> > did not do it and was thinking my training was not ok, but since you have
> > the same result, this is probably the only difference?!
> 
> I also did not use games from before 1800 AD, and I don't use handicap games.
> Training positions are 15,693,570 from 76,000 games.
> Test positions are 445,693 from 2,156 games.
> All games are shuffled in advance, and each position is randomly rotated.
> 24,000 positions are memorized, then shuffled and stored to LevelDB.
> At first I did not shuffle games; then accuracy dropped every 61,000
> iterations (one epoch, 256 mini-batch).
> http://www.yss-aya.com/20160108.png
> It means the DCNN easily understands the difference between 1800 AD games
> and 2015 AD games. I was surprised at the DCNN's ability. And maybe 1800 AD
> games are also not good for training?
> 
> Regards,
> Hiroshi Yamashita
> 
> - Original Message -
> From: "Detlef Schmicker" 
> To: 
> Sent: Tuesday, February 02, 2016 3:15 PM
> Subject: Re: [Computer-go] DCNN can solve semeai?
> 
> > Thanks a lot for sharing this.
> >
> > Quite interesting that you also fall well short of the 57% prediction
> > rate from the Facebook paper! I have the same experience with the
> > GoGoD database. My numbers are nearly the same as yours, 49% :) My net
> > is quite similar, but I use 7,5,5,3,3, with 12 layers in total.
> >
> > Did you strip the games before 1800 AD, as mentioned in the FB paper? I
> > did not do it and was thinking my training was not ok, but since you have
> > the same result, this is probably the only difference?!
> >
> > Best regards,
> >
> > Detlef
> 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go