Re: [Computer-go] software like: http://ps.waltheri.net/

2017-05-06 Thread David Fotland
In the Fuseki Tutor, choose File, then Add game(s) to Database. You can select multiple 
files, or type a folder name plus .dir (for example mygames.dir). If you have more 
questions, email me directly at fotl...@smart-games.com

david

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of
> Ray Tayek
> Sent: Friday, May 05, 2017 10:44 PM
> To: computer-go@computer-go.org
> Subject: Re: [Computer-go] software like: http://ps.waltheri.net/
> 
> On 5/5/2017 5:38 PM, David Fotland wrote:
> > Many Faces of Go Fuseki tutor can do this, but I'd have to help if you
> want to start from an empty database. That's how I generate the tutor. You
> can add sgf files to the existing tutor pretty easily.
> >
> > David
> Good. How can I add an SGF file to the existing tutor?
> 
> thanks
> 
> >> -Original Message-
> >> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
> >> Behalf Of Ray Tayek
> >> Sent: Friday, May 05, 2017 4:24 PM
> >> To: computer-go@computer-go.org
> >> Subject: [Computer-go] software like: http://ps.waltheri.net/
> >>
> >> does anyone know of software that will eat a bunch of SGF games and
> >> produce something like this web site? ...
> 
> 
> --
> Honesty is a very expensive gift. So, don't expect it from cheap people -
> Warren Buffett?
> http://tayek.com/
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] software like: http://ps.waltheri.net/

2017-05-05 Thread David Fotland
Many Faces of Go Fuseki tutor can do this, but I'd have to help if you want to 
start from an empty database. That's how I generate the tutor. You can add sgf 
files to the existing tutor pretty easily.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of
> Ray Tayek
> Sent: Friday, May 05, 2017 4:24 PM
> To: computer-go@computer-go.org
> Subject: [Computer-go] software like: http://ps.waltheri.net/
> 
> does anyone know of software that will eat a bunch of SGF games and produce
> something like this web site?
> 
> thanks
> 
> --
> Honesty is a very expensive gift. So, don't expect it from cheap people -
> Warren Buffett?
> http://tayek.com/
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] SGF

2017-01-02 Thread David Fotland
I think the character set property just refers to the contents of comments and 
similar fields. The SGF format itself uses only characters common to UTF-8 and 
US-ASCII, so there is no need to assume a character set before reaching the 
property. If you find the character set property in the root node, it should 
apply to a root comment, even if the comment comes earlier than the character 
set property in the root node.
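To make that interpretation concrete, here is a minimal Python sketch of the two-pass approach (CA and C are the standard SGF charset and comment properties; the helper and its input format are otherwise hypothetical, not taken from any particular parser):

def decode_root_texts(root_props, default_encoding="utf-8"):
    # root_props: list of (name, raw_bytes) pairs from the SGF root node.
    # Pass 1: find the declared charset (CA) anywhere in the root node.
    encoding = default_encoding
    for name, value in root_props:
        if name == "CA":
            encoding = value.decode("ascii").strip()
            break
    # Pass 2: decode text-valued properties with that charset, regardless
    # of where CA appeared relative to them.
    decoded = {}
    for name, value in root_props:
        if name in ("C", "GC", "GN", "PB", "PW"):  # text/simpletext properties
            decoded[name] = value.decode(encoding, errors="replace")
    return decoded

# Example: a root comment that precedes the CA property still decodes correctly.
root = [("C", "Commentaire \xe9l\xe9mentaire".encode("latin-1")),
        ("CA", b"ISO-8859-1"),
        ("SZ", b"19")]
print(decode_root_texts(root))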

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Clark B. Wierda
Sent: Monday, January 02, 2017 11:35 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] SGF

 

On Fri, Dec 30, 2016 at 3:52 PM, Dave Dyer  wrote:


Character encoding (usually UTF8 these days) ought not to be part of
the standard, it ought to be up to the containing file to describe the
encoding at that level. Likewise, nothing in the standard ought to
require support for particular character sets. Rather, if an SGF record
contains an unsupported character set, it will fail at the "reading"
phase, independent of the actual contents of the file.

I've used sgf as a general format for more than 70 different games,
as well as Go, and I only treat it as a rough guide.  I use a generic
read/write process that doesn't care about the content, and any sensible
user of the "standard" ought to do likewise.

The details of what properties exist and how they are used is always
going to be a negotiation between content originators and third party
consumers.

 

 

Since the character set is a defined property, my main issue in writing a parser is 
what to assume until I find that property.

 

I would prefer we define a default that will apply until that property is 
found.  Currently I use UTF-8, which has worked so far.  I reopen the file with 
the defined character set (if supported) when I hit that property and restart 
the parse.

 

I'm glad to see that there is still discussion on the format.

 

Clark

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Short statement by Aja Huang

2017-01-02 Thread David Fotland
AlphaGo's publication is pretty clear on how they did it. Now that their research 
has shown the way, other competent teams with similar compute resources should 
be able to duplicate their work.  It has been almost a year, which is enough 
time.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of
> "Ingo Althöfer"
> Sent: Monday, January 02, 2017 5:32 AM
> To: computer-go@computer-go.org
> Subject: [Computer-go] Short statement by Aja Huang
> 
> This screenshot was just posted in the German computer go forum:
> 
> https://scontent.fbkk5-6.fna.fbcdn.net/v/t1.0-
> 9/15823610_206422803154077_4501261067334984925_n.jpg?oh=1dd6e8324085c35ed14
> d9ae1176e33ec=591E1F54
> 
> Ingo.
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Go Tournament with hinteresting rules

2016-12-14 Thread David Fotland
Because you test it both ways, and one wins more games. Many things about the 
playout policy are mysterious and can only be tested to see if they make play 
stronger. Often the results of testing are counterintuitive. I'd guess only 
about a quarter of the things I tried in Many Faces made the program stronger.

I can think of several possible explanations, but that's not science, it's 
telling a story with no evidence.
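For readers who have not seen the two back-up rules side by side, here is a toy Python illustration (the playout results are random stand-ins; the point is only that averaging wins and averaging margins can rank moves differently, not an explanation of why one plays stronger):

import random

def backup_stats(margins):
    # margins: final score differences from one candidate move's playouts
    win_rate = sum(1 for m in margins if m > 0) / len(margins)   # record only W/L
    mean_margin = sum(margins) / len(margins)                    # record the margin
    return win_rate, mean_margin

random.seed(0)
# A move that usually wins by a little vs. one that wins big but less often.
safe   = [random.gauss(0.5, 3.0) for _ in range(100000)]
greedy = [random.gauss(2.0, 20.0) for _ in range(100000)]
print("safe:  ", backup_stats(safe))     # higher win rate, smaller mean margin
print("greedy:", backup_stats(greedy))   # lower win rate, larger mean margin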

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of
> Charles Leedham-green
> Sent: Thursday, December 08, 2016 3:23 PM
> To: computer-go@computer-go.org
> Subject: Re: [Computer-go] Go Tournament with hinteresting rules
> 
> I have been told that bots that are based on MC play better when they only
> record the result of each roll out (W or L) rather than the margin of
> victory.
> 
> To me this is counter-intuitive.
> 
> Does anyone have an intelligible reason why it should be so?
> 
> Charles
> 
> > On 8 Dec 2016, at 22:56, Erik van der Werf 
> wrote:
> >
> > > On Thu, Dec 8, 2016 at 10:58 PM, "Ingo Althöfer" <3-hirn-ver...@gmx.de>
> wrote:
> >> Playing under such conditions might be a challenge for the bots
> >
> > Why? Do you think the humans will collude?  ;-)
> >
> > Erik.
> > ___
> > Computer-go mailing list
> > Computer-go@computer-go.org
> > http://computer-go.org/mailman/listinfo/computer-go
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Auto Go game recorder

2016-11-24 Thread David Fotland
Remi has something: https://www.remi-coulom.fr/kifu-snap/

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of
> Hideki Kato
> Sent: Thursday, November 24, 2016 4:17 PM
> To: computer-go@computer-go.org
> Subject: [Computer-go] Auto Go game recorder
> 
> Hello everybody,
> 
> Chizu Kobayashi 6p is seeking automatic Go game recorders.  Does anyone
> know about that?  An application for mobile phones would be best, but any
> system is appreciated.
> 
> Best,
> Hideki
> --
> Hideki Kato 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Deep Zen vs Cho Chikun -- Round 3

2016-11-23 Thread David Fotland
Congratulations to Zen for playing so well against a strong pro. It won't be 
long until anyone can get a pro strength go program that runs on their ordinary 
PC.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of
> Hiroshi Yamashita
> Sent: Wednesday, November 23, 2016 5:54 AM
> To: computer-go@computer-go.org
> Subject: [Computer-go] Deep Zen vs Cho Chikun -- Round 3
> 
> Hi,
> 
> Cho Chikun won game3 against DeepZenGo, and won the match.
> 
> (;GM[1]SZ[19]
> PB[Cho Chikun]
> PW[DeepZenGo]
> DT[2016-11-23]RE[B+R]KM[6.5]TM[120]RU[Japanese]
> PC[Hotel New Otani]EV[2nd Igo DenOu-sen]GN[Game 3]
> ;B[pd];W[dp];B[pq];W[dd];B[fq];W[cn];B[pk];W[mp];B[jp];W[po]
> ;B[oo];W[on];B[op];W[pm];B[nn];W[nm];B[mm];W[nl];B[mo];W[ml]
> ;B[lm];W[qf];B[qh];W[of];B[oh];W[nd];B[rd];W[mg];B[nc];W[mc]
> ;B[oc];W[ni];B[md];W[ld];B[me];W[le];B[ne];W[oe];B[lc];W[mb]
> ;B[od];W[kc];B[nf];W[ng];B[og];W[mf];B[nd];W[dg];B[dc];W[cc]
> ;B[ec];W[cb];B[gc];W[hq];B[dr];W[jq];B[kq];W[kp];B[lp];W[ko]
> ;B[lq];W[ip];B[cq];W[eq];B[er];W[fp];B[ci];W[ch];B[di];W[fh]
> ;B[cl];W[qi];B[pi];W[qj];B[pj];W[rh];B[qk];W[rk];B[rg];W[qg]
> ;B[ph];W[ri];B[rf];W[ql];B[fe];W[fr];B[bo];W[co];B[bp];W[ic]
> ;B[ib];W[ej];B[bm];W[kl];B[km];W[ed];B[fd];W[fb];B[hc];W[id]
> ;B[eb];W[jb];B[eg];W[eh];B[df];W[cf];B[de];W[ce];B[ee];W[cd]
> ;B[fg];W[qp];B[qq];W[rq];B[gh];W[dh];B[rr];W[hh];B[gi];W[hg]
> ;B[gg];W[oa];B[bh];W[bg];B[ei];W[bi];B[fi];W[ah];B[jr];W[jo]
> ;B[ir];W[fm];B[qn];W[pn];B[rp];W[ro];B[sq];W[pb];B[rn];W[qc]
> ;B[qe];W[ij];B[ll];W[lk];B[sl];W[qo];B[so];W[rl];B[kk];W[lj]
> ;B[jl];W[kj];B[in];W[hn];B[nj];W[bn];B[an];W[db];B[ea];W[ln]
> ;B[mn];W[bk];B[mi];W[li];B[jj];W[ji];B[mk])
> 
> Japanese movie news
> Japanese Go AI lost against top pro by 1 win 2 losses
> http://www3.nhk.or.jp/news/html/20161123/k10010781571000.html
> https://www.youtube.com/watch?v=B0U6ZwyC1-0
> 
> Japanese newspapers
> http://www.yomiuri.co.jp/culture/20161123-OYT1T50036.html
> http://www.nikkei.com/article/DGXLASDG23H6A_T21C16A1CR8000/
> http://www.nikkei.com/article/DGXLASFG23H0Y_T21C16A100/
> http://mainichi.jp/graphs/20161123/hpj/00m/040/002000g/1
> http://www.asahi.com/articles/ASJCR4GV5JCRUCVL004.html
> 
> Regards,
> Hiroshi Yamashita
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Subject: Zen vs Cho Chikun

2016-11-11 Thread David Fotland
Amazon p2.16xlarge instance gets you 64 cores (Xeon E5-2686v4) and 16 K80 GPUs 
for $14.40 per hour. Not bad if you just want to run it during a competition.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of
> Cameron Browne
> Sent: Thursday, November 10, 2016 5:07 AM
> To: computer-go@computer-go.org
> Subject: [Computer-go] Subject: Zen vs Cho Chikun
> 
> Hi All,
> 
> > Hardware:
> > CPU: 2 x Intel Xeon E5-2699v4 (44 cores/2.2 GHz)
> > GPU: 4 x nVidia Titan X (Pascal)
> > RAM: 128GB
> 
> Does anyone know the practicalities of putting two of those CPUs in one
> machine? I'd read that two would generate too much heat for most cooling
> systems.
> 
> Regards,
> Cameron
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Computer-go Digest, Vol 82, Issue 2

2016-11-05 Thread David Fotland
 

Many Faces does most of what has been mentioned. In addition, rather than stopping 
the search only when it is impossible for another move to be chosen, I stop earlier, 
when it is unlikely that another move will become best. When far ahead, I stop a 
little earlier still. That preserves some time in case there is a reversal later, and 
it makes customers happier.

 

I don’t stop early when pondering on the opponent’s time. Sometimes this means that 
when Many Faces gets to move, it moves instantly, because the criterion for 
early stopping has already been met during pondering.

 

It’s more difficult to decide when to take extra time to think, and I don’t 
have a good solution. I just set the target time very generously and depend on 
the time saved on other moves to give enough cushion.
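As a rough illustration of that kind of rule, here is a small Python sketch; the thresholds and the "far ahead" cutoff are invented for the example and are not Many Faces' actual numbers:

def should_stop_early(visits, winrates, time_used, target_time, pondering=False):
    # visits/winrates: per-child statistics at the root of the search
    if pondering or time_used < 0.25 * target_time:
        return False                                  # never cut pondering short
    order = sorted(range(len(visits)), key=lambda i: visits[i], reverse=True)
    best, runner_up = order[0], order[1]
    time_left_frac = max(0.0, (target_time - time_used) / target_time)
    # Stop when the runner-up is unlikely to overtake the best in the time left.
    unlikely_to_change = visits[runner_up] < visits[best] * (1.0 - time_left_frac)
    # When clearly ahead, stop a little earlier and bank the time.
    far_ahead = winrates[best] > 0.80 and time_used > 0.5 * target_time
    return unlikely_to_change or far_ahead

print(should_stop_early([9000, 2000, 500], [0.62, 0.55, 0.40], 12.0, 20.0))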

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Pawel Koziol
Sent: Saturday, November 05, 2016 8:15 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Computer-go Digest, Vol 82, Issue 2

 

The authors of the chess program Stockfish gathered extensive statistics on the 
probability that, on reaching a certain move number, the game is still undecided 
(meaning the score difference is not too large). From that data they derived a table 
of multipliers used to adjust the time per move calculated by their usual formula. 
Later on, they approximated the table with another formula.
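A tiny Python sketch of that scheme, with invented numbers (the real Stockfish table and its fitted formula are not reproduced here):

# Multiplier on the base time allocation as a function of move number;
# the values are placeholders standing in for the statistically derived table.
TIME_MULTIPLIER = [(10, 0.8), (20, 1.2), (40, 1.4), (60, 1.1), (10**9, 0.9)]

def time_for_move(base_seconds, move_number):
    for up_to_move, mult in TIME_MULTIPLIER:
        if move_number <= up_to_move:
            return base_seconds * mult

for move in (5, 15, 30, 50, 120):
    print(move, round(time_for_move(10.0, move), 1))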

 

2016-11-05 13:00 GMT+01:00 :

Send Computer-go mailing list submissions to
computer-go@computer-go.org

To subscribe or unsubscribe via the World Wide Web, visit
http://computer-go.org/mailman/listinfo/computer-go
or, via email, send a message with subject or body 'help' to
computer-go-requ...@computer-go.org

You can reach the person managing the list at
computer-go-ow...@computer-go.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Computer-go digest..."


Today's Topics:

   1. Re: Time policy (Álvaro Begué)
   2. Re: Time Policy (Hendrik Baier)
   3. Re: Time policy (valky...@phmp.se)


--

Message: 1
Date: Fri, 4 Nov 2016 08:59:11 -0400
From: Álvaro Begué 
To: computer-go 
Subject: Re: [Computer-go] Time policy
Message-ID:

Re: [Computer-go] Converging to 57%

2016-08-24 Thread David Fotland
I train using approximately the same training set as AlphaGo, but so far 
without the augmentation with rotations and reflections. My target is about 
55.5%, since that's what AlphaGo got on their training set without 
reinforcement learning.

I find I need 5x5 filters in the first layer, at least 12 layers, and at least 96 
filters to get over 50%. My best net is 55.3%, at 18 layers by 96 filters. I use 
simple SGD with a minibatch of 64, no momentum, and a 0.01 learning rate until it 
flattens out, then 0.001. I have two GTX 980 Ti cards, and the best nets take about 
5 days to train (about 20 epochs on about 30M positions). The last few percent is just 
trial and error. Sometimes making the net wider or deeper makes it weaker. 
Perhaps it's just variation from one training run to another. I haven’t tried 
training the same net more than once.
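Restated as plain Python for anyone following along, the recipe above is roughly the following; the epoch at which the learning rate is dropped is a placeholder, since the post only says it was lowered by hand once accuracy flattened out:

HYPERPARAMS = {
    "optimizer": "SGD",          # plain SGD, no momentum
    "batch_size": 64,
    "base_lr": 0.01,
    "dropped_lr": 0.001,
    "positions": 30_000_000,     # ~30M training positions
    "epochs": 20,                # ~5 days on two GTX 980 Ti cards, per the post
}

def learning_rate(epoch, flatten_epoch=12):
    # flatten_epoch is an assumed stand-in for "when accuracy stops improving"
    return HYPERPARAMS["base_lr"] if epoch < flatten_epoch else HYPERPARAMS["dropped_lr"]

print([learning_rate(e) for e in range(0, HYPERPARAMS["epochs"], 4)])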

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Gian-Carlo Pascutto
> Sent: Tuesday, August 23, 2016 12:42 AM
> To: computer-go@computer-go.org
> Subject: Re: [Computer-go] Converging to 57%
> 
> On 23-08-16 08:57, Detlef Schmicker wrote:
> 
> > So, if somebody is sure, it is measured against GoGod, I think a
> > number of other go programmers have to think again. I heard them
> > reaching 51% (e. g. posts by Hiroshi in this list)
> 
> I trained a 128 x 14 network for Leela 0.7.0 and this gets 51.1% on
> GoGoD.
> 
> Something I noticed from the papers is that the prediction percentage
> keeps going upwards with more epochs, even if slowly, but still clearly
> up.
> 
> In my experience my networks converge rather quickly (like >0.5% per
> epoch after the first), get stuck, get one more 0.5% gain if I lower the
> learning rate (by a factor 5 or 10) and don't gain any more regardless
> of what I do thereafter.
> 
> I do use momentum. IIRC I tested without momentum once and it was worse,
> and much slower.
> 
> I did not find any improvement in playing strength from doing Facebook's
> 3 move prediction. Perhaps it needs much bigger networks than 128 x 12.
> 
> Adding ladder features also isn't good enough to (consistently) keep the
> network from playing into them. (And once it's played the first move,
> you're totally SOL because the resulting positions aren't in the
> training set and you'll get 99% confidence for continuing the losing
> ladder moves)
> 
> I'm currently doing a more systematic comparison of all methods (and
> GoGoD vs KGS+GoGoD) on 128 x 12, and testing the resulting strength
> (rather than looking at prediction %). I'll post the results here, if
> anything definite comes out of it.
> 
> --
> GCP
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] For those of us who play both Go and Pokemon Go...

2016-07-18 Thread David Fotland
 

https://www.reddit.com/r/pokemongo/comments/4tez82/how_pokemon_really_play_go/


Although it looks like they are actually playing Go Moku.

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to Zen!

2016-07-16 Thread David Fotland
Correction on ManyFaces hardware. Running on a 4-core i7-4790 3.6 GHz, without 
a GPU, using a deep neural net (that I trained on KGS games).

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Nick Wedd
Sent: Saturday, July 16, 2016 8:21 AM
To: computer-go@computer-go.org
Subject: [Computer-go] Congratulations to Zen!

 

Congratulations to Zen19X, undefeated winner of last Sunday's KGS bot 
tournament!

 

My report is at http://www.weddslist.com/kgs/past/124/index.html. It is late 
because I have been on holiday with only a laptop for the last week.  I hope 
you will all tell me of any comments or corrections.

 

I apologise for my error in setting up the Open division with 9x9 boards, which 
meant that no games could be played in that division. 


 

Nick

-- 

Nick Wedd  mapr...@gmail.com

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Question on TPU hardware

2016-05-21 Thread David Fotland
I don’t expect AlphaGo will be available at any price, but I expect similar 
strength programs will be running on high end PCs in a few years. The AlphaGo 
team has done an outstanding job of exploring the solution space and showing us 
the way.  Others can now tweak and optimize and find more efficient ways to get 
similar results with much less expensive hardware.

David 

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
> Behalf Of "Ingo Althöfer"
> Sent: Saturday, May 21, 2016 12:26 AM
> To: computer-go@computer-go.org
> Subject: [Computer-go] Question on TPU hardware
> 
> I have a question (maybe to Aja):
> 
> Concerning the TPU hardware used in AlphaGo, how long will it take
> until that system (all together, including Go software) will be
> available for 20,000 Euro or less?
> 
> Ingo.
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Commonly used neural network architectures

2016-05-20 Thread David Fotland
The AlphaGo network is detailed in their paper.  They have about 50 binary 
inputs, one layer of 5x5 convolutional filters, and about 12 layers of 3x3 
convolutional filters.  Detlef’s net is specified in the prototxt file he 
published here.  It’s wider and deeper, but with fewer inputs.

 

The current popular approach is to use 5x5 or 7x7 filters in the first 
layers, and 3x3 filters in higher layers.  The topmost layer is special, 
typically a 1x1 convolutional filter.  AlphaGo uses position-dependent biases. 
ReLU seems to work well, and pooling is not used (for obvious reasons).

 

In my experiments it is essential to have the first layer filters larger than 
3x3.  In higher layers 3x3 seems to work fine.
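A small Python sketch of that layer pattern, with placeholder filter counts (only the shape pattern, a wide first convolution, 3x3 layers above it, a 1x1 output layer, ReLU and no pooling, is taken from the description above):

def policy_net_spec(input_planes=48, filters=96, depth=12, first_filter=5):
    layers = [(f"conv {first_filter}x{first_filter}", input_planes, filters, "ReLU")]
    layers += [("conv 3x3", filters, filters, "ReLU")] * (depth - 2)
    layers += [("conv 1x1", filters, 1, "softmax over the 19x19 points")]
    return layers

def weight_count(layers):
    k = {"conv 5x5": 25, "conv 7x7": 49, "conv 3x3": 9, "conv 1x1": 1}
    return sum(k[kind] * cin * cout for kind, cin, cout, _ in layers)

spec = policy_net_spec()
print(len(spec), "layers,", weight_count(spec), "weights (biases not counted)")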

 

Hope this helps.

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Urban Hafner
Sent: Friday, May 20, 2016 4:47 AM
To: computer-go@computer-go.org
Subject: [Computer-go] Commonly used neural network architectures

 

Hey there,

 

just like everyone else I’m currently looking into neural networks for my go 
program. ;) Apart from the AlphaGo paper where I can I find information about 
network architecture? There’s the network from April 2015 from Detlef 
(http://computer-go.org/pipermail/computer-go/2015-April/007573.html) but I 
don’t know enough about caffe to figure out the architecture. Basically, I more 
interested in understanding how to build a network myself than just using a 
pre-trained network.

 

Cheers,

 

Urban

-- 

Blog: http://bettong.net/

Twitter: https://twitter.com/ujh

Homepage: http://www.urbanhafner.com/

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Machine for Deep Neural Net training

2016-04-28 Thread David Fotland
Thanks.  I’ll give those a try before buying a bigger disk.  

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Josef Moudrik
Sent: Wednesday, April 27, 2016 10:03 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Machine for Deep Neural Net training

 

You can also use the HDF5 format, which has transparent compression as well as 
Caffe support.

Josef

 

On Wed, Apr 27, 2016 at 18:06, Gian-Carlo Pascutto <g...@sjeng.org> wrote:

On 27-04-16 17:45, David Fotland wrote:
> I’d rather just buy another drive than spend time coding and
> debugging another Caffe input layer to further compress the inputs.

Caffe supports LevelDB as database layer, as an alternative to LMDB.

It has built-in compression, enabled by default.

--
GCP
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Machine for Deep Neural Net training

2016-04-27 Thread David Fotland
30M samples, 42 planes, and 19x19 chars per plane, plus database overhead, is about 490 GB.  
On a dual-boot machine that had Windows on it originally, Windows wants to keep 
half of its original partition.  I didn’t want to reinstall Windows after 
formatting, so I have 1 TB for Linux.

 

However, AlphaGo used data augmentation (rotations and reflections), which will 
increase the input size to about 4 TB.  The input bandwidth is pretty low, and 
an external 8 TB USB drive will hold it all (about $250).  I’d rather just buy 
another drive than spend time coding and debugging another Caffe input layer to 
further compress the inputs.
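The arithmetic behind those two figures, for anyone checking along (this assumes one byte per board point per plane and storing all eight symmetries explicitly, which is how the numbers in this thread come out; the database overhead on top is not computed here):

samples = 30_000_000
planes = 42
bytes_per_plane = 19 * 19                      # one char per point

raw = samples * planes * bytes_per_plane
print("raw planes: %.0f GB" % (raw / 1e9))                # ~455 GB, ~490 GB with DB overhead
print("with 8 symmetries: %.1f TB" % (raw * 8 / 1e12))    # ~3.6 TB, i.e. "about 4 TB"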

 

Regards,

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Álvaro Begué
Sent: Wednesday, April 27, 2016 1:56 AM
To: computer-go
Subject: Re: [Computer-go] Machine for Deep Neural Net training

 

What are you doing that uses so much disk space? An extremely naive computation 
of required space for what you are doing is:

30M samples * (42 input planes + 1 output plane)/sample * 19*19 floats/plane * 
4 bytes/float = 1.7 TB

 

So that's cutting it close, but I think the inputs and outputs are all binary, 
which allows a factor of 32 compression right there, and you might be using 
constant planes for some inputs, and if the output is a move it fits in 9 
bits...

 

Álvaro.

 

 

On Wed, Apr 27, 2016 at 12:55 AM, David Fotland <fotl...@smart-games.com> wrote:

I have my deep neural net training setup working, and it's working so well I
want to share.  I already had Caffe running on my desktop machine (4 core
i7) without a GPU, with inputs similar to AlphaGo generated by Many Faces
into an LMDB database.  I trained a few small nets for a day each to get
some feel for it.

I bought an Alienware Area 51 from Dell, with two GTX 980 TI GPUs, 16 GB of
memory, and 2 TB of disk.  I set it up to dual boot Ubuntu 14.04, which made
it trivial to get the latest caffe up and running with CUDNN.  2 TB of disk
is not enough.  I'll have to add another drive.

I expected something like 20x speedup on training, but I was shocked by what
I actually got.

On my desktop, the Caffe MNIST sample took 27 minutes to complete.  On the
new machine it was 22 seconds.  73x faster.

My simple network has 42 input planes, and 4 layers of 48 filters each.
Training runs about 100x faster on the Alienware.  Training 100k Caffe
iterations (batches) of 50 positions takes 13 minutes, rather than almost a
full day on my desktop.

David

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] Machine for Deep Neural Net training

2016-04-26 Thread David Fotland
I have my deep neural net training setup working, and it's working so well I
want to share.  I already had Caffe running on my desktop machine (4 core
i7) without a GPU, with inputs similar to AlphaGo generated by Many Faces
into an LMDB database.  I trained a few small nets for a day each to get
some feel for it.

I bought an Alienware Area 51 from Dell, with two GTX 980 TI GPUs, 16 GB of
memory, and 2 TB of disk.  I set it up to dual boot Ubuntu 14.04, which made
it trivial to get the latest caffe up and running with CUDNN.  2 TB of disk
is not enough.  I'll have to add another drive.

I expected something like 20x speedup on training, but I was shocked by what
I actually got.

On my desktop, the Caffe MNIST sample took 27 minutes to complete.  On the
new machine it was 22 seconds.  73x faster.

My simple network has 42 input planes, and 4 layers of 48 filters each.
Training runs about 100x faster on the Alienware.  Training 100k Caffe
iterations (batches) of 50 positions takes 13 minutes, rather than almost a
full day on my desktop.

David

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] *****SPAM***** Re: UEC cup 2nd day

2016-03-24 Thread David Fotland
There was one program (Shrike) that had a DNN without search.  It didn’t finish 
in the top 8.  Zen and CrazyStone have custom DNN implementations.  Dark Forest 
uses Torch.  The rest used Caffe.

Remi's implementation is unusual and interesting.  I'll let him share it if he 
wants to.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of
> Darren Cook
> Sent: Wednesday, March 23, 2016 5:19 AM
> To: computer-go@computer-go.org
> Subject: *SPAM* Re: [Computer-go] UEC cup 2nd day
> 
> David Fotland wrote:
> > There are 12 programs here that have deep neural nets.  2 were not
> > qualified for the second day, and six of them made the final 8.  Many
> > Faces has very basic DNN support, but it's turned off because it isn't
> > making the program stronger yet.  Only Dolburam and Many Faces don't
> > have DNN in the final 8.  Dolburam won in Beijing, but the DNN
> > programs are stronger and it didn't make the final 4.
> 
> Are all the DNN programs (or, at least, all 6 in the top 8) also using MCTS?
> (Re-phrased: is there any currently strong program not using MCTS?)
> 
> Darren
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] UEC cup 2nd day

2016-03-20 Thread David Fotland
I have SGFs of Many Faces’ games, but I finished 8th, so I don’t have the 
top games.

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Pawel Morawiecki
Sent: Sunday, March 20, 2016 1:57 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] UEC cup 2nd day

 

Hi, 


Final result
1st Zen
2nd darkforest
3rd CrazyStone
4th Aya

 

Are there any games available? (SGFs)?

 

Regards,

Paweł

 

 


- Original Message - From: "Ingo Althöfer" <3-hirn-ver...@gmx.de>
To: 
Sent: Sunday, March 20, 2016 4:04 PM
Subject: Re: [Computer-go] UEC cup 2nd day





Hi Hiroshi,

thanks for the many updates.

On another site I read that the bots on rank 1 and 2 will
play exhibition matches against a pro player on Wednesday.

Will those games be transmitted on KGS?
Has it been decided already which handicap?

Thanks in advance, Ingo.



Sent: Sunday, March 20, 2016 at 07:41
From: "Hiroshi Yamashita" 
To: computer-go@computer-go.org
Subject: Re: [Computer-go] UEC cup 2nd day

Zen won against darkforest

1st Zen
2nd darkforest
3rd CrazyStone
4th Aya

Hiroshi Yamashita


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] UEC cup 2nd day

2016-03-20 Thread David Fotland
There are 12 programs here that have deep neural nets.  2 were not qualified 
for the second day, and six of them made the final 8.  Many Faces has very 
basic DNN support, but it’s turned off because it isn’t making the program 
stronger yet.  Only Dolburam and Many Faces don’t have DNN in the final 8.  
Dolburam won in Beijing, but the DNN programs are stronger and it didn’t make 
the final 4.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
> Behalf Of Hiroshi Yamashita
> Sent: Saturday, March 19, 2016 8:35 PM
> To: computer-go@computer-go.org
> Subject: [Computer-go] UEC cup 2nd day
> 
> darkforest won against CGI.
> CrazyStone won against DolBalam.
> Zen won against Ray.
> Aya won against MFG.
> 
> Semi final is
> CrazyStone vs Zen
> darkforest vs Aya
> 
> Thanks,
> Hiroshi Yamashita
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] *****SPAM***** Congratulations to AlphaGo

2016-03-13 Thread David Fotland
Smart-games.com is getting a big increase in traffic, so there is certainly 
more interest in the game now.  I hope it holds up for the long term.

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Dmitry Kamenetsky
Sent: Saturday, March 12, 2016 2:18 PM
To: computer-go@computer-go.org
Subject: *SPAM* [Computer-go] Congratulations to AlphaGo

 

Congratulations to AlphaGo and its team! You have done what many of us could 
only dream of doing, and in such a short time, I may add. This is a truly historic 
moment and an amazing achievement for AI research!

 

I hope this is not the end of Go and only sparks more interest in this 
beautiful game. What an exciting time we live in and I can't wait to see what 
the future holds. 

 

 

Regards,

Dmitry Kamenetsky

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo

2016-03-12 Thread David Fotland
Tremendous games by AlphaGo.  Congratulations!

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Lukas van de Wiel
Sent: Saturday, March 12, 2016 12:14 AM
To: computer-go@computer-go.org
Subject: [Computer-go] Congratulations to AlphaGo

 

Whoa, what a fight! Well fought, and well won!

Lukas

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Finding Alphago's Weaknesses

2016-03-10 Thread David Fotland
He was already in Byo-yomi, so perhaps he didn’t have an accurate count. This 
might explain why he looked upset at move 175.  He might have realized his 
mistake.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Darren Cook
> Sent: Thursday, March 10, 2016 8:26 AM
> To: computer-go@computer-go.org
> Subject: Re: [Computer-go] Finding Alphago's Weaknesses
> 
> > In fact in game 2, white 172 was described [1] as the losing move,
> > because it would have started a ko. ...
> 
> "would have started a ko" --> "should have instead started a ko"
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] *****SPAM***** Re: AlphaGo won first game!

2016-03-09 Thread David Fotland

Yes, I think the programs will have similar biases.  In this game Sedol had 
some groups that were alive, but needed correct responses to stay alive.  Even 
though the pro's stones won’t die, the playouts sometimes manage to kill them.  
This makes the program think it is further ahead than it actually is.  AlphaGo 
should be much more accurate because it has a value network and can replay 
sequences from the main search.

David

> 
> I.e. is it fair to say that other computer programs will appreciate and
> understand the computer's moves better than the human's moves, so saying
> it is ahead is to be expected? (confirmation bias)
> 
> Darren


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] *****SPAM***** Re: AlphaGo won first game!

2016-03-09 Thread David Fotland
I predicted Sedol would be shocked.  I'm still rooting for Sedol.  From the 
Scientific American interview...

Schaeffer and Fotland still predict Sedol will win the match. “I think the pro 
will win,” Fotland says, “But I think the pro will be shocked at how strong the 
program is.”

> 
> P.S. Lee Sedol says he was shocked, and never expected to lose, even
> when he was behind. I wonder if he did any special preparation for this
> match? (E.g. playing handicap games against other strong MCTS program,
> to appreciate how they behave when they have a lead.)
> 
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] AlphaGo won first game!

2016-03-09 Thread David Fotland
Many Faces thought AlphaGo was ahead for most of the game.  It looked to me like 
the turning point was when AlphaGo cut in the center and then gave up the two 
cutting stones for gains on both sides (but not so strong…).  

 

Congratulations Aja!

 

I watched it at Google in Mountain View with about 100 people.

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of Jim 
O'Flaherty
Sent: Tuesday, March 08, 2016 11:50 PM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] AlphaGo won first game!

 

Congratulations, AlphaGo and team. And by resignation! That's fantastic!

 

Anyone know where the tipping point was? Did Sedol get the endgame order just 
slightly off and AlphaGo took advantage? Or was there an earlier poor move by 
Sedol and/or a surprising (and good) move by AlphaGo? I'm WAY too weak a player 
to even make stupid guesses. Any links to in-depth analysis would be greatly 
appreciated!

 

On Wed, Mar 9, 2016 at 1:46 AM, René van de Veerdonk 
 wrote:

wow .. congrats to the AlphaGo team!!

 

On Tue, Mar 8, 2016 at 11:43 PM, Hiroshi Yamashita  wrote:

AlphaGo won 1st game against Lee Sedol!

Hiroshi Yamashita

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

 


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CPU vs GPU

2016-03-03 Thread David Fotland
If you are using Caffe, the network evaluator is single-threaded, but it spends 
almost all of its time in BLAS, which uses one thread per virtual CPU.  On a 
somewhat slower i7, I’m seeing about 200 ms per evaluation.
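Putting those latencies side by side (the 200 ms and 10 ms figures are the ones mentioned in this thread; the batch size and batch latency are assumptions, only to show why batching positions helps on the GPU):

cpu_ms_per_eval = 200.0      # one position at a time on the CPU
gpu_ms_per_eval = 10.0       # one position at a time on a GTX 980
assumed_batch = 16           # hypothetical batch evaluated in one GPU call
assumed_batch_ms = 25.0      # hypothetical latency for that whole batch

print("CPU evals/sec:           %.0f" % (1000 / cpu_ms_per_eval))
print("GPU evals/sec (batch=1): %.0f" % (1000 / gpu_ms_per_eval))
print("GPU evals/sec (batched): %.0f" % (assumed_batch * 1000 / assumed_batch_ms))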

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Rémi Coulom
> Sent: Wednesday, March 02, 2016 1:23 AM
> To: computer-go@computer-go.org
> Subject: Re: [Computer-go] CPU vs GPU
> 
> I tried Detlef's 54% NN on my machine. CPU = i7-5930K, GPU = GTX 980
> (not using cuDNN).
> 
> On the CPU, I get 176 ms time, and 10 ms on the GPU (IIRC, someone
> reported 6 ms with cuDNN). But it is using only one core on the CPU,
> whereas it is using the full GPU.
> 
> If this is correct, then I believe it is still possible to have a very
> strong CPU-based program.
> 
> Or is it possible to evaluate faster on the GPU by using a batch?
> 
> Rémi
> 
> On 03/02/2016 09:43 AM, Petr Baudis wrote:
> > Also, reading more of that pull request, the guy benchmarking it had
> > old nvidia driver version which came with about 50% performance hit.
> > So I'm not sure what were the final numbers.  (And whether current
> > caffe version can actually match these numbers, since this pull
> > request wasn't
> > merged.)
> >
> > On Wed, Mar 02, 2016 at 12:29:41AM -0800, Chaz G. wrote:
> >> Rémi,
> >>
> >> Nvidia launched the K20 GPU in late 2012. Since then, GPUs and their
> >> convolution algorithms have improved considerably, while CPU
> >> performance has been relatively stagnant. I would expect about a 10x
> >> improvement with
> >> 2016 hardware.
> >>
> >> When it comes to training, it's the difference between running a job
> >> overnight and running a job for the entire weekend.
> >>
> >> Best,
> >> -Chaz
> >>
> >> On Tue, Mar 1, 2016 at 1:03 PM, Rémi Coulom <remi.cou...@free.fr>
> wrote:
> >>
> >>> How tremendous is it? On that page, I find this data:
> >>>
> >>> https://github.com/BVLC/caffe/pull/439
> >>>
> >>> "
> >>> These are setup details:
> >>>
> >>>   * Desktop: CPU i7-4770 (Haswell), 3.5 GHz , DRAM - 16 GB; GPU K20.
> >>>   * Ubuntu 12.04; gcc 4.7.3; MKL 11.1.
> >>>
> >>> Test:: imagenet, 100 train iteration (batch = 256).
> >>>
> >>>   * GPU: time= 260 sec / memory = 0.8 GB
> >>>   * CPU: time= 752 sec / memory = 3.5 GiB //Memory data is from
> system
> >>> monitor.
> >>>
> >>> "
> >>>
> >>> This does not look so tremendous to me. What kind of speed
> >>> difference do you get for Go networks?
> >>>
> >>> Rémi
> >>>
> >>> On 03/01/2016 06:19 PM, Petr Baudis wrote:
> >>>
> >>>> On Tue, Mar 01, 2016 at 09:14:39AM -0800, David Fotland wrote:
> >>>>
> >>>>> Very interesting, but it should also mention Aya.
> >>>>>
> >>>>> I'm working on this as well, but I haven't bought any hardware
> >>>>> yet.  My goal is not to get 7 dan on expensive hardware, but to
> >>>>> get as much strength as I can on standard PC hardware.  I'll be
> >>>>> looking at much smaller nets, that don't need a GPU to run.  I'll
> have to buy a GPU for training.
> >>>>>
> >>>> But I think most people who play Go are also fans of computer games
> >>>> that often do use GPUs. :-)  Of course, it's something totally
> >>>> different from NVidia Keplers, but still the step up from a CPU is
> tremendous.
> >>>>
> >>>>  Petr Baudis
> >>>> ___
> >>>> Computer-go mailing list
> >>>> Computer-go@computer-go.org
> >>>> http://computer-go.org/mailman/listinfo/computer-go
> >>>>
> >>> ___
> >>> Computer-go mailing list
> >>> Computer-go@computer-go.org
> >>> http://computer-go.org/mailman/listinfo/computer-go
> >> ___
> >> Computer-go mailing list
> >> Computer-go@computer-go.org
> >> http://computer-go.org/mailman/listinfo/computer-go
> >
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Deep Learning learning resources?

2016-03-03 Thread David Fotland
I got the basics of machine learning (including sample neural nets) from Andrew 
Ng's course, two or three years ago.  I highly recommend it.  Lots of 
practical advice.  The rest came from reading papers and probably some online 
searches.  Amazon's Computer Vision team uses deep neural nets.  I've talked to 
some of them and attended some internal presentations.  I'm using Caffe, so I 
don’t need to implement the network code itself.  Mostly I looked for practical 
advice about network organizations that work.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Darren Cook
> Sent: Wednesday, March 02, 2016 1:53 AM
> To: computer-go@computer-go.org
> Subject: [Computer-go] Deep Learning learning resources?
> 
> I'm sure quite a few people here have suddenly taken a look at neural
> nets the past few months. With hindsight where have you learnt most?
> Which is the most useful book you've read? Is there a Udacity (or
> similar) course that you recommend? Or perhaps a blog or youtube series
> that was so good you went back and read/viewed all the archives?
> 
> Thanks!
> Darren
> 
> P.S. I was thinking pragmatic, and general, how-to guides for people
> dealing with challenging problems similar to computer go, but if you
> have recommendations for latest academic theories, or for a very
> specific field, I'm sure someone would appreciate hearing it.
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Deep Zen - do we have a race now?

2016-03-01 Thread David Fotland
I'll keep it in mind.  I'm using caffe, which has a compile-time flag, so I'm 
not sure it will work with GPU enabled on a machine without a GPU.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Michael Markefka
> Sent: Tuesday, March 01, 2016 2:21 PM
> To: computer-go@computer-go.org
> Subject: Re: [Computer-go] Deep Zen - do we have a race now?
> 
> Would be nice to have it as an option. My desktop PC and my laptop both
> have CUDA-enabled graphics, and that isn't uncommon anymore.
> 
> Also, if you are training on a GPU you can probably avoid a lot of
> hassle if you expect to run it on a GPU as well. I don't know how other
> NN implementations handle it, but the GPU-to-CPU conversion script that
> comes with the Theano-based pylearn2 kit doesn't work very reliably.
> 
> Also, even quite big nets probably can be run on modest GPUs reasonably
> well (within memory bounds). It's the training where the size really
> hurts.
> 
> On Tue, Mar 1, 2016 at 6:19 PM, Petr Baudis <pa...@ucw.cz> wrote:
> > On Tue, Mar 01, 2016 at 09:14:39AM -0800, David Fotland wrote:
> >> Very interesting, but it should also mention Aya.
> >>
> >> I'm working on this as well, but I haven't bought any hardware yet.
> My goal is not to get 7 dan on expensive hardware, but to get as much
> strength as I can on standard PC hardware.  I'll be looking at much
> smaller nets, that don't need a GPU to run.  I'll have to buy a GPU for
> training.
> >
> > But I think most people who play Go are also fans of computer games
> > that often do use GPUs. :-)  Of course, it's something totally
> > different from NVidia Keplers, but still the step up from a CPU is
> tremendous.
> >
> > Petr Baudis
> > ___
> > Computer-go mailing list
> > Computer-go@computer-go.org
> > http://computer-go.org/mailman/listinfo/computer-go
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Deep Zen - do we have a race now?

2016-03-01 Thread David Fotland
Very interesting, but it should also mention Aya.  

I'm working on this as well, but I haven’t bought any hardware yet.  My goal is 
not to get 7 dan on expensive hardware, but to get as much strength as I can on 
standard PC hardware.  I'll be looking at much smaller nets, that don’t need a 
GPU to run.  I'll have to buy a GPU for training.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of "Ingo Althöfer"
> Sent: Tuesday, March 01, 2016 8:45 AM
> To: computer-go@computer-go.org
> Subject: [Computer-go] Deep Zen - do we have a race now?
> 
> Read here:
> http://www.lifein19x19.com/forum/viewtopic.php?p=199532#p199532
> 
> Wonderfully exciting times!
> Ingo.
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] *****SPAM***** Re: Move evaluation by expected value, as product of expected winrate and expected points?

2016-02-23 Thread David Fotland
Working on it :)

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of "Ingo Althöfer"
> Sent: Tuesday, February 23, 2016 7:56 AM
> To: computer-go@computer-go.org
> Subject: *SPAM* Re: [Computer-go] Move evaluation by expected
> value, as product of expected winrate and expected points?
> 
> My 1.5 cent:
> 
> David Fotland has a nice score-estimator in his (old) ManyFaces bot.
> The score estimator is still from the days before the Monte Carlo
> version.
> 
> Perhaps David can improve on this estimator with the help of CNNs.
> 
> Ingo.
> 
> 
> 
> Sent: Tuesday, February 23, 2016 at 16:41
> From: "Justin .Gilmer" <jmgil...@gmail.com>
> To: computer-go@computer-go.org
> Subject: Re: [Computer-go] Move evaluation by expected value, as product of expected winrate and expected points?
> 
> I made a similar attempt as Alvaro to predict final ownership. You can
> find the code here: https://github.com/jmgilmer/GoCNN/. It's trained to
> predict final ownership for about 15000 professional games which were
> played until the end (didn't end in resignation). It gets about 80.5%
> accuracy on a held out test set, although the accuracy greatly varies
> based on how far through the game you are. Can't say how well it would
> work in a go player.  -Justin
> 
> On Tue, Feb 23, 2016 at 7:00 AM, <computer-go-requ...@computer-go.org> wrote:
> 
> Send Computer-go mailing list submissions to
> computer-go@computer-go.org
> 
> To subscribe or unsubscribe via the World Wide Web, visit
> http://computer-go.org/mailman/listinfo/computer-go
> or, via email, send a message with subject or body 'help' to
> computer-go-requ...@computer-go.org
> 
> You can reach the person managing the list at
> computer-go-ow...@computer-go.org
> 
> When replying, please edit your Subject line so it is more specific than
> "Re: Contents of Computer-go digest..."
> 
> 
> Today's Topics:
> 
>    1. Re: Congratulations to Zen! (Robert Jasiek)
>    2. Move evaluation by expected value, as product of expected winrate and expected points? (Michael Markefka)
>    3. Re: Move evaluation by expected value, as product of expected winrate and expected points? (Álvaro Begué)
>    4. Re: Move evaluation by expected value, as product of expected winrate and expected points? (Robert Jasiek)
> 
> 
> --
> 
> Message: 1
> Date: Mon, 22 Feb 2016 19:13:20 +0100
> From: Robert Jasiek <jas...@snafu.de>
> To: computer-go@computer-go.org
> Subject: Re: [Computer-go] Congratulations to Zen!
> Message-ID: <56cb4fc0.4010...@snafu.de>
> Content-Type: text/plain; charset=UTF-8; format=flowed
> 
> Aja, sorry to bother you with trivialities, but how does AlphaGo avoid
> power or network failures and such incidents?
> 
> --
> robert jasiek
> 
> 
> --
> 
> Message: 2
> Date: Tue, 23 Feb 2016 11:36:57 +0100
> From: Michael Markefka <michael.marke...@gmail.com>
> To: computer-go@computer-go.org
> Subject: [Computer-go] Move evaluation by expected value, as product of
> expected winrate and expected points?
> Message-ID: <CAJg7PAPU_gbHvNy3Cv+D-p238_hkqkv5pojxozjly4nsqas...@mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
> 
> Hello everyone,
> 
> in the wake of AlphaGo using a DCNN to predict expected winrate of a
> move, I've been wondering whether one could train a DCNN for expected
> territory or points successfully enough to be of some use (leaving the
> issue of win by resignation for a more in-depth discussion). And,
> whether winrate and expected territory (or points) always run in
> parallel or whether there are diverging moments.
> 
> Computer Go programs play what are considered slack or slow moves when
> ahead, sometimes being too conservative and giving away too much of
> their potential advantage. If expected points and expected winrate
> diverge, this could be a way to make the programs play in a more natural
> way, even if there were no strength increase to be gained.
> Then again there might be a parameter configuration that might yield
> some advantage and perhaps this configuration would need 

Re: [Computer-go] What hardware to use to train the DNN

2016-02-06 Thread David Fotland
Thanks, this is really interesting.  I still need something that works on 
Windows, and I use Many Faces to visualize what's going on, so I'll stick with 
windows for development.  I might use this for debugging on linux though.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Detlef Schmicker
> Sent: Saturday, February 06, 2016 1:04 AM
> To: computer-go@computer-go.org
> Subject: Re: [Computer-go] What hardware to use to train the DNN
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> Hi David,
> 
> I am not happy with my IDE on linux too. You might give Visual Studio on
> linux a try:
> 
> https://www.visualstudio.com/de-de/products/code-vs.aspx
> 
> It seems to be free...
> 
> Detlef
> 
> Am 05.02.2016 um 07:13 schrieb David Fotland:
> > I'll do training on Linux for performance, and because it is so much
> > easier to build than on Windows.  I need something I can ship to my
> > windows customers, that is light weight enough to play well without a
> > GPU.
> >
> >
> >
> > All of my testing and evaluation machines and tools are on Windows, so
> > I can t easily measure strength and progress on linux.  I m also not
> > eager to learn a new IDE.  I like Visual Studio.
> >
> >
> >
> > David
> >
> >
> >
> > From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
> > Behalf Of Petri Pitkanen Sent: Thursday, February 04, 2016 9:12 PM
> > To: computer-go Subject: Re: [Computer-go] What hardware to use to
> > train the DNN
> >
> >
> >
> > Well, David is making a product. Making a product is the 'trooper'
> > solution unless you are making a very specific product for a very narrow
> > target group willing to pay thousands for a single license.
> >
> > Petri
> >
> >
> >
> > 2016-02-04 23:50 GMT+02:00 uurtamo . <uurt...@gmail.com>:
> >
> > David,
> >
> >
> >
> > You're a trooper for doing this in windows. :)
> >
> >
> >
> > The OS overhead is generally lighter if you use unix; even the most
> > modern windows versions have a few layers of slowdown. Unix (for
> > better or worse) will give you closer, easier access to the hardware,
> > and closer, easier access to halting your machine if you are deep in
> > the guts. ;)
> >
> >
> >
> > s.
> >
> >
> >
> >
> >
> > On Tue, Feb 2, 2016 at 10:25 AM, David Fotland
> > <fotl...@smart-games.com> wrote:
> >
> > Detlef, Hiroshi, Hideki, and others,
> >
> > I have caffelib integrated with Many Faces so I can evaluate a DNN.
> > Thank you very much Detlef for sample code to set up the input layer.
> > Building caffe on windows is painful.  If anyone else is doing it and
> > gets stuck I might be able to help.
> >
> > What hardware are you using to train networks?  I don't have a
> > cuda-capable GPU yet, so I'm going to buy a new box.  I'd like some
> > advice.  Caffe is not well supported on Windows, so I plan to use a
> > Linux box for training, but continue to use Windows for testing and
> > development.  For competitions I could use either windows or linux.
> >
> > Thanks in advance,
> >
> > David
> >
> >> -Original Message- From: Computer-go
> >> [mailto:computer-go-boun...@computer-go.org] On Behalf Of Hiroshi
> >> Yamashita Sent: Monday, February 01, 2016 11:26 PM To:
> >> computer-go@computer-go.org Subject: *SPAM* Re:
> >> [Computer-go] DCNN can solve semeai?
> >>
> >> Hi Detlef,
> >>
> >> My study heavily depends on your information. Especially Oakfoam
> >> code, lenet.prototxt and generate_sample_data_leveldb.py was helpful.
> >> Thanks!
> >>
> >>> Quite interesting that you do not reach the prediction rate 57% from
> >>> the facebook paper by far too! I have the same experience with the
> >>
> >> I'm trying 12 layers 256 filters, but it is around 49.8%. I think 57%
> >> is maybe from KGS games.
> >>
> >>> Did you strip the games before 1800AD, as mentioned in the FB paper?
> >>> I did not do it and was thinking my training is not ok, but as you
> >>> have the same result probably this is the only difference?!
> >>
> >> I also did not use before 1800AD. And don't use handicap games.
> >> Training positions are 15693570 from 76000 games. Test
> >> positions are   445693 from  2156 games. All games are shuffled

Re: [Computer-go] What hardware to use to train the DNN

2016-02-06 Thread David Fotland
I'm not using it.  Many Faces is written in C (the GUI is in C++ with MFC).  I ported 
Caffe to Windows and I'm calling caffelib directly from mfgo.  I'm not training 
a net yet, so I haven’t decided what to do.  Most likely I will create the input 
database using C++ code in Many Faces, and train using Caffe.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Richard Lorentz
> Sent: Saturday, February 06, 2016 6:39 AM
> To: computer-go@computer-go.org
> Subject: Re: [Computer-go] What hardware to use to train the DNN
> 
> Thought I'd ask you this off line. Are you using Code:Blocks and finding
> it's crashing a lot recently? (That's my experience.)
> 
> -Richard
> 
> 
> On 02/06/2016 01:04 AM, Detlef Schmicker wrote:
> > I am not happy with my IDE on linux too.
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] What hardware to use to train the DNN

2016-02-04 Thread David Fotland
I’ll do training on Linux for performance, and because it is so much easier to 
build there than on Windows.  I need something I can ship to my Windows customers 
that is lightweight enough to play well without a GPU.

 

All of my testing and evaluation machines and tools are on Windows, so I can’t 
easily measure strength and progress on linux.  I’m also not eager to learn a 
new IDE.  I like Visual Studio.

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Petri Pitkanen
Sent: Thursday, February 04, 2016 9:12 PM
To: computer-go
Subject: Re: [Computer-go] What hardware to use to train the DNN

 

Well, David is making a product. Making a product is the 'trooper' solution unless 
you are making a very specific product for a very narrow target group willing to 
pay thousands for a single license.

Petri

 

2016-02-04 23:50 GMT+02:00 uurtamo . <uurt...@gmail.com>:

David,

 

You're a trooper for doing this in windows. :)

 

The OS overhead is generally lighter if you use unix; even the most modern 
windows versions have a few layers of slowdown. Unix (for better or worse) will 
give you closer, easier access to the hardware, and closer, easier access to 
halting your machine if you are deep in the guts. ;)

 

s.

 

 

On Tue, Feb 2, 2016 at 10:25 AM, David Fotland <fotl...@smart-games.com> wrote:

Detlef, Hiroshi, Hideki, and others,

I have caffelib integrated with Many Faces so I can evaluate a DNN.  Thank you 
very much, Detlef, for the sample code to set up the input layer.  Building caffe on 
Windows is painful.  If anyone else is doing it and gets stuck I might be able 
to help.

What hardware are you using to train networks?  I don’t have a CUDA-capable GPU 
yet, so I'm going to buy a new box.  I'd like some advice.  Caffe is not well 
supported on Windows, so I plan to use a Linux box for training, but continue 
to use Windows for testing and development.  For competitions I could use 
either Windows or Linux.

Thanks in advance,

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Hiroshi Yamashita
> Sent: Monday, February 01, 2016 11:26 PM
> To: computer-go@computer-go.org
> Subject: *SPAM* Re: [Computer-go] DCNN can solve semeai?
>
> Hi Detlef,
>
> My study heavily depends on your information. Especially Oakfoam code,
> lenet.prototxt and generate_sample_data_leveldb.py was helpful. Thanks!
>
> > Quite interesting that you do not reach the prediction rate 57% from
> > the facebook paper by far too! I have the same experience with the
>
> I'm trying 12 layers 256 filters, but it is around 49.8%.
> I think 57% is maybe from KGS games.
>
> > Did you strip the games before 1800AD, as mentioned in the FB paper? I
> > did not do it and was thinking my training is not ok, but as you have
> > the same result probably this is the only difference?!
>
> I also did not use before 1800AD. And don't use handicap games.
> Training positions are 15693570 from 76000 games.
> Test positions are   445693 from  2156 games.
> All games are shuffled in advance. Each position is randomly rotated.
> And memorizing 24000 positions, then shuffle and store to LevelDB.
> At first I did not shuffle games. Then accuracy is down each 61000
> iteration (one epoch, 256 mini-batch).
> http://www.yss-aya.com/20160108.png
> It means DCNN understands easily the difference 1800AD games and  2015AD
> games. I was surprised DCNN's ability. And maybe 1800AD games  are also
> not good for training?
>
> Regards,
> Hiroshi Yamashita
>
> - Original Message -
> From: "Detlef Schmicker" <d...@physik.de>
> To: <computer-go@computer-go.org>
> Sent: Tuesday, February 02, 2016 3:15 PM
> Subject: Re: [Computer-go] DCNN can solve semeai?
>
> > Thanks a lot for sharing this.
> >
> > Quite interesting that you do not reach the prediction rate 57% from
> > the facebook paper by far too! I have the same experience with the
> > GoGoD database. My numbers are nearly the same as yours 49% :) my net
> > is quite similar, but I use 7,5,5,3,3, with 12 layers in total.
> >
> > Did you strip the games before 1800AD, as mentioned in the FB paper? I
> > did not do it and was thinking my training is not ok, but as you have
> > the same result probably this is the only difference?!
> >
> > Best regards,
> >
> > Detlef
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

 


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] *****SPAM***** Re: Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-02-02 Thread David Fotland
Robert, please consider some of this as the difference between math and 
engineering.  Math desires rigor.  Engineering desires working solutions.  When 
an engineering solution is being described, you shouldn't expect the same level 
of rigor as in a mathematical proof.  Often all we can say is something like, 
"I tried a bunch of things, and this one worked best".  Both have value.

-David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Robert Jasiek
> Sent: Tuesday, February 02, 2016 3:11 AM
> To: computer-go@computer-go.org
> Subject: *SPAM* Re: [Computer-go] Mastering the Game of Go with
> Deep Neural Networks and Tree Search
> 
> On 02.02.2016 11:49, Petr Baudis wrote:
> > you seem to come off as perhaps a little too aggressive in your recent
> > few emails...
> 
> If I were not aggressively critical about inappropriate ambiguity, it
> would continue for further decades. Papers containing mathematical
> contents must clarify when something whose use or annotation looks
> mathematical is not a definition / well-defined term but intentionally
> ambiguous. This clarity is a fundamental of mathematical, informatical
> or scientific research. Without clarity, progress is delayed. Every
> professor at university will confirm this to you.
> 
> >The question was about the practical implementation of an MC
> > simulation, which does *not* require formal definitions of all
> > concepts used in the description, or any proofs.  It's just a
> > heuristic, and it can be arbitrarily complicated, making a tradeoff
> > between speed and accuracy.
> 
> Fine, provided it is clearly stated that it is an ambiguous heuristic
> and not an [unambiguous] definition / term. References / links (possibly
> iterative) hiding ambiguity without declaring it are inappropriate.
> 
> --
> robert jasiek
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] *****SPAM***** Re: Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-02-02 Thread David Fotland
Amazon uses deep neural nets in many, many areas.  There is some overlap with 
the kind of nets used in AlphaGo.  I passed a link to the paper on to one of 
our researchers and he found it very interesting.  DNN works very well when 
there is a lot of labelled data to learn from.  It can be useful to examine a 
problem area from the point of view: where can I get the most labelled data?

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of "Ingo Althöfer"
> Sent: Tuesday, February 02, 2016 12:31 AM
> To: computer-go@computer-go.org
> Subject: *SPAM* Re: [Computer-go] Mastering the Game of Go with
> Deep Neural Networks and Tree Search
> 
> Hi George,
> 
> welcome, and thanks for your valuable hint on the Google-whitepaper.
> 
> Do/did you have/see any cross-relations between your research and
> computer Go?
> 
> Cheers, Ingo.
> 
> 
> Sent: Tuesday, 02 February 2016, 05:14 From: "George Dahl"
>  To: computer-go 
> Subject: Re: [Computer-go] Mastering the Game of Go with Deep Neural
> Networks and Tree Search
> 
> If anything, the other great DCNN applications predate the application
> of these methods to Go. Deep neural nets (convnets and other types) have
> been successfully applied in computer vision, robotics, speech
> recognition, machine translation, natural language processing, and hosts
> of other areas. The first paragraph of the TensorFlow whitepaper
> (http://download.tensorflow.org/paper/whitepaper2015.pdf) even mentions
> dozens at Alphabet specifically.
> 
> Of course the future will hold even more exciting applications, but
> these techniques have been proven in many important problems long before
> they had success in Go and they are used by many different companies and
> research groups. Many example applications from the literature or at
> various companies used models trained on a single machine with GPUs.
> 
> On Mon, Feb 1, 2016 at 12:00 PM, Hideki Kato
> wrote: Ingo Althöfer:
> >Hi Hideki,
> >
> >first of all congrats to the nice performance of Zen over the weekend!
> >
> >> Ingo and all,
> >> Why you care AlphaGo and DCNN so much?
> >
> >I can speak only for myself. DCNNs may be not only applied to achieve
> >better playing strength. One may use them to create playing styles, or
> >bots for go variants.
> >
> >One of my favorites is robot frisbee go.
>http://www.althofer.de/robot-play/frisbee-robot-go.jpg
> >Perhaps one can teach robots with DCNN to throw the disks better.
> >
> >And my expectation is: During 2016 we will see many more fantastic
> >applications of DCNN, not only in Go. (Olivier had made a similar
> >remark already.)
> 
> Agree but one criticism.  If such great DCNN applications all need huge
> machine power like AlphaGo (upon execution, not training), then the
> technology is hard to apply to many areas, autos and robots, for
> examples.  Are DCNN chips the only way to reduce computational cost?  I
> don't forecast other possibilities.
> Much more economical methods should be developed anyway.
> #Our brain consumes less than 100 watt.
> 
> Hideki
> 
> >Ingo.
> >
> >PS. Dietmar Wolz, my partner in space trajectory design, just told me
> >that in his company they started woth deep learning...
> >___
> >Computer-go mailing list
>Computer-go@computer-go.org
>http://computer-go.org/mailman/listinfo/computer-go
> --
> Hideki Kato 
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] What hardware to use to train the DNN

2016-02-02 Thread David Fotland
Detlef, Hiroshi, Hideki, and others,

I have caffelib integrated with Many Faces so I can evaluate a DNN.  Thank you 
very much, Detlef, for the sample code to set up the input layer.  Building caffe on 
Windows is painful.  If anyone else is doing it and gets stuck I might be able 
to help.

What hardware are you using to train networks?  I don’t have a CUDA-capable GPU 
yet, so I'm going to buy a new box.  I'd like some advice.  Caffe is not well 
supported on Windows, so I plan to use a Linux box for training, but continue 
to use Windows for testing and development.  For competitions I could use 
either Windows or Linux.

Thanks in advance,

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Hiroshi Yamashita
> Sent: Monday, February 01, 2016 11:26 PM
> To: computer-go@computer-go.org
> Subject: *SPAM* Re: [Computer-go] DCNN can solve semeai?
> 
> Hi Detlef,
> 
> My study heavily depends on your information. Especially Oakfoam code,
> lenet.prototxt and generate_sample_data_leveldb.py was helpful. Thanks!
> 
> > Quite interesting that you do not reach the prediction rate 57% from
> > the facebook paper by far too! I have the same experience with the
> 
> I'm trying 12 layers 256 filters, but it is around 49.8%.
> I think 57% is maybe from KGS games.
> 
> > Did you strip the games before 1800AD, as mentioned in the FB paper? I
> > did not do it and was thinking my training is not ok, but as you have
> > the same result probably this is the only difference?!
> 
> I also did not use before 1800AD. And don't use handicap games.
> Training positions are 15693570 from 76000 games.
> Test positions are   445693 from  2156 games.
> All games are shuffled in advance. Each position is randomly rotated.
> And memorizing 24000 positions, then shuffle and store to LevelDB.
> At first I did not shuffle games. Then accuracy is down each 61000
> iteration (one epoch, 256 mini-batch).
> http://www.yss-aya.com/20160108.png
> It means DCNN understands easily the difference 1800AD games and  2015AD
> games. I was surprised DCNN's ability. And maybe 1800AD games  are also
> not good for training?
> 
> Regards,
> Hiroshi Yamashita
> 
> - Original Message -
> From: "Detlef Schmicker" 
> To: 
> Sent: Tuesday, February 02, 2016 3:15 PM
> Subject: Re: [Computer-go] DCNN can solve semeai?
> 
> > Thanks a lot for sharing this.
> >
> > Quite interesting that you do not reach the prediction rate 57% from
> > the facebook paper by far too! I have the same experience with the
> > GoGoD database. My numbers are nearly the same as yours 49% :) my net
> > is quite similar, but I use 7,5,5,3,3, with 12 layers in total.
> >
> > Did you strip the games before 1800AD, as mentioned in the FB paper? I
> > did not do it and was thinking my training is not ok, but as you have
> > the same result probably this is the only difference?!
> >
> > Best regards,
> >
> > Detlef
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] *****SPAM***** Re: What hardware to use to train the DNN

2016-02-02 Thread David Fotland
How long does it take to train one of your nets?  Is it safe to assume that 
training time is roughly proportional to the number of neurons in the net?

Thanks,

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Detlef Schmicker
> Sent: Tuesday, February 02, 2016 10:35 AM
> To: computer-go@computer-go.org
> Subject: *SPAM* Re: [Computer-go] What hardware to use to train
> the DNN
> 
> 
> Hi David,
> 
> I use Ubuntu 14.04 LTS with a NVIDIA GTX970 Graphic card (and i7-4970k,
> but this is not important for training I think) and installed CUDNN v4
> (important, at least a factor 4 in training speed).
> 
> This Ubuntu version is officially supported by Cuda and I did only have
> minor problems if an Ubuntu update updated the graphics driver: I had 2
> times in the last year to reinstall cuda (a little ugly, as the graphic
> driver did not work after the update and you had to boot into command
> line mode).
> 
> Detlef
> 
> On 02.02.2016 19:25, David Fotland wrote:
> > Detlef, Hiroshi, Hideki, and others,
> >
> > I have caffelib integrated with Many Faces so I can evaluate a DNN.
> > Thank you very much Detlef for sample code to set up the input layer.
> > Building caffe on windows is painful.  If anyone else is doing it and
> > gets stuck I might be able to help.
> >
> > What hardware are you using to train networks?  I don't have a
> > cuda-capable GPU yet, so I'm going to buy a new box.  I'd like some
> > advice.  Caffe is not well supported on Windows, so I plan to use a
> > Linux box for training, but continue to use Windows for testing and
> > development.  For competitions I could use either windows or linux.
> >
> > Thanks in advance,
> >
> > David
> >
> >> -Original Message- From: Computer-go
> >> [mailto:computer-go-boun...@computer-go.org] On Behalf Of Hiroshi
> >> Yamashita Sent: Monday, February 01, 2016 11:26 PM To:
> >> computer-go@computer-go.org Subject: *SPAM* Re:
> >> [Computer-go] DCNN can solve semeai?
> >>
> >> Hi Detlef,
> >>
> >> My study heavily depends on your information. Especially Oakfoam
> >> code, lenet.prototxt and generate_sample_data_leveldb.py was helpful.
> >> Thanks!
> >>
> >>> Quite interesting that you do not reach the prediction rate 57% from
> >>> the facebook paper by far too! I have the same experience with the
> >>
> >> I'm trying 12 layers 256 filters, but it is around 49.8%. I think 57%
> >> is maybe from KGS games.
> >>
> >>> Did you strip the games before 1800AD, as mentioned in the FB paper?
> >>> I did not do it and was thinking my training is not ok, but as you
> >>> have the same result probably this is the only difference?!
> >>
> >> I also did not use before 1800AD. And don't use handicap games.
> >> Training positions are 15693570 from 76000 games. Test
> >> positions are   445693 from  2156 games. All games are shuffled
> >> in advance. Each position is randomly rotated. And memorizing
> >> 24000 positions, then shuffle and store to LevelDB. At first I did
> >> not shuffle games. Then accuracy is down each 61000 iteration (one
> >> epoch, 256 mini-batch). http://www.yss-aya.com/20160108.png
> >> It means DCNN understands easily the difference 1800AD games and
> >> 2015AD games. I was surprised DCNN's ability. And maybe 1800AD games
> >> are also not good for training?
> >>
> >> Regards, Hiroshi Yamashita
> >>
> >> - Original Message - From: "Detlef Schmicker"
> >> <d...@physik.de> To: <computer-go@computer-go.org> Sent: Tuesday,
> >> February 02, 2016 3:15 PM Subject: Re: [Computer-go] DCNN can solve
> >> semeai?
> >>
> >>> Thanks a lot for sharing this.
> >>>
> >>> Quite interesting that you do not reach the prediction rate 57% from
> >>> the facebook paper by far too! I have the same experience with the
> >>> GoGoD database. My numbers are nearly the same as yours 49% :) my
> >>> net is quite similar, but I use 7,5,5,3,3,
> >>> with 12 layers in total.
> >>>
> >>> Did you strip the games before 1800AD, as mentioned in the FB paper?
> >>> I did not do it and was thinking my training is not ok, but as you
> >>> have the same result probably this is the only difference?!
> >>>

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-01-27 Thread David Fotland
Google’s breakthrough is just as impactful as the invention of MCTS.  
Congratulations to the team.  It’s a huge leap for computer go, but more 
importantly it shows that DNN can be applied to many other difficult problems.

 

I just added an answer.  I don’t think anyone will try to exactly replicate it, 
but a year from now there should be several strong programs using very similar 
techniques, with similar strength.

 

An interesting question is, who has integrated or is integrating a DNN into 
their go program?  I’m working on it.  I know there are several others.

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Jason Li
Sent: Wednesday, January 27, 2016 3:14 PM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks 
and Tree Search

 

Congratulations to Aja!

 

A question to the community. Is anyone going to replicate the experimental 
results?

 

https://www.quora.com/Is-anyone-replicating-the-experimental-results-of-the-human-level-Go-player-published-by-Google-Deepmind-in-Nature-in-January-2016?

 

Jason

 

On Thu, Jan 28, 2016 at 9:26 AM, Erik van der Werf  
wrote:

Wow, excellent results, congratulations Aja & team!

 

I'm surprised to see nothing explicitly on decomposing into subgames (e.g. for 
semeai). I always thought some kind of adaptive decomposition would be needed 
to reach pro-strength... I guess you must have looked into this; does this mean 
that the networks have learnt to do it by themselves? Or perhaps they play in a 
way that simply avoids their weaknesses? 

 

Would be interesting to see a demonstration that the networks have learned the 
semeai rules through reinforcement learning / self-play :-) 

 

Best,

Erik

 

 

On Wed, Jan 27, 2016 at 7:46 PM, Aja Huang  wrote:

Hi all,

 

We are very excited to announce that our Go program, AlphaGo, has beaten a 
professional player for the first time. AlphaGo beat the European champion Fan 
Hui by 5 games to 0. We hope you enjoy our paper, published in Nature today. 
The paper and all the games can be found here: 

 

http://www.deepmind.com/alpha-go.html

 

AlphaGo will be competing in a match against Lee Sedol in Seoul, this March, to 
see whether we finally have a Go program that is stronger than any human! 

 

Aja

 

PS I am very busy preparing AlphaGo for the match, so apologies in advance if I 
cannot respond to all questions about AlphaGo.

 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

 


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Those were the days ...

2015-12-28 Thread David Fotland
The old program had a fixed search, so it wouldn’t get any stronger on faster 
hardware.

Martin understood computer go weaknesses, so he played to exploit them.  A 
modern program not specifically tuned for those weaknesses would have a much 
more difficult time.

I released version 10 in 1997, so it might be more accurate to play against 
version 10 rather than against version 12 at 12 kyu level.  I probably still 
have a version 10 CD somewhere.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of
> "Ingo Althöfer"
> Sent: Monday, December 28, 2015 7:07 AM
> To: computer-go@computer-go.org
> Subject: Re: [Computer-go] Those were the days ...
> 
> Hello,
> 
> >  > Remember 1998: In the US Go Congress an exhibition match  > took
> > place: 5-dan Martin Mueller against Many Faces of Go.
> >  > Martin gave 29 handicap stones - and won "handily".
> >  > 29 stones - can you believe it?
> >
> > Martin Mueller, 5d, won an H-29 game against a 1998 go-program that
> > ran run on a 1998 computer. So, in order to reproduce the playing
> > strength of the old program and get a fair comparison it should again
> > run on an old machine while the modern go-programs use today's hardware.
> > - Michael.
> 
> I discussed your point in depth with David Fotland (father of MFoG).
> Back in 1998, MFoG had long thinking time (2 hours for all) in the game.
> In the current MFoG vesion the 12-kyu level plays quickly and is - according
> to David - comparable in strength with the 1998 long thinker.
> 
> Ingo.
> 
> 
> 
> > ___
> > Computer-go mailing list
> > Computer-go@computer-go.org
> > http://computer-go.org/mailman/listinfo/computer-go
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Facebook Go AI

2015-11-23 Thread David Fotland
1 kyu on KGS with no search is pretty impressive.  Perhaps Darkforest2 is too 
slow.

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of Andy
Sent: Monday, November 23, 2015 9:48 AM
To: computer-go
Subject: Re: [Computer-go] Facebook Go AI

 

As of about an hour ago darkforest and darkfores1 have started playing rated 
games on KGS!

 

 

2015-11-23 11:28 GMT-06:00 Andy :

So the KGS bots darkforest and darkfores1 play with only DCNN, no MCTS search 
added? I wish they would put darkfores2 with MCTS on KGS, why not put your 
strongest bot out there?

 

 

 

 

2015-11-23 10:38 GMT-06:00 Petr Baudis :

The numbers look pretty impressive! So this DNN is as strong as
a full-fledged MCTS engine with non-trivial thinking time. The increased
supervision is a nice idea, but even barring that this seems like quite
a boost to the previously published results?  Surprising that this is
just thanks to relatively simple tweaks to representations and removing
features... (Or is there anything important I missed?)

I'm not sure what's the implementation difference between darkfores1 and
darkfores2, it's a bit light on detail given how huge the winrate delta
is, isn't it? ("we fine-tuned the learning rate")  Hopefully peer review
will help.

Do I understand it right that in the tree, they sort moves by their
probability estimate, keep only moves whose probability sum up to
0.8, prune the rest and use just plain UCT with no priors afterwards?
The result with +MCTS isn't at all convincing - it just shows that
MCTS helps strength, which isn't so surprising, but the extra thinking
time spent corresponds to about 10k->150k playouts increase in Pachi,
which may not be a good trade for +27/4.5/1.2% winrate increase.


On Mon, Nov 23, 2015 at 09:54:37AM +0100, Rémi Coulom wrote:
> It is darkforest, indeed:
>
> Title: Better Computer Go Player with Neural Network and Long-term
> Prediction
>
> Authors: Yuandong Tian, Yan Zhu
>
> Abstract:
> Competing with top human players in the ancient game of Go has been a
> long-term goal of artificial intelligence. Go's high branching factor makes
> traditional search techniques ineffective, even on leading-edge hardware,
> and Go's evaluation function could change drastically with one stone change.
> Recent works [Maddison et al. (2015); Clark & Storkey (2015)] show that
> search is not strictly necessary for machine Go players. A pure
> pattern-matching approach, based on a Deep Convolutional Neural Network
> (DCNN) that predicts the next move, can perform as well as Monte Carlo Tree
> Search (MCTS)-based open source Go engines such as Pachi [Baudis & Gailly
> (2012)] if its search budget is limited. We extend this idea in our bot
> named darkforest, which relies on a DCNN designed for long-term predictions.
> Darkforest substantially improves the win rate for pattern-matching
> approaches against MCTS-based approaches, even with looser search budgets.
> Against human players, darkforest achieves a stable 1d-2d level on KGS Go
> Server, estimated from free games against human players. This substantially
> improves the estimated rankings reported in Clark & Storkey (2015), where
> DCNN-based bots are estimated at 4k-5k level based on performance against
> other machine players. Adding MCTS to darkforest creates a much stronger
> player: with only 1000 rollouts, darkforest+MCTS beats pure darkforest 90%
> of the time; with 5000 rollouts, our best model plus MCTS beats Pachi with
> 10,000 rollouts 95.5% of the time.
>
> http://arxiv.org/abs/1511.06410

--
Petr Baudis
If you have good ideas, good data and fast computers,
you can do almost anything. -- Geoffrey Hinton
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

 

 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Strong engine that maximizes score

2015-11-17 Thread David Fotland
The non-MCTS levels of Many Faces try to maximize score, with some bias toward 
safety when ahead.  The non-MCTS version uses dynamic komi to avoid giving up 
points in the endgame, but this is not in the version 12 released engine.

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Chun Sun
Sent: Tuesday, November 17, 2015 7:15 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Strong engine that maximizes score

 

taking my question back. the answer is no in the context of mcts.

 

On Tue, Nov 17, 2015 at 10:10 AM, Chun Sun  wrote:

Is the last requirement equivalent to dynamic komi?

 

On Tue, Nov 17, 2015 at 9:49 AM, Darren Cook  wrote:

> I am trying to create a database of games to do some machine-learning
> experiments. My requirements are:
>  * that all games be played by the same strong engine on both sides,
>  * that all games be played to the bitter end (so everything on the board
> is alive at the end), and
>  * that both sides play trying to maximize score, not winning probability.

GnuGo might fit the bill, for some definition of strong. Or Many Faces,
on the level that does not use MCTS.

Sticking with MCTS, you'd have to use komi adjustments: first find two
extreme values that give each side a win, then use a binary-search-like
algorithm to narrow it down until you find the correct value for komi
for that position. This will take approx 10 times longer than normal
MCTS, for the same strength level.

(I'm not sure if this is what Pachi is doing?)

Darren
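
A sketch of the komi bisection Darren describes, assuming a hypothetical hook 
winrate_at_komi() that runs an MCTS search with the given komi and returns 
Black's win probability; this is not taken from any particular program.

  // Hypothetical engine hook: run a search at the given komi and return
  // Black's win probability in [0, 1].
  double winrate_at_komi(double komi);

  // Binary search for the komi at which the position is roughly even.
  // Precondition (the "two extreme values"): Black clearly wins at komi=lo
  // and clearly loses at komi=hi.
  double estimate_fair_komi(double lo, double hi, int iterations) {
    for (int i = 0; i < iterations; ++i) {
      double mid = 0.5 * (lo + hi);
      if (winrate_at_komi(mid) > 0.5)
        lo = mid;   // Black still winning: the fair komi is higher
      else
        hi = mid;   // Black losing: the fair komi is lower
    }
    return 0.5 * (lo + hi);
  }

Each probe is a full search, which is where the roughly 10x cost comes from.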


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

 

 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Strong engine that maximizes score

2015-11-17 Thread David Fotland
Attempting to maximize the score is not compatible with being a strong engine.  
If you want a dan-level engine, it has to maximize win probability.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of
> Darren Cook
> Sent: Tuesday, November 17, 2015 6:49 AM
> To: computer-go@computer-go.org
> Subject: Re: [Computer-go] Strong engine that maximizes score
> 
> > I am trying to create a database of games to do some machine-learning
> > experiments. My requirements are:
> >  * that all games be played by the same strong engine on both sides,
> >  * that all games be played to the bitter end (so everything on the
> > board is alive at the end), and
> >  * that both sides play trying to maximize score, not winning probability.
> 
> GnuGo might fit the bill, for some definition of strong. Or Many Faces, on
> the level that does not use MCTS.
> 
> Sticking with MCTS, you'd have to use komi adjustments: first find two
> extreme values that give each side a win, then use a binary-search-like
> algorithm to narrow it down until you find the correct value for komi for
> that position. This will take approx 10 times longer than normal MCTS, for
> the same strength level.
> 
> (I'm not sure if this is what Pachi is doing?)
> 
> Darren
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mylin Valley The World Computer Weiqi Tournament

2015-11-14 Thread David Fotland
Yu Bin won his game against Dolbaram.  The second official pro game is 
happening now.   51wq.lianzhong.com/yidongwq

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
fotl...@smart-games.com
Sent: Saturday, November 14, 2015 1:35 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Mylin Valley The World Computer Weiqi Tournament

 

Now there are some friendly games, on fast hardware, one hour each time limit.

 

Yu Bin 9P is playing Dol Baram with 5 stones.  The game is so complex I can't 
tell who is winning.

Tang Yi 2P is playing Zen with 5 stones and the game is almost over.  She is 
losing.  Zen won by 12 points.

Many Faces is playing a very strong amateur, taking two stones, and is winning 
by about 20 points.

 

David



On Fri, 13 Nov 2015 13:22:48 +, Aja Huang  wrote:

Congratulations to Dol Baram!  

  

I was wondering if Dol Baram's author is reading this list and if he could 
kindly give a brief description on his main approaches? From my observation Dol 
Baram's style is quite human-like and it reads very well in life-and-death 
situations. I suspect Dol Baram combines a life-and-death solver with the main 
search. 

 

Aja 

 

On Fri, Nov 13, 2015 at 12:30 PM, Rémi Coulom  wrote:

Thanks Hiroshi. This seems to be a more recent post:

http://51wq.lianzhong.com/Home/NewsDetails?newsID=546 


Congratulations to Dol Baram!

Rémi 



On 11/13/2015 01:17 PM, Hiroshi Yamashita wrote:

Hi,

It seems DolBaram won. (from last photo on web)

1st DolBaram
2nd Zen
3rd ManyFaces of Go
4th Ray
http://51wq.lianzhong.com/Home/NewsDetails?newsID=539 


Hiroshi Yamashita

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go 


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go 

 



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go 

 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] AMAF/RAVE + heavy playouts - is it save?

2015-11-03 Thread David Fotland
Many Faces of Go doesn’t use Remi’s playout policy and I don’t think Zen does 
either.  I don’t think Remi’s and Mogo’s are similar either, since they were in 
some ways competing developments.  The bias issue is very real, so as you add 
knowledge to the playouts you have to be careful to add (for example) both 
attack and defense moves in a situation.

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Tobias Pfeiffer
Sent: Tuesday, November 03, 2015 12:39 PM
To: r...@ffles.com; computer-go@computer-go.org
Subject: Re: [Computer-go] AMAF/RAVE + heavy playouts - is it save?

 

This helps very much, thank you for taking the time to answer!

You might be looking for "Combining Online and Offline Knowledge in UCT" 
[1] by Gelly and Silver. Silver and Tesauro reference it in "Monte-Carlo Simulation 
Balancing" [2] with "Unfortunately, a stronger simulation policy can actually 
lead to a weaker Monte-Carlo search (Gelly & Silver, 2007), a paradox that we 
explore further in this paper."

I'll make it a priority to read both papers in detail thank you! If you meant 
another paper, someone else knows one I'm happy to see more references.

Thanks!
Tobi


[1] http://www.machinelearning.org/proceedings/icml2007/papers/387.pdf
[2] http://www.machinelearning.org/archive/icml2009/papers/500.pdf



On 03.11.2015 21:03, robertfinkng...@o2.co.uk wrote:

You have to be careful what heuristics you apply. This was a surprising result: 
using a playout policy which in itself is a stronger go player can actually 
make MCTS/AMAF weaker. The reason is that MCTS depends entirely on accurate 
estimations of the value of each position in the tree. Any playout policy which 
introduces a bias therefore weakens MCTS. It may increase precision (lower 
standard deviation) but gives a less accurate assessment of the value (an 
incorrect mean). Most playouts at the moment (at least published ones) are 
based on Remi's Mogo playout policy, which increases precision without 
sacrificing accuracy.

There's a really nice diagram in one of David Silver's papers illustrating the 
effect that bias can have on playouts. As soon as you see it you understand the 
problem. Unfortunately I don't have it to hand and have unfortunately run out 
of time looking for it, otherwise I'd reference it. Hopefully somebody else can 
give the reference. I suspect David probably co-authored the paper in which 
case apologies to the other author for not crediting them here!

I hope this helps

Regards

Raffles

On 03-Nov-15 19:38, Tobias Pfeiffer wrote:

Hi everyone,
 
I haven't yet caught up on most recent go papers. If what I ask is
answered in one of these, please point there.
 
It seems everyone is using quite heavy playouts these days (nxn
patterns, atari escapes, opening libraries, lots of stuff that I don't
know yet, ...) - my question is how does that mix with AMAF/RAVE? I
remember from the early papers, that they said it'd be dangerous to do
it with non random playouts and that they shouldn't have too much logic.
 
Which, well, makes sense (to me) because the argument is that we play
random moves so they are order independent. With patterns that doesn't
hold true anymore.
 
What's the experience out there? Does it just still work? Does it not
matter because you just "warm up" the tree? Or do you need to be careful
with what heuristics you apply not too break RAVE/AMAF?
 
Thank you!
Tobi
 






___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go













___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go





-- 
www.pragtob.info
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Number of 3x3 patterns

2015-11-03 Thread David Fotland
Many Faces of Go has 2052 3x3 patterns.  All have an empty point in the center.  
One value is used for all the illegal patterns, so there are 2051 valid 
patterns.  I use Aja’s idea of including in the pattern the Atari status of 
zero to four adjacent groups.  That’s why it’s more than Álvaro’s 1107.

 

There is no reason to iterate over all patterns.  Just iterate over one 
representative of each set of patterns that are identical under rotation and 
reflection.  One easy way to find the canonical pattern is to calculate hashes 
for all rotations and reflections and choose the smallest one as the pattern 
ID.  I use a table to map the pattern IDs to a set of consecutive indices, 
0-1251.
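
To make the canonical-hash idea concrete, here is a small sketch for a full 
3x3 neighbourhood away from the edge, with the centre assumed empty.  The 
base-3 packing and the neighbour ordering are mine, and the extra atari-status 
bits described above are left out.

  #include <algorithm>
  #include <array>
  #include <cstdint>

  // Neighbour order around the (empty) centre:  0 1 2 / 3 . 4 / 5 6 7
  // Each entry is 0 = empty, 1 = black, 2 = white.
  // Each row below is one of the 8 rotations/reflections, giving the source
  // index for each destination index.
  static const int kTransforms[8][8] = {
    {0,1,2,3,4,5,6,7},  // identity
    {5,3,0,6,1,7,4,2},  // rotate 90
    {7,6,5,4,3,2,1,0},  // rotate 180
    {2,4,7,1,6,0,3,5},  // rotate 270
    {2,1,0,4,3,7,6,5},  // mirror left-right
    {5,6,7,3,4,0,1,2},  // mirror top-bottom
    {0,3,5,1,6,2,4,7},  // transpose
    {7,4,2,6,1,5,3,0},  // anti-transpose
  };

  static uint32_t pack(const std::array<int,8>& n) {
    uint32_t id = 0;
    for (int i = 0; i < 8; ++i) id = id * 3 + n[i];  // base-3 packing
    return id;
  }

  // Canonical pattern ID: the smallest packed value over all 8 symmetries.
  uint32_t canonical_id(const std::array<int,8>& n) {
    uint32_t best = 0xFFFFFFFFu;
    for (const auto& t : kTransforms) {
      std::array<int,8> m;
      for (int i = 0; i < 8; ++i) m[i] = n[t[i]];
      best = std::min(best, pack(m));
    }
    return best;
  }

Enumerating all 3^8 interior neighbourhoods and collecting the distinct 
canonical IDs gives the symmetry-reduced pattern set; a second table then maps 
those IDs to consecutive indices.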

 

 

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of Jim 
O'Flaherty
Sent: Tuesday, November 03, 2015 11:35 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Number of 3x3 patterns

 

Ah. That makes sense. It's a pattern centered on a possible next move. Very 
cool. Tysvm for explaining.

 

On Tue, Nov 3, 2015 at 1:33 PM, Detlef Schmicker  wrote:




> On 03.11.2015 20:24, Jim O'Flaherty wrote:
> I don't see how "leave the center empty" works as a valid case,
> assuming this it just any valid 3x3 window on the board. Given bots
> playing each other, there can be 9x9 clumps of a stone of the same
> color. I can see it being argued there is no computational value in
> this specific pattern instance. But, then what are the conditions
> of the exceptions to the generalization? And how do you effectively
> iterate through the other +20,000 variations (not reduced by
> location or color symmetry)?
>
> So, I'm curious, is there some other assumption about the 3x3
> window other than it be a view into any valid 3x3 space on a Go
> board?

Sorry, I did not explain the details, the assumption is:
I play in the middle, so it must be empty. I thought legal moves might
not really reduce the number of 3x3 patterns, as there can be no
suicide known from 3x3 patterns, as a capture is always possible.

Therefore I wonder, what 14 patterns did not appear in my 4 games
harvested:)


>
> On Tue, Nov 3, 2015 at 1:04 PM, Álvaro Begué
>  wrote:
>
>> I get 1107 (954 in the middle + 135 on the edge + 18 on a
>> corner).
>>
>> Álvaro.
>>
>>
>>
>> On Tue, Nov 3, 2015 at 2:00 PM, Detlef Schmicker 
>> wrote:
>>

> Thanks, but I need them reduced by reflection and rotation
> symmetries (and leave the center empty so 3^8 + 3^5 + 3^3 and then
> reduce)
>
>
>
> Am 03.11.2015 um 19:32 schrieb Gonçalo Mendes Ferreira:
> If you are considering only black stone, white, empty and
> border, ignoring symmetry, wouldn't it be
>
> 3^9 + 3^6 + 3^4
>
> 3^9 for patterns away from the border, 3^6 for near the
> sides and 3^4 near the corners, assuming you are also
> interested in the center value.
>
> This makes 20493, then you need to take out illegal
> patterns (surrounded middle stone). So I'd hint it's close
> to 2.
>
> On 03/11/2015 18:17, Detlef Schmicker wrote: I could not
> find the number of 3x3 patterns in Go, if used all
> symmetrie s.
>
> Can anybody give me a hint, were to find. Harvesting 4
> games I get 1093:)
>
> Thanks, Detlef
>> ___
>> Computer-go mailing list Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>
> ___ Computer-go
> mailing list Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
>>> ___ Computer-go
>>> mailing list Computer-go@computer-go.org
>>> http://computer-go.org/mailman/listinfo/computer-go
>>>
>>
>>
>> ___ Computer-go
>> mailing list Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
>
>
>
> ___ Computer-go mailing
> list Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>

[Computer-go] some issues with the beijing tournament go client

2015-11-03 Thread David Fotland
I've been helping them test their client and there are some issues.  They
are working on fixing them.  If you are having problems while testing,
please email me directly at fotl...@smart-games.com for details, and I can
save you some time or provide workarounds.  I don't have email addresses for
most of the participants or I would contact them directly.

David


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Understanding and implementing RAVE

2015-10-26 Thread David Fotland
Many Faces only has big nodes with all of the child statistics in one node, 
along with the totals for the position.  Like the right-hand side of your diagram, 
but also with the 11/22 totals.  There is no tree.  All nodes are in a big 
transposition table and there are no child or parent pointers.  I did this for 
performance (cache locality), and to avoid the complexity of having both a 
transposition table and a tree to maintain.
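
Roughly, the layout being described looks like the sketch below: one 
hash-table entry per position, with the totals and all per-child statistics 
stored inline, and no parent or child pointers.  Field names and sizes are 
illustrative, not the actual Many Faces structures.

  #include <cstdint>

  struct ChildStats {          // one slot per candidate move, stored inline
    uint16_t move;             // board coordinate
    uint32_t visits, wins;     // playout statistics through this child
    uint32_t rave_visits, rave_wins;
  };

  struct PositionEntry {       // one transposition-table entry per position
    uint64_t   zobrist;        // full hash, for collision checks
    uint32_t   total_visits;   // the per-position totals ("11/22" in the diagram)
    uint32_t   total_wins;
    uint16_t   num_children;
    ChildStats child[362];     // in practice sized to the legal moves
  };

  // The table itself is just a big array indexed by the hash, e.g.
  //   PositionEntry table[TABLE_SIZE];
  //   PositionEntry& e = table[zobrist % TABLE_SIZE];

Selecting a move then only touches one entry, which is where the cache 
locality comes from.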

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Petr Baudis
> Sent: Monday, October 26, 2015 10:56 AM
> To: computer-go@computer-go.org
> Subject: Re: [Computer-go] Understanding and implementing RAVE
> 
> On Mon, Oct 26, 2015 at 04:51:39PM +, Gon alo Mendes Ferreira wrote:
> > >I must admit that I don t quite follow how you ve implemented things,
> > >but then you seem to be further along than me as I haven t even
> > >started with transposition tables.
> > >
> > >Urban
> > >
> > Well I took the liberty to steal and very crudely modify a MCTS
> > diagram from
> > wikipedia:
> >
> > http://pwp.net.ipl.pt/alunos.isel/35393/mcts.png
> >
> > Maybe with images it is clearer. You seem to be using an acyclic graph
> > with pointers for the arcs.
> 
> Ah. In Pachi, at one point I wanted to rewrite things this way to get
> awesome cache locality, but it was such dull work that I gave up. :-)
> 
> So I wanted to do it when I was writing this again in Michi, but this
> approach breaks encapsulation (you can't just pass around a node object
> reference, but need a (parent, coord) tuple) which makes the code a bit
> dense and clumsy, so I decided it's not very pedagogical to do it this
> way...
> 
> > When you need to find transpositions, and free malloc'd nodes because
> > you're out of memory, I find my solution much more practical because
> > all the information for selecting what play to make is in the current
> > node. But Pachi is no small program so I'm sure I'm wrong somewhere.
> 
> To clarify, Pachi and Michi do not use transposition tables but a simple
> tree structure.  Transpositions (and meaningfully explored transpositions
> at that) are relatively rare in Go, plus there are pitfalls when
> propagating updates, aren't there?  OTOH having a fixed-size list of
> followups carries some overhead when the board is getting filled up. So I
> never bothered to do this; but I know that some other strong programs do
> use transposition tables.
> 
> --
>   Petr Baudis
>   If you have good ideas, good data and fast computers,
>   you can do almost anything. -- Geoffrey Hinton
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Understanding and implementing RAVE

2015-10-26 Thread David Fotland
Many Faces uses 2200 for RAVE_EQUIV.  I found that anything between 2000 and 
3000 was about the same, and CLOP recommended 2200.  1000 was a little worse, 
and 500 was much worse.  In discussions with other programmers I heard numbers 
between  and 5000.
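
For reference, the way an equivalence parameter like this usually enters the 
RAVE mix is the schedule from David Silver's derivation (the rave.pdf linked 
in the quoted mail below).  A sketch of the common formulation, not 
necessarily the exact Many Faces code:

  const double RAVE_EQUIV = 2200.0;

  // Blend the RAVE (AMAF) mean with the true mean.  beta starts near 1 when
  // there are few real visits and decays toward 0 as visits grow, with
  // RAVE_EQUIV controlling how fast the hand-over happens.
  double blended_value(double visits, double wins,
                       double rave_visits, double rave_wins) {
    if (visits == 0 && rave_visits == 0) return 0.5;   // unexplored move
    double beta = rave_visits /
        (rave_visits + visits + visits * rave_visits / RAVE_EQUIV);
    double mean      = (visits      > 0) ? wins      / visits      : 0.5;
    double rave_mean = (rave_visits > 0) ? rave_wins / rave_visits : 0.5;
    return beta * rave_mean + (1.0 - beta) * mean;
  }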

 

For parameter tuning I recommend Rémi Coulom’s CLOP.

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Urban Hafner
Sent: Monday, October 26, 2015 8:38 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Understanding and implementing RAVE

 

On Mon, Oct 26, 2015 at 1:12 PM, Petr Baudis  wrote:

 

This RAVE formula has been derived by David Silver as follows
(miraculously hosted by Hiroshi-san):

http://www.yss-aya.com/rave.pdf

Also note that RAVE_EQUIV (q_{ur} * (1-q_{ur}) / b_r^2) varies widely
among programs, IIRC; 3500 might be on the higher end of the spectrum,
it makes the transition from AMAF to true winrate fairly slow.  You
typically set the value by parameter tuning.

 

Thanks for the link. That’s actually understandable to me. :) It seems that 
very soon there will be no easy wins anymore and I will have to invest the 
resources into some parameter tuning.

 

Urban

-- 

Blog: http://bettong.net/

Twitter: https://twitter.com/ujh

Homepage: http://www.urbanhafner.com/

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Playout speed... again

2015-10-15 Thread David Fotland
Yes, that is correct.  Actually more than a factor of ten since the machine in 
2008 was slower.  I wrote the MFGO Monte Carlo engine between January 2008 and 
the world championship in September 2008.  The fast playout timing was probably 
sometime in April, when it was still weaker than gnugo.

 

I implemented light playouts first, and highly optimized the code for speed.  
The light playouts kept track of liberty count, but not liberty location.  Then 
I added 3x3 patterns, lists of liberty points, move generation heuristics, and 
eventually included the entire old Many Faces of Engine move generator as a 
prior.

 

Here is a plot of MFGO win rate vs gnugo for each version of the program I 
tested in 2008 between January and August.  This was 9x9 board with 5000 
playouts.  I had a big test cluster so I could run several test versions a day, 
typically 1000 games per test run.  I’m only on version 1250 now.  My effort 
slowed way down after 2009.
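
As a side note on reading those test runs: 1000 games resolves a win rate to 
only about plus or minus 3% at the 95% level, assuming independent games.  The 
arithmetic (standard binomial error, nothing taken from the plot itself):

  #include <cmath>
  #include <cstdio>

  int main() {
    double p = 0.5, n = 1000.0;                   // worst case: 50% win rate
    double se = std::sqrt(p * (1.0 - p) / n);     // standard error ~ 0.0158
    std::printf("95%% interval: +/- %.1f%%\n", 1.96 * se * 100.0);  // ~ +/- 3.1%
    return 0;
  }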

 

 

 



 

> -Original Message-

> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf

> Of Lucas, Simon M

> Sent: Thursday, October 15, 2015 12:51 AM

> To: computer-go@computer-go.org

> Subject: Re: [Computer-go] Playout speed... again

> 

> Did I read that correctly?  The number of playouts per second for Many

> Faces has gone DOWN by a factor of 10?  (25,000 -> 2,500) Presumably due to

> the playouts being heavier.

> 

> Simon

> 

> 

> -Original Message-

> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf

> Of David Fotland

> Sent: 15 October 2015 06:51

> To: computer-go@computer-go.org

> Subject: Re: [Computer-go] Playout speed... again

> 

> In 2008 Many Faces was getting about 25k light playouts per second on

> 19x19.  Today it gets 2500 playouts per second on one thread of an i7-

> 3770.  I don't use a probability distribution in the UCT tree.  I both

> count liberties and maintain lists of liberty points, but all

> incrementally.  In the playouts I use 3x3 patterns with gammas like

> Crazystone, not just the Mogo patterns, but I only do a partial

> distribution.  I also do local ladder searches and many other local

> tactics things.

> 

> David

> 

> > -Original Message-

> > From: Computer-go [mailto:computer-go-boun...@computer-go.org] On

> > Behalf Of Gonçalo Mendes Ferreira

> > Sent: Wednesday, October 14, 2015 3:27 PM

> > To: [mailing list] Computer Go

> > Subject: [Computer-go] Playout speed... again

> >

> > Hi, I've been searching the mailing list archive but can't find an

> > answer to this.

> >

> > What is currently the number of playouts per thread per second that

> > the best programs can do, without using the GPU?

> >

> > I'm getting 2075 in light playouts and just 55 in heavy playouts. My

> > heavy playouts use MoGo like patterns and are probability distributed,

> > with liberty/capture counts/etc only updated when needed, so it should

> > be pretty efficient.

> >

> > What would be a good ballpark for this?

> >

> > Thank you,

> > Gonçalo F.

> > ___

> > Computer-go mailing list

> > Computer-go@computer-go.org

> > http://computer-go.org/mailman/listinfo/computer-go

> 

> ___

> Computer-go mailing list

> Computer-go@computer-go.org

> http://computer-go.org/mailman/listinfo/computer-go

> ___

> Computer-go mailing list

> Computer-go@computer-go.org

> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Playout speed... again

2015-10-14 Thread David Fotland
In 2008 Many Faces was getting about 25k light playouts per second on 19x19.  
Today it gets 2500 playouts per second on one thread of an i7-3770.  I don’t 
use a probability distribution in the UCT tree.  I both count liberties and 
maintain lists of liberty points, but all incrementally.  In the playouts I use 
3x3 patterns with gammas like Crazystone, not just the Mogo patterns, but I 
only do a partial distribution.  I also do local ladder searches and many other 
local tactics things.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Gonçalo Mendes Ferreira
> Sent: Wednesday, October 14, 2015 3:27 PM
> To: [mailing list] Computer Go
> Subject: [Computer-go] Playout speed... again
> 
> Hi, I've been searching the mailing list archive but can't find an answer
> to this.
> 
> What is currently the number of playouts per thread per second that the
> best programs can do, without using the GPU?
> 
> I'm getting 2075 in light playouts and just 55 in heavy playouts. My heavy
> playouts use MoGo like patterns and are probability distributed, with
> liberty/capture counts/etc only updated when needed, so it should be
> pretty efficient.
> 
> What would be a good ballpark for this?
> 
> Thank you,
> Gonçalo F.
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] KGS bot tournaments - what are your opinions?

2015-10-10 Thread David Fotland
There is an easy way to enforce computational limits.  Ask everyone to run on 
an identical AWS instance.  Nevertheless, I’m against identical hardware 
tournaments except as a special rare exception.

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
David Doshay
Sent: Saturday, October 10, 2015 10:31 AM
To: computer-go@computer-go.org
Subject: *SPAM* Re: [Computer-go] KGS bot tournaments - what are your 
opinions?

 

I agree completely that there is no way to enforce computational limits over 
the internet.

 

I am against ‘identical hardware’ tournaments because people have worked to get 
their programs working on the hardware they have, and some people will be on 
the other side of any hardware decision, Mac v.s. PC being the most obvious.

 

I am left wondering what the point is for such a tournament. Is it to show who 
is the most efficient programmer? Is it to show how these programs might run on 
somebody’s home computer? These things are not important for research code that 
is not intended for resale.


Cheers,
David G Doshay


ddos...@mac.com

 





 

On 10, Oct 2015, at 8:33 AM, Peter Drake  wrote:

 

I'm also for no limits, if only because there's no way to enforce them.

 

If there is to be a limited division, I'd like to see all programs run on 
identical hardware.

 

On Fri, Oct 9, 2015 at 6:07 AM, Hiroshi Yamashita  wrote:

Hi Nick,

I'd like no limit. Restriction will lose a chance of massive
computer's programming. But one thread limit tournament
once a year may be interesting.

I like (2), and (3) is nice, but I'm already happy with your reports!

Regards,
Hiroshi Yamashita



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go





 

-- 

Peter Drake
https://sites.google.com/a/lclark.edu/drake/

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] *****SPAM***** Re: KGS bot tournaments - what are your opinions?

2015-10-10 Thread David Fotland
You could do what they do in bridge tournaments, and provide two sets of results 
from the same tournament.

Hardware would be unrestricted for everyone

The Open result would include all participants, exactly as today.

A "single machine" result would only include participants that ran on a single 
node, perhaps with no more than 4 or 6 cores, or perhaps a single CPU 
(typically 4 to 6 cores).  The idea would be to compare programs on hardware 
that is generally available to consumers.

So a single machine participant would get two results, perhaps something like 
4th in open, 2nd in single machine.  The KGS tournament and pairings would 
include everyone, so there is no overhead for the organizer other than making a 
second results table that only includes single machine results.

You would have to decide whether to allow two entries for the cluster programs, 
one using the cluster, and one single-machine.  In that case, the open result 
would only report the better of the two for that program.

I don’t think it is a good idea to make a general restriction, and it's 
impractical to require identical hardware for all participants.  If you want to 
do a special tournament with identical hardware, you should consider requiring 
an AWS instance, since they are identical, and inexpensive.  It does take a few 
days to get AWS figured out and tested, so the prep is not completely trivial.  
The new DNN technology works best with a GPU, and other programs get no benefit 
from GPU, so including or not including a GPU in your standard machines will 
highly favor one type of program over the other.

Conflict of interest disclosure.  Many Faces is designed for single machine, 
shared memory, because that's what my customers run it on.  I had multinode MPI 
code for a Microsoft cluster in 2008 when I won, but I ripped it out long ago.  
I'm interested in seeing how programs compare on consumer hardware.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Gonçalo Mendes Ferreira
> Sent: Saturday, October 10, 2015 10:40 AM
> To: computer-go@computer-go.org
> Subject: *SPAM* Re: [Computer-go] KGS bot tournaments - what are
> your opinions?
> 
> 
> 
> On 10/10/2015 18:30, David Doshay wrote:
> > I agree completely that there is no way to enforce computational limits
> over the internet.
> >
> I am against ‘identical hardware’ tournaments because people have worked
> to get their programs working on the hardware they have, and some people
> will be on the other side of any hardware decision, Mac v.s. PC being the
> most obvious.
> There is no "Mac hardware".
> >
> > I am left wondering what the point is for such a tournament. Is it to
> show who is the most efficient programmer? Is it to show how these
> programs might run on somebody’s home computer? These things are not
> important for research code that is not intended for resale.
> I'm also against identical hardware restrictions, but divisions can be
> very flexible. Not everyone cares for research and you wouldn't be using
> open tournaments for research results either way.
> 
> Gon alo F.
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] I would like people to play with correct handicap to get a more reliable rating

2015-10-02 Thread David Fotland
I don’t share or take code from other programs because Many Faces of Go is 
commercial.  Many other programs have licenses that are not compatible with 
commercial use, so I'm careful not to even look at their source code.  We share 
ideas all the time, through publications, informal conversations at 
tournaments, and this forum.

Because our core data structures are different, or we use different languages, 
sharing code is typically not going to be practical.  

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of djhbrown .
> Sent: Thursday, October 01, 2015 3:37 PM
> To: computer-go
> Subject: Re: [Computer-go] I would like people to play with correct
> handicap to get a more reliable rating
> 
> .
> whereas the majority of teenage kgsers i encounter suffer from hyperhubris
> amongst a plethora of other sociopathological handicaps, mathematically,
> it ought not to matter a whit to a bot's own self-esteem when the witless
> up themselves against it because the kgs ranking algorithm surely ought to
> either take rank aberrations into account or ignore delusionally-
> handicapped-game results because they don't just do it against you, they
> do it against everybody.
> 
> but in any case, how reliable is poking a stick around in the dark anyway?
> 
> PS  Whereas it is true that a camel is a horse that was designed by a
> committee, why on earth don't you guys stop trying to get one over each
> other in the kgs playground and start working as a team?  For example,
> DCNNigo plays a very respectable opening but falls apart in the yose, so a
> different technique is required for that phase of the game.  If you put
> your heads together, you might come up with a multibot that arguably could
> be considered smart instead of just lucky.  You should be able to hook up
> across internet so the many heads don't all have to be in the same room.
> Yonks ago Oliver Selfridge [1] proposed just such an architecture for an
> intracranial artificial intelligence - but you could go intercranial.  Now
> that would be an IT worth checking out.
> [1] http://sites.sinauer.com/wolfe4e/wa04.02.html
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] re good article on playout ending

2015-09-10 Thread David Fotland
I never tried to optimize stopping, so my stopping rule is very conservative.  
Many Faces stops at twice the number of points on the board, or if the mercy 
rule triggers.  The mercy rule requires one side to have many more stones on 
the board than the other (at least 1/3 of the number of points on the board 
more).
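
For concreteness, a minimal sketch of those two stopping conditions (function and threshold names are illustrative, not Many Faces' actual code):

    def playout_should_stop(move_count, black_stones, white_stones, board_points=361):
        """Conservative early-stopping check for one playout (illustrative names,
        not Many Faces' actual code).  board_points is 361 for 19x19."""
        # Hard cap: never play more than twice the number of points on the board.
        if move_count >= 2 * board_points:
            return True
        # Mercy rule: one side has at least board_points / 3 more stones on the board.
        if abs(black_stones - white_stones) >= board_points // 3:
            return True
        return False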

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Erik van der Werf
Sent: Wednesday, September 09, 2015 9:18 AM
To: computer-go
Subject: Re: [Computer-go] re good article on playout ending

 

Steenvreter stops its playouts when it detects a proven win or loss. The 
evaluation function it uses is an improved version of what I made to solve the 
small boards. I once tried adding the mercy rule, but it did not improve the 
program.

 

Erik

 

 

On Wed, Sep 9, 2015 at 5:46 PM, Peter Drake  wrote:

I don't know of an article, but unless your ending detection is VERY fast, it's 
better to just finish the playout.

 

One possibility is a "mercy threshold": if one player's stone count (which you 
update incrementally) far exceeds the other, declare the player with more 
stones the winner. The relevant class from Orego is attached.

 

 

On Wed, Sep 9, 2015 at 7:53 AM, Gonçalo Mendes Ferreira  wrote:

Does anyone know of a good article on ending a MCTS playout early, outcome 
estimation, the quality of interrupted outcomes, and so on?
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go





 

-- 

Peter Drake
https://sites.google.com/a/lclark.edu/drake/


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] re comments on Life and Death

2015-09-10 Thread David Fotland
Yes, in the old engine, I roll everything up into a single number, with a 
resolution of 1/100th of a point (only so the total score would fit in a 16 bit 
integer on the 16 bit machine I used for development in 1982).

 

I would say rather, that expert systems are dead in Go because many smart and 
talented people, including professional experts, worked diligently for two 
decades on this approach and none were able to get stronger than about 5 kyu.  
This is a strong experimental result, not an opinion.

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Petri Pitkanen
Sent: Wednesday, September 09, 2015 12:53 AM
To: computer-go
Subject: Re: [Computer-go] re comments on Life and Death

 

David said "estimate final score" which implies that all relevant things are 
factored in, merely the unit of estimation is territory. Just like in chess 
there are several things factored in - other than material - and all are 
estimated as pawns.



I guess expert systems really are a dead end in Go. Too many contradicting 
heuristics.

 

 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] re comments on Life and Death

2015-09-10 Thread David Fotland
No, simple radiation is not the best, although some programs (including mine) 
started with something like this.  I think the best approach was Reiss' Go4++, 
where territory was modelled using connectivity.  If a new stone can be 
connected to a living group of the same color, then this point can't be 
territory of the opposite color.  

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
> Behalf Of Robert Jasiek
> 
> Was your influence function like radiated light? Such would have too
> little meaning.


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] re comments on Life and Death

2015-09-08 Thread David Fotland
I agree that group strength can't be a single number.  That's why I classify 
groups instead.  Each classification is treated differently when estimating 
territory, when generating candidate moves, etc.  The territory counts depend 
on the strength of the nearby groups.

Monte Carlo has a big advantage in that it estimates the probability of winning 
the game, rather than my old approach of trying to estimate the final score.

David 

> 
> > For group strength I had about 20 classes with separate evaluators
> > (two clear eyes, one big eye, seki, semeai, one-eye-ko-
> threat-to-live, dead-if-move-first, etc, etc).
> 
> Was group strength an object of several parameters or was it a single
> number derived from all those parameters? IMO, a single number cannot be
> meaningful in general.
> 
> > Groups strength was the core concept feeding into the full board
> evaluation, which tried to estimate the score.
> 
> But what WAS your group strength...?:)
> 
> Score estimation of a given position should also depend on territory
> counts, not only on group strength etc.
> 
> --
> robert jasiek
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] re comments on Life and Death

2015-09-05 Thread David Fotland
Some details at http://www.smart-games.com/knowpap.txt

Completely agree that connections and group strength estimates are key to 
strength, and are very difficult to get right.  There are many tricky cases.  
For connections I used shapes and local tactics to determine connectivity and 
threat points, and handled some cases of adjacent connections with shared 
threats.  For group strength I had about 20 classes with separate evaluators 
(two clear eyes, one big eye, seki, semeai, run-or-live, 
one-eye-ko-threat-to-live, dead-if-move-first, etc, etc).

Connection status was used to collect stones into groups.  Group strength was 
the core concept feeding into the full board evaluation, which tried to 
estimate the score.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
> Behalf Of Robert Jasiek
> Sent: Friday, September 04, 2015 12:34 AM
> To: computer-go@computer-go.org
> Subject: Re: [Computer-go] re comments on Life and Death
> 
> On 04.09.2015 07:25, David Fotland wrote:
> > group strength and connection information
> 
> For this to work, group strength and connection status must be a)
> assessed meaningfully and b) applied meaningfully within a broader
> conceptual framework. What were your definitions for group strength
> and connection status, for what purposes did you use them and how did
> you apply them?
> 
> --
> robert jasiek
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] re comments on Life and Death

2015-09-05 Thread David Fotland
I forgot, I did publish a paper on Many Faces: 
https://www.researchgate.net/publication/220174515_Static_Eye_Analysis_in_The_Many_Faces_of_Go

I'm not sure it's available online.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
> Behalf Of Robert Jasiek
> Sent: Friday, September 04, 2015 10:29 AM
> To: computer-go@computer-go.org
> Subject: Re: [Computer-go] re comments on Life and Death
> 
> On 04.09.2015 17:55, Stefan Kaitschick wrote:
> >It is just too far removed from MC concepts to be productively
> >integrated into an MC system. And no matter what, MC has to be the
> >starting  point
> 
> No. It is also possible to construct it the other way round. Start
> with an expert system. Whenever that needs some "calculation" and
> basic counting or limited reading fail, MC can come in to do the
> calculation.
> E.g., an expert system can identify groups of likely connected
> strings, then MC can calculate if indeed (statistically) the
> connection is given.
> 
> --
> robert jasiek
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] re comments on Life and Death

2015-09-04 Thread David Fotland
Many Faces of Go is MC + expert system (plus local search, etc).  The reason I 
won the world championship in 2008 is because I implemented MCTS but 
incorporated the old Many Faces expert system move generator and ranking.  This 
is pretty slow (a few hundred positions a second), so when the tree part of 
MCTS got a node up to about 100 visits, the Many Faces move generator was 
called and it applied a bias to the moves to favor moves that look good to the 
expert system.  This was stronger than all the other pure MCTS programs.
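
The post does not give the exact bias formula, so the following Python sketch only illustrates the mechanism described: once a node reaches roughly 100 visits, the slow knowledge-based move generator is queried once, and its per-move scores are blended into child selection with a weight that fades as real visits accumulate. The threshold, PRIOR_WEIGHT, and all names are placeholders.

    from dataclasses import dataclass, field

    KNOWLEDGE_VISIT_THRESHOLD = 100   # roughly the visit count mentioned above
    PRIOR_WEIGHT = 50.0               # placeholder: how strongly the bias counts

    @dataclass
    class Child:
        move: object
        visits: int = 0
        wins: int = 0

    @dataclass
    class Node:
        position: object
        visits: int = 0
        children: list = field(default_factory=list)
        expert_bias: dict = None      # move -> score in [0, 1], filled in lazily

    def maybe_add_expert_bias(node, expert_move_generator):
        """Once the node has enough visits, call the slow knowledge-based move
        generator exactly once and remember its per-move scores."""
        if node.visits >= KNOWLEDGE_VISIT_THRESHOLD and node.expert_bias is None:
            node.expert_bias = expert_move_generator(node.position)

    def biased_value(node, child):
        """Playout winrate plus an expert-system bias that fades with real visits."""
        winrate = child.wins / max(1, child.visits)
        bias = (node.expert_bias or {}).get(child.move, 0.0)
        return winrate + PRIOR_WEIGHT * bias / (PRIOR_WEIGHT + child.visits)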

 

Now the other MCTS programs are stronger because they incorporate more go 
knowledge, either through machine learning from expert games or from strong 
player input.

 

I don’t think MCTS is stagnating.  I think DH Brown is correct about how very 
much more difficult it is to climb the higher ranks.  The rate of progress is 
about the same, but the rate of rank improvement is much, much slower.  Many 
Faces has been stagnating because I have hardly touched the engine in the last 
18 months.  That’s changing soon.

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Stefan Kaitschick
Sent: Friday, September 04, 2015 8:55 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] re comments on Life and Death

 

So far I have not criticised but asked questions. I am a great fan of the 
expert system approach because a) I have studied go knowledge a lot and see, in 
principle, light at the end of the tunnel, b) I think that "MC + expert system" 
or "only expert system" can be better than MC if the expert system is well 
designed, c) an expert system can, in principle, provide more meaningful 
insight for us human duffers than an MC because the expert system can express 
itself in terms of reasoning. (Disclaimer: There is a good chance that I will 
criticise anybody presenting his definitions for use in an expert system. But 
who does not dare to be criticised does not learn!)

 

MC is currently stagnating, so looking at new (or old discarded) approaches has 
become more attractive again.

But I don't think that a "classic" rules based system will be of much use from 
here. It is just too far removed from MC concepts to be productively integrated 
into an MC system. And no matter what, MC has to be the starting point, because 
it is so much more effective than anything else that has been tried. What you 
are left to work with is the trail of statistics that MC leaves behind. That 
is the only tunnel with a possible end to it that I see. And who knows, maybe 
someone will find statistical properties that can be usefully mapped back to 
human concepts of go.

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] re comments on Life and Death

2015-09-04 Thread David Fotland
No.  Since MF's search is so highly pruned, and directed by the expert system 
move generator, it scales poorly with computer power.  If I went back to the 
pure MFGO engine and added the modern Elo-based patterns from Remi's approach, I 
think it would be a couple of stones stronger, but still weaker than MCTS.

Much of MFGO's improvement from about 3 kyu in 2008 to about 2 dan now is due to 
my implementing some of the basic MFGO knowledge inside the MCTS policies to 
bias move selection.

David

> 
> BTW, David, if you choose the CPU to suit it, can the traditional Many
> Faces beat the best MCTS programs?
> 
> 
> Darren
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Dynamic komi - VBSC

2015-09-04 Thread David Fotland
Probably everyone does something different for Dynamic komi.  There have a few 
publications.  

In MFGO, the shipping version 12 doesn’t use dynamic komi, but the KGS version 
has it, and it's probably worth about half a stone.  My algorithm tries to keep 
the win rate between 55% and 60% when winning and between 40% and 45% when 
losing.  So if the win rate is between 45% and 55% there is no dynamic komi, 
but I map a win rate from 55% to 100% to the range 55% to 60%.  This lets it 
play more conservatively when winning, but avoids the crazy give-away moves 
when the win rate gets close to 100%.
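
Taken literally, the mapping described above can be sketched as follows (a sketch only; the band constants are the ones quoted in the paragraph, everything else is an assumption):

    def target_win_rate(raw):
        """Compress the raw search win rate as described above: 45%-55% is left
        alone (no dynamic komi), 55%-100% is mapped linearly into 55%-60%, and,
        symmetrically, 0%-45% into 40%-45%.  How the komi offset is then chosen
        so the searched win rate lands near this target is not described in the
        post, so it is left out here."""
        if raw > 0.55:
            return 0.55 + (raw - 0.55) * (0.60 - 0.55) / (1.00 - 0.55)
        if raw < 0.45:
            return 0.45 - (0.45 - raw) * (0.45 - 0.40) / (0.45 - 0.00)
        return raw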

I don’t know if this is better since I didn’t try other alternatives, but it 
seems to make it play a more natural game.  For MFGO, I prefer methods that 
make the play more human-like.

David

> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
> Behalf Of Gonçalo Mendes Ferreira
> Sent: Friday, September 04, 2015 7:13 AM
> To: computer-go@computer-go.org
> Subject: [Computer-go] Dynamic komi - VBSC
> 
> I've been wrapping my head about dynamic komi adjustments for MCTS,
> namely on the thesis by the Pachi creator, Petr Baudic.
> 
> On value-based situational compensation the author uses the average on
> win rates from the previous simulations to decide whether or not to
> change the komi. But I don't see how this criteria makes sense, if
> we're interested in finding the best play, shouldn't we be trying to
> have good sensibility around the best plays? Trying to average the
> game only worsens the ability of the search to differentiate the best
> contenders.
> 
> Am I seeing this wrong? Has this been addressed before? What do other
> engines do?
> 
> - Gon alo F.
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] re comments on Life and Death

2015-09-03 Thread David Fotland
Many Faces of Go gives reasons for its moves after the fact.  It reasons about the 
position using go proverbs, life and death analysis, group strength and 
connection information, etc.  If you have a copy, you can ask it to explain its 
reasons for making a move.  There were far more than a few efforts in this 
direction.  Many people spent decades on this problem.  This approach has been 
explored thoroughly and it doesn’t work.  

 

I believed in this approach as strongly as you do, for many years, before the 
data proved it to be a false belief.

 

We now know how to make much stronger programs with far less effort.

 

Of course you are welcome to try again, and I would be really happy to see a strong 
program using this kind of approach.  Do you have a plan to write some code, or is this 
just philosophy?

 

Regards,

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
djhbrown .
Sent: Thursday, September 03, 2015 5:48 AM
To: computer-go@computer-go.org
Subject: [Computer-go] re comments on Life and Death

 

Plans, evaluation functions, etc. failed for over 20 years to produce true 
(amateur) dan level programs. 


True.  However, the failure of a few efforts to make progress in a direction 
does not imply that the direction is a dead end.  I will be addressing this 
issue in a future video in the series.

Also, you cannot give reasons for moves "after the fact" if reason wasn't used 
to obtain the selected move in the first place.

 

Exactly so.  As stated in "Life and Death", the principal research objective of 
HALy is for it to be able to formulate and explain its reasons.  I feel that 
the domain of Go is a useful microworld for experimenting with perception and 
reasoning representations.

Current research in volition and conscious choice
indicates that conscious choice is actually an after the fact explanation
of decisions based on unconscious processes.

 

Yes indeed.  This suggests that science is just beginning to discover that 
philosophical intuitions about consciousness based on no experimentation at all 
are mere speculations.

I think you forgot to suggest which pharmaceuticals, legal or otherwise, to be 
using while watching this. Without said pharmacological assistance, that video 
doesn't make a bit of sense to me.

 

I am unaware of any chemicals that could viably substitute for doing a bit of 
homework.  I would be happy to explain any specific issues outside the domain 
of computer go that you do not understand if you raise them in a YouTube 
comment.  I am aware that the video touches on several myths whose historical 
origins and current implications are not common knowledge.

-- 

http://sites.google.com/site/djhbrown2/home
https://www.youtube.com/user/djhbrown








___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] EGC2015 Events

2015-07-29 Thread David Fotland
Congratulations to Aya.  The commentary on the ManyFaces vs Aya game is very 
interesting.

David

 -Original Message-
 From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
 Behalf Of Petr Baudis
 Sent: Wednesday, July 29, 2015 2:21 PM
 To: computer-go@computer-go.org
 Subject: [Computer-go] EGC2015 Events
 
   Hi!
 
   There are several Computer Go events on EGC2015.  There was a small
 tournament of programs, played out on identical hardware by each, won
 by Aya:
 
   https://www.gokgs.com/tournEntrants.jsp?sort=sid=981
 
   Then, one of the games, Aya vs. Many Faces, was reviewed by Lukas
 Podpera 6d:
 
   https://www.youtube.com/watch?v=_3Lk1qVoiYM
 
   Right now, Hajin Lee 3p (known for her live commentaries on Youtube
 as Haylee) is playing Aya (giving 5 stones) and commenting live:
 
   https://www.youtube.com/watch?v=Ka2ilmu7Eo4
 
 --
   Petr Baudis
   If you have good ideas, good data and fast computers,
   you can do almost anything. -- Geoffrey Hinton
 ___
 Computer-go mailing list
 Computer-go@computer-go.org
 http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Determining the final board state for finished games

2015-07-25 Thread David Fotland
In general this is beyond the state of the art of the strongest go programs.  
You can’t score without determining the status of every group (live, dead, 
seki), and you may need to identify required interior defensive moves that have 
not been played.

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Justin .Gilmer
Sent: Saturday, July 25, 2015 7:13 PM
To: computer-go@computer-go.org
Subject: [Computer-go] Determining the final board state for finished games

 

Hello!

   I'm new to computer Go, it's nice to find this mailing list! I've downloaded 
the GoGod dataset of completed professional games, and for the games that been 
fully played out (no resign) I'd like to determine the final state of the board 
(i.e. which groups are live/dead and what territory belongs to which players). 
I've written a simple script which can do this for maybe 75% of the completed 
games, but I'm a little stuck on how to best do this when the pros end the game 
in less obvious states. Can anyone recommend some resources on how to best do 
this? Are there any publicly available scripts which already do this?

Thanks so much and nice to meet everyone!

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] EGC2015 Computer Go Tournament: Reminder

2015-07-06 Thread David Fotland
I can't travel to Europe for this tournament.  The main issue for me is 
arranging a local operator.  I have no way to do that.

Regards,

David

 -Original Message-
 From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
 Behalf Of Petr Baudis
 Sent: Sunday, July 05, 2015 11:37 PM
 To: computer-go@computer-go.org
 Subject: [Computer-go] EGC2015 Computer Go Tournament: Reminder
 
   Hi!
 
   We have extended the registration deadline to July 20 - you still
 have a chance to participate!
 (http://pasky.or.cz/iggsc2015/compgo_spec.html)
 
   We'd also like to ask for some feedback - what would make the
 tournament more attractive for you to attend?  We offer prize money,
 some reasonably nice hardware, the games will be played on KGS and we
 can arrange on-site operators for a limited number of developers who
 cannot attend in person.  Unfortunately, only single (1) participant
 has registered so far...
 
 On Tue, Jun 16, 2015 at 12:07:17PM +0200, Petr Baudis wrote:
  On Mon, Jun 15, 2015 at 01:21:49PM +, Josef Moudrik wrote:
   The tournament will take place on 29th July 2015. The winner of the
   tournament will have a chance to play against a strong professional
   player In the evening. The programs will compete on equal hardware
   arranged by the organizer. We can guarantee a prize budget of 600
 EUR.
  
   I would like to invite operator/bot teams to participate in the
 tournament!
  
   If you are interested, the full specification is available here:
   http://pasky.or.cz/iggsc2015/compgo_spec.html
 
We'd also like to remind you that the registration deadline is as
  soon as July 4.
 
(Officially, it is your responsibility to find an operator to run
  your program on-site if you don't attend in person.  But we may be
  able to help in a limited number of cases, roughly on a first come,
  first served
  basis.)
 
 --
   Petr Baudis
   If you have good ideas, good data and fast computers,
   you can do almost anything. -- Geoffrey Hinton
 ___
 Computer-go mailing list
 Computer-go@computer-go.org
 http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] OT (maybe): Arimaa bot notably stronger

2015-04-22 Thread David Fotland
Converting back and forth from eval to winning probability is interesting, as 
is combining the quick win threat and long term advantage evals.

David

 -Original Message-
 From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
 Behalf Of Darren Cook
 Sent: Wednesday, April 22, 2015 10:26 AM
 To: computer-go@computer-go.org
 Subject: [Computer-go] OT (maybe): Arimaa bot notably stronger
 
 The Slashdot article was low on info, but an Arimaa program, Sharp,
 apparently beat the humans to win the $12000 prize described here:
   http://arimaa.com/arimaa/challenge/2015/
 
 There is some description of what it did to improve here:
 
 http://arimaa.com/arimaa/forum/cgi/YaBB.cgi?board=devTalk;action=displ
 ay;num=1429402345;start=1#1
 
 To my untrained eye it looks like they are all game-specific, rather
 than something we could steal from to use in other games and other
 domains :-)
 
 Darren
 
 --
 Darren Cook, Software Researcher/Developer My new book: Data Push Apps
 with HTML5 SSE Published by O'Reilly: (ask me for a discount code!)
   http://shop.oreilly.com/product/0636920030928.do
 Also on Amazon and at all good booksellers!
 ___
 Computer-go mailing list
 Computer-go@computer-go.org
 http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] UCB-1 tuned policy

2015-04-15 Thread David Fotland
I didn’t notice a difference.  Like everyone else, once I had RAVE implemented 
and added biases to the tree move selection, I found the UCT term made the 
program weaker, so I removed it.  
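
For readers comparing the two selection rules discussed in the quoted question below, the textbook formulas (Auer et al.'s UCB1 and UCB1-tuned) differ only in the exploration term; a minimal sketch:

    import math

    def ucb1(wins, visits, parent_visits):
        """Classic UCB1: mean winrate plus sqrt(2 ln N / n)."""
        mean = wins / visits
        return mean + math.sqrt(2.0 * math.log(parent_visits) / visits)

    def ucb1_tuned(wins, visits, parent_visits):
        """UCB1-tuned: the exploration width is capped by an upper confidence
        bound on the arm's variance (for 0/1 rewards, variance = p * (1 - p))."""
        mean = wins / visits
        var_bound = mean * (1.0 - mean) + math.sqrt(2.0 * math.log(parent_visits) / visits)
        return mean + math.sqrt((math.log(parent_visits) / visits) * min(0.25, var_bound))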

David

 -Original Message-
 From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of
 Igor Polyakov
 Sent: Tuesday, April 14, 2015 3:37 AM
 To: computer-go@computer-go.org
 Subject: [Computer-go] UCB-1 tuned policy
 
 I implemented UCB1-tuned in my basic UCB-1 go player, but it doesn't seem
 like it makes a difference in self-play.
 
 It seems like it's able to run 5-25% more simulations, which means it's
 probably exploiting deeper (and has less steps until it runs out of room to
 play legal moves), but I have yet to see any strength improvements on
 9x9 boards.
 
 As far as I understand, the only thing that's different is the formula.
 Has anyone actually seen any difference between the two algorithms?
 ___
 Computer-go mailing list
 Computer-go@computer-go.org
 http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2015-01-27 Thread David Fotland
For many faces, moves like, any Atari, fill a liberty in a losing semeai, 
attack a group that is alive but doesn’t have two clear eyes yet.

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Stefan Kaitschick
Sent: Saturday, January 10, 2015 1:13 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play 
Go

 

 

But, I imagine this is more fuss than it is worth; the NN will be
integrated into MCTS search, and I think the strong programs already
have ways to generate ko threat candidates.

Darren

 

Do they? What would look like? Playing 2 moves in a row for the same side?

I thought the programs naively discovered ko threats.

Stefan

 

 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] alternative for cgos

2015-01-12 Thread David Fotland
Won’t hosting limit your usability?  With cgos I can build and immediately test 
on cgos on my development machine.  With your service, how do I get my new 
executable to run?  If my engine uses a GPU or is a multinode cluster, how does 
that run on your docker service?

 

David

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Chris LaRose
Sent: Monday, January 12, 2015 5:35 PM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] alternative for cgos

 

Hi,

 

I'm actually working on something similar at http://baduk.io. Right now, you 
can log in and play against a handful of bots over the web, but one day I'd love 
to make it so you can add your own bots to let them compete against the others. 
It's not quite ready for the public, but I'm working to get something small 
working quickly. Unlike CGOS, the bots are all hosted--they all run inside 
Docker (http://docker.io/) containers. The Dockerfiles I've written for a few 
public bots are available at my github repository 
https://github.com/baduk-io/ai-dockerfiles.

 

What sorts of things would you expect from such a service? I was planning on 
modeling baduk.io after CGOS in a lot of ways as far as the rules that are used 
(area scoring, no dead stones removed, etc), and distinct ratings for 9x9, 
13x13, and 19x19 boards. What sorts of improvements do you think could be made 
in a new service? Do you have a preference for ELO ratings over kyu/dan ratings?

 

Chris LaRose

 

On Fri, Jan 9, 2015 at 2:47 AM, folkert folk...@vanheusden.com wrote:

Hi,

I have the feeling that cgos won't come back in even the distant future
so I was wondering if there are any alternatives?
E.g. a server that constantly lets go engines play against each other
and then determines an elo rating for them.


Folkert van Heusden

--
Afraid of irssi? Scared of bitchx? Does xchat gives you bad shivers?
In all these cases take a look at http://www.vanheusden.com/fi/ maybe
even try it or use it for all your day-to-day IRC conversations!
---
Phone: +31-6-41278122, PGP-key: 1F28D8AE, 
www.vanheusden.com
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Datasets for CNN training?

2015-01-11 Thread David Fotland
Why don’t you make a dataset of the raw board positions, along with code to 
convert to Clark and Storkey planes?  The data will be smaller, people can 
verify against Clark and Storkey, and they have the data to make their own 
choices about preprocessing for network inputs.
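
As a rough illustration of the kind of preprocessing being discussed (not the exact Clark and Storkey feature set; the plane layout and names below are my own choices), a sketch that turns a raw board into binary planes split by color and liberty count, plus a ko plane:

    import numpy as np

    EMPTY, BLACK, WHITE = 0, 1, 2

    def group_liberties(board, start, size=19):
        """Flood-fill the group containing `start` and count its liberties.
        (Recomputed per stone here for clarity; a real converter would cache.)"""
        color = board[start]
        stack, group, liberties = [start], {start}, set()
        while stack:
            r, c = stack.pop()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < size and 0 <= nc < size:
                    if board[nr, nc] == EMPTY:
                        liberties.add((nr, nc))
                    elif board[nr, nc] == color and (nr, nc) not in group:
                        group.add((nr, nc))
                        stack.append((nr, nc))
        return len(liberties)

    def to_planes(board, ko_point=None, size=19):
        """Return an (8, size, size) array: planes 0-2 are black stones with
        1, 2, >=3 liberties, planes 3-5 the same for white, plane 6 marks a
        ko-forbidden point, plane 7 marks empty points.  This layout is only
        an illustration, not the published Clark & Storkey feature set."""
        planes = np.zeros((8, size, size), dtype=np.float32)
        for r in range(size):
            for c in range(size):
                stone = board[r, c]
                if stone == EMPTY:
                    planes[7, r, c] = 1.0
                    continue
                libs = max(1, min(3, group_liberties(board, (r, c), size)))
                offset = 0 if stone == BLACK else 3
                planes[offset + libs - 1, r, c] = 1.0
        if ko_point is not None:
            planes[6, ko_point[0], ko_point[1]] = 1.0
        return planes

Whether to encode stones as black/white or relative to the player to move is one of the choices a raw-board dataset would leave open to the user.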

David

 -Original Message-
 From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
 Behalf Of Hugh Perkins
 Sent: Sunday, January 11, 2015 12:24 AM
 To: computer-go
 Subject: [Computer-go] Datasets for CNN training?
 
 Thinking about datasets for CNN training, of which I lack one currently
 :-P  Hence I've been using MNIST , but also since MNIST results are
 widely known, and if I train with a couple of layers, and get 12%
 accuracy, obviously I know I have to fix something :-P
 
 But now, my network consistently gets up into the 97-98%s for mnist,
 even with just a layer or two, and speed is ok-ish, and probably want
 to start running training against 19x19 boards instead of 28x28.  The
 optimization is different.  On my laptop, an OpenCL workgroup can hold
 a 19x19 board, with one thread per intersection, but 28x28 threads
 would exceed the workgroup size.  Unless I loop, or break into two
 workgroups, or something else equally buggy, slow, and high-maintenance
 :-P
 
 So, I could crop the mnist boards down to 19x19, but whoever heard of
 training on 19x19 mnist boards?
 
 So, possibly time to start hitting actual Go boards.  Many other
 datasets are available in a standardized generic format, ready to feed
 into any machine learning algorithm.  For example, those provided at
 libsvm website http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
 , or mnist, yann.lecun.com/exdb/mnist/ .  The go datasets are not
 (yet) available in any kind of standard format so I'm thinking, maybe
 that could be useful to do so?  But there are three challenges:
 
 1. what data to store?  Clark and Storkey planes? Raw boards? Maddison
 et al planes? Something else?  For now, my answer is: something
 corresponding to an actual existing paper, and Clark and Storkey's
 network has the advantage of costing less than 2000usd to train, so
 that's my answer to 'what data to store?'
 2. copyright.  gogod is apparently a. copyrighted as a collection b.
 compiled by hand as a result of painstakingly going through each game,
 move by move, and entering into the computer, one move at a time.
 Probably not really likely that one could put this, even preprocessed,
 as a standard dataset?  However, the good news is that the gks dataset
 seems publically available, and big, maybe just use that?
 3. size . this is where I dont have an answer yet.
  - 8 million states, where each state is 8 planes * 361 locations =
 20GB :-P
 - the raw sgfs only take 3KB per game, for a total of about 80MB,
 but needs a lot of preprocessing, and if one were to feed each game
 through, in order, might not be the best sequence for effective
 learning?
 - current idea: encode one column through the planes as a single
 byte?  For Clark and Storkey they only have 8 planes, so this should be
 easy enough :-)
 - which would be 2.6GB instead
 - but still kind of large, to put on my web hosting :-P
 
 I suppose a compromise could be needed, which would also solve problem
 number 1 somewhat, of just providing a tool, eg in Python, or C, or
 Cython, which will take the kgs downloads, possibly the gogod download,
 and transform it into a 2.6GB dataset, ready for training, and possibly
 pre-shuffled?
 
 But this would be quite non-standard, although this is not unheard of,
 eg for imagenet, there is a devkit http://image-
 net.org/challenges/LSVRC/2011/index#devkit
 
 Maybe I will create a github project, like 'kgs-dataset-preprocessor'?
  Could work something like ?:
 
python kgs-dataset-preprocessor.py [targetdirectory]
 
 Results:
 - the datasets are downloaded from http://u-go.net/gamerecords/
 - decompressed
 - loaded once at a time, and processed into a 2.5GB datafile, in
 sequence (clients can handle shuffling themselves I suppose?)
 
 Thoughts?
 
 Hugh
 ___
 Computer-go mailing list
 Computer-go@computer-go.org
 http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Move Evaluation in Go Using Deep Convolutional Neural Networks

2014-12-25 Thread David Fotland
You can do some GPU experiments on Amazon AWS before you buy.  65 cents per hour

David

http://aws.amazon.com/ec2/instance-types/

G2
This family includes G2 instances intended for graphics and general purpose GPU 
compute applications. 
Features:

High Frequency Intel Xeon E5-2670 (Sandy Bridge) Processors
High-performance NVIDIA GPU with 1,536 CUDA cores and 4GB of video memory

GPU Instances - Current Generation
g2.2xlarge  $0.650 per Hour

 -Original Message-
 From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
 Of Detlef Schmicker
 Sent: Thursday, December 25, 2014 2:00 AM
 To: computer-go@computer-go.org
 Subject: Re: [Computer-go] Move Evaluation in Go Using Deep Convolutional
 Neural Networks
 
 Hi,
 
 as I want to by graphic card for CNN: do I need double precision
 performance? I give caffe (http://caffe.berkeleyvision.org/) a try, and as
 far as I understood most is done in single precision?!
 
 You get comparable single precision performance NVIDA (as caffe uses CUDA
 I look for NVIDA) for about 340$ but the double precision performance is
 10x smaller than the 1000$ cards
 
 thanks a lot
 
 Detlef
 
 Am Mittwoch, den 24.12.2014, 12:14 +0800 schrieb hughperkins2:
  Whilst its technically true that you can use an nn with one hidden
  layer to learn the same function as a deeper net, you might need a
  combinatorally large number of nodes :-)
 
 
  scaling learning algorithms towards ai, by bengio and lecunn, 2007,
  makes a convincing case along these lines.
 
 
 
  ___
  Computer-go mailing list
  Computer-go@computer-go.org
  http://computer-go.org/mailman/listinfo/computer-go
 
 
 ___
 Computer-go mailing list
 Computer-go@computer-go.org
 http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Move Evaluation in Go Using Deep Convolutional Neural Networks

2014-12-20 Thread David Fotland
This would be very similar to the integration I do in Many Faces of Go.  The 
old engine provides a bias to move selection in the tree, but the old engine is 
single threaded and only does a few hundred evaluations per second.  I 
typically get between 40 and 200 playouts through a node before Old Many Faces 
adjusts the biases.

David

 -Original Message-
 From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
 Of Mark Wagner
 Sent: Saturday, December 20, 2014 11:18 AM
 To: computer-go@computer-go.org
 Subject: Re: [Computer-go] Move Evaluation in Go Using Deep Convolutional
 Neural Networks
 
 Thanks for sharing. I'm intrigued by your strategy for integrating with
 MCTS. It's clear that latency is a challenge for integration. Do you have
 any statistics on how many searches new nodes had been through by the time
 the predictor comes back with an estimation? Did you try any prefetching
 techniques? Because the CNN will guide much of the search at the frontier
 of the tree, prefetching should be tractable.
 
 Did you do any comparisons between your MCTS with and w/o CNN? That's the
 direction that many of us will be attempting over the next few months it
 seems :)
 
 - Mark
 
  On Sat, Dec 20, 2014 at 10:43 AM, Álvaro Begué alvaro.be...@gmail.com
 wrote:
  If you start with a 19x19 grid and you take convolutional filters of
  size
  5x5 (as an example), you'll end up with a board of size 15x15, because
  a 5x5 box can be placed inside a 19x19 board in 15x15 different
  locations. We can get 19x19 outputs if we allow the 5x5 box to be
  centered on any point, but then you need to do multiply by values
 outside of the original 19x19 board.
  Zero-padding just means you'll use 0 as the value coming from outside
  the board. You can either prepare a 23x23 matrix with two rows of
  zeros along the edges, or you can just keep the 19x19 input and do
  your math carefully so terms outside the board are ignored.
 
 
 
  On Sat, Dec 20, 2014 at 12:01 PM, Detlef Schmicker d...@physik.de
 wrote:
 
  Hi,
 
  I am still fighting with the NN slang, but why do you zero-padd the
  output (page 3: 4 Architecture  Training)?
 
  From all I read up to now, most are zero-padding the input to make
  the output fit 19x19?!
 
  Thanks for the great work
 
  Detlef
 
  Am Freitag, den 19.12.2014, 23:17 + schrieb Aja Huang:
   Hi all,
  
  
   We've just submitted our paper to ICLR. We made the draft available
   at http://www.cs.toronto.edu/~cmaddis/pubs/deepgo.pdf
  
  
  
   I hope you enjoy our work. Comments and questions are welcome.
  
  
   Regards,
   Aja
   ___
   Computer-go mailing list
   Computer-go@computer-go.org
   http://computer-go.org/mailman/listinfo/computer-go
 
 
  ___
  Computer-go mailing list
  Computer-go@computer-go.org
  http://computer-go.org/mailman/listinfo/computer-go
 
 
 
  ___
  Computer-go mailing list
  Computer-go@computer-go.org
  http://computer-go.org/mailman/listinfo/computer-go
 ___
 Computer-go mailing list
 Computer-go@computer-go.org
 http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

RE: [computer-go] Strong programs on cgos 19x19?

2010-02-18 Thread David Fotland
Is this 23 cores SMP working on the same tree, or four by 6-cores?  I'm
running a cluster of four 4-core 2.3 GHz machines, using MPI to share the
core of the trees a few times a second.
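
Many Faces' actual MPI protocol is not described here; one generic way to "share the core of the trees a few times a second" is to periodically all-reduce the visit and win counts of the shallow, shared part of the tree. A sketch using mpi4py (standing in for the C MPI calls):

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    def share_root_stats(local_visits, local_wins):
        """Sum the per-move visit and win counts of the shared top of the tree
        across all MPI ranks (call this a few times per second from the search
        loop).  local_visits / local_wins are 1-D numpy float64 arrays indexed
        by candidate move."""
        total_visits = np.zeros_like(local_visits)
        total_wins = np.zeros_like(local_wins)
        comm.Allreduce(local_visits, total_visits, op=MPI.SUM)
        comm.Allreduce(local_wins, total_wins, op=MPI.SUM)
        return total_visits, total_wins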

 

The results between zen-1c, mfgo-16c and pachi-23c are interesting.

 

Zen wins about 60% against many faces and pachi, but pachi wins 70% against
many faces.  There aren't very many games so this could be just a
statistical anomaly.

 

david

 

From: computer-go-boun...@computer-go.org
[mailto:computer-go-boun...@computer-go.org] On Behalf Of Jean-loup Gailly
Sent: Thursday, February 18, 2010 1:31 AM
To: computer-go
Subject: Re: [computer-go] Strong programs on cgos 19x19?

 

 The strong pachi is really strong!  What hardware is it running on?

 Can you say how it differs from the vanilla pachi?

 

It's exactly the same software. The only difference is that it is

running on 23 cores. I am amazed at how well MCTS scales on 19x19.

Looking forward to desktop machines with thousands of cores

in a few years...

 

Jean-loup

 

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Strong programs on cgos 19x19?

2010-02-17 Thread David Fotland
The strong pachi is really strong!  What hardware is it running on?  Can you
say how it differs from the vanilla pachi?

 

David

 

From: computer-go-boun...@computer-go.org
[mailto:computer-go-boun...@computer-go.org] On Behalf Of Jean-loup Gailly
Sent: Wednesday, February 17, 2010 9:50 AM
To: computer-go
Subject: Re: [computer-go] Strong programs on cgos 19x19?

 

 or the strong version of pachi.

 

Done.

 

Jean-loup

2010/2/16 David Fotland fotl...@smart-games.com

My old MPI code had a scaling bug.  Performance scaling (playouts per
second) was linear, but the strength did not scale well, and 64 cores was
weaker than 32 cores.  I have a 16 core cluster of my own now (four 2.3 GHz
Q8200 quad core), and I discovered that the MPI code hangs when using MPICH2
rather than the Microsoft MPI library.  So last week I rewrote it with a
different algorithm, and it seems to scale much better.

It's on CGOS now, for at least a few days.  It would be great if some other
strong programs could join, zen, or fuego-8c, or aya-4c, or the strong
version or pachi.

Thanks,

David


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

 

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

[computer-go] Strong programs on cgos 19x19?

2010-02-16 Thread David Fotland
My old MPI code had a scaling bug.  Performance scaling (playouts per
second) was linear, but the strength did not scale well, and 64 cores was
weaker than 32 cores.  I have a 16 core cluster of my own now (four 2.3 GHz
Q8200 quad core), and I discovered that the MPI code hangs when using MPICH2
rather than the Microsoft MPI library.  So last week I rewrote it with a
different algorithm, and it seems to scale much better.

It's on CGOS now, for at least a few days.  It would be great if some other
strong programs could join, zen, or fuego-8c, or aya-4c, or the strong
version or pachi.

Thanks,

David


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] Rank on servers at 9x9

2010-02-14 Thread David Fotland

Many Faces has the same issue.  The pruning and tuning that is required for
19x19  doesn't help 9x9.  It seems that now the programs are strong enough
that 9x9 requires a good opening book, and I'd rather spend my time making
19x19 stronger.

David

 
 Zen's algorithm is getting heavier and heavier. It works well on 19x19
 but does not so on 9x9. 
 --
 Yamato
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] Dynamic Komi's basics

2010-02-11 Thread David Fotland
Many Faces does almost the same thing (handicap games with black only, 7
points per handicap stone, decreasing linearly with move number to move 90).
It looks like this change gained about half a rank on KGS.
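
A small sketch of the linear scheme just described; the Pachi variant quoted below uses 7.5 points per stone and fades out at move 200 instead:

    def extra_komi(handicap_stones, move_number,
                   points_per_stone=7.0, fade_out_move=90):
        """Linear dynamic komi for handicap games as Black: full value at move 0,
        falling linearly to zero at fade_out_move.  The Pachi variant quoted
        below uses points_per_stone=7.5 and fade_out_move=200."""
        remaining = max(0.0, 1.0 - move_number / fade_out_move)
        return points_per_stone * handicap_stones * remaining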

David

 -Original Message-
 From: computer-go-boun...@computer-go.org [mailto:computer-go-
 boun...@computer-go.org] On Behalf Of Petr Baudis
 Sent: Thursday, February 11, 2010 4:52 AM
 To: computer-go
 Subject: Re: [computer-go] Dynamic Komi's basics
 
   Hi!
 
   I forgot two important points:
 
 On Thu, Feb 11, 2010 at 01:06:34PM +0100, Petr Baudis wrote:
   On the other hand, 9 handicaps are supposedly giving an advantage of
 90 to
   120 points, so my natural thought would be that the bot would give
 itself at
   least a negative komi of that many points ?
  
   I can't figure out well how komi is determined at first move.
 
  extra_komi = 7.5 * handicap_stones_count
 
  Then it is linearly decreased until it hits 0 at move 200.
 
   The amount of extra komi applied is determined at the tree leaf where
 it is applied. That is, it's 1 - current_move_number / 200 in tree root,
 but
 deeper in the tree, it's 1 - (current_move_number + node_depth) / 200.
 This ensures some kind of sanity especially when reusing older trees,
 the values have well defined characteristics.
 
  This is the most naive implementation. In practice, neither extra_komi
  determination nor its application throughout the game should be linear,
  probably. There is a lot of experiments to be done. :-)
 
   In relation with the previous question, I'm wondering how komi is
 determined
   and what its value is for every handicap game ( as black and as
 white). Is
   there a specific value for each ( before first move is played) or is
 it only
   determined by the way it was programmed and the programmer's
 preferences ?
 
   Dynamic komi is used only in games where the program is black. I have
 found that it does not work well at all when the program is white, it
 played too slow moves, then found itself hopelessly behind when it was
 too late to do anything about it. But I also think UCT without dynamic
 komi plays a lot worse as black than white in high handicap games, so
 the problem to solve is not as big (if indeed any).
 
 --
   Petr Pasky Baudis
 A great many people think they are thinking when they are merely
 rearranging their prejudices. -- William James
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] benchmark tests for static evaluation functions

2010-01-17 Thread David Fotland
I think you can only evaluate static evaluation in the context of a search
and a tournament between programs.  You could start with a simple 1-ply
search and play against gnugo.  Strength in life and death or predicting pro
moves doesn't correlate with the ability to win games.

David

-Original Message-
From: computer-go-boun...@computer-go.org
[mailto:computer-go-boun...@computer-go.org] On Behalf Of Thomas Wolf
Sent: Sunday, January 17, 2010 9:03 AM
To: computer-go@computer-go.org
Subject: [computer-go] benchmark tests for static evaluation functions

Last year I was working on a static evaluation function.

Does anyone know references about published benchmark tests for static
evaluation functions, for example, in predicting moves in professional games
or best moves in life and death problems or predicting the status of
semeai problems?

The published benchmarks need not be for a static evaluation function in the
traditional sense, they could be for an opening book or a MCTS program with
very short times available.

Thanks,

Thomas Wolf
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] 13x13 human vs computer

2010-01-13 Thread David Fotland
Many Faces is not 6 dan on 9x9.  More like a strong 3 dan.

David

 -Original Message-
 From: computer-go-boun...@computer-go.org [mailto:computer-go-
 boun...@computer-go.org] On Behalf Of Stefan Kaitschick
 Sent: Wednesday, January 13, 2010 1:46 AM
 To: computer-go
 Subject: Re: [computer-go] 13x13 human vs computer
 
 9*9: 6 dan
 19*19 :1 kyu
 13*13   1 dan?
 
 not the expected interpolation :-)
  Looks like programming for a specific board size is important.
 
 Stefan
 
 - Original Message -
 From: David Fotland fotl...@smart-games.com
 To: 'computer-go' computer-go@computer-go.org
 Sent: Monday, January 11, 2010 2:23 AM
 Subject: RE: [computer-go] 13x13 human vs computer
 
 
 On 13x13, Many Faces is probably 1 Dan on KGS.  I don't know how that
 translates to German ranks, but probably 2 stones is fair, or 3 stones if
 you want the computer to likely win.
 
 David
 
  -Original Message-
  From: computer-go-boun...@computer-go.org [mailto:computer-go-
  boun...@computer-go.org] On Behalf Of Ingo Althöfer
  Sent: Sunday, January 10, 2010 11:17 AM
  To: computer-go@computer-go.org
  Subject: [computer-go] 13x13 human vs computer
 
  Hello,
  at a public event (during an exhibition on Claude
  Shannon; in Nixdorf museum in Paderborn) I want to
  arrange an exhibition game human vs computer go on
  13x13 board. (Thinking time about 45 minutes for both
  sides.)
 
  Does someone here know about human vs computer games
  on 13x13?
 
  The human opponents will be around 3-dan or 2-dan (German
  amateur level). What would be an appropriate handicap when
  they play against, for instance, Many Faces of Go on a
  standard dual core notebook?
 
  Ingo.
 
  --
  Preisknaller: GMX DSL Flatrate für nur 16,99 Euro/mtl.!
  http://portal.gmx.net/de/go/dsl02
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/
 
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/
 
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] 13x13 human vs computer

2010-01-10 Thread David Fotland
On 13x13, Many Faces is probably 1 Dan on KGS.  I don’t know how that
translates to German ranks, but probably 2 stones is fair, or 3 stones if
you want the computer to likely win.

David

 -Original Message-
 From: computer-go-boun...@computer-go.org [mailto:computer-go-
 boun...@computer-go.org] On Behalf Of Ingo Althöfer
 Sent: Sunday, January 10, 2010 11:17 AM
 To: computer-go@computer-go.org
 Subject: [computer-go] 13x13 human vs computer
 
 Hello,
 at a public event (during an exhibition on Claude
 Shannon; in Nixdorf museum in Paderborn) I want to
 arrange an exhibition game human vs computer go on
 13x13 board. (Thinking time about 45 minutes for both
 sides.)
 
 Does someone here know about human vs computer games
 on 13x13?
 
 The human opponents will be around 3-dan or 2-dan (German
 amateur level). What would be an appropriate handicap when
 they play against, for instance, Many Faces of Go on a
 standard dual core notebook?
 
 Ingo.
 
 --
 Preisknaller: GMX DSL Flatrate für nur 16,99 Euro/mtl.!
 http://portal.gmx.net/de/go/dsl02
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] Pattern radius, fast probability player

2010-01-02 Thread David Fotland
You say radius = 3, then 3x3 patterns.  Which is it?  Radius 3 would be 5x5
to 7x7, depending on how you define the radius.

David

 -Original Message-
 From: computer-go-boun...@computer-go.org [mailto:computer-go-
 boun...@computer-go.org] On Behalf Of Petr Baudis
 Sent: Saturday, January 02, 2010 4:49 AM
 To: computer-go
 Subject: [computer-go] Pattern radius, fast probability player
 
 On Wed, Dec 16, 2009 at 08:57:24PM +0100, Rémi Coulom wrote:
  The problem is that I am not using the same set of features for
  biasing the tree, and for playouts. Playouts only use fast light
  features. The tree part uses slow complex features. In particular, I
  use patterns of radius 3 and 4 in the tree, and only radius 3 in the
  playouts. When 3x3 patterns are learnt together with r=4 patterns,
  they get different gammas.
 
   Interesting, your paper said that you are using patterns up to r=10,
 did you find out that anything larger than r=4 is irrelevant in
 practice?
 
   I have trouble even *nearing* the performance you reported; you say
 you can play 13500 games per second, do you have any data on how fast
 your engine runs with uniformly random playouts to put this in scale?
 My engine does 20k games/s with random playouts, 10k games/s with
 random playouts and board implementation incrementally maintaining 3x3
 patterns, and 1600 games/s when using probability distribution. There
 is some room for optimization, but not to reach 13k games/s...
 
   So I wonder if you have any tips for fast implementation of the
 probability distribution based simulations? Do you maintain the
 probability distribution itself incrementally over moves, or only
 the shape features?
 
   Thanks,
 
 --
   Petr Pasky Baudis
 A lot of people have my books on their bookshelves.
 That's the problem, they need to read them. -- Don Knuth
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] one more look at the scoring function

2009-12-21 Thread David Fotland
When I've looked at these losses they were due either to bugs, or to bias in
the playouts, for example when there is a semeai.  The program will think it
has 80% win rate when it is actually already behind.

David

-Original Message-
From: computer-go-boun...@computer-go.org
[mailto:computer-go-boun...@computer-go.org] On Behalf Of Petr Baudis
Sent: Monday, December 21, 2009 6:44 AM
To: computer-go
Subject: Re: [computer-go] one more look at the scoring function

On Mon, Dec 21, 2009 at 01:57:16PM +0200, Petri Pitkanen wrote:
 They also all lose games in the endgame in the same manner. Having won a game
 by 30 pts, they start giving away those points for - sometimes - imaginary
 safety, allowing the other player to come within striking distance. Some sort
 of dynamic komi would be nice in the endgame, but would probably not work.

  If this leads to a loss, I think that should happen only when there is
some consistent bias in the tree that misevaluates some part of the
position, and that imposing the komi *might* in some specific cases
patch up the deficiency, but probably won't have big impact in general
case. This is good experimentation subject though.

Petr Pasky Baudis
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] Simple gogui problem

2009-12-14 Thread David Fotland
This is what I do (no tree, just a hash table).  The cost is that the nodes
become very large because every node also holds all the child information,
all rave counters, etc.  So memory usage is higher.
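
A minimal sketch of the "hash table instead of a tree" layout being discussed: nodes are keyed by the position's Zobrist hash, and each entry carries per-child visit, win, and RAVE counters, which is exactly why the entries get large. Field names are illustrative, not Many Faces' actual structures.

    from dataclasses import dataclass, field

    @dataclass
    class ChildStats:
        visits: int = 0
        wins: int = 0
        rave_visits: int = 0
        rave_wins: int = 0

    @dataclass
    class HashNode:
        visits: int = 0
        # one ChildStats per candidate move (this is what makes entries large)
        children: dict = field(default_factory=dict)   # move -> ChildStats

    table = {}   # Zobrist hash of the position -> HashNode

    def lookup_or_create(zobrist_hash):
        """Transposition-table style lookup: no explicit tree, just hashed nodes."""
        node = table.get(zobrist_hash)
        if node is None:
            node = HashNode()
            table[zobrist_hash] = node
        return node

A plain per-move tree with small nodes trades this memory for pointer bookkeeping; keeping everything in one hashed entry is friendlier to RAVE updates but costs space.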

 

David

 

From: computer-go-boun...@computer-go.org
[mailto:computer-go-boun...@computer-go.org] On Behalf Of Corey Harris
Sent: Monday, December 14, 2009 5:34 AM
To: computer-go
Subject: Re: [computer-go] Simple gogui problem

 

Is it possible to just use a hash table (no tree) and just update the hash
entry's node? Advantages/disadvantages of this approach?

On Sun, Dec 13, 2009 at 10:30 AM, Corey Harris charri...@gmail.com wrote:

Was looking for a basic UCT data structure. I guess a tree structure is
created in memory. How is this managed, because memory can be exausted
pretty fast. 

 

. record results for all visited nodes
___

 

Where do you record the results? 

 

I apologize for the simple questions; I'm new at this.

On Sun, Dec 13, 2009 at 9:48 AM, Jason House jason.james.ho...@gmail.com
wrote:

On Dec 13, 2009, at 9:38 AM, Corey Harris charri...@gmail.com wrote:

I know this is a simple issue but I'm not sure of the solution. I am
currently in the very early stages of writing a go engine. I have the board
state and a simple opening library implemented (no play logic yet). I would
like to output debugging/development statements to the gogui shell window.
If the engine sends printf("some output\n"); gogui says "Sent a malformed
response". If it calls fprintf(stderr, "some output\n"); nothing is
displayed.

How can you print messages to the shell without disrupting the message
protocol?

 

Writing to stderr works fine for me, but gogui does not show shell output
immediately. It waits until some point in overall execution before showing
anything in the shell output. 
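
The underlying point is that GTP owns stdout, so anything there that is not a well-formed response confuses the controller, while stderr is free for logging. A bare-bones sketch of the split, not taken from any of the engines discussed here:

#include <cstdio>
#include <cstring>

// Respond on stdout in GTP form: "= <answer>\n\n" (or "? <error>\n\n").
static void gtp_reply(const char* answer) {
    printf("= %s\n\n", answer);
    fflush(stdout);                 // controllers read line by line; flush promptly
}

int main() {
    char line[1024];
    while (fgets(line, sizeof line, stdin)) {
        // Debug output goes to stderr only; it never reaches the GTP parser.
        fprintf(stderr, "got command: %s", line);
        fflush(stderr);

        if (strncmp(line, "quit", 4) == 0) {
            gtp_reply("");
            break;
        }
        gtp_reply("");              // a real engine dispatches every command here
    }
    return 0;
}

If the GUI buffers its shell window, flushing stderr after every write at least guarantees nothing is lost when the output finally appears.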






Also, is there a site that describes the workings of a UCT bot in detail
similar to some chess programming tutorial sites?

 

Not that I'm aware of, but senseis.xmp.net might
be a good place to start. Basic UCT is simple:
. always start at tree root
. pick the child with the highest metric (upper confidence bound on win
rate)
. repeat last step until you reach a leaf
. if simulations of the leaf > N, expand leaf and pick child with highest metric
. play random game
. record results for all visited nodes
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

 

 

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
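
Putting the recipe quoted above into code, a stripped-down selection step using the UCB1 formula, win rate plus C * sqrt(ln(parent visits) / child visits); the Node and Child types here are minimal stand-ins, with no RAVE, priors or progressive widening:

#include <cmath>
#include <cstdint>
#include <limits>
#include <vector>

struct Child { int move = 0; uint32_t visits = 0, wins = 0; };
struct Node  { uint32_t visits = 0; std::vector<Child> children; };

// Pick the child with the highest upper confidence bound on its win rate.
// Unvisited children get +infinity, so every move is tried at least once.
static int select_child(const Node& node, double explore_c) {
    int best = -1;
    double best_val = -1.0;
    for (int i = 0; i < (int)node.children.size(); ++i) {
        const Child& c = node.children[i];
        double val = (c.visits == 0)
            ? std::numeric_limits<double>::infinity()
            : (double)c.wins / c.visits
              + explore_c * std::sqrt(std::log((double)node.visits) / c.visits);
        if (val > best_val) { best_val = val; best = i; }
    }
    return best;   // index into node.children; -1 only if there are no children
}

Descending with select_child until a position is missing from the table, expanding it once it has enough simulations, playing a random game from there, and adding the result to every visited node's counters completes the loop described above.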

RE: [computer-go] Simple gogui problem

2009-12-13 Thread David Fotland
Many Faces keeps the tree from move to move.  I discard nodes with few visits 
(or old nodes) after each move to free up most of the tree memory, but I keep 
the core of the tree.  When MF runs out of memory it garbage collects some 
nodes.

 

David
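
With a hash-keyed table (for example the UctTable sketched earlier in this archive) that between-move cleanup can be as simple as the sweep below; the visit threshold is made up for illustration.

#include <cstdint>

// Sweep the statistics table after each move and drop entries with too few
// visits to be worth keeping.  An age stamp could be tested the same way.
// Entries for positions that can no longer occur are simply never looked up
// again and disappear in a later sweep.
template <class Table>
static void prune_table(Table& table, uint32_t min_visits) {
    for (auto it = table.begin(); it != table.end(); ) {
        if (it->second.visits < min_visits)
            it = table.erase(it);
        else
            ++it;
    }
}

Running it with a modest threshold after every move keeps the heavily searched core while releasing most of the memory, and the same sweep with a higher threshold can serve as the out-of-memory garbage collection mentioned above.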

 

From: computer-go-boun...@computer-go.org 
[mailto:computer-go-boun...@computer-go.org] On Behalf Of Jason House
Sent: Sunday, December 13, 2009 12:27 PM
To: computer-go
Subject: Re: [computer-go] Simple gogui problem

 

On Dec 13, 2009, at 11:30 AM, Corey Harris charri...@gmail.com wrote:

 

Was looking for a basic UCT data structure. I guess a tree structure is created 
in memory. How is this managed, because memory can be exhausted pretty fast? 

 

 

It isn't as fast as you might think. You want to use zobrist hashing for 
looking up nodes. IIRC, Many Faces discards the search tree after each move and 
simply does not create more nodes when it runs out of memory.

 





 

• record results for all visited nodes
___

 

Where do you record the results? 

 

Logically, every node in the search tree has an estimated win rate. It's also 
possible to store the win rate of all follow-up moves for a given node. That's 
friendlier on the cache but uses more memory per node. I'm unsure what most 
bots do.

 

tracking of win rates can be done in a few different ways:

• Total simulations, Win percentage

• Total simulations, # of wins

• Total simulations, # of wins - # losses

• # of wins, # of losses

 

More important than how to store those values is how they're initialized based 
on domain knowledge. 

 





 

I apologize for the simple questions; I'm new at this.

On Sun, Dec 13, 2009 at 9:48 AM, Jason House jason.james.ho...@gmail.com 
wrote:

On Dec 13, 2009, at 9:38 AM, Corey Harris charri...@gmail.com wrote:

I know this is a simple issue but I'm not sure of the solution. I am currently 
in the very early stages of writing a go engine. I have the board state and a 
simple opening library implemented (no play logic yet). I would like to output 
debugging/development statements to the gogui shell window. If the engine sends 
printf("some output\n"); gogui says "Sent a malformed response". If it calls 
fprintf(stderr, "some output\n"); nothing is displayed.

How can you print messages to the shell without disrupting the message protocol?

 

Writing to stderr works fine for me, but gogui does not show shell output 
immediately. It waits until some point in overall execution before showing 
anything in the shell output. 






Also, is there a site that describes the workings of a UCT bot in detail 
similar to some chess programming tutorial sites?

 

Not that I'm aware of, but senseis.xmp.net might be a good place to start. 
Basic UCT is simple:
• always start at tree root
• pick the child with the highest metric (upper confidence bound on win rate)
• repeat last step until you reach a leaf
• if simulations of the leaf > N, expand leaf and pick child with highest metric
• play random game
• record results for all visited nodes
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

 

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Kinds of Zobrist hashes

2009-12-08 Thread David Fotland
I use two values.  It never even occurred to me to use three.

David
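
For concreteness, a sketch of the two-value scheme with 64-bit keys and hypothetical names; the point is that XOR-ing a stone's value in when it appears and out again when it is captured leaves nothing for an explicit "empty" value to do in plain position hashing.

#include <cstdint>
#include <random>

enum { BLACK = 0, WHITE = 1, MAX_POINTS = 19 * 19 };

static uint64_t zobrist[2][MAX_POINTS];   // one random value per colour per point

static void init_zobrist(uint64_t seed) {
    std::mt19937_64 rng(seed);
    for (int c = 0; c < 2; ++c)
        for (int p = 0; p < MAX_POINTS; ++p)
            zobrist[c][p] = rng();
}

// The running hash of a position: start from 0 for the empty board and XOR
// on every change.  Placing and capturing a stone use the same operation.
static inline void toggle_stone(uint64_t& hash, int color, int point) {
    hash ^= zobrist[color][point];
}

An extra value per empty point only matters when "empty" has to hash differently from "not looked at", e.g. when hashing local patterns whose empty and edge points are part of the shape, which is presumably what footnote [1] in the quoted question is about.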

 -Original Message-
 From: computer-go-boun...@computer-go.org [mailto:computer-go-
 boun...@computer-go.org] On Behalf Of Petr Baudis
 Sent: Tuesday, December 08, 2009 2:50 PM
 To: computer-go@computer-go.org
 Subject: [computer-go] Kinds of Zobrist hashes
 
   Hi!
 
   In most papers I've read, three-valued Zobrist hashes are used - per
 intersection, values for empty, black and white coloring [1].
 I'm not clear on why the empty value is needed, however; shouldn't
 black and white values alone work just fine? Does the hash behave better
 with extra values for empty intersections? Has anyone measured it?
 
   [1] In pattern-matching, it is desirable to also use edge coloring.
 
   Thanks,
 
   Petr Pasky Baudis
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] Fuego parameter question

2009-12-05 Thread David Fotland
I'm not testing Fuego against MFGO.  I'm using Fuego as part of the MFGO
regression tests in place of Gnugo.  My test machines are all Windows, as
are all of my test scripts.  So I don't need the strongest Fuego.  I just
want a fast, strong program that is better than gnugo.

 

David

 

From: computer-go-boun...@computer-go.org
[mailto:computer-go-boun...@computer-go.org] On Behalf Of Ben Lambrechts
Sent: Saturday, December 05, 2009 1:45 AM
To: computer-go
Subject: Re: [computer-go] Fuego parameter question

 

If you really want to test MFGO against Fuego, it is better to run Fuego on
a strong Linux machine.
The Cygwin version is significantly slower than the full build I have on the
same machine with Fedora.

I provide the Cygwin build for people who are not familiar enough with Linux
or are not able to build the engines themselves with Cygwin.

---
With kind regards,
Ben Lambrechts

Windows builds for GNU Go and Fuego : http://gnugo.baduk.org/
Fuego opening books : http://gnugo.baduk.org/fuegoob.htm



On Sat, Dec 5, 2009 at 5:54 AM, David Fotland fotl...@smart-games.com
wrote:

Many Faces is getting too strong for Gnugo.  I test using 8K playouts per
move on 19x19 and win about 89% of the games.

I just tried testing against Fuego to get a stronger opponent.  I used
fuego-svn985 from http://gnugo.baduk.org/, already built for Windows.

I ran it with:
fuego c:\go\goprograms\fuego-svn985\fuego -srand 0 -quiet -config config.txt

config.txt is:
uct_param_player ignore_clock 1
uct_param_player max_games 8000
uct_param_search number_threads 1
uct_command_player ponder 0

I expected to win 50 to 60% of the games, but won 88% of 1300 games.  There
were several games that Fuego lost due to a superko violation.

Am I missing a parameter to set the rules to Chinese with superko?

Am I missing a parameter to give the strength I've seen in KGS tournaments?
Perhaps Many Faces is relatively stronger with few playouts due to its
knowledge and Fuego will do better with more playouts?

David

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

 

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Fuego parameter question

2009-12-05 Thread David Fotland
Good suggestion.  I was getting a little worried about overtuning against
Gnugo, since it's the only program I've ever tested against.

David

 -Original Message-
 From: computer-go-boun...@computer-go.org [mailto:computer-go-
 boun...@computer-go.org] On Behalf Of Petr Baudis
 Sent: Saturday, December 05, 2009 2:40 AM
 To: computer-go
 Subject: Re: [computer-go] Fuego parameter question
 
 On Fri, Dec 04, 2009 at 08:54:39PM -0800, David Fotland wrote:
  Many Faces is getting too strong for Gnugo.  I test using 8K playouts
 per
  move on 19x19 and win about 89% of the games.
 
 Is there a specific reason for insisting on even games? I still
 sometimes do tests on 9x9 (e.g. some slight re-tuning for tomorrow's
 tournament) and I'm simply testing with giving gnugo black with no
 komi; I was actually already thinking about giving reverse komi
 to get even more accurate results.
 
   Petr Pasky Baudis
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] Fuego parameter question

2009-12-05 Thread David Fotland
Thanks.  I tried giving both Fuego and MFGO 16K playouts and stopped with
MFGO winning 123/135 = 91% +- 4%.  I'm starting again with your suggestions:

uct_param_player ignore_clock 1
uct_param_player max_games 16000
uct_param_search number_threads 1
uct_param_player ponder 0
go_param_rules ko_rule pos_superko
go_param timelimit 99
uct_param_search max_nodes 400000
uct_param_search lock_free 1
uct_param_search virtual_loss 1

David

 -Original Message-
 From: computer-go-boun...@computer-go.org [mailto:computer-go-
 boun...@computer-go.org] On Behalf Of Darren Cook
 Sent: Saturday, December 05, 2009 4:05 PM
 To: computer-go
 Subject: Re: [computer-go] Fuego parameter question
 
  uct_param_player ignore_clock 1
  uct_param_player max_games 8000
  uct_param_search number_threads 1
  uct_command_player ponder 0
 
 I learnt the other day that ignore_clock only ignores the game time
 settings, but there is still a default 10 seconds per move. To get rid
 of that add:
   go_param timelimit 99
 
 You also need to set max_nodes quite high or Fuego will keep stopping to
 clear out its tree. I'm setting it to max_games*50, so for 8000:
    uct_param_search max_nodes 400000
 
 According to my notes fuego uses 75M + (65M per million max_nodes). So
 15 million nodes will use about 1Gb.  (That is on 32-bit linux.)
 
  I expected to win 50 to 60% of the games, but won 88% of 1300 games.
 There
  were several games that Fuego lost due to a superko violation.
 
  Am I missing a parameter to set the rules to Chinese with superko?
 
 go_rules chinese
 
 (I think the default is Tromp & Taylor, but I'm surprised superko is not part of it)
 
 You can also use go_param_rules to set each rule aspect separately.
 (BTW, I find using gogui is the best way to understand the fuego
 settings.)
 
  Am I missing a parameter to give the strength I've seen in KGS
 tournaments?
  Perhaps Many Faces is relatively stronger with few playouts due to its
  knowledge and Fuego will do better with more playouts?
 
 Can you compare the total CPU time spent on a game by each of Fuego and
 Many Faces and, if Fuego is using less, increase max_games accordingly?
 Or, given that you just want a strong opponent for regression testing,
 forget cpu time and simply keep doubling max_games until you reach 50% :-)
 
 To answer your question I also have these in my config, which I got from
 [1].
  uct_param_search lock_free 1
  uct_param_search virtual_loss 1
 
 The first makes it stronger when using multiple threads. I'm not sure
 what the second is doing...
 
 Darren
 
 [1]:http://www.cs.ualberta.ca/TechReports/2009/TR09-09/TR09-09.pdf
 
 --
 Darren Cook, Software Researcher/Developer
 http://dcook.org/gobet/  (Shodan Go Bet - who will win?)
 http://dcook.org/mlsn/ (Multilingual open source semantic network)
 http://dcook.org/work/ (About me and my work)
 http://dcook.org/blogs.html (My blogs and articles)
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
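
For reference, Darren's memory note above (75M plus 65M per million max_nodes, on his 32-bit Linux build) works out as in this throwaway helper; the constants will differ on other builds.

// Rough Fuego memory estimate from the figures quoted above.
static double fuego_mem_mb(double max_nodes) {
    return 75.0 + 65.0 * (max_nodes / 1e6);
}
// fuego_mem_mb(400000)    ~=  101 MB
// fuego_mem_mb(15000000)  ~= 1050 MB, i.e. about 1 GB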


[computer-go] Fuego parameter question

2009-12-04 Thread David Fotland
Many Faces is getting too strong for Gnugo.  I test using 8K playouts per
move on 19x19 and win about 89% of the games.

I just tried testing against Fuego to get a stronger opponent.  I used
fuego-svn985 from http://gnugo.baduk.org/, already built for Windows.

I ran it with:
fuego c:\go\goprograms\fuego-svn985\fuego -srand 0 -quiet -config config.txt

config.txt is:
uct_param_player ignore_clock 1
uct_param_player max_games 8000
uct_param_search number_threads 1
uct_command_player ponder 0

I expected to win 50 to 60% of the games, but won 88% of 1300 games.  There
were several games that Fuego lost due to a superko violation.

Am I missing a parameter to set the rules to Chinese with superko?

Am I missing a parameter to give the strength I've seen in KGS tournaments?
Perhaps Many Faces is relatively stronger with few playouts due to its
knowledge and Fuego will do better with more playouts?

David

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] Live broadcasting at UEC Cup

2009-11-29 Thread David Fotland
I watched the pro matches.  It looked like a 4 dan beat Zen with 6 stones,
and a 9 dan beat KCC with 6 stones.

David

 -Original Message-
 From: computer-go-boun...@computer-go.org [mailto:computer-go-
 boun...@computer-go.org] On Behalf Of Ian Osgood
 Sent: Sunday, November 29, 2009 9:16 AM
 To: computer-go
 Subject: Re: [computer-go] Live broadcasting at UEC Cup
 
 Thanks for the early report!  (I was sorry not to see Fudo Go in the
 tournament. Were you involved with any of the other teams?)
 
 Here are the second day knockout tournament unofficial results. Any
 mistakes are my own. Thanks to the organizers for the live screencast!
 
   1 KCC Igo
   2 Katsunari
   3 Zen
   4 Shikousakugo
   5 Many Faces of Go
   6 Erica
   7 Kiseki
   8 Galileo  (upset Crazy Stone, due to a Chinese/Japanese rules
 mismatch)
   9 Crazy Stone
 10 Aya
 11 GOGATAKI
 12 Rock
 13 Nomitan
 14 Kinoa Igo
 15 Boon
 16 Kerebos
 
 Also, several exhibition matches were held (Zen beat Crazy Stone, and
 MFGO beat Aya).  I did not stay up for the two professional exhibition
 games.
 
 Ian
 
 On Nov 28, 2009, at 5:42 AM, Hideki Kato wrote:
 
  You can watch some interesting games and exhibition games on the
  second day of the third UEC Cup at
  http://jsb.cs.uec.ac.jp/~igo/eng/broadcast.html.
 
  Schedule: http://jsb.cs.uec.ac.jp/~igo/eng/schedule.html
 
  Unofficial quick results of the first day:
  pos program win-lose-draw
  1 Zen   6-0-0
  2 Nomitan   5-1-0
  3 KCC Igo   5-1-0
  4 Kinoa Igo 5-1-0
  5 GOGATAKI  4-1-1
  6 Shikosakugo   4-2-0
  7 Erica 4-2-0
  8 Aya   4-2-0
  9 Kiseki4-2-0
  10 Rock 4-2-0
  11 boon 3-2-1
  12 Kerberos 3-3-0
  13 Galileo  3-3-0
  --- cut line to the second day ---
  14 Island   3-3-0
  15 caren3-3-0
  16 PerStone 3-3-0
  17 Tombo3-3-0
  18 Sango alpha  3-3-0
  19 Igoppi   2-4-0
  20 Boozer   2-4-0
  21 Kasumi   2-4-0
  22 ME_arc   2-4-0
  23 Mayoigo  2-4-0
  24 Martha   2-4-0
  25 Njarahojara  1-5-0
  26 kudok1-5-0
  27 Gekishin 0-6-0
  28 Sanshine 0-6-0
 
  Solkoff and SB are omitted.
 
  Seeded programs to the second day:
  1  Crazy Stone
  2  Many Faces of Go
  3  Katsunari
 
  The Third UEC Cup: http://jsb.cs.uec.ac.jp/~igo/eng/index.html
 
  Short random comments:
  Aya was unlucky, losing two games against Erica and KCC.  Erica got
  much stronger by implementing Remi's larger patterns; it lost two
  games due to a communication problem and to time wasted by a sleeping
  laptop.  Zen vs. KCC was a close game.  Zen uses 512 cores.
 
  Hideki
  --
  g...@nue.ci.i.u-tokyo.ac.jp (Kato)
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/
 
 
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [SPAM] Re: [computer-go] Live broadcasting at UEC Cup

2009-11-29 Thread David Fotland
I think it was a single elimination, not a swiss tournament, and I think
many of the strong programs were in the same bracket.  I think Many Faces
lost to KCC in an early round and wasn't paired against the other strong
programs.  We'll have to wait for the full results to check.

 

David

 

From: computer-go-boun...@computer-go.org
[mailto:computer-go-boun...@computer-go.org] On Behalf Of Olivier Teytaud
Sent: Sunday, November 29, 2009 10:18 AM
To: computer-go
Subject: Re: [SPAM] Re: [computer-go] Live broadcasting at UEC Cup

 

Wow! This is quite surprising to me:

 1 KCC Igo
 2 Katsunari
 3 Zen
 4 Shikousakugo
 5 Many Faces of Go
 6 Erica
 7 Kiseki
  8 Galileo  (upset Crazy Stone, due to a Chinese/Japanese rules mismatch)
  9 Crazy Stone
 10 Aya

Are there so many programs stronger than Zen,
ManyFaces, CrazyStone and Aya?

Was there, as sometimes in the past,
a problem with the communication time or something like that, or are there
really so many very strong bots now?

Best regards,
Olivier



2009/11/29 Ian Osgood i...@quirkster.com

Thanks for the early report!  (I was sorry not to see Fudo Go in the
tournament. Were you involved with any of the other teams?)

Here are the second day knockout tournament unofficial results. Any mistakes
are my own. Thanks to the organizers for the live screencast!

 1 KCC Igo
 2 Katsunari
 3 Zen
 4 Shikousakugo
 5 Many Faces of Go
 6 Erica
 7 Kiseki
 8 Galileo  (upset Crazy Stone, due to a Chinese/Japanese rules mismatch)
 9 Crazy Stone
10 Aya
11 GOGATAKI
12 Rock
13 Nomitan
14 Kinoa Igo
15 Boon
16 Kerebos

Also, several exhibition matches were held (Zen beat Crazy Stone, and MFGO
beat Aya).  I did not stay up for the two professional exhibition games.

Ian



On Nov 28, 2009, at 5:42 AM, Hideki Kato wrote:

You can watch some interesting games and exhibition games on the
second day of the third UEC Cup at
http://jsb.cs.uec.ac.jp/~igo/eng/broadcast.html.

Schedule: http://jsb.cs.uec.ac.jp/~igo/eng/schedule.html

Unofficial quick results of the first day:
pos program win-lose-draw
1 Zen   6-0-0
2 Nomitan   5-1-0   
3 KCC Igo   5-1-0
4 Kinoa Igo 5-1-0
5 GOGATAKI  4-1-1
6 Shikosakugo   4-2-0
7 Erica 4-2-0   
8 Aya   4-2-0
9 Kiseki4-2-0
10 Rock 4-2-0
11 boon 3-2-1
12 Kerberos 3-3-0
13 Galileo  3-3-0
--- cut line to the second day ---
14 Island   3-3-0
15 caren3-3-0
16 PerStone 3-3-0
17 Tombo3-3-0
18 Sango alpha  3-3-0
19 Igoppi   2-4-0
20 Boozer   2-4-0
21 Kasumi   2-4-0
22 ME_arc   2-4-0
23 Mayoigo  2-4-0
24 Martha   2-4-0
25 Njarahojara  1-5-0
26 kudok1-5-0
27 Gekishin 0-6-0
28 Sanshine 0-6-0

Solkoff and SB are omitted.

Seeded programs to the second day:
1  Crazy Stone
2  Many Faces of Go
3  Katsunari

The Third UEC Cup: http://jsb.cs.uec.ac.jp/~igo/eng/index.html

Short random comments:
Aya was unlucky, losing two games against Erica and KCC.  Erica got
much stronger by implementing Remi's larger patterns; it lost two
games due to a communication problem and to time wasted by a sleeping
laptop.  Zen vs. KCC was a close game.  Zen uses 512 cores.

Hideki
--
g...@nue.ci.i.u-tokyo.ac.jp (Kato)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/




-- 
=
Olivier Teytaud (TAO-inria) olivier.teyt...@inria.fr
Tel (33)169154231 / Fax (33)169156586
Equipe TAO (Inria-Futurs), LRI, UMR 8623(CNRS - Universite Paris-Sud),
bat 490 Universite Paris-Sud 91405 Orsay Cedex France
(one of the 56.5 % of french who did not vote for Sarkozy in 2007)



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [SPAM] Re: [computer-go] Live broadcasting at UEC Cup

2009-11-29 Thread David Fotland
I'd like to thank all of the people who organized the UEC tournament for
providing machines and operators to allow Many Faces and others to
participate.  I'd like to suggest that the UEC organizers consider using a
Swiss tournament system in the future since it gives a more accurate
assessment of program strength.

Regards,

David Fotland

 -Original Message-
 From: computer-go-boun...@computer-go.org [mailto:computer-go-
 boun...@computer-go.org] On Behalf Of 
 Sent: Sunday, November 29, 2009 5:08 PM
 To: computer-go
 Subject: Re: [SPAM] Re: [computer-go] Live broadcasting at UEC Cup
 
 David,
 
 MFG lost to KCC at the second stage of the tournament,
 and then won against Galileo and Erica, obtaining fifth position.
 
 I admit that most of the strong programs, such as KCC, Aya, MFG, and Zen,
 were assigned to the same right-hand group, eliminating each other, while
 in the left-hand group CrazyStone was the only strong Monte-Carlo program.
 The assignment was not balanced, but because we had decided
 how to assign programs to the tournament according to the results of
 the preliminary league, we couldn't change it.
 
 Finally, we thank all the participants of the third UEC Cup, including
 Hideki Kato, David Fotland, Remi Coulom, and Shih-Chieh Huang.
 
  Masakazu Muramatsu
 
 2009/11/30 David Fotland fotl...@smart-games.com:
  I think it was a single elimination, not a swiss tournament, and I think
  many of the strong programs were in the same bracket.  I think Many
 Faces
  lost to KCC in an early round and wasn’t paired against the other strong
  programs.  We’ll have to wait for the full results to check.
 
 
 
  David
 
 
 
  From: computer-go-boun...@computer-go.org
  [mailto:computer-go-boun...@computer-go.org] On Behalf Of Olivier
 Teytaud
  Sent: Sunday, November 29, 2009 10:18 AM
  To: computer-go
  Subject: Re: [SPAM] Re: [computer-go] Live broadcasting at UEC Cup
 
 
 
  Wow! This is quite surprising to me:
 
  1 KCC Igo
  2 Katsunari
  3 Zen
  4 Shikousakugo
  5 Many Faces of Go
  6 Erica
  7 Kiseki
   8 Galileo  (upset Crazy Stone, due to a Chinese/Japanese rules
 mismatch)
   9 Crazy Stone
  10 Aya
 
  Are there so many programs stronger than Zen,
  ManyFaces, CrazyStone and Aya?
 
  Was there, as sometimes in the past,
  a problem with the communication time or something like that, or are there
  really so many very strong bots now?
 
  Best regards,
  Olivier
 
  2009/11/29 Ian Osgood i...@quirkster.com
 
  Thanks for the early report!  (I was sorry not to see Fudo Go in the
  tournament. Were you involved with any of the other teams?)
 
  Here are the second day knockout tournament unofficial results. Any
 mistakes
  are my own. Thanks to the organizers for the live screencast!
 
   1 KCC Igo
   2 Katsunari
   3 Zen
   4 Shikousakugo
   5 Many Faces of Go
   6 Erica
   7 Kiseki
   8 Galileo  (upset Crazy Stone, due to a Chinese/Japanese rules
 mismatch)
   9 Crazy Stone
  10 Aya
  11 GOGATAKI
  12 Rock
  13 Nomitan
  14 Kinoa Igo
  15 Boon
  16 Kerebos
 
  Also, several exhibition matches were held (Zen beat Crazy Stone, and
 MFGO
  beat Aya).  I did not stay up for the two professional exhibition games.
 
  Ian
 
  On Nov 28, 2009, at 5:42 AM, Hideki Kato wrote:
 
  You can watch some interesting games and exhibition games on the
  second day of the third UEC Cup at
  http://jsb.cs.uec.ac.jp/~igo/eng/broadcast.html.
 
  Schedule: http://jsb.cs.uec.ac.jp/~igo/eng/schedule.html
 
  Unofficial quick results of the first day:
  pos program     win-lose-draw
  1 Zen           6-0-0
  2 Nomitan       5-1-0
  3 KCC Igo       5-1-0
  4 Kinoa Igo     5-1-0
  5 GOGATAKI      4-1-1
  6 Shikosakugo   4-2-0
  7 Erica 4-2-0
  8 Aya           4-2-0
  9 Kiseki        4-2-0
  10 Rock         4-2-0
  11 boon         3-2-1
  12 Kerberos     3-3-0
  13 Galileo      3-3-0
  --- cut line to the second day ---
  14 Island       3-3-0
  15 caren        3-3-0
  16 PerStone     3-3-0
  17 Tombo        3-3-0
  18 Sango alpha  3-3-0
  19 Igoppi       2-4-0
  20 Boozer       2-4-0
  21 Kasumi       2-4-0
  22 ME_arc       2-4-0
  23 Mayoigo      2-4-0
  24 Martha       2-4-0
  25 Njarahojara  1-5-0
  26 kudok        1-5-0
  27 Gekishin     0-6-0
  28 Sanshine     0-6-0
 
  Solkoff and SB are omitted.
 
  Seeded programs to the second day:
  1  Crazy Stone
  2  Many Faces of Go
  3  Katsunari
 
  The Third UEC Cup: http://jsb.cs.uec.ac.jp/~igo/eng/index.html
 
  Short random comments:
  Aya was unlucky, losing two games against Erica and KCC.  Erica got
  much stronger by implementing Remi's larger patterns; it lost two
  games due to a communication problem and to time wasted by a sleeping
  laptop.  Zen vs. KCC was a close game.  Zen uses 512 cores.
 
  Hideki
  --
  g...@nue.ci.i.u-tokyo.ac.jp (Kato)
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go

RE: [computer-go] Joseki Book

2009-11-09 Thread David Fotland
The traditional program did this, because it only did a very shallow global 
search.  So in a two play global search, an entire joseki sequence would be one 
ply.

 

In the MCTS version, joseki moves get a strong bias, but there is no special 
handling of sequences.

 

David
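
A hedged sketch of what "a strong bias" can look like in this setting: when a node is expanded, moves that continue a known joseki are seeded with prior (virtual) wins before any playouts are run. The JosekiBook interface and the numbers are invented for illustration, not Many Faces internals.

#include <cstdint>
#include <vector>

struct Candidate {
    int      move = 0;
    uint32_t visits = 0;
    uint32_t wins = 0;
};

// Hypothetical lookup: does `move` continue a known joseki in its corner?
struct JosekiBook {
    bool matches(uint64_t corner_hash, int move) const;
};

// Seed new candidates with prior counts.  Joseki continuations get a much
// larger prior, so the search tries them first but can still abandon them
// when the playouts disagree.
static void seed_priors(std::vector<Candidate>& cands,
                        const JosekiBook& book, uint64_t corner_hash) {
    for (Candidate& c : cands) {
        uint32_t prior_visits = 10, prior_wins = 5;    // mild default prior
        if (book.matches(corner_hash, c.move)) {
            prior_visits = 50;                         // strong joseki bias
            prior_wins   = 45;
        }
        c.visits += prior_visits;
        c.wins   += prior_wins;
    }
}

Because the bias is per move rather than per sequence, the search is free to leave the joseki at any point, which is the contrast with the old one-ply treatment.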

 

From: computer-go-boun...@computer-go.org 
[mailto:computer-go-boun...@computer-go.org] On Behalf Of ? ?
Sent: Monday, November 09, 2009 7:54 PM
To: computer-go
Subject: Re: [computer-go] Joseki Book

 

From what David Fotland has said, Many Faces will lay out whole josekis as 
single moves in its searches, which seems like a great way of biasing the 
MCTS tree early on.

On Mon, Nov 9, 2009 at 13:13, Robert Jasiek jas...@snafu.de wrote:

Magnus Persson wrote:

I think it may make more sense to break down the joseki into common local 
patterns

 

Patterns are doubtful. Even the best shape can be dead.

-- 
robert jasiek


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

 

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Joseki Book

2009-11-09 Thread David Fotland
Two ply (typo) was an example.  The original program did one ply global search 
plus local quiescence.  Local quiescence for a joseki move was to complete a 
few sequences.  Obviously not ideal, but better than trying to evaluate a 
position in the middle of a joseki.

David

 -Original Message-
 From: computer-go-boun...@computer-go.org [mailto:computer-go-
 boun...@computer-go.org] On Behalf Of Robert Jasiek
 Sent: Monday, November 09, 2009 9:38 PM
 To: computer-go
 Subject: Re: [computer-go] Joseki Book
 
 David Fotland wrote:
  in a two play global search, an entire joseki sequence would be one ply.
 
 This works only ALA the programs don't depart from stored josekis,
 right? How could they adapt to non-standard global side-conditions while
 treating a joseki as fixed one-ply sequence? They must iteratively
 broaden their search again, at least locally while embedding the local
 stable results in a global judgement context. So pure one ply seems
 improper to me, although one might try to start from it using multiple
 local pseudo-one-ply regions before combining them by means of a
 possibly / hopefully only / rather global (and therefore relatively
 thin) search.
 
 When you say two play, do you want to stop global search after exactly
 two moves? Wouldn't that be an exaggeration?
 
 --
 robert jasiek
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] New list member

2009-11-01 Thread David Fotland
Knowpap.txt is how The Many Faces of Go represents knowledge.  Smart Go is a 
different program.

Until a few years ago the strongest programs all used knowledge-intensive 
approaches with highly pruned local searches, like Many Faces.

Now the strong programs all use Monte Carlo Tree Search, although Many Faces of 
Go is a hybrid.

There are only a few programs that use Neural Networks, and they are not among 
the strongest.

Regards,

David

 Most of the programs out there now are Neural Networks, it seems. Are
 there any
 who tried to play with knowledge hard-wired in there, such as Smart Go
 (http://www.smart-games.com/knowpap.txt) ? And if so, what is that
 knowledge?
 


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] about the solution of Japanese Rule

2009-10-31 Thread David Fotland
I'm not sure what you are asking, but when you are playing with Japanese
rules, don't pass in the middle game.  The simple solution is to wait until
the game is over and all dame are filled before you pass.

 

From: computer-go-boun...@computer-go.org
[mailto:computer-go-boun...@computer-go.org] On Behalf Of xiefan
Sent: Saturday, October 31, 2009 5:40 AM
To: computer-go@computer-go.org
Subject: [computer-go] about the solution of Japanese Rule

 

Hi all,

 

I heard that the UEC Cup is set to use Japanese rules, which are quite
different from Chinese rules when players pass. It is OK to play another
pass after a pass, but it seems to be a problem if both players pass in the
middle game. Is there any better solution to this problem? 

 

Fan

 

 

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] MPI vs Thread-safe

2009-10-30 Thread David Fotland
I share all uct-nodes with more than N visits, where N is currently 100, but
performance doesn't seem very sensitive to N.

Does Mogo share RAVE values as well over MPI?

I agree that low scaling is a problem, and I don't understand why.  

It might be the MFGO bias.  With low numbers of playouts MFGO bias boosts
the strength since it provides a lot of go knowledge.  With high playouts
the brittleness and missing pieces of the MFGO knowledge are exposed so the
strength plateaus.  Perhaps with more nodes the strength would start
increasing again, but we didn't test it. 

It might be due to a bug in my MPI implementation, or any number of other
possible bugs.  If a bug causes it to lose a small number of games no matter
how many playouts are made, it would look like a lack of scaling.

My focus this year has been making the single node version as strong as
possible since that's what I'm selling, so I haven't spent much effort in
fixing this MPI scaling problem (and I don't have a lot of time available on
a cluster with more than 4 nodes).

David

 -Original Message-
 From: computer-go-boun...@computer-go.org [mailto:computer-go-
 boun...@computer-go.org] On Behalf Of Brian Sheppard
 Sent: Friday, October 30, 2009 7:49 AM
 To: computer-go@computer-go.org
 Subject: [computer-go] MPI vs Thread-safe
 
 I only share UCT wins and visits, and the MPI version only
 scales well to 4 nodes.
 
 The scalability limit seems very low.
 
 Just curious: what is the policy for deciding what to synchronize? I
 recall
 that MoGo shared only the root node.
 
 
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] MPI vs Thread-safe

2009-10-30 Thread David Fotland
In the MPI runs we use an 8-core node, so the playouts per node are higher.
I don't ponder, since the program isn't scaling anyway.

The number of nodes with high visits is smaller, and I only send nodes that
changed since the last send.

I do progressive unpruning, so most children have zero visits.  There are at
most 30 active children in a node.

I don't broadcast.  I use a reduce algorithm, which doesn't save a lot with
four nodes, but saves something.

The actual bandwidth is quite small.  Also the Microsoft cluster has
Infiniband.

David
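
One way such sharing can look: each process keeps its hash-keyed statistics table and periodically exchanges (hash, delta-visits, delta-wins) records for nodes that changed and passed the visit threshold. The sketch below uses MPI_Allgatherv for simplicity rather than the reduce described above, and every name in it is illustrative.

#include <mpi.h>
#include <cstdint>
#include <vector>

// One shared record: position hash plus the visit/win counts accumulated
// locally since the previous exchange.
struct NodeDelta { uint64_t hash, d_visits, d_wins; };

// Called by every rank a few times per second.  `deltas` holds this rank's
// changed nodes above the visit threshold; `apply` merges a remote record
// into the local table.
template <class ApplyFn>
void exchange_deltas(const std::vector<NodeDelta>& deltas, ApplyFn apply) {
    int nranks, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Each NodeDelta travels as three 64-bit words.
    int my_count = (int)deltas.size() * 3;
    std::vector<int> counts(nranks), displs(nranks);
    MPI_Allgather(&my_count, 1, MPI_INT, counts.data(), 1, MPI_INT, MPI_COMM_WORLD);

    int total = 0;
    for (int r = 0; r < nranks; ++r) { displs[r] = total; total += counts[r]; }

    std::vector<uint64_t> recv(total);
    MPI_Allgatherv(deltas.data(), my_count, MPI_UINT64_T,
                   recv.data(), counts.data(), displs.data(), MPI_UINT64_T,
                   MPI_COMM_WORLD);

    // Fold in everyone else's records; skip our own slice.
    for (int r = 0; r < nranks; ++r) {
        if (r == rank) continue;
        for (int i = displs[r]; i < displs[r] + counts[r]; i += 3)
            apply(NodeDelta{recv[i], recv[i + 1], recv[i + 2]});
    }
}

How often to exchange, whether to push the merge through a reduction tree instead, and whether to include RAVE counters are exactly the knobs being discussed in this thread.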

 -Original Message-
 From: computer-go-boun...@computer-go.org [mailto:computer-go-
 boun...@computer-go.org] On Behalf Of Brian Sheppard
 Sent: Friday, October 30, 2009 11:50 AM
 To: computer-go@computer-go.org
 Subject: [computer-go] MPI vs Thread-safe
 
 Back of envelope calculation: MFG processes 5K nodes/sec/core * 4 cores
 per
 process = 20K nodes/sec/process. Four processes makes 80K nodes/sec. If
 you
 think for 30 seconds (pondering + move time) then you are at 2.4
 million
 nodes. Figure about 25,000 nodes having 100 visits or more. UCT data is
 roughly 2500 bytes per node (2 counters of 4 bytes for 300 legal
 moves). If
 you have 4 nodes then bandwidth is 25,000 * 4 * 2500 = 250 megabytes.
 That
 is how much data the master has to broadcast for a complete refresh.
 And the
 data has to be sent to the master for aggregation, but that is a
 smaller
 number because most processes will only modify a small number of nodes
 within the refresh cycle.
 
 Now, there are all kinds of reductions to that number. For instance,
 MPI
 supports process groups and there are network layer tricks that can (in
 theory) supply all listeners in a single message. And nodes don't have
 to be
 updated if they are not changed, and you can compress the counters, and
 so
 on.
 
 I'm just saying that it looks like a lot of data. It can't be as much
 as I
 just calculated, because Gigabit Ethernet would saturate at something
 less
 than 100 megabytes/sec. But how much is it?
 
 
 Does Mogo share RAVE values as well over MPI?
 
 I would think so, because RAVE is critical to MoGo's move ordering
 policies.
 
 
 It might be the MFGO bias.
 
 This doesn't add up to me. If the problem were in something so basic
 then
 the serial program wouldn't play more strongly beyond 4 times as much
 clock
 time.
 
 
 It might be due to a bug in my MPI implementation, or any number
 of other possible bugs.
 
 Bugs in non-MPI areas would also kill off improvement in the serial
 program
 beyond 4x. So if you aren't observing that then you can look elsewhere.
 
 But of course it is possible for problems to exist only in scaling.
 
 For example, suppose that a node is created in process A and gets 100
 trials. It would then have its UCT data passed to other processes, but
 other
 nodes have not necessarily created the node, nor done progressive
 widening
 and so on. Such a node exists in a state that could not be created in a
 serial program, which would cause a loss in move ordering. This is a
 bug in
 parallelization, though not in MPI specifically.
 
 And you are quite right that bugs can manifest as a limit to
 scalability.
 
 The difficulty of debugging MPI processes is perhaps the biggest
 complaint
 about that model.
 
 
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

