Re: [Computer-go] NiceGoZero games during learning

2017-11-06 Thread Detlef Schmicker
Not in this weak state of the learned net. I measure with a net trained
from 4d+ kgs games right now on CGOS (NG-learn-ref).

This should be the line, which could be beaten by Zero after enough
learning. If I manage to beat this version (I check every learning cycle
10 games against this version) than I will probably also measure the
strength of this, but I think this will take some weeks:)


Am 06.11.2017 um 17:05 schrieb uurtamo .:
> Detlef,
> 
> I misunderstand your last sentence. Do you mean that eventually you'll put
> a subset of functioning nets on CGOS to measure how quickly their strength
> is improving?
> 
> s.
> 
> On Nov 6, 2017 4:54 PM, "Detlef Schmicker" <d...@physik.de> wrote:
> 
>> I thought it might be fun to have some games in early stage of learning
>> from nearly Zero knowledge.
>>
>> I did not turn off the (relatively weak) playouts and mix them with 30%
>> into the result from the value network. I started at an initial random
>> neural net (small one, about 4ms on GTX970) and use a relatively wide
>> search for MC (much much wider, than I do for good playing strength,
>> unpruning about 5-6 moves) and 100 playouts expanding every 3 playouts,
>> thus 33 network evaluations per move.
>>
>> Additionally I add Gaussian random numbers with a standard derivation of
>> 0.02 to the policy network.
>>
>> With this setup I play 1000 games and do an reinforcement learning cycle
>> with them. One cycle takes me about 5 hours.
>>
>> The first 2 days I did not archive games, than I noticed it might be fun
>> having games from the training history: now I always archive one game
>> per cycle.
>>
>>
>> Here are some games ...
>>
>>
>> http://physik.de/games_during_learning/
>>
>>
>> I will probably add some more games, if I have them and will try to
>> measure, how strong the bot is with exactly this (weak broad search )
>> configuration but a pretrained net from 4d+ kgs games on CGOS...
>>
>>
>> Detlef
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
> 
> 
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
> 
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] NiceGoZero games during learning

2017-11-06 Thread Detlef Schmicker
I thought it might be fun to have some games in early stage of learning
from nearly Zero knowledge.

I did not turn off the (relatively weak) playouts and mix them with 30%
into the result from the value network. I started at an initial random
neural net (small one, about 4ms on GTX970) and use a relatively wide
search for MC (much much wider, than I do for good playing strength,
unpruning about 5-6 moves) and 100 playouts expanding every 3 playouts,
thus 33 network evaluations per move.

Additionally I add Gaussian random numbers with a standard derivation of
0.02 to the policy network.

With this setup I play 1000 games and do an reinforcement learning cycle
with them. One cycle takes me about 5 hours.

The first 2 days I did not archive games, than I noticed it might be fun
having games from the training history: now I always archive one game
per cycle.


Here are some games ...


http://physik.de/games_during_learning/


I will probably add some more games, if I have them and will try to
measure, how strong the bot is with exactly this (weak broad search )
configuration but a pretrained net from 4d+ kgs games on CGOS...


Detlef
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-26 Thread Detlef Schmicker
This is a quite natural approach, I think every go program which needs
to play with different komi does it in one way.

At least oakfoam does :)


Detlef

Am 26.10.2017 um 15:55 schrieb Roel van Engelen:
> @Gian-Carlo Pascutto
> 
> Since training uses a ridiculous amount of computing power i wonder if it
> would
> be useful to make certain changes for future research, like training the
> value head
> with multiple komi values 
> 
> On 26 October 2017 at 03:02, Brian Sheppard via Computer-go <
> computer-go@computer-go.org> wrote:
> 
>> I think it uses the champion network. That is, the training periodically
>> generates a candidate, and there is a playoff against the current champion.
>> If the candidate wins by more than 55% then a new champion is declared.
>>
>>
>>
>> Keeping a champion is an important mechanism, I believe. That creates the
>> competitive coevolution dynamic, where the network is evolving to learn how
>> to beat the best, and not just most recent. Without that dynamic, the
>> training process can go up and down.
>>
>>
>>
>> *From:* Computer-go [mailto:computer-go-boun...@computer-go.org] *On
>> Behalf Of *uurtamo .
>> *Sent:* Wednesday, October 25, 2017 6:07 PM
>> *To:* computer-go 
>> *Subject:* Re: [Computer-go] Source code (Was: Reducing network size?
>> (Was: AlphaGo Zero))
>>
>>
>>
>> Does the self-play step use the most recent network for each move?
>>
>>
>>
>> On Oct 25, 2017 2:23 PM, "Gian-Carlo Pascutto"  wrote:
>>
>> On 25-10-17 17:57, Xavier Combelle wrote:
>>> Is there some way to distribute learning of a neural network ?
>>
>> Learning as in training the DCNN, not really unless there are high
>> bandwidth links between the machines (AFAIK - unless the state of the
>> art changed?).
>>
>> Learning as in generating self-play games: yes. Especially if you update
>> the network only every 25 000 games.
>>
>> My understanding is that this task is much more bottlenecked on game
>> generation than on DCNN training, until you get quite a bit of machines
>> that generate games.
>>
>> --
>> GCP
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
> 
> 
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
> 
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] FineArt/JuEZy plays on CGOS!

2017-07-18 Thread Detlef Schmicker
Hi Nick

best info I have is:

http://computer-go.org/pipermail/computer-go/2016-June/009444.html

http://computer-go.org/pipermail/computer-go/2016-February/008638.html

Detlef

Am 18.07.2017 um 18:20 schrieb Nick Wedd:
> Hi Magnus,
> 
> Thank you for the information.  I don't know how to interpret it.  Is there
> any relationship between these two lists of Elo ratings?
> http://www.yss-aya.com/cgos/19x19/bayes.html
> https://en.wikipedia.org/wiki/Go_ranks_and_ratings
> 
> Best,
> Nick
> 
> On 17 July 2017 at 12:40,  wrote:
> 
>> http://www.yss-aya.com/cgos/19x19/cross/JuEYi.html
>>
>> Best
>> Magnus
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
> 
> 
> 
> 
> 
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
> 
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] Alphago

2017-06-07 Thread Detlef Schmicker
Hi,

might be a little impolite, but I wonder about the strength of alphago.

The version playing Ke Jie seems to be about as strong (or stronger) as
Ke Jie is. I have the feeling the playing strength is carefully chosen
not to be too strong.

In the press conference it was told, alphago is running on one machine
this time with 10% of the computational power of the version playing Lee
Sedol.

The thinking time seems to be quite low too, at least twice the time per
move should be no problem.

I wonder, if alphago still scales good with computational power. The
(not so) strong go programs on CGOS seem all still scale quite good with
computational power, about 100ELO stronger with twice of the
computational power.

Probably alphago team tested this and they know, if a version with 20
times more computational power can give the actual version really 3
stones, and so might be able to give 3 stones to very strong pros too?!


Detlef
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Zen lost to Mi Yu Ting

2017-03-22 Thread Detlef Schmicker
oakfoam value network does exactly this, we have 6 komi layers -7.5 -5.5
-0.5 0.5 5.5 7.5 (+ and - due to color played) and trained from 4d+ kgs
games with this:
if (c_played==1):
  if ("0.5" in komi):
komiplane=1;
  if ("6.5" in komi or "2.75" in komi or "5.5" in komi):
#komi 6.5 and 5.5 not very different in chinese scoring
komiplane=2;
  if ("7.5" in komi or "3.75" in komi):
komiplane=3;
if (c_played==2):
  if ("0.5" in komi):
komiplane=4;
  if ("6.5" in komi or "2.75" in komi or "5.5" in komi):
#komi 6.5 and 5.5 not very different in chinese scoring
komiplane=5;
  if ("7.5" in komi or "3.75" in komi):
komiplane=6;


But I was unable to get a sgf file from the japanese language site :)

I did not really test if this layers help, but they are there and
trained and you might check yourself :)

Detlef

Am 21.03.2017 um 21:08 schrieb David Ongaro:
> On Mar 21, 2017, at 7:00 AM, Paweł Morawiecki  
> wrote:
>>
>> Hideki,
>>
>>  Using this for Japanese and 6.5, there will be some
>> error in close games.  We knew this issue and thought such
>> chances would be so small that postponed correcting (not so
>> easy).
>>
>> But how would you fix it? Isn't that you'd need to retrain your value 
>> network from the scratch?
> 
> I would think so as well. But I some months ago I already made a proposal in 
> this list to mitigate that problem: instead of training a different value 
> network for each Komi, add a “Komi adjustment” value as input during the 
> training phase. That should be much more effective, since the “win/lost” 
> evaluation shouldn’t change for many (most?) positions for small adjustments 
> but the resulting value network (when trained for different Komi adjustments) 
> has a much greater range of applicability.
> 
> Regards
> 
> David O.
> 
> 
>>
>> Oh, so that's why! Good luck with Zen's next two games. 
>>
>> Aja
>>  
>> Best,
>> Hideki
>>
>> Pawe  Morawiecki: 
>> > >:
>>> Hi,
>>>
>>> After an interesting game DeepZen lost to Mi Yu Ting.
>>> Here you can replay the complete game:
>>> http://duiyi.sina.com.cn/gibo_new/live/viewer.asp?sno=13 
>>> 
>>>
>>> According to pro experts, Zen fought really well, but it seems there is
>>> still some issue how Zen (mis)evaluates its chances. At one point it showed
>>> 84% chance of winning (in the endgame), whereas it was already quite clear
>>> Zen is little behind (2-3 points).
>>>
>>> Regards,
>>> Pawel
>>>  inline file
>>> ___
>>> Computer-go mailing list
>>> Computer-go@computer-go.org 
>>> http://computer-go.org/mailman/listinfo/computer-go 
>>> 
>> --
>> Hideki Kato >
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org 
>> http://computer-go.org/mailman/listinfo/computer-go 
>> 
>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org 
>> http://computer-go.org/mailman/listinfo/computer-go 
>> 
>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org 
>> http://computer-go.org/mailman/listinfo/computer-go 
>> 
> 
> 
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
> 
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] What a week ...

2017-03-21 Thread Detlef Schmicker

> * Which of the currently three top bots will show up in the 
> European Go Congress in Oberhof in July/August 2017?

just set up one of the top open source bots:

on moderate hardware

ray: http://www.dragongoserver.net/userinfo.php?uid=97868

and if this is too strong for europe

oakfoam: http://www.dragongoserver.net/userinfo.php?uid=97704
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] New AMD processors (oakfoam)

2017-03-04 Thread Detlef Schmicker
yes, it is trained from 4d+ kgs games (about a million I think). I am
not sure if I put the actual training scripts into my branch right now.
And I did not document how to combine both nets after training, but it
is straight forward, caffe allows to load multiple caffemodel files and
saves a new one :)

at the moment it is in my branch:
https://bitbucket.org/dsmic/oakfoam

I use a combined policy value net, a call gives the move prediction and
the value. The full net and configuration for NG-06 (10k playouts, about
5s per move on i7-4790k gtx 970) is in the branch (you can not run it
nicely on kgs at the moment, it is missing passing if opponent passes)

Detlef

Am 04.03.2017 um 11:27 schrieb Álvaro Begué:
> Oh, you are using a value net? How did you train it? I don't see anything
> about it in the bitbucket repository...
> 
> Álvaro.
> 
> P.S.- Sorry about the thread hijacking, everyone.
> 
> 
> On Sat, Mar 4, 2017 at 4:29 AM, Detlef Schmicker <d...@physik.de> wrote:
> 
>> I looked into this too:
>>
>> oakfoam would not benefit a lot from more cpu power at the moment, with
>> 4 cores I mix 10 playouts with the value net in the ratio (3:7) at the
>> moment.
>>
>> In case of buying a Ryzen: take care the board allows two GTX1080 Ti
>> (wait till end of march to buy them) and buy a power supply with 1kw
>> (and send a copy of the machine to me) :)
>>
>> Detlef
>>
>> P.S.: oakfoam status
>> http://www.dragongoserver.net/userinfo.php?uid=97704
>> http://www.yss-aya.com/cgos/19x19/cross/NG-06.html
>>
>> Am 03.03.2017 um 21:29 schrieb "Ingo Althöfer":
>>> Hi,
>>>
>>> AMD has published a new (fast and cool) processor, the Ryzen.
>>> Did some go programmers already collect experiences with it?
>>> Do they combine well with GPUs?
>>>
>>> Ingo.
>>> ___
>>> Computer-go mailing list
>>> Computer-go@computer-go.org
>>> http://computer-go.org/mailman/listinfo/computer-go
>>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
> 
> 
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
> 
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] New AMD processors

2017-03-04 Thread Detlef Schmicker
I looked into this too:

oakfoam would not benefit a lot from more cpu power at the moment, with
4 cores I mix 10 playouts with the value net in the ratio (3:7) at the
moment.

In case of buying a Ryzen: take care the board allows two GTX1080 Ti
(wait till end of march to buy them) and buy a power supply with 1kw
(and send a copy of the machine to me) :)

Detlef

P.S.: oakfoam status
http://www.dragongoserver.net/userinfo.php?uid=97704
http://www.yss-aya.com/cgos/19x19/cross/NG-06.html

Am 03.03.2017 um 21:29 schrieb "Ingo Althöfer":
> Hi,
> 
> AMD has published a new (fast and cool) processor, the Ryzen.
> Did some go programmers already collect experiences with it?
> Do they combine well with GPUs?
> 
> Ingo. 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
> 
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] How is zen so strong on CGOS?

2017-01-25 Thread Detlef Schmicker
Hi,

I'd like to start a discussion on what zen might do being so strong on
CGOS with only one core and no graphic card :)

The version actual playing (and therefore best comparable to the
programs actual playing) is

RankNameElo +   −   Games
12  Zen-13.5-1c0g   363948  48  569
16  Aya792p2v2cn50_12t  355846  46  476
24  Rn.3.3-4c   345934  34  918
26  No335-4.5-gpu-4c3438115 115 112
29  CGI1407_1_475_7c339973  73  142
53  CrazyStone-0002 3239114 114 141
54  NG-05   323567  67  228



No335 is probably Hirabot
NG is oakfoam
from crazystone I dont know, if this version uses gpu
Rn is probably ray

All but Zen use gpu and so have probably about 300 policy network calls
per second.

Zen does this on cpu and I got Hideki to let me know, that this can do
only about 20 calls per second :)

It might be that zen uses cnn only in the upper nodes and relies on the
faster node generation zen used before cnn for deeper nodes?

I can not believe, that the policy network is so much stronger than the
ones of all the others?!


Any ideas


Detlef
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Golois5 is KGS 4d

2017-01-10 Thread Detlef Schmicker
Very interesting,

but lets wait some days for getting an idea of the strength,
4d it reached due to games against AyaBotD3, now it is 3d again...


Detlef

Am 10.01.2017 um 15:29 schrieb Gian-Carlo Pascutto:
> On 10-01-17 15:05, Hiroshi Yamashita wrote:
>> Hi,
>>
>> Golois5 is KGS 4d.
>> I think it is a first bot that gets 4d by using DCNN without search.
> 
> I found this paper:
> 
> https://openreview.net/pdf?id=Bk67W4Yxl
> 
> They are using residual layers in the DCNN.
> 
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] it's alphago (How to get a strong value network)

2017-01-06 Thread Detlef Schmicker
Hi,

this sounds interesting! AlphaGo paper plays only with RL network, if I
understood correctly. If we start this huge approach we should try to
carefully discuss the way (and hopefully get some hints from people
tried with much computational power :)

If I understood correctly you would try to use a program using value net
with (let's say 2000 playouts) in selfplay? Using only one result, or
doing some games per position? Or are you thinking of using only the win
percentage such a program gives from his own mixing of SL network,
search and value net?

By the way to make some promotion :) oakfoam is not far away from Ray
for this kind of approach, where you will probably try to reduce cpu/gpu
usage per game (at least if Rn3.3-4c is Ray on CGOS with 4 cores, NG04b
is oakfoam on CGOS with 10k and saving GPU usage by using only 50% of GX970)

Detlef


Am 06.01.2017 um 10:39 schrieb Hiroshi Yamashita:
> If value net is the most important part for over pro level, the problem
> is making strong selfplay games.
> 
> 1. make 30 million selfplay games.
> 2. make value net.
> 3. use this value net for selfplay program.
> 4. go to (1)
> 
> I don't know when the progress will stop by this loop.
> But if once strong enough selfplay games are published, everyone can
> make pro level program.
> 30 million is big number. It needs many computers.
> Computer Go community may be able to share this work.
> I can offer Aya, it is not open-source though. Maybe Ray(strongest open
> source so far)  is better choice.
> 
> Thanks,
> Hiroshi Yamashita
> 
> - Original Message - From: 
> To: 
> Sent: Friday, January 06, 2017 4:50 PM
> Subject: Re: [Computer-go] it's alphago
> 
> 
> Competitive with Alpha-go, one developer, not possible. I do think it is
> possible to make a pro level program with one person or a small team.
> Look at Deep Zen and Aya for example. I expect I’ll get there (pro
> level) with Many Faces as well.
> 
> David
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
> 
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Are the AlphaGols coming?

2017-01-05 Thread Detlef Schmicker
Hi,

what makes you think the opening theory with reverse komi would be the
same as with standard komi?

I would be afraid to invest an enormous amount of time just to learn,
that you have to open differently in reverse komi games :)


Detlef

Am 05.01.2017 um 10:50 schrieb Paweł Morawiecki:
> 2017-01-04 21:07 GMT+01:00 David Ongaro :
> 
>> 
>> [...]So my question is: is it possible to have reverse Komi games
>> by feeding the value network with reverse colors?
>> 
> 
> In the paper from Nature (subsection "Features for policy/value
> network"), authors state:
> 
> *the stone colour at each intersection was represented as either
> player or opponent rather than black or white. *
> 
> Then, I think the AlphaGo algorithm would be fine with a reverse
> komi. Namely, a human player takes black and has 7.5 komi. The next
> step is that AlphaGo gives 2 stones of handicap but keeps 7.5 komi
> (normally you have 0.5).
> 
> Aja, can you confirm this?
> 
> 
>> Also having 2 stone games is not so interesting since it would
>> reveal less insights for even game opening Theory.
>> 
> 
> I agree with David here, most insights we would get from even
> games. But we can imagine the following show. Some games are played
> with a reverse komi, some games would be played with 2 stones (yet,
> white keeps 7.5 komi) and eventually the main event with normal
> even games to debunk our myths on the game. Wouldn't be super
> exciting!?
> 
> Best regards, Paweł
> 
> 
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Some experiences with CNN trained on moves by the winning player

2016-12-11 Thread Detlef Schmicker
Hi Erik,

as far as I understood it, it was 250ELO in policy network alone ...


section

2Reinforcement Learning of Policy Networks

We evaluated the performance of the RL policy network in game play,
sampling each move (...) from its output probability distribution over
actions.   When played head-to-head,
the RL policy network won more than 80% of games against the SL policy
network.

> W.r.t. AG's reinforcement learning results, as far as I know,
> reinforcement learning was only indirectly helpful. The RL policy net
> performed worse then the SL policy net in the over-all system. Only by
> training the value net to predict expected outcomes from the
> (over-fitted?) RL policy net they got some improvement (or so they
> claim). In essence this just means that RL may have been effective in
> creating a better training set for SL. Don't get me wrong, I love RL,
> but the reason why the RL part was hyped so much is in my opinion more
> related to marketing, politics and personal ego.


Detlef
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] Some experiences with CNN trained on moves by the winning player

2016-12-11 Thread Detlef Schmicker
I want to share some experience training my policy cnn:

As I wondered, why reinforcement learning was so helpful. I trained
from the Godod database with only using the moves by the winner of
each game.

Interestingly the prediction rate of this moves was slightly higher
(without training, just taking the previously trained network) than
taking into account the moves by both players (53% against 52%)

Training on winning player moves did not help a lot, I got a
statistical significant improvement of about 20-30ELO.

So I still don't understand, why reinforcement should do around
100-200ELO :)

Detlef
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Aya reaches pro level on GoQuest 9x9 and 13x13

2016-11-21 Thread Detlef Schmicker
You are absolutely right, as I was in understanding RL policy network
mode I thought, everything is about this, sorry

Am 21.11.2016 um 15:22 schrieb Gian-Carlo Pascutto:
> On 20-11-16 11:16, Detlef Schmicker wrote:
>> Hi Hiroshi,
>>
>>> Now I'm making 13x13 selfplay games like AlphaGo paper. 1. make a
>>> position by Policy(SL) probability from initial position. 2. play a
>>> move uniformly at random from available moves. 3. play left moves
>>> by Policy(RL) to the end. (2) means it plays very bad move usually.
>>> Maybe it is because making completely different position? I don't
>>> understand why this (2) is
>> needed.
>>
>> I did not read the alphago paper like this.
>>
>> I read it uses the RL policy the "usual" way (I would say it means
>> something like randomizing with the net probabilities for the best 5
>> moves or so)
>>
>> but randomize the opponent uniformaly, meaning the net values of the
>> opponent are taken from an earlier step in the reinforcement learning.
>>
>> Meaning e.g.
>>
>> step 1 playing against step 7645 in the reinforcement history?
>>
>> Or did I understand you wrong?
> 
> You are confusing the Policy Network RL procedure with the Value Network
> data production.
> 
> For the Value Network indeed the procedure is as described, with one
> move at time U being uniformly sampled from {1,361} until it is legal. I
> think it's because we're not interested (only) in playing good moves,
> but also analyzing as diverse as possible positions to learn whether
> they're won or lost. Throwing in one totally random move vastly
> increases the diversity and the number of odd positions the network
> sees, while still not leading to totally nonsensical positions.
> 
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Aya reaches pro level on GoQuest 9x9 and 13x13

2016-11-20 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi Hiroshi,

> Now I'm making 13x13 selfplay games like AlphaGo paper. 1. make a
> position by Policy(SL) probability from initial position. 2. play a
> move uniformly at random from available moves. 3. play left moves
> by Policy(RL) to the end. (2) means it plays very bad move usually.
> Maybe it is because making completely different position? I don't
> understand why this (2) is
needed.

I did not read the alphago paper like this.

I read it uses the RL policy the "usual" way (I would say it means
something like randomizing with the net probabilities for the best 5
moves or so)

but randomize the opponent uniformaly, meaning the net values of the
opponent are taken from an earlier step in the reinforcement learning.

Meaning e.g.

step 1 playing against step 7645 in the reinforcement history?

Or did I understand you wrong?


Detlef
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJYMXf5AAoJEInWdHg+Znf4MI4QAJw3EAKpjbkQKtrO/3gFDazy
ASbIAChXNQEfmqOyq40d/PerlUUat+xkMJInlmnE+qwkrpM1ityTKT6Q8Yee1TWW
HmjRj4CQ4qxXWEGwWdIY4n2P36cz3x6xiItM9v7MJ0/p/WXJJyhH0MgmXpJuFJN5
rMxqol6b0ilr29UL5nY4L8pMsBI9dtOI0+DYg/eNKtg9lOfJEYfByGP7BENQV0GD
sqMKmMfHgnQ7swZhIm4nLB4R78m4GJUEFsvTHMm8rOyFJoulwRvaBYRfdtu3x4kF
kigJ3VmfAcowVUER7fDjL4/KzWcVlUGEw0gBTIK+xIheqIglLIHFLToM+FDwM3T8
poOI+f2tXWcPu1V0r85rpVFJ6nBrPey0pai0GcEL6I5N+ooG7fb5XorX7TAeOjhH
ASuzlUO2lBpdSjcpVX+9l5nniXfdM6zNE6XPZW6JEOmIHEkjo7kaREd91I+GRhKW
l6cRuVhAiudix3j31+PS7qIUdeKodipTR5qqfdxeYljiSQwJgw5tbucD6Db8/HJh
Jg9PQvdfAnTAj93jWxE/dSsbB7GOy1vThJiQcSNP1PNcw4l62hwsSZC9MCkEzOFk
Sqb8D/8eMHoiTwZMwjZN+GdDy9XoFFGwVWG1HEHcgZO3hhN8ntR2D71Y8grbKFKu
LpuegNW6/ChRCRAo73k/
=RR28
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Aya reaches pro level on GoQuest 9x9 and 13x13

2016-11-19 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi Hiroshi,

thanks a lot for your info.

You did not try reinforcement learning I think. Do you have any idea,
why this would make the policy network 250ELO stronger, as mentioned
in the alphago paper (80% winrate)?

Are pros playing so bad?

Do you think playing strength would be better, if one only takes into
account the moves of the winning player?

Detlef

Am 19.11.2016 um 05:18 schrieb Hiroshi Yamashita:
> Hi,
> 
>> Did you not find a benefit from a larger value network? Too
>> little data and too much overfitting? Or more benefit from more
>> frequent evaluation?
> 
> I did not find larger value network is better. But I think I need
> more taraining data and stronger selfplay. I did not find
> overfitting so far, and did not try more frequent evaluation.
> 
>>> Policy + Value vs Policy, 1000 playouts/move, 1000 games. 9x9,
>>> komi 7.0 0.634  using game result. 0 or 1
>> 
>> I presume this is a winrate, but over what base? Policy network?
> 
> Yes. Policy network(only root node) + value network  vs  Policy
> network(only root node).
> 
>> How do you handle handicap games? I see you excluded them from
>> the KGS dataset. Can your value network deal with handicap?
> 
> I excluded hadicap games. My value network can not handle hadicaps.
> It it only for komi 7.5.
> 
> Thanks, Hiroshi Yamashita
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJYMAhCAAoJEInWdHg+Znf4M+4P/RcgEbK7TpyPOf3BKdEEaw1u
hGkCFYRDhTKHyqCDtlCTKAyoi8sUl0fCMCNOvzV17Cg46uZwNgS3PDqkPFVDuD7I
GBZQgNDXmc9+80Vn0KdDbbBAwGhsH0emzKLndwcN9oshk6cylpIiwB73JC7kvijY
uZb9iA+nOQNBbAvDDNxJNiTVz0qe3XPYSIZOaYa/HTwdnG3aFAkiC8bom3vs8Bn4
h45NkY5YkcScQug4hWP7g9IWa3wEdbVPVKtE/B1SxcjOm5aksuOkJvoFFJwEsId1
tifcT81JzThGJt1TgFpotgbA8QgDRGc6z3BXNggw5AuIU32zonqbljHiynG6Uz7I
djxywrngr9Xif8KYlteSYVViA9cJZRwbE+nHFT1Fn8lc3BDk2lypG++IaMq0QwWM
UmEn8U9TKhD4um8HcFSJGvrqUZBnsO8bcp9rUTFssqFm5ZGsoY0nwRt8EezKZ/Sh
jZqbqplmYDIBoZ6f/VwQfe3OtPLSzmDtCYpx7lh4eXBTLQ74gr8NxksyE9JGXHk4
tQ5bfRq4gobCkFuwHf2ypIhw8TNRvzq9QI4B3Hin7XcR6KKE27zqh3pNChH9VnXN
jv5Elre4y71HCYlc5pZdeu6WK8RS+ju3nwsWJhfgZGsu5J0apFlt5XSzW2UnL+I5
0p6AUG2zTq7iuxuAZlaO
=jpkW
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] October KGS bot tournament

2016-10-01 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

sorry, in your announcement you wrote five, but everything is good

Am 01.10.2016 um 22:27 schrieb Nick Wedd:
> Hi Detlef,
> 
> On 1 October 2016 at 20:18, Detlef Schmicker <d...@physik.de> 
> wrote:
> 
> Hi Nick,
> 
> you created the game with 4 min each on KGS?
> 
> I created the tournament with 4 min each, as promised at
>> http://www.weddslist.com/kgs/future.html . Have I done it wrong 
>> soemhow?
> 
>> Nick
> 
> 
> 
> 
> Detlef
> 
> Am 01.10.2016 um 16:26 schrieb Nick Wedd:
>>>> The October KGS bot tournament will be on Sunday, October 
>>>> 9th, starting at 16:00 UTC and ending by 23:55 UTC.  It will 
>>>> use 9x9 boards, with time limits of five minutes each plus 
>>>> very fast Canadian overtime, and komi of 7.
>>>> 
>>>> It will be a 40-round Swiss tournament with two divisions. 
>>>> Only bots that have shown that they can beat gnugo3pt8 
>>>> (currently rated as 5k on KGS) may enter the Formal
>>>> Division. Bots of any strength may enter the Open Division.
>>>> If there are fewer than three entrants (not counting
>>>> gnugo3pt8) for the Formal Division, they will be transferred
>>>> to the Open Division, and the Formal Division will be
>>>> cancelled.
>>>> 
>>>> See http://www.gokgs.com/tournEntrants.jsp?id=1078 and 
>>>> http://www.gokgs.com/tournEntrants.jsp?id=1079 .  Please 
>>>> register by emailing me at mapr...@gmail.com, with the words 
>>>> "KGS Tournament Registration" in the email title, saying 
>>>> which division you want to play in.
>>>> 
>>>> Nick
>>>> 
>>>> 
>>>> 
>>>> ___ Computer-go 
>>>> mailing list Computer-go@computer-go.org 
>>>> http://computer-go.org/mailman/listinfo/computer-go
>>>> 
>> ___ Computer-go 
>> mailing list Computer-go@computer-go.org 
>> http://computer-go.org/mailman/listinfo/computer-go
>> 
> 
> 
> 
> 
> 
> ___ Computer-go
> mailing list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJX8CBQAAoJEInWdHg+Znf4AxMP/1oW/Uxw0uSuZNqN8pXAbRis
d1UiFHXbg/EG0rq9CKBhJX0SuxyzavfLMzFi/MdaDsahb3Z0/thfOKDa+Kbly4Ox
JhRFBBgTocMLGb7E7hFeko7Vj/FDTUeO1b9vERcBH48cxeatMsG31SrXKJHzQYST
dpiB0H7TbsRBNuL5ieN5LcQisfb6QI8lRHK+tTXvGLCxKNJ2VqGZpNsqPtyHlDjG
UK/UWHjsMJ41uTox+g46qWu3gSz+tj/yRNCoRs2I1PK4sONMatYZeuBzxxg/Vkq4
iiKqzKMzZZlZyX/8kqpaQkPeY5yK0hwPER0OsC7qk/eTT1V2ypR7cmmLFPBpAlXG
ORmlQi5BoagPqkRNfkX5mqEvUUnU91wUlY2Nn8J1XvoaKtCdzUlKjcpLaG2mchE6
JUg0DPbkhEejyXpLrSV8SHn5D9XCE/8MCOsdk4gof1Mibvh5cPfWDi7p0BEpuRfR
4Dv32JDRq6K1F7J8/9GU5WkKyJiG40YwUnSIxdPbknQ1Pp7kg6dFygDckjDg7AWN
GuwzgZvJaSh+NWsHIihZaWWsufu8qS25oxKppcW62VqiriXhw/qUu8ynt89KwTb6
0q2j6ASorg/HQRMwsfYph2X3BYl8VBo/N1Rl9DoT5Xw+aoHZW/uuLgbQSOTr8dXO
lyNlWTJSd3tNowkjYZ5+
=6Ako
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] October KGS bot tournament

2016-10-01 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi Nick,

you created the game with 4 min each on KGS?

Detlef

Am 01.10.2016 um 16:26 schrieb Nick Wedd:
> The October KGS bot tournament will be on Sunday, October 9th,
> starting at 16:00 UTC and ending by 23:55 UTC.  It will use 9x9
> boards, with time limits of five minutes each plus very fast
> Canadian overtime, and komi of 7.
> 
> It will be a 40-round Swiss tournament with two divisions.  Only
> bots that have shown that they can beat gnugo3pt8 (currently rated
> as 5k on KGS) may enter the Formal Division. Bots of any strength
> may enter the Open Division. If there are fewer than three entrants
> (not counting gnugo3pt8) for the Formal Division, they will be
> transferred to the Open Division, and the Formal Division will be
> cancelled.
> 
> See http://www.gokgs.com/tournEntrants.jsp?id=1078 and 
> http://www.gokgs.com/tournEntrants.jsp?id=1079 .  Please register
> by emailing me at mapr...@gmail.com, with the words "KGS Tournament
> Registration" in the email title, saying which division you want to
> play in.
> 
> Nick
> 
> 
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJX8AwdAAoJEInWdHg+Znf4qa4QAKoRVKn2oUO/jrB2JkQHkdQX
9LPFziq4MffGhx6G8WUWxSZjz/PF3vtGSiyP3BovPvVT0BOtZ4CFrVQR7/vSdggn
wdguvjECHCb6ubS0l0ExmXhzLglaUk/d1cBCMCo7ywjTWzt5i9WYLy9T4eADh134
Ky3QmSsUHqkrLCx+bRCgeiJft8IhSM1ZtktwRqYZHpWkyROXbCk3XaRJsk6kxx4m
5ugiCgXnBiGUmq4Q9PPHr+J25DnQBv5xm+E9b9BdCfRUdD94oChmNXlaD6ZQwuYO
jKFfTByYq+RUDXVWXDtVLjZXLw6eBBpL29lCzoMXEzD6rKo9ArfMtWJ9H3dHx8UA
HyepuBLvfdIjXa3fIrn/WFnBBhgmAEwzkYawj4cDwVoBlG1QoX75sGSbwKYb97U7
6GZ/0td9O8YyAfhQuvoFuu/Un1bfrcmMrUNbmBdKTiNkEZMEJzwXeINuSV2nIHmO
FvjW0zF9k+v3AtJDBm6NLNOlMmCb7yLzuw7DZhzDzlKMwdoTWwkGER7mLhVPYDca
seBLlOtgNp3QD9fPaoaKConGyVfDaxhFupYPc/edAf//HPHRYGAT3PyfFet/WuDl
TRksI8Go2wr6kz+hVhqrD8VQKo1X33USaQ7i0tk/mqglJL+h83dO0Owp4z8AupIQ
UDNdpZyLz8DW7893Q2XF
=Dex5
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Converging to 57%

2016-08-23 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

good to start this discussion here. I had the discussion some times,
and we (discussion partner and me) were not sure, against which test
set the 57% was measured.

If trained and tested with kgs 6d+ dataset, it seems reasonable to
reach 57% (I reached >55% within 2 weaks, but than changed to GoGod),
with test against GoGod I do not get near that value too (around >51%
within 3 weeks on gtx 970). By the way, both get roughly the same
playing strength in my case.

So, if somebody is sure, it is measured against GoGod, I think a
number of other go programmers have to think again. I heard them
reaching 51% (e. g. posts by Hiroshi in this list)

Detlef

Am 23.08.2016 um 08:40 schrieb Robert Waite:
> I had subscribed to this mailing list back with MoGo... and
> remember probably arguing that the game of go wasn't going to be
> beat for years and years. I am a little late to the game now but
> was curious if anyone here has worked with supervised learning
> networks like in the AlphaGo paper.
> 
> I have been training some networks along the lines of the AlphaGo
> paper and the DarkForest paper.. and a couple others... and am
> working with a single 660gtx. I know... laugh... but its a fair
> benchmark and i'm being cheap for the moment.
> 
> Breaking 50% accuracy is quite challenging... I have attempted
> many permutations of learning algorithms... and can hit 40%
> accuracy in perhaps 4-12 hours... depending on the parameters set.
> Some things I have tried are using default AlphaGo but wtih 128
> filters, 32 minibatch size and .01 learning rate, changing the
> optimizer from vanilla SGD to Adam or RMSProp. Changing batching to
> match DarkForest style (making sure that a minibatch contains
> samples from game phases... for example beginning, middle and 
> end-game).Pretty much everything seems to converge at a rate that
> will really stretch out. I am planning on picking a line and going
> with it for an extended training but was wondering if anyone has
> ever gotten close to the convergence rates implied by the
> DarkForest paper.
> 
> For comparison... Google team had 50 gpus, spend 3 weeks.. and
> processed 5440M state/action pairs. The FB team had 4 gpus, spent 2
> weeks and processed 150M-200M state/action pairs. Both seemed to
> get to around 57% accuracy with their networks.
> 
> I have also been testing them against GnuGo as a baseline.. and
> find that GnuGo can be beaten rather easily with very little
> network training... my eye is on Pachi... but have to break 50%
> accuracy i think to even worry about that.
> 
> Have also played with reinforcement learning phase... started with
> learning rate of .01... which i think was too high that does
> take quite a bit of time on my machine.. so didnt play too much
> with it yet.
> 
> Anyway does anyone have any tales of how long it took to break
> 50%? What is the magic bullet that will help me converge there
> quickly!
> 
> Here is a long-view graph of various attempts:
> 
> https://drive.google.com/file/d/0B0BbrXeL6VyCUFRkMlNPbzV2QTQ/view
> 
> Red and Blue lines are from another member that ran 32 in a
> minibatch, .01 learning rate and 128 filters in the middle layers
> vs. 192. They had 4 k40 gpus I believe. They also used 4
> training pairs to 4 validation pairs... so I imagine that is
> whey they had such a spread. There is a jump in the accuracy which
> was when learning rate was decreased to .001 I believe.
> 
> Closer shot:
> 
> https://drive.google.com/file/d/0B0BbrXeL6VyCRVUxUFJaWVJBdEE/view
> 
> Most stay between the lines... but looking at both graphs makes me
> wonder if any of the lines are approaching the convergence of
> DarkForest. My gut tells me they were onto something... and am
> rather curious of the playing strength of the DarkForest SL network
> and the AG SL network.
> 
> Also... a picture of the network's view on a position... this one
> was trained to 41% accuracy and played itself greedily.
> 
> https://drive.google.com/file/d/0B0BbrXeL6VyCNkRmVDBIYldraWs/view
> 
> Oh... and another thing AG used KGS amateur data... FB and my
> networks have been trained on pro games only. At one point I tested
> the 41% network in the image (trained on pro data) and a 44%
> network trained on amateur (KGS games) against GnuGo... and the pro
> data network soundly won... and the amateur network soundly lost...
> so I stuck with pro since. Not sure if the end result is the
> same... and kinda glad AG team used amateur as that removes the
> argument that it somehow learned Le Sedol's style.
> 
> 
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJXu/PCAAoJEInWdHg+Znf4WmkP/iMpRo/jI/ZYh83C0CTOh2HB
1nq4cb9CG1fubaOIfH5T7veu3pqQWTxPLUI5FOIN99BkESc0NbuSP5geoyLy0hVY

Re: [Computer-go] Aja presentation

2016-06-30 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Seriously, this was all on game 4?

Am 30.06.2016 um 07:43 schrieb "Ingo Althöfer":
> Hi,
> 
> the organizers taped it on video. I will let you know as soon as I
> learn where it is put online.
> 
> The event was: 10 minutes honorings 35 minutes presentation by Aja 
> 20 minutes questions and answers
> 
> The japanese "Computer Go Forum", presented by Hideki Kato, awarded
> the Alpha Go team an honorary diploma. And for each of Aja Huang,
> David Silver, and Demis Hassabis they gave a decorative fan. See
> here for a photo: 
> http://www.dgob.de/yabbse/index.php?topic=5922.msg202733#msg202733
> 
> The slide with comments on problems in game 4 is this one: 
> http://www.dgob.de/yabbse/index.php?topic=5922.msg202745#new
> 
> Best regards, Ingo.
> 
> 
> Gesendet: Mittwoch, 29. Juni 2016 um 18:40 Uhr Von: "Paweł
> Morawiecki"  An: 3-hirn-ver...@gmx.de 
> Betreff: Aja presentation Hi Ingo, Are slides or video available
> from today's talk by Aja on Alphago? Im particularly interested in
> the conclusion on 4th game where Alphago lost and also any future
> plans of Deepmind. Can you say something? Best regards, Pawel 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJXdMAgAAoJEInWdHg+Znf4hRUQAIxV4TXtC/JEaFCrys7cVECB
y93al6UjYqPK36JcIGgrQvFdJJ6xBMXdNdfcEU/BQCeHSxpv4CB1LcpsPnut8+uQ
cVRFr4aHCQgwTnnYEw59boWswKL1tFgfprxCfBm2lG2DFnvJ6IRrflr+FXqsy+F3
UjNJ9K13ZlCah50DDXvsD25MYB/V1CjE82ykogvug0IGz76+N2/etoF8m/37toXb
ATVDb9c9Hn4GCOyy+YWIop1g7kESu1eDk/dRs4urOTCbxKPd6K/fvVKssmyWkvw9
bRUP5TZOFBgnxoOQsrDtS1Zp33cQ2tlppNUkFa7fP14roshU8D1yPrQ0veKn79ca
K2A2tGv4GlCL7I6UrXzuC8kyx3jV5HnW1SnHINNOJsXkV0sARsTf2rZEMccbVPlC
TC8N0wV4kJCRw/v/Wi77enYhvRHEPduGyHxbdWAAbjKBvbROuGUuM8NYM12ceXpB
fVD7I0+E/QURCk7zLwiFeOPe3dSfi4Qr6L172gfwotL9SK9tmgjWIzJZNWxAc60P
Ei0FLxC63QGFOBsuwA2sYW7aRCjlrgs7WzbYMqvcGb3eHVXPGZSfR5dXOJZdaIYn
PKoFy4oMV4EBdj2MmtWOCsltxxTbuvRi0ezVWdxxXd642VPsoLoG/zHkRxU3W3t/
sZI3e//AaSLMYG52gBHI
=WeQC
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] cgos <--> kgs rating

2016-06-27 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hay,

I wonder if somebody has the same program (with the same settings)
rated on cgos 19x19 and kgs?

I am still fighting with resigning in the case of value-network and
playouts disagree, so I can not run oakfoam on kgs, but would like to
have a strength hint :)


Thanks Detlef
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJXcWraAAoJEInWdHg+Znf4vFgP/18AbsZpOH9mjjE4j30j4WZm
R3MJFQW/519IczwBNkps80Q1Gl0K7LYJ6ss7Ok0FUmqVB+3V4+ZXoMLp5wD637VO
nRTXCvdx+xUyyR+fPgj7PEtGCr+hNsbvDlwxqsqdX5laK+L5NanvPMIWz0ZMCFlV
7CUHweIwac2f56l679Urf+wqBXysQiGcOHbXmcWlUVlJ6Og4xKfCJM38Gx2gUyfK
+yPecX6c84t1qe/h9rw2E9tkP9KHM0rpyTsJGLvVh2ia8mmkdP1KZeeFd7sM9pQV
vClV53JJdU//iCqi5/XB2Sn4YTgElk4GjTsb5pC7EmG1QM7+PhU6v7mfYll/4XV1
xGeLuFRU+eqQvkd0yW+x/GRlgTfbo212nNwuHWNpGe9InCwFEIvcNNe2lPo6B+sD
ldK13nrYdkYVTr4Wa0+WpQaEewv0gA/9nQcjTntKikVcvDwYMn/OwBvukzSb12e3
yTvpy6Yx41EL4+D8bpI373/1MvLnbLAKssEYSNTKT2o/1gtBqVF40zXya+lnPb+m
s8TOOxWbS4EktrLIbwpD+xzpb2O91TfPQO1/xLA+12g/SMIvW4OB6ghAiPP8WHsA
ETZwBwZ9kcmQeB1LQ0LjigjBzQZOM7Iid8oWZMjlrXMBjAbKHafXSDFwD1iK+O9S
X0nUgI+ECtMr5Ya7k72i
=smID
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to Zen!

2016-05-10 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

OK, this thread is quite long, and I am not sure I saw all posts :)

My suggestion, rate the bots on CGOS before the tournament and take
this rating for McMahon or for handicaps. I think we can thrust the
bot authors to take the correct rating and report it during tournament
registration.

This would even calibrate the CGOS rating to KGS as a side effect.

Detlef

Am 10.05.2016 um 16:11 schrieb Nick Wedd:
> You understand correctly.
> 
> I think that between us, would could reach agreement on what
> rankings to use for seeding.  But no-one has the power to input
> seeds anyway.
> 
> On 10 May 2016 at 14:09, Gian-Carlo Pascutto 
> wrote:
> 
>> On 10-05-16 00:14, Erik van der Werf wrote:
>>> Oh that's silly! IIRC if your bot is not ranked than users can
>>> do all kind of cheating in the scoring phase (e.g., mark all
>>> your living stones dead).
>> 
>> I've not observed this behavior so far. Perhaps because in an
>> unranked game there's no rating to lose anyway.
>> 
>> I suspect that for estimating a good enough rating to seed a
>> tournament it would work fine. But I understand from Nick's post
>> that it doesn't matter because he can't "input" the ratings into
>> the McMahon pairing software.
>> 
>> -- GCP ___ 
>> Computer-go mailing list Computer-go@computer-go.org 
>> http://computer-go.org/mailman/listinfo/computer-go
>> 
> 
> 
> 
> 
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJXMe4bAAoJEInWdHg+Znf4CWgP/RlKVWWiJaYHhlCovAOrTmdi
AeeXwFdjRkoTqH9h0OZ1SSYAhEl7GK9ZwVI7Lmy27cnky/JJhtvTz91X4T20ZzG8
ggHFYXslQRar42pvuJVhpdEzLpx/Uo0TLxAqBXncmBy9llW8gKlrQ54uMcejlR1o
LekDRR9A15mGFGqdlm7/t+OsgSWXbNcUAaS9sSmgEpqMLogLrnPoedACY7xX46nx
eWuHTCnVc/jPIPFaKfsw+LxGJtdOjAx+juKcgliULBl/n1VXd/cZLml20nb2sD+a
/MBbUgqIRsGsg9A6EzKZOYgQDPxb5esD/Z8ARbELnYho9xQ/oMx8q3edu6WObgQR
9JYJQtL9kEhR3TLzVHpMuoT/69DGcXCQOqExSGpH4QvP2I6l+rXnFYsz+yxSjYis
ZNkYF/QPFM93VpMFcYcLCSuEB1eS2mpDE5FkZFmAbA1FLY7QZOaI2noKpsX4sWTb
3ols3lO8Bn9XmfKweWuB5OpthSj3yiMUWBb3pXe5PjYI9LAbYZXT7uvSbJ1iGQii
J2fRlvGzHxX+e1FovF1l7YBaVvSPwIgMFRW3zUfgPwmKfa2HXf4VF12fKUlJgxjG
vo1y0JSqkuk+uFa57JhjQ6ImCfD8khTYClXRAw5dAp7KKbVT0Y4+DPFAdAACNKwh
KrVKQ5WFsu/ekrwzly/p
=ylNA
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Is Go group status recognition by CNN possible?

2016-04-21 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

You are right, usually they do quite well, but e.g. liberty races with
large dragons are quite difficult.

And there must be a reason, why the value net was so wrong in the game
alphgo lost:)



Am 21.04.2016 um 13:51 schrieb Erik van der Werf:
> On Thu, Apr 21, 2016 at 1:20 PM, "Ingo Althöfer"
> <3-hirn-ver...@gmx.de> wrote:
> 
>> Likely it is almost impossible for neural nets of "moderate"
>> size to identify life/death stati of a groups.
>> 
> 
> No. Neural nets (even shallow ones like we used over a decade ago)
> are quite capable to identify life/death. Sure you can construct
> pathological examples that in theory require some form of
> recursion, but in practice this now seems to be mostly a
> non-issue.
> 
> 
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJXGMvEAAoJEInWdHg+Znf4aGkQAKS30Q20g5OzCEqFizt985j1
83kU3JfJgv9kT648+O9xmKvmtgLl7lv9cA6sNDaHlGfTwIGZ3/jM70gk6jkpK9X1
BemYBaZGgxEVgY94gVPTF7HH2WRLe/fLp1kXQeFFYTsp1RXxxBUQklYE9dBjWKuD
tExA2hK8l5dNvYPQsfuHz9wR2t3BXMj2Oo6td8guxSgvOwsRRd8n3cQYxNFmXuGt
J1yWAKegbhoRwlWdKWsWdmp3PGAS0I1fTruXDLDTFhJX6W7Rp9UmpmuMxixpdjVM
qM9veVLzVOiYMyXb2ngk7QNyEKyUBlQlp9R1aHCiWTIMhWaBhbZxha5Vo7fCoI8r
mSq4I2jfn0QyTwbPzjlikRhUiqA5J64QzuFxZ/AG0DaA5R828vUT90PmmeqMvhqq
SsXf7l9iQkkOZ1jVRdr13Ta0HfDW3sPbGhiLz2sdtrpn8R6zyAhdfk+kmSHPF6MD
vAxqhPtpjw66pvrHkeo/EuGV1TDYKtUmmNtdAShruihr9G5O50htiCYix/lhLqBN
jBcTosWkPjlnSNWOKRYOgJMPuZxlFEtRgYCwSXUn7GsYBvz5qED0IEFwMMR616Db
dRson60EBzvx83Z8KxMN2xjagYQWamkL6KjElqBfu7olE6vzwNFt0SkL51R4FwTN
BdvTQXk/UUlM+M0KeVJ+
=0ZrL
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Value Network

2016-03-19 Thread Detlef Schmicker
t; num_output: 128 kernel_size: 3 pad: 1 weight_filler { type:
>> "xavier" } bias_filler { type: "constant" } } } layers { name:
>> "relu8" type: RELU bottom: "conv8" top: "conv8" }
>> 
>> layers { name: "conv9_3x3_128" type: CONVOLUTION blobs_lr: 1. 
>> blobs_lr: 2. bottom: "conv8" top: "conv9" convolution_param { 
>> num_output: 128 kernel_size: 3 pad: 1 weight_filler { type:
>> "xavier" } bias_filler { type: "constant" } } } layers { name:
>> "relu9" type: RELU bottom: "conv9" top: "conv9" }
>> 
>> layers { name: "conv10_3x3_128" type: CONVOLUTION blobs_lr: 1. 
>> blobs_lr: 2. bottom: "conv9" top: "conv10" convolution_param { 
>> num_output: 128 kernel_size: 3 pad: 1 weight_filler { type:
>> "xavier" } bias_filler { type: "constant" } } } layers { name:
>> "relu10" type: RELU bottom: "conv10" top: "conv10" }
>> 
>> layers { name: "conv11_3x3_128" type: CONVOLUTION blobs_lr: 1. 
>> blobs_lr: 2. bottom: "conv10" top: "conv11" convolution_param { 
>> num_output: 128 kernel_size: 3 pad: 1 weight_filler { type:
>> "xavier" } bias_filler { type: "constant" } } } layers { name:
>> "relu11" type: RELU bottom: "conv11" top: "conv11" }
>> 
>> layers { name: "conv12_1x1_1" type: CONVOLUTION blobs_lr: 1. 
>> blobs_lr: 2. bottom: "conv11" top: "conv12" convolution_param { 
>> num_output: 1 kernel_size: 1 pad: 0 weight_filler { type:
>> "xavier" } bias_filler { type: "constant" } } } layers { name:
>> "relu12" type: RELU bottom: "conv12" top: "conv12" }
>> 
>> layers { name: "fc13" type: INNER_PRODUCT bottom: "conv12" top:
>> "fc13" inner_product_param { num_output: 256 weight_filler { 
>> type: "xavier" } bias_filler { type: "constant" } } } layers { 
>> name: "relu13" type: RELU bottom: "fc13" top: "fc13" }
>> 
>> layers { name: "fc14" type: INNER_PRODUCT bottom: "fc13" top:
>> "fc14" inner_product_param { num_output: 1 weight_filler { type:
>> "xavier" } bias_filler { type: "constant" } } } layers { name:
>> "tanh14" type: TANH bottom: "fc14" top: "fc14" }
>> 
>> layers { name: "loss" type: EUCLIDEAN_LOSS bottom: "fc14" bottom:
>> "label" top: "loss" } 
>> 
>> 
>> Thanks, Hiroshi Yamashita
>> 
>> - Original Message - From: "Detlef Schmicker"
>> <d...@physik.de> To: <computer-go@computer-go.org> Sent: Saturday,
>> March 19, 2016 7:41 PM Subject: Re: [Computer-go] Value Network
>> 
>> 
>> 
>> What are you using for loss?
>>> 
>>> this:
>>> 
>>> layers { name: "loss4" type:  EUCLIDEAN_LOSS loss_weight: 2.0 
>>> bottom: "vvv" bottom: "pool2" top: "accloss4" }
>>> 
>>> 
>>> ?
>>> 
>>> Am 04.03.2016 um 16:23 schrieb Hiroshi Yamashita:
>>> 
>>>> Hi,
>>>> 
>>>> I tried to make Value network.
>>>> 
>>>> "Policy network + Value network"  vs  "Policy network"
>>>> Winrate Wins/Games 70.7%322 / 455,1000 playouts/move
>>>> 76.6%141 / 184,   1 playouts/move
>>>> 
>>>> It seems more playouts, more Value network is effetctive.
>>>> Games is not enough though. Search is similar to AlphaGo.
>>>> Mixing parameter lambda is 0.5. Search is synchronous. Using
>>>> one GTX 980. In 1 playouts/move, Policy network is called
>>>> 175 times, Value network is called 786 times. Node Expansion
>>>> threshold is 33.
>>>> 
>>>> 
>>>> Value network is 13 layers, 128 filters. (5x5_128, 3x3_128
>>>> x10, 1x1_1, fully connect, tanh) Policy network is 12 layers,
>>>> 256 filters. (5x5_256, 3x3_256 x10, 3x3_1), Accuracy is
>>>> 50.1%
>>>> 
>>>> For Value network, I collected 15804400 positions from
>>>> 987775 games. Games are from GoGoD, tygem 9d,  22477
>>>> games http://baduk.sourceforge.net/TygemAmateur.7z KGS 4d
>>>> over, 1450946 games http://www.u-go.net/gamerecords-4d/
>>>> (except handicaps games). And select 16 positions randomly
>>>> from one game. One game is divided 16 game stage, and select
>>>> one of each. 1st and 9th position are rotated in same
>>>> symmetry. Then Aya searches with 500 playouts, with Policy
>>>> network. And store winrate (-1 to +1). Komi is 7.5. This 500 
>>>> playouts is around 2730 BayesElo on CGOS.
>>>> 
>>>> I did some of this on Amazon EC2 g2.2xlarge, 11 instances. It
>>>> took 2 days, and costed $54. Spot instance is reasonable.
>>>> However g2.2xlarge(GRID K520), is 3x slower than GTX 980. My
>>>> Pocicy network(12L 256F) takes 5.37ms(GTX 980), and
>>>> 15.0ms(g2.2xlarge). Test and Traing loss are 0.00923 and
>>>> 0.00778. I think there is no big overfitting.
>>>> 
>>>> Value network is effective, but Aya has still fatal semeai 
>>>> weakness.
>>>> 
>>>> Regards, Hiroshi Yamashita
>>>> 
>>> 
>> ___ Computer-go
>> mailing list Computer-go@computer-go.org 
>> http://computer-go.org/mailman/listinfo/computer-go
>> 
> 
> 
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJW7ZeFAAoJEInWdHg+Znf4RJAQAKwTOidHeQjutSUYKhNCcAcj
X5LWSBg72PEGInlvS6qz3BDIlLI/ftOmQwcHpAvA+Ci91wCbiZlH7n+DI+YZqixm
s1lAryvpQgp8EyhgNqH4H3URtQvbZsjaEqjIeDPA8Xiqvx+Yi0sKlH5Tbkcyhy5H
7uHb0ls0VTf0q0DOCTcwkbdOd3nfXNj0xKwZ4JMh0s5d1OE1XFRqzNjZsre2uTUN
Fdj1YBOkAsW1Ja31IDwEK9eM/aoBoaxWrbnLV/1pLhhHYDwxEJvo9V9JroxN3sTR
0ll1xrNzfMAXyPY+yRk7SgYTayD8dUZKj0WbThvx389CJqnWZFtXog8HuybVLeI3
fr9PDGOx9quN07SXvdjVAjrOP01YZHfTqh31nKK4hnfH/krXpFivc/l2zs5CvkGs
PtsS61wfRflUPZiiTwrnRT/sHJn8Eqw99u9GeS4v2J3of9BtnKs8JAKUL4pbXcVT
5Lfxml1stBVABAXJoPXrHyFbUkSusPoHHppaGfG+E9uBoaEGXE2xTpdXzr3u1rUv
aSOvqt+Pbe4u1eboStOVtDjwOAGmrLBSu9X5HkcnvOQ6L10dS52WTkvPzB7i6Hoa
RuMZIFT1iIzJ9ZHiJRx+icgEE/Kh3bObbPuCuueHH2315eaIshLqtlrj65g5M+sU
r/z6Oc8pk5xRDcfTpfK5
=4Apv
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Value Network

2016-03-19 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

What are you using for loss?

this:

layers {
  name: "loss4"
  type:  EUCLIDEAN_LOSS
  loss_weight: 2.0
  bottom: "vvv"
  bottom: "pool2"
  top: "accloss4"
}


?

Am 04.03.2016 um 16:23 schrieb Hiroshi Yamashita:
> Hi,
> 
> I tried to make Value network.
> 
> "Policy network + Value network"  vs  "Policy network" Winrate
> Wins/Games 70.7%322 / 455,1000 playouts/move 76.6%141 /
> 184,   1 playouts/move
> 
> It seems more playouts, more Value network is effetctive. Games is
> not enough though. Search is similar to AlphaGo. Mixing parameter
> lambda is 0.5. Search is synchronous. Using one GTX 980. In 1
> playouts/move, Policy network is called 175 times, Value network is
> called 786 times. Node Expansion threshold is 33.
> 
> 
> Value network is 13 layers, 128 filters. (5x5_128, 3x3_128 x10,
> 1x1_1, fully connect, tanh) Policy network is 12 layers, 256
> filters. (5x5_256, 3x3_256 x10, 3x3_1), Accuracy is 50.1%
> 
> For Value network, I collected 15804400 positions from 987775
> games. Games are from GoGoD, tygem 9d,  22477 games
> http://baduk.sourceforge.net/TygemAmateur.7z KGS 4d over, 1450946
> games http://www.u-go.net/gamerecords-4d/ (except handicaps
> games). And select 16 positions randomly from one game. One game is
> divided 16 game stage, and select one of each. 1st and 9th position
> are rotated in same symmetry. Then Aya searches with 500 playouts, 
> with Policy network. And store winrate (-1 to +1). Komi is 7.5. 
> This 500 playouts is around 2730 BayesElo on CGOS.
> 
> I did some of this on Amazon EC2 g2.2xlarge, 11 instances. It took 
> 2 days, and costed $54. Spot instance is reasonable. However 
> g2.2xlarge(GRID K520), is 3x slower than GTX 980. My Pocicy 
> network(12L 256F) takes 5.37ms(GTX 980), and 15.0ms(g2.2xlarge). 
> Test and Traing loss are 0.00923 and 0.00778. I think there is no
> big overfitting.
> 
> Value network is effective, but Aya has still fatal semeai
> weakness.
> 
> Regards, Hiroshi Yamashita
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJW7SzYAAoJEInWdHg+Znf4gV8P/jylYxgTHDY/i2RdGy4TaviL
sL7M2y1d/7xpcUYZHmCBV32jhDfoIEh3hzW9ZYIVylak6JJgwy+czDVbCy30akZL
MMLKYUjtcHrQJphOXcYMsBJ/CYgsRVO45AAUgFXiHRlXuCs9LVrB41QjjQ291Cph
sTgNOUaxa62CHT85eKFWGfyKNIRo4p0uWQhnen2ZCrrVyaghV8Iqzjcjgxotlzuf
Ur2bwBb3SlkE31x9slQYpiFJ+jkfTDLqF0Z2gFqiZTf+sUxA03LP+j64+3cMaMou
kyHizVJMGk+JOg2z4cdIVFoLkT4FdrmD5R7zpv5RLQ6Q9r7wOLE5Ptho6fxkRShU
d0qofNnEQwIKz48knqBdPVuo+yZlv8b/JopyVaPEN1isq0Hab0I2ASCMWAAZRbeF
5xdCP2MdvMoSuDSSBTeZRQnLCAntGZ5O+PDYbGVqVaperDSeqszSdXQ+DrhpkGwm
uWT2D3YYO0/P9ovdTd0NEf11A+PlOF/CfGbdOfCbjxOT9cE2WyZOpy2hPu2SMj8d
2cgt36FVD+1lQueaWiJlTa6Q/8mehBCgLJLNHMuF3v8TTCUP2ofwWr1eZ5Y7T1DU
D2xX2eVo6HUdaehwjc9X8Gsa37nJIYtEAHm2BFxzlASdntj9eMgbg3GNYtSwr/2r
T6QqYjbRQJO6+mw0iU0t
=S3fg
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

You are right, but from fig 2 of the paper can see, that mc and value
network should give similar results:

70% value network should be comparable to 60-65% MC winrate from this
paper, usually expected around move 140 in a "human expert game" (what
ever this means in this figure :)

Am 13.03.2016 um 12:48 schrieb Seo Sanghyeon:
> 2016-03-13 17:54 GMT+09:00 Darren Cook :
>> From Demis Hassabis: When I say 'thought' and 'realisation' I
>> just mean the output of #AlphaGo value net. It was around 70% at
>> move 79 and then dived on move 87
>> 
>> https://twitter.com/demishassabis/status/708934687926804482
>> 
>> Assuming that is an MCTS estimate of winning probability, that
>> 70% sounds high (i.e. very confident);
> 
> That tweet says 70% is from value net, not from MCTS estimate.
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJW5WAmAAoJEInWdHg+Znf4HSAQAI1e9iyGlSrJ0QdsmheuGDiz
09vK+mWGHZ+QAcIEkJEQ7wciKYc2IuRejAZrF6lQTQcV9GsAcVKVoqTmuJ7BnFTI
ZEZzN8nk1DKilTGs6P9BALwk0V3zvCI0Mo9AdMpequ6LV+D7vbY9gjkJgKU6O9td
zXrQhP9tdK8M/BEy5caz6uzsbP+5ESorK4X9Xt84bgv3p7aSIaCwVrkTjOPSQAZw
smErgmxAkOIvGABtexkcfgxyYXtYSfHoMsM80d9APoK9fY32TkGYSHvSvHCZks6x
LUcVnEBu7WHHekV6genv6GS2cbM+La8ghIRQrJwuN2lS+q1YEYy8lHbNmEEY0A2s
FeYIIVA9i8HmMJmw6g0XYVt1ODsOl8sJfMzxUxcG9qgppica/Xbkrq/yFe7ktm2l
lD2WxbRlny9s/ZJSUtoM12OvYmXRmvoFRMRooJ1eydI25Weld2dpHNvT6L3BMLlt
E5HyE46QU+lzFLCrpOQTHGRoWyx1ShmfNuBat3XWm0nHMg+SARJ95795agSjGbNt
tO0kaXqK5GkY2HtOSZcMistLIqiOBde8iDNXWJqQ5kKndguBQ7mXArEG0t10AT6g
yQQ6/6ffi5WM92HEUy5R/wPzNaLUCsAzzSu7hrRxlg//vj7oKf93DlhTnHaigdFj
/u+/r+j6IuJboAQtAYcf
=ZdLw
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1



Am 13.03.2016 um 11:28 schrieb Josef Moudrik:
> How well do you think the mcts-weakness we have witnessed today is
> hidden in AG? Or, how can one go about exploiting it
> systematically?
> 
> I think it might be well hidden by the value network being very
> strong and true most of the time - it is much harder to get AG to
> this state, than traditional mcts bots with much less truthful
> evaluations.
> 
> So, what would be Lee's best effort to exploit this? Complicating
> and playing hopefully-unexpected-tesuji moves?
> 
> Detlef: Demmis tweeted that the w78 caused b79 mistake, that only
> surfaced out some ten moves later. Ca you share the development of
> the value evals during these moves? Did your net fall down right
> after move 78?

My net is very unstable in the sequence, jumping around a lot. But I
am in a very early state of value network, just wanted to state:

This is probably more a problem of the value network, than of the MC
playouts, they are quite stable in the sequence, but don't put to much
into this, I am not really convinced of my net at the moment :(


> 
> What an interesting times we live in :-)
> 
> Regards, Josef
> 
> Dne ne 13. 3. 2016 10:33 uživatel Marc Landgraf
>  napsal:
> 
>> Oh, is it possible to provide those variants? Or is there a
>> recording of the broadcast, reading the board is probably enough
>> to roughly understand it.
>> 
>> 2016-03-13 10:32 GMT+01:00 Chun Sun :
>>> Hi Marc,
>>> 
>>> "but did not find a "solution" for Lee Sedol that broke
>>> AlphaGos
>> position"
>>> -- this is not true. Ke Jie and Gu Li both found more than one
>>> way to
>> break
>>> the position :)
>>> 
>>> On Sun, Mar 13, 2016 at 5:26 AM, Marc Landgraf
>>> 
>> wrote:
 
 What is the most interesting part is, that at this point many
 pro commentators found a lot of aji, but did not find a
 "solution" for Lee Sedol that broke AlphaGos position. So the
 question remains: Did AlphaGo find a hole in it's own
 position and tried to dodge that? Was it too strong for its
 own good? Or was it a misevaluation due to the immense
 amounts of aji, which would not result in harm, if played 
 properly?
 
 
 2016-03-13 9:54 GMT+01:00 Darren Cook :
> From Demis Hassabis: When I say 'thought' and 'realisation'
> I just mean the output of #AlphaGo value net. It was around
> 70% at move 79 and then dived on move 87
> 
> https://twitter.com/demishassabis/status/708934687926804482
>
>
> 
Assuming that is an MCTS estimate of winning probability, that 70%
> sounds high (i.e. very confident); when I was doing the
> computer-human team experiments, on 9x9, with three MCTS
> programs, I generally knew
>> I'd
> found a winning move when the percentages moved from the
> 48-52% range to, say, 55%.
> 
> I really hope they reveal the win estimates for each move
> of the 5 games. It will especially be interesting to then
> compare that to the other leading MCTS programs.
> 
> Darren
> 
> ___ Computer-go
> mailing list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
 ___ Computer-go
 mailing list Computer-go@computer-go.org 
 http://computer-go.org/mailman/listinfo/computer-go
>>> 
>>> 
>>> 
>>> ___ Computer-go
>>> mailing list Computer-go@computer-go.org 
>>> http://computer-go.org/mailman/listinfo/computer-go
>> ___ Computer-go
>> mailing list Computer-go@computer-go.org 
>> http://computer-go.org/mailman/listinfo/computer-go
> 
> 
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJW5UNlAAoJEInWdHg+Znf4pWcQAJ7A/9lEwLVizz9Hc7JuUrPN
/fRXN39kMJrgww+QYLDMfG0PLJgO4Y5yh+HNot7e4TQN/H5dDEFE2u4PkdQCDc7O
c/uuQGg8Rlh089tZt5JNhHTokz7yGJ2Abg8lvZirAWue0vlySpehj4xzJbaJ5yPJ
sXEIrFG8REPb3u6DJe5Mdi615CQDwdtZeLehK+tG71UN1P1a2RJBGUv6pb7hVNEh
jN/zVfiwTJXp/eSvRmDuLaFyFmS29pOzFstqtOTLN8xu8D5XkBgQcAJYqd8eoGTU
iPFlWwLNosoOh5DBWIYAdhbFBOOrKI0YWPsiCjWOYjXiwx5+ShCNxDPyTgQfKCki
wvdxGnoDPfQDODWExfwPvx/N37LvW2zcL/tnfQ20L8WYLmauY8iCDSsttD79JpBx
Ll0UUmBv1V75+Dr3M/hIFm5/Zb8A8rjv3HRiZH/cIytM3T5qjb0W6UfgblVSHhCL
2DCE9tGIlQXwfj/8jVCuYja9t7O9+K8mlpZ1hKwkngSufPPta9rcw9xOvIXRyRwE
4bdYuPFGjvZjS5n4AuY8f3DCrmWhXbk4rJlqnBLsf0UdFaXoV6wVApqc5ekyAePH
5fholIb5/PVBXsNmBHNnU4QBDr2pavDzoIq1l1t7tieAdD14hxdORYwbZW+nYWCl
2w+j1WtdDkxFIi5BNyH+
=LeQE
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org

Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Interesting, my value net does the same, even it was trained totally
different from 7d+ games :)

Am 13.03.2016 um 09:54 schrieb Darren Cook:
> From Demis Hassabis: When I say 'thought' and 'realisation' I just
> mean the output of #AlphaGo value net. It was around 70% at move 79
> and then dived on move 87
> 
> https://twitter.com/demishassabis/status/708934687926804482
> 
> Assuming that is an MCTS estimate of winning probability, that 70% 
> sounds high (i.e. very confident); when I was doing the
> computer-human team experiments, on 9x9, with three MCTS programs,
> I generally knew I'd found a winning move when the percentages
> moved from the 48-52% range to, say, 55%.
> 
> I really hope they reveal the win estimates for each move of the 5 
> games. It will especially be interesting to then compare that to
> the other leading MCTS programs.
> 
> Darren
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJW5SyFAAoJEInWdHg+Znf4M/4P/1k1h6nc7t/6qk2wNJRp8dNL
8ZNgCyJ84d22V9V6ODcnreliM7NXhyg006tOz0MIF4F95Pgtp4ZJ9A9Bt55832a0
phIYh+FQpmm5zq/roDUxMUTPp7rlYIvgt0ngiIoFLGGrfTugDrylU9aI3Ij62Sww
eKs33DS7OMfNp9e4gY/Pek0TpiFY/Mjt/+cvQ8vKMy4p13FhfX4VqRbZV1fEnb6E
7LTQvJQp6nnQZm2frCuwmo6HkBZuyDnFC17CfPgxLGXjtU4QSxsYtehpBovBLjgb
iScwyEs/XRQEBEtKotIcXZHxWucHkA88jqzM/W9HfMDPKLKQ8stplAZVglBDkG9o
mhxM2y0Ds7/A6AM9l7F4eYHCuAzU9K2eDYITZIAyTFw7VByvOzvENgQsNCuvsxqj
BZzMP8NCVQfnRbmRQCIwSdbeFEF2c6uMvzj49Bpn3Z9Ntsp1rnJYV4TOXL7qbit3
we5q4T4xoYLkdrKAfBAjA/7oxpb9HIUsehYsOQPFSnOF3hPN4k9WYD1ytxk8hGQe
5PUNYVOOCVAfiRU1HSsHfpOi87/81/YYBNmrC1oBOxuXdnBYsAv8UIR5/QQBXp7W
vol/g5Rllh+lbAyxCDBjo6StripfILp43ByFNdatE4z7mXflzKsXMCVT3Fawx4lt
spYMb9+lL0myi8n40lKS
=wDwA
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Value Network

2016-03-04 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

thanks a lot for sharing! I try a slightly different approach at the
moment:

I use a combined policy / value network (adding 3-5 layers with about
16 filters at the end of the policy network for the value network to
avoid overfitting) and I use the results of the games as value. My
main problem is still overfitting!

As your results seem good I will try your bigger database to get more
games into training I think.

I will keep you posted

Detlef

Am 04.03.2016 um 16:23 schrieb Hiroshi Yamashita:
> Hi,
> 
> I tried to make Value network.
> 
> "Policy network + Value network"  vs  "Policy network" Winrate
> Wins/Games 70.7%322 / 455,1000 playouts/move 76.6%141 /
> 184,   1 playouts/move
> 
> It seems more playouts, more Value network is effetctive. Games is
> not enough though. Search is similar to AlphaGo. Mixing parameter
> lambda is 0.5. Search is synchronous. Using one GTX 980. In 1
> playouts/move, Policy network is called 175 times, Value network is
> called 786 times. Node Expansion threshold is 33.
> 
> 
> Value network is 13 layers, 128 filters. (5x5_128, 3x3_128 x10,
> 1x1_1, fully connect, tanh) Policy network is 12 layers, 256
> filters. (5x5_256, 3x3_256 x10, 3x3_1), Accuracy is 50.1%
> 
> For Value network, I collected 15804400 positions from 987775
> games. Games are from GoGoD, tygem 9d,  22477 games
> http://baduk.sourceforge.net/TygemAmateur.7z KGS 4d over, 1450946
> games http://www.u-go.net/gamerecords-4d/ (except handicaps
> games). And select 16 positions randomly from one game. One game is
> divided 16 game stage, and select one of each. 1st and 9th position
> are rotated in same symmetry. Then Aya searches with 500 playouts, 
> with Policy network. And store winrate (-1 to +1). Komi is 7.5. 
> This 500 playouts is around 2730 BayesElo on CGOS.
> 
> I did some of this on Amazon EC2 g2.2xlarge, 11 instances. It took 
> 2 days, and costed $54. Spot instance is reasonable. However 
> g2.2xlarge(GRID K520), is 3x slower than GTX 980. My Pocicy 
> network(12L 256F) takes 5.37ms(GTX 980), and 15.0ms(g2.2xlarge). 
> Test and Traing loss are 0.00923 and 0.00778. I think there is no
> big overfitting.
> 
> Value network is effective, but Aya has still fatal semeai
> weakness.
> 
> Regards, Hiroshi Yamashita
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJW2cD5AAoJEInWdHg+Znf4KXIP/2rrfEph3VNHkrf5B4H0DJXm
abSsFbqF453SjFOucjSGXv8Ecp90lCmwz41NWkQEpBLvedjl4atjMoBiCorjqhny
ZKeFUgY6tK0HWU2euxHH9reJ6HAsDrlYgMrJKqNySdAtPxNq2buMW1qIiFrAHCsL
wCsYlwVtz4EpViJcuSXoFufreTyfUJ7p8AxrhRtuC6ALZI1wUTm+xrwrCHPQ91Bg
AKx5N2xLO2c7rHCt9FsLhR1BmXgximzmYsD7Sge4mdYMwU5nrRhxgAvX1Uj8sP8Z
2YfF+/8YmFP/rc55LqqRGzjeUwpJaX8rv1eHxl+eaNoptP7PZcFchsC5motc6XNV
fjTwOhyaeEsPlPIDylJN5PNPn2hXc75MqVDHMnUn2J+VF2DdlerKMmZhqTd1VaIu
sHz1+DN7PNZIO4cO3AKi9ynmBEHB1pQaRH4nDWkL6hdI8Zv6ZgJEjRhXjnFWyJcI
PVmErcUI6Xn1xCXHEWhxSjKwuwil/RgdVfPgywfqhj1MiuTtkcrThpUmWcPCrLRk
fxsNddSKmJcFs4nCcK/M6oO/OiZ6mn7dO4xoCWnAvds3aW71tEupTuZhYjiWx9YH
KR5p4r7JIBNCSn1ZfonD3BKMKyBv7qIJ63ITSAdy0EH3aJPt4CVmZm2dsrE3ZhtW
wMqhp4Yf8ecTiapJOcol
=oJsM
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CPU vs GPU

2016-03-03 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

you can use caffe with time on the command line.

It gives you forward and backward time for a batch.

In my tests the batch size was not too important (I think, because the
net is quite large)...

cuDNN helps a lot in training, I did not test recently but it was 2
times faster end of last year and improved by a factor of 4 during
this year :)

Detlef

Am 02.03.2016 um 10:22 schrieb Rémi Coulom:
> I tried Detlef's 54% NN on my machine. CPU = i7-5930K, GPU = GTX 
> 980 (not using cuDNN).
> 
> On the CPU, I get 176 ms time, and 10 ms on the GPU (IIRC, someone
>  reported 6 ms with cuDNN). But it is using only one core on the 
> CPU, whereas it is using the full GPU.
> 
> If this is correct, then I believe it is still possible to have a 
> very strong CPU-based program.
> 
> Or is it possible to evaluate faster on the GPU by using a batch?
> 
> Rémi
> 
> On 03/02/2016 09:43 AM, Petr Baudis wrote:
>> Also, reading more of that pull request, the guy benchmarking it 
>> had old nvidia driver version which came with about 50% 
>> performance hit.  So I'm not sure what were the final numbers. 
>> (And whether current caffe version can actually match these 
>> numbers, since this pull request wasn't merged.)
>> 
>> On Wed, Mar 02, 2016 at 12:29:41AM -0800, Chaz G. wrote:
>>> Rémi,
>>> 
>>> Nvidia launched the K20 GPU in late 2012. Since then, GPUs and 
>>> their convolution algorithms have improved considerably, while 
>>> CPU performance has been relatively stagnant. I would expect 
>>> about a 10x improvement with 2016 hardware.
>>> 
>>> When it comes to training, it's the difference between running 
>>> a job overnight and running a job for the entire weekend.
>>> 
>>> Best, -Chaz
>>> 
>>> On Tue, Mar 1, 2016 at 1:03 PM, Rémi Coulom 
>>>  wrote:
>>> 
 How tremendous is it? On that page, I find this data:
 
 https://github.com/BVLC/caffe/pull/439
 
 " These are setup details:
 
 * Desktop: CPU i7-4770 (Haswell), 3.5 GHz , DRAM - 16 GB;
 GPU K20. * Ubuntu 12.04; gcc 4.7.3; MKL 11.1.
 
 Test:: imagenet, 100 train iteration (batch = 256).
 
 * GPU: time= 260 sec / memory = 0.8 GB * CPU: time= 752 sec
 / memory = 3.5 GiB //Memory data is from system monitor.
 
 "
 
 This does not look so tremendous to me. What kind of speed 
 difference do you get for Go networks?
 
 Rémi
 
 On 03/01/2016 06:19 PM, Petr Baudis wrote:
 
> On Tue, Mar 01, 2016 at 09:14:39AM -0800, David Fotland 
> wrote:
> 
>> Very interesting, but it should also mention Aya.
>> 
>> I'm working on this as well, but I haven’t bought any 
>> hardware yet.  My goal is not to get 7 dan on expensive 
>> hardware, but to get as much strength as I can on 
>> standard PC hardware.  I'll be looking at much smaller 
>> nets, that don’t need a GPU to run.  I'll have to buy a 
>> GPU for training.
>> 
> But I think most people who play Go are also fans of 
> computer games that often do use GPUs. :-)  Of course,
> it's something totally different from NVidia Keplers, but
> still the step up from a CPU is tremendous.
> 
> Petr Baudis ___
> Computer-go mailing list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
 ___ Computer-go 
 mailing list Computer-go@computer-go.org 
 http://computer-go.org/mailman/listinfo/computer-go
>>> ___ Computer-go 
>>> mailing list Computer-go@computer-go.org 
>>> http://computer-go.org/mailman/listinfo/computer-go
>> 
> 
> ___ Computer-go
> mailing list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJW2JVUAAoJEInWdHg+Znf4JGEQAIcuAbKkEihOhjKoH4fXKr1C
zsN0nVX5Viyca9Ca8NDxBb8xL1w1MYQl8oO2yiljwZKD6GQiSGV2d+jQTI42HCXn
xPn88O9nkzOJh6OKyNmLxZvj/A9XQjFTaDkPgaO0NejoAytdQEjXZrZlDB1dLjvJ
5GYtIZCXpCPsmgV25Iz8Hs1lsMFbUy4GrTx6m+htd3iAZiL+ZmaFMh+aYHVVn9ea
j66EAevu3klzCiWoO8A+s0onpvw05LM9gSpC1yPqqR55IOY1ikEAh7MqUSdStz/9
xt7YUdtXm5lrzEIRIMvhDec/FsnVYfmeRE6lKxBus95Y6zcwx9HvwtDZBDWzjIsl
4tgO86N8saK5eG1PqAPmRkg5abp0r3JG7NRCoI/47T9i3NkYC+RtCVZCo0tojzkW
gY28blxHtG5yQscob6D1T/2597VWetXBghzcnXCzLe0+aJS2p+S5klW3Q/B4o3t+
fKqICpMQrNzJO3vagefqYkRCzIUdUVXH4/rAnUnu3+pY75iTPNzqAK1PtdYcAQYt
XDGUIWH0+Ir9Wk20EzM3dXDR/7rKbHgeuBbIEP4+LnEQsNJ7UE6YSFJHhdJTNGGM
5LXeXJv6YQQO3Gj9yQYuMTZFploVhJKAFf4vIloYwG2ZKE1SkU3aBQsudCr5uhc1
5XK2WAy4gKl/Ih0KxZLm
=uWVF
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] What hardware to use to train the DNN

2016-02-06 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi David,

I am not happy with my IDE on linux too. You might give Visual Studio
on linux a try:

https://www.visualstudio.com/de-de/products/code-vs.aspx

It seems to be free...

Detlef

Am 05.02.2016 um 07:13 schrieb David Fotland:
> I’ll do training on Linux for performance, and because it is so
> much easier to build than on Windows.  I need something I can ship
> to my windows customers, that is light weight enough to play well
> without a GPU.
> 
> 
> 
> All of my testing and evaluation machines and tools are on Windows,
> so I can’t easily measure strength and progress on linux.  I’m also
> not eager to learn a new IDE.  I like Visual Studio.
> 
> 
> 
> David
> 
> 
> 
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
> Behalf Of Petri Pitkanen Sent: Thursday, February 04, 2016 9:12 PM 
> To: computer-go Subject: Re: [Computer-go] What hardware to use to
> train the DNN
> 
> 
> 
> Welll, David is making a product. Making a product is 'trooper'
> solution unless you are making very specific product to a very
> narrow target group, willing to pay thousands for single license
> 
> Petri
> 
> 
> 
> 2016-02-04 23:50 GMT+02:00 uurtamo . <uurt...@gmail.com>:
> 
> David,
> 
> 
> 
> You're a trooper for doing this in windows. :)
> 
> 
> 
> The OS overhead is generally lighter if you use unix; even the most
> modern windows versions have a few layers of slowdown. Unix (for
> better or worse) will give you closer, easier access to the
> hardware, and closer, easier access to halting your machine if you
> are deep in the guts. ;)
> 
> 
> 
> s.
> 
> 
> 
> 
> 
> On Tue, Feb 2, 2016 at 10:25 AM, David Fotland
> <fotl...@smart-games.com> wrote:
> 
> Detlef, Hiroshi, Hideki, and others,
> 
> I have caffelib integrated with Many Faces so I can evaluate a DNN.
> Thank you very much Detlef for sample code to set up the input
> layer.  Building caffe on windows is painful.  If anyone else is
> doing it and gets stuck I might be able to help.
> 
> What hardware are you using to train networks?  I don’t have a
> cuda-capable GPU yet, so I'm going to buy a new box.  I'd like some
> advice.  Caffe is not well supported on Windows, so I plan to use a
> Linux box for training, but continue to use Windows for testing and
> development.  For competitions I could use either windows or
> linux.
> 
> Thanks in advance,
> 
> David
> 
>> -Original Message- From: Computer-go
>> [mailto:computer-go-boun...@computer-go.org] On Behalf Of Hiroshi
>> Yamashita Sent: Monday, February 01, 2016 11:26 PM To:
>> computer-go@computer-go.org Subject: *SPAM* Re:
>> [Computer-go] DCNN can solve semeai?
>> 
>> Hi Detlef,
>> 
>> My study heavily depends on your information. Especially Oakfoam
>> code, lenet.prototxt and generate_sample_data_leveldb.py was
>> helpful. Thanks!
>> 
>>> Quite interesting that you do not reach the prediction rate 57%
>>> from the facebook paper by far too! I have the same experience
>>> with the
>> 
>> I'm trying 12 layers 256 filters, but it is around 49.8%. I think
>> 57% is maybe from KGS games.
>> 
>>> Did you strip the games before 1800AD, as mentioned in the FB
>>> paper? I did not do it and was thinking my training is not ok,
>>> but as you have the same result probably this is the only
>>> difference?!
>> 
>> I also did not use before 1800AD. And don't use hadicap games. 
>> Training positions are 15693570 from 76000 games. Test
>> positions are   445693 from  2156 games. All games are shuffled
>> in advance. Each position is randomly rotated. And memorizing
>> 24000 positions, then shuffle and store to LebelDB. At first I
>> did not shuffle games. Then accuracy is down each 61000 iteration
>> (one epoch, 256 mini-batch). http://www.yss-aya.com/20160108.png 
>> It means DCNN understands easily the difference 1800AD games and
>> 2015AD games. I was surprised DCNN's ability. And maybe 1800AD
>> games  are also not good for training?
>> 
>> Regards, Hiroshi Yamashita
>> 
>> - Original Message - From: "Detlef Schmicker"
>> <d...@physik.de> To: <computer-go@computer-go.org> Sent: Tuesday,
>> February 02, 2016 3:15 PM Subject: Re: [Computer-go] DCNN can
>> solve semeai?
>> 
>>> Thanks a lot for sharing this.
>>> 
>>> Quite interesting that you do not reach the prediction rate 57%
>>> from the facebook paper by far too! I have the same experience
>>

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search (value network)

2016-02-04 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

> One possibility is that 0=loss, 1=win, and the number they are
quoting is
> sqrt(average((prediction-outcome)^2)).


this makes perfectly sense for figure 2. even playouts seem reasonable.

But figure 2 is not consistent with the numbers in section 3 would be
0.234 (test set of the self-play data base. The figure looks more like
0.3 - 0.35 or even higher...



Am 04.02.2016 um 21:43 schrieb Álvaro Begué:
> I just want to see how to get 0.5 for the initial position on the
> board with some definition.
> 
> One possibility is that 0=loss, 1=win, and the number they are
> quoting is sqrt(average((prediction-outcome)^2)).
> 
> 
> On Thu, Feb 4, 2016 at 3:40 PM, Hideki Kato
>  wrote:
> 
>> I think the error is defined as the difference between the output
>> of the value network and the average output of the simulations
>> done by the policy network (RL) at each position.
>> 
>> Hideki
>> 
>> Michael Markefka:
>> 

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search (value network)

2016-02-04 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

>> Since all positions of all games in the dataset are used, winrate
>> should distributes from 0% to 100%, or -1 to 1, not 1. Then, the
>> number 70% could be wrong.  MSE is 0.37 just means the average
>> error is about 0.6, I think.

0.6 in the range of -1 to 1,

which means -1 (eg lost by b) games -> typical value -0.4
and +1 games -> typical value +0.4 of the value network

if I rescale -1 to +1 to  0 - 100% (eg winrate for b) than I get about
30% for games lost by b and 70% for games won by B?

Detlef


Am 04.02.2016 um 20:10 schrieb Hideki Kato:
> Detlef Schmicker: <56b385ce.4080...@physik.de>: Hi,
> 
> I try to reproduce numbers from section 3: training the value
> network
> 
> On the test set of kgs games the MSE is 0.37. Is it correct, that
> the results are represented as +1 and -1?
> 
>> Looks correct.
> 
> This means, that in a typical board position you get a value of 
> 1-sqrt(0.37) = 0.4  --> this would correspond to a win rate of 70%
> ?!
> 
>> Since all positions of all games in the dataset are used, winrate
>> should distributes from 0% to 100%, or -1 to 1, not 1. Then, the
>> number 70% could be wrong.  MSE is 0.37 just means the average
>> error is about 0.6, I think.
> 
>> Hideki
> 
> Is it really true, that a typical kgs 6d+ position is judeged with 
> such a high win rate (even though it it is overfitted, so the test
> set number is to bad!), or do I misinterpret the MSE calculation?!
> 
> Any help would be great,
> 
> Detlef
> 
> Am 27.01.2016 um 19:46 schrieb Aja Huang:
>>>> Hi all,
>>>> 
>>>> We are very excited to announce that our Go program, AlphaGo,
>>>> has beaten a professional player for the first time. AlphaGo
>>>> beat the European champion Fan Hui by 5 games to 0. We hope
>>>> you enjoy our paper, published in Nature today. The paper and
>>>> all the games can be found here:
>>>> 
>>>> http://www.deepmind.com/alpha-go.html
>>>> 
>>>> AlphaGo will be competing in a match against Lee Sedol in
>>>> Seoul, this March, to see whether we finally have a Go
>>>> program that is stronger than any human!
>>>> 
>>>> Aja
>>>> 
>>>> PS I am very busy preparing AlphaGo for the match, so
>>>> apologies in advance if I cannot respond to all questions
>>>> about AlphaGo.
>>>> 
>>>> 
>>>> 
>>>> ___ Computer-go
>>>> mailing list Computer-go@computer-go.org 
>>>> http://computer-go.org/mailman/listinfo/computer-go
>>>> 
>> ___ Computer-go
>> mailing list Computer-go@computer-go.org 
>> http://computer-go.org/mailman/listinfo/computer-go
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWs6WFAAoJEInWdHg+Znf4eTsP/21vawWsmrZkDuAjTkwbKB2S
7LpLi3huuLlepkulmUr3rIUvDHhTOwD04pDHjjVrIDBB1k3JjQQ/YKWDfijQQYu6
ZI1GK55pglUPH+uc+rxfM89ziJwCQrza71l5XU+5ffcBwxRjeAL+D1fGGyr0CPlv
WKR/Q07XDslXhwlk2O6NDpd80d38dMlMV9lO4s8Zf3Y+o8WJOuyEdybRpg8VOibq
o59RCAWUiVkTs++iSihcIrVAwGnLtkPyMJ/lBN6zMyZQeuM0dyYL+IAoMH9IdCLQ
0jpbtJEqtSsp1ZjWs9s/M4pxKlvUZLThtYSjyGDJ2qDYXII6DeBgxHGUoUxc5A6a
HVF04gG77U2fMCa/6eGlQN2380kNCjdyRCDUZc9St3tbQPnWU+syk6U/inF7bhAA
7ONJD0dcjZROmblqurv32pO6sLuS8wA4DfJhpM5xSSJcYI46YQtVWL4OXY+dtx6S
6uQ1fiPqgo4WM0iHEOnh7BEz0NqZeahIUJJVmgKODzp2krOqbpOpbwe7WUI7UHmK
3LCNC9oMRybNuc+jrbHqFwT+tgQLTqpbHZuDVzKkBcxqPSj7hRvjLXAjkWNCzL7j
Yo4MySS6rzenuj9ZRSrQDSYfowRZyzPzMnmjkMbM7R7wpR5CL4U95LqOdMnce2IG
s/6iYcuUH8KqpG9NMy0U
=TnKW
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search (value network)

2016-02-04 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

I try to reproduce numbers from section 3: training the value network

On the test set of kgs games the MSE is 0.37. Is it correct, that the
results are represented as +1 and -1?

This means, that in a typical board position you get a value of
1-sqrt(0.37) = 0.4  --> this would correspond to a win rate of 70% ?!

Is it really true, that a typical kgs 6d+ position is judeged with
such a high win rate (even though it it is overfitted, so the test set
number is to bad!), or do I misinterpret the MSE calculation?!

Any help would be great,

Detlef

Am 27.01.2016 um 19:46 schrieb Aja Huang:
> Hi all,
> 
> We are very excited to announce that our Go program, AlphaGo, has
> beaten a professional player for the first time. AlphaGo beat the
> European champion Fan Hui by 5 games to 0. We hope you enjoy our
> paper, published in Nature today. The paper and all the games can
> be found here:
> 
> http://www.deepmind.com/alpha-go.html
> 
> AlphaGo will be competing in a match against Lee Sedol in Seoul,
> this March, to see whether we finally have a Go program that is
> stronger than any human!
> 
> Aja
> 
> PS I am very busy preparing AlphaGo for the match, so apologies in
> advance if I cannot respond to all questions about AlphaGo.
> 
> 
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWs4XOAAoJEInWdHg+Znf4btkQAKP5T6o8qk9Fv1z/JJGTdcph
A8rXWXNPNybSTqBVh7IJHKB9WxNIIOODjj6JYL4vhjFC+Eqy7AcpHkFZEn+f9j2G
Kl3w6G88M9P8/iCn++pNHF3dPTrsn2doUcUjF1fJZsKeNCISuvbHwbzgXupna+lH
qeCpYx/VBQMJIdhGqTmWsQozbFMFbeTumMH94UwNkTwo4Tnue/UCJweU0bWIIt0D
TtCyLDsDcFy/qNrZC97858tpvOpo3hWs7pLf8ed+9r13UGeJhQJkedg6Oq0e5wTl
Ye36Z1/2QHnQtvUbk6yjd6GMK0lo6LOOC1lTpp1nFvzcZ4ifrY2LejQ+7nWmafRq
y46aeH4jqtEF2GXsFTq7ATftSYoeeUzgKxb8t4D8ShP3cRRWkpHFyGtdq25yYO2P
hp35zJHAlCtUUOzy0YBY+mngYnbwxjp1ykwUi5DmubtRTfhf4pTAH+DF5/UeItn/
IebMR9JlfCsFJZ7BLq7P7UoHz0eiG2vVKNXUP0Np4LA3KAm1IAOfJWRmLP9TrHOX
32vdIQLXlbDMTfbloXrFjQo3wm3pKrKstI41Pyoo5d8kE8FPxjCga6EfmarjRKGY
UBf7Vz8iijGWjTjszui78HlJBIGls9qyuxNUcBN4+kfJJLP633HO+tmDM1hcsD4D
BPtgsamuDtf5jvxGTa9d
=Ab0j
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search (value network)

2016-02-04 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Thanks for the response, I do not refer to the finaly used data set:
in the referred chapter they state, they have used their kgs dataset
in a first try (which is in another part of the paper referred to
being a 6d+ data set).

Am 04.02.2016 um 18:11 schrieb Álvaro Begué:
> The positions they used are not from high-quality games. They
> actually include one last move that is completely random.
> 
> Álvaro.
> 
> 
> On Thursday, February 4, 2016, Detlef Schmicker <d...@physik.de>
> wrote:
> 
> Hi,
> 
> I try to reproduce numbers from section 3: training the value
> network
> 
> On the test set of kgs games the MSE is 0.37. Is it correct, that
> the results are represented as +1 and -1?
> 
> This means, that in a typical board position you get a value of 
> 1-sqrt(0.37) = 0.4  --> this would correspond to a win rate of 70%
> ?!
> 
> Is it really true, that a typical kgs 6d+ position is judeged with 
> such a high win rate (even though it it is overfitted, so the test
> set number is to bad!), or do I misinterpret the MSE calculation?!
> 
> Any help would be great,
> 
> Detlef
> 
> Am 27.01.2016 um 19:46 schrieb Aja Huang:
>>>> Hi all,
>>>> 
>>>> We are very excited to announce that our Go program, AlphaGo,
>>>> has beaten a professional player for the first time. AlphaGo
>>>> beat the European champion Fan Hui by 5 games to 0. We hope
>>>> you enjoy our paper, published in Nature today. The paper and
>>>> all the games can be found here:
>>>> 
>>>> http://www.deepmind.com/alpha-go.html
>>>> 
>>>> AlphaGo will be competing in a match against Lee Sedol in
>>>> Seoul, this March, to see whether we finally have a Go
>>>> program that is stronger than any human!
>>>> 
>>>> Aja
>>>> 
>>>> PS I am very busy preparing AlphaGo for the match, so
>>>> apologies in advance if I cannot respond to all questions
>>>> about AlphaGo.
>>>> 
>>>> 
>>>> 
>>>> ___ Computer-go
>>>> mailing list Computer-go@computer-go.org <javascript:;> 
>>>> http://computer-go.org/mailman/listinfo/computer-go
>>>> 
>> ___ Computer-go
>> mailing list Computer-go@computer-go.org <javascript:;> 
>> http://computer-go.org/mailman/listinfo/computer-go
> 
> 
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWs4idAAoJEInWdHg+Znf4gyEP/iqAdAkxlsilYGQflCyN3z4V
xg7esBxj2p8cR2SP3erFQNtThDMN+Qr8FNSYMbMHyJRUaJcsEhsk76Qbro/Bh815
Xk/79w2LF0rdHwzYkIye3YunifIAwREaIXwCokzuPv0zFrKFgCu7UbIpup7oMXdr
q3c+FvBwjXX/UtChBZC6kC8U2b1dijMnxPOQC05Hw/LMycKinOzzwByKS3CHdzg6
eFHAAmrJsaY9iJvCyJQL5ZLdOMBVl50iLez5P8F2t1Bf+Qm03w2nnhAWl/3bjyVy
hdvcNw6VGSUNeXo2wmF8SJoB1fOUOLAVVenc9jJHkcdcRQxSEBzuH25OfPgNTz55
JgRqiSM0iOeQ9NmQlC1LRz1BDRYRUx0RsaCvcA1G3m5gKCsbbsVkluppNHUzxAUz
o3+jazCi+88Gb5EZfdKF7p+g0JoWE2OucwXKyzlmUZMz4Hce+zOfSwv2k/9vrTLW
z0LfKxDbqQG7cj1jVysvTcQvxSkA54cNLtj/uVNzTvoti+pwyscd5DqJ8jXfcHGG
HZC3tPsVM7wvqf46EGgmjDI9jjhSTzXXdpDW7gfFTtUvZx8S4iGfmxWLod48deIP
MiLDehl3rQBuQq5fx+i1ZDB3Gej6vvc9MHAiTo+kUf5TCYBMDAaxRt4OiLnK6N+d
J02Sn7O1jSG4Fw5ud6iR
=p2U5
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] *****SPAM***** Re: What hardware to use to train the DNN

2016-02-02 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

this is a very difficult questions:

I do have 100 batches (each 64) within 1 minute for the big facebook
DCNN (384 layers in each of the 9 3x3 kernel two 128 5x5 and two 128
7x7 before that)

What facebook calls an epoch is 400 of this (4 * 64 positions)

Now you can have a look at the fig 5 of facebook and you see it is
difficult to say where to stop training.

My experience at the moment is, that my training is not increasing
after 20 of facebook batches, but I am fighting with this at the
moment and still am not sure if figure 5 corresponds to KGS or GoGoD
database, which makes a huge difference for me.


Sorry for the not to good answer, but that is my state:(


I trained the same net with all 128 layers within about two weeks in
December, but was not happy with the result (49%, but after I read
Hiroshi's post I am not sure, if it was not ok anyway :)

At the moment I am preparing a net with additional winrate value output:

I was fighting with Komi representation last year, now I will try to
support 6.5, 7.5 and 0.5 komi (>90% of kgs games 6d+) using 6 layers
(3 for b moving and 3 for white moving) Before I tried flexible komi
support, but this was not successful enough:( And as google only
supports 7.5 :)

If somebody else working on this, I would love to share here!


Detlef

Am 02.02.2016 um 19:38 schrieb David Fotland:
> How long does it take to train one of your nets?  Is it safe to
> assume that training time is roughly proportional to the number of
> neurons in the net?
> 
> Thanks,
> 
> David
> 
>> -Original Message- From: Computer-go
>> [mailto:computer-go-boun...@computer-go.org] On Behalf Of Detlef
>> Schmicker Sent: Tuesday, February 02, 2016 10:35 AM To:
>> computer-go@computer-go.org Subject: *SPAM* Re:
>> [Computer-go] What hardware to use to train the DNN
>> 
> Hi David,
> 
> I use Ubuntu 14.04 LTS with a NVIDIA GTX970 Graphic card (and
> i7-4970k, but this is not important for training I think) and
> installed CUDNN v4 (important, at least a factor 4 in training
> speed).
> 
> This Ubuntu version is officially supported by Cuda and I did only
> have minor problems if an Ubuntu update updated the graphics
> driver: I had 2 times in the last year to reinstall cuda (a little
> ugly, as the graphic driver did not work after the update and you
> had to boot into command line mode).
> 
> Detlef
> 
> Am 02.02.2016 um 19:25 schrieb David Fotland:
>>>> Detlef, Hiroshi, Hideki, and others,
>>>> 
>>>> I have caffelib integrated with Many Faces so I can evaluate
>>>> a DNN. Thank you very much Detlef for sample code to set up
>>>> the input layer. Building caffe on windows is painful.  If
>>>> anyone else is doing it and gets stuck I might be able to
>>>> help.
>>>> 
>>>> What hardware are you using to train networks?  I don t have
>>>> a cuda-capable GPU yet, so I'm going to buy a new box.  I'd
>>>> like some advice.  Caffe is not well supported on Windows, so
>>>> I plan to use a Linux box for training, but continue to use
>>>> Windows for testing and development.  For competitions I
>>>> could use either windows or linux.
>>>> 
>>>> Thanks in advance,
>>>> 
>>>> David
>>>> 
>>>>> -Original Message- From: Computer-go 
>>>>> [mailto:computer-go-boun...@computer-go.org] On Behalf Of
>>>>> Hiroshi Yamashita Sent: Monday, February 01, 2016 11:26 PM
>>>>> To: computer-go@computer-go.org Subject: *SPAM*
>>>>> Re: [Computer-go] DCNN can solve semeai?
>>>>> 
>>>>> Hi Detlef,
>>>>> 
>>>>> My study heavily depends on your information. Especially
>>>>> Oakfoam code, lenet.prototxt and
>>>>> generate_sample_data_leveldb.py was helpful. Thanks!
>>>>> 
>>>>>> Quite interesting that you do not reach the prediction
>>>>>> rate 57% from the facebook paper by far too! I have the
>>>>>> same experience with the
>>>>> 
>>>>> I'm trying 12 layers 256 filters, but it is around 49.8%. I
>>>>> think 57% is maybe from KGS games.
>>>>> 
>>>>>> Did you strip the games before 1800AD, as mentioned in
>>>>>> the FB paper? I did not do it and was thinking my
>>>>>> training is not ok, but as you have the same result
>>>>>> probably this is the only difference?!
>>>>> 
>>>>> I also did not use before 1800AD. 

Re: [Computer-go] What hardware to use to train the DNN

2016-02-02 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi David,

I use Ubuntu 14.04 LTS with a NVIDIA GTX970 Graphic card (and
i7-4970k, but this is not important for training I think) and
installed CUDNN v4 (important, at least a factor 4 in training speed).

This Ubuntu version is officially supported by Cuda and I did only
have minor problems if an Ubuntu update updated the graphics driver: I
had 2 times in the last year to reinstall cuda (a little ugly, as the
graphic driver did not work after the update and you had to boot into
command line mode).

Detlef

Am 02.02.2016 um 19:25 schrieb David Fotland:
> Detlef, Hiroshi, Hideki, and others,
> 
> I have caffelib integrated with Many Faces so I can evaluate a DNN.
> Thank you very much Detlef for sample code to set up the input
> layer.  Building caffe on windows is painful.  If anyone else is
> doing it and gets stuck I might be able to help.
> 
> What hardware are you using to train networks?  I don’t have a
> cuda-capable GPU yet, so I'm going to buy a new box.  I'd like some
> advice.  Caffe is not well supported on Windows, so I plan to use a
> Linux box for training, but continue to use Windows for testing and
> development.  For competitions I could use either windows or
> linux.
> 
> Thanks in advance,
> 
> David
> 
>> -Original Message- From: Computer-go
>> [mailto:computer-go-boun...@computer-go.org] On Behalf Of Hiroshi
>> Yamashita Sent: Monday, February 01, 2016 11:26 PM To:
>> computer-go@computer-go.org Subject: *SPAM* Re:
>> [Computer-go] DCNN can solve semeai?
>> 
>> Hi Detlef,
>> 
>> My study heavily depends on your information. Especially Oakfoam
>> code, lenet.prototxt and generate_sample_data_leveldb.py was
>> helpful. Thanks!
>> 
>>> Quite interesting that you do not reach the prediction rate 57%
>>> from the facebook paper by far too! I have the same experience
>>> with the
>> 
>> I'm trying 12 layers 256 filters, but it is around 49.8%. I think
>> 57% is maybe from KGS games.
>> 
>>> Did you strip the games before 1800AD, as mentioned in the FB
>>> paper? I did not do it and was thinking my training is not ok,
>>> but as you have the same result probably this is the only
>>> difference?!
>> 
>> I also did not use before 1800AD. And don't use hadicap games. 
>> Training positions are 15693570 from 76000 games. Test
>> positions are   445693 from  2156 games. All games are shuffled
>> in advance. Each position is randomly rotated. And memorizing
>> 24000 positions, then shuffle and store to LebelDB. At first I
>> did not shuffle games. Then accuracy is down each 61000 iteration
>> (one epoch, 256 mini-batch). http://www.yss-aya.com/20160108.png 
>> It means DCNN understands easily the difference 1800AD games and
>> 2015AD games. I was surprised DCNN's ability. And maybe 1800AD
>> games  are also not good for training?
>> 
>> Regards, Hiroshi Yamashita
>> 
>> - Original Message - From: "Detlef Schmicker"
>> <d...@physik.de> To: <computer-go@computer-go.org> Sent: Tuesday,
>> February 02, 2016 3:15 PM Subject: Re: [Computer-go] DCNN can
>> solve semeai?
>> 
>>> Thanks a lot for sharing this.
>>> 
>>> Quite interesting that you do not reach the prediction rate 57%
>>> from the facebook paper by far too! I have the same experience
>>> with the GoGoD database. My numbers are nearly the same as
>>> yours 49% :) my net is quite simelar, but I use 7,5,5,3,3,
>>> with 12 layers in total.
>>> 
>>> Did you strip the games before 1800AD, as mentioned in the FB
>>> paper? I did not do it and was thinking my training is not ok,
>>> but as you have the same result probably this is the only
>>> difference?!
>>> 
>>> Best regards,
>>> 
>>> Detlef
>> 
>> ___ Computer-go
>> mailing list Computer-go@computer-go.org 
>> http://computer-go.org/mailman/listinfo/computer-go
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWsPbCAAoJEInWdHg+Znf4MMUQAIEp0VXzCC58S+FyWniUFrnq
1zTBd9bApj57mJWE7n+etPOWy9tPrWfKRdc25U8KHnJNiiK3UrVOdhVJPjOK3l9g
qEPom48av0DfjrGNp2ZFJ30xCV7eahdR27vguYCn+qdg2Hyc/X7yhCp/mzF7zEMm
yx1n8A+ZwuRMkLTApuewaccA1TiMTOP+mJj79PHRgFdaaoOiYn41Mzp8bKllb3t/
E7bqz2zhLz5Qct9p3x98SY/9vYYGEMojTsE10sCe3/1oDWaxgL+agA6PkbqxuwQ1
QpQh6XAYSYJo4eoCFvDJGiuUDHEOLkI0B6k8WBbRTRuG

Re: [Computer-go] DCNN can solve semeai?

2016-02-01 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Thanks a lot for sharing this.

Quite interesting that you do not reach the prediction rate 57% from
the facebook paper by far too! I have the same experience with the
GoGoD database. My numbers are nearly the same as yours 49% :) my net
is quite simelar, but I use 7,5,5,3,3, with 12 layers in total.

Did you strip the games before 1800AD, as mentioned in the FB paper? I
did not do it and was thinking my training is not ok, but as you have
the same result probably this is the only difference?!


Best regards,

Detlef

Am 02.02.2016 um 03:25 schrieb Hiroshi Yamashita:
> Hi,
> 
> I made DCNN, and tried whether DCNN can understand semeai.
> 
> 1. try one playout that always select DCNN highest probablity
> move. 2. try 100 playouts that select moves from DCNN probability. 
> (one playout takes 4 seconds.)
> 
> Result is DCNN does not understand semeai. It can play semeai like
> moves, but far from perfect. Maybe need more difficult feature?
> 
> DCNN highestDCNN 100 playouts Aya playout problem1   0
> 53   20 problem2   0 54
> 85 problem3 100 91   70 
> problem4 100 66   66 problem5
> 0 51   62 problem6   0
> 45   67 problem7 100 95
> 90 problem8   0  9   90 
> problem9   0 39   50 
>  Average
> 33 56   67
> 
> 100 is correct. 0 is wrong. 53 means DCNN playout is correct 53
> times out of 100 playouts.
> 
> 
> DCNN is  12 layers, 128 filters. (5x5_128, 3x3_128 x10, 3x3_1) It
> predicts next 3 moves, like Facebook paper. Test accuracy is next_1
> 49%, next_2 27%, next_3 16%. 
> http://www.yss-aya.com/20160123_3steps.png Feature are liberty
> black and white. 1,2,3,4>= stones black and white. 1,2,3,4>= 
> previous move 1,2,3,4,5 previous ko 1,2,3 CFG Distance 1,2,3,4,5>= 
> string life and death by search. dead, killed next, kill move, life
> move. 8 planes group life and death. It is from Aya's classic
> evaluation(KGS 8k). 6 planes territory. black and white. half eye.
> black and white. recapture soon. (if black play here, recapture
> soon). black and white All 49 channels. Learning games are from
> GoGoD, 78000 games. This DCNN runs on CGOS by DCNN_Aya_i49_a49, no
> search. GTX 980. Its winrate 40% against Pachi 100k. AlphaGo
> DCNN(RL) winrate is 85% against Pachi 100k. So this DCNN is weaker
> than AlphaGo(RL) by 370 Elo, and weaker than AlphaGo(SL) by 129
> Elo.
> 
> 
> Problem8, W lives. 4 libs vs 5 libs (top left) Problem9, W lives. 5
> libs vs 6 libs (bottom left)
> 
> 19.O.X.XO.OX.   DCNN answered badly these two problem. 
> 18OXO.XX. 17.OO.X.O.O..   Left Top is 4 libs vs
> 5 libs 16OO.X...O...   White(O) must live 
> 15.XX   14X...O..   DCNN answer is
> White(O) lives 9% 13...   DCNN best move playout
> also fails. 12... 11O.. 
> 10... 9O.. 8XX.X... 
> 7.OX.X..O... 6O.X 5OX..O..
> Left bottom is 5 libs vs 6 libs. 4OX.O...  White(O)
> must live 3OX..O.. 2XO.XOX.  DCNN answer is
> White(O) lives 39% 1.OOXOX.  DCNN best move playout
> also fails. ABCDEFGHJKLMNOPQRST
> 
> (;GM[1]SZ[19]KM[0.5]RU[Chinese]AB[da][fa][ja][bb][cb][db][eb][fb][ib]
>
> 
[jb][ic][dd][ed][fd][gd][hd][be][ce][af][al][bl][dl][cm][em][cn][fo]
> [fp][aq][bq][cq][dq][fq][ar][dr][fr][ds][fs]AW[ba][ga][ia][ab][gb] 
> [bc][cc][dc][ec][fc][gc][oc][qc][ad][bd][pd][qf][qi][qk][bm][pm][an]
>
> 
[ao][qo][ap][bp][cp][dp][ep][pp][eq][qq][br][er][bs][cs][es])
> 
> 
> Problem1, B lives. 3 libs vs 5 libs. share 2 libs. (top) Problem2,
> B lives. (right bottom)
> 
> (;GM[1]SZ[19]KM[0.5]TM[]RU[Chinese];B[pe];W[qp];B[dq];W[cd];B[op]; 
> W[do];B[fq];W[cq];B[cp];W[dp];B[bq];W[cr];B[ep];W[dr];B[eq];W[co]; 
> B[bp];W[dk];B[bo];W[bn];B[cn];W[dn];B[cm];W[dm];B[cl];W[dl];B[pm]; 
> W[oo];B[po];W[pp];B[oq];W[no];B[qo];W[ro];B[rn];W[rp];B[qm];W[mq]; 
> B[qr];W[rr];B[qq];W[rq];B[nr];W[mr];B[rs];W[pq];B[pr];W[qs];B[ps]; 
> W[sr];B[ns];W[nm];B[so];W[pi];B[sp];W[ed];B[jd];W[ld];B[gd];W[lf]; 
> B[nd];W[jf];B[ec];W[dc];B[fc];W[gf];B[he];W[ig];B[fn];W[fo];B[eo]; 
> W[ck];B[bm];W[fm];B[gn];W[gm];B[hn];W[hm];B[in];W[im];B[jn];W[jq]; 
> B[jm];W[hq];B[jl];W[en];B[gp];W[pg];B[qf];W[qc];B[oc];W[oe];B[pc]; 
> W[pd];B[qd];W[od];B[mc];W[rd];B[qe];W[rc];B[rh];W[qg];B[rg];W[lc]; 
> B[qb];W[rb];B[pb];W[re];B[rf];W[qk];B[ra];W[ne];B[se];W[md];B[nc]; 
> W[jb];B[ic];W[ib];B[hb];W[hc];B[hd];W[ha];B[gb];W[ie];B[fe];W[id]; 
> B[jc];W[kb];B[ff];W[fg];B[eg];W[dg];B[df];W[eh];B[de];W[dd];B[db]; 
> 

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-01-29 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi Ingo,

I think you are not alone: When I started computer go 4 years ago I
ask a good friend of mine, who did his PhD in Neural Networks back in
the 90s, if I have any chance to use them instead of pattern matching
and he said, they will probably not generalize in a good way :)

I think the big size of the nets make a qualitative difference,
therefore our intuition is misleading...


Congrats to the AlphaGo team,

Detlef

Am 29.01.2016 um 02:14 schrieb "Ingo Althöfer":
> Hi Simon,
> 
> do your remember my silly remarks in an email discussion almost a
> year ago?
> 
> You had written:
>>> So, yes, with all the exciting work in DCNN, it is very
>>> tempting to also do DCNN. But I am not sure if we should do
>>> so.
> 
> And my silly reply had been:
>> I think that DCNN is somehow in a dreamdancing appartment. My
>> opinion: We might mention it in our proposal, but not as a
>> central topic.
> 
> 
> In my mathematical life I have been wrong with my intuition only a
> few times. This DCNN topic was the worst case so far...
> 
> Greetings from the bottom, Ingo.
> 
> 
> 
> Gesendet: Donnerstag, 28. Januar 2016 um 16:41 Uhr Von: "Lucas,
> Simon M"  An: "computer-go@computer-go.org"
>  Betreff: Re: [Computer-go] Mastering
> the Game of Go with Deep Neural Networks and Tree Search
> 
> Indeed – Congratulations to Google DeepMind!
> 
> It’s truly an immense achievement.  I’m struggling to think of
> other examples of reasonably mature and strongly contested AI
> challenges where a new system has made such a huge improvement
> over existing systems – and I’m still struggling …
> 
> Simon Lucas ___ 
> Computer-go mailing list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWqwGDAAoJEInWdHg+Znf4Ok8P/ix67Uj91gXi0+dJFgNq8R+m
cWRU7J3rRtPT+PuQM4suYjVhvn2+r/jmUOSWOxXYqzL03d323ufIl935jtZK00k7
428bp2g0r7NqZHvp4r8SnEVMhvVhjWG+vuGqPWPGJWnimpV/6C0d6/8JHZMVnoWR
BtAhlnJC1xOIjGJi86Xs3xRMiAdqEOeyu0HLu0LrJjm+fz4JNmVpENAPWivHBP4U
gnYSMHztrDxfBeWJCKg5a1hJBp7tGoN2LN7axbIZeNKh6cD30VtyWtHPDCpkfRlc
mW4mH5ljSrkcyjcfkC7ZL+qGf7aZ80vlNlGVlufeDSQaet8mkh592+lBVDSBViyU
NERSjqkWcbj9i5spNhyor+XEyLXgic23oXoPdTFhkYWQ0NKjRIdIhkGGTyvggbsn
Z4+T1GibInRS6MhLNxr2WPBDKqMYPGwXP325J1NQj+/aKpItOC4wRoQbtEdZdzb2
gDQ+Wu+DSlM99jgA0606BTKrv5n8ktTSP0CO8H9HBxsm+0rM1w/Nb0rg9AZnhPzD
9iEGkUo9MyWmQcsLasV/sYhZkbrR84l+GWkRSNnxF7DkbhbjnM6VmSI4q7j9H+R/
iotyslx7G3EZUuFTkFx2O9ePmbb3WEBKnAYFQmx2zXbrZ9hFtMQ8q1wAaa4svDE4
HiwOsxg3aaTpxTJCBT8u
=NpUI
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to Zen!

2016-01-23 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Thanks,

would be great if we could get the hardware info on darkforest :)

(it is in the tournament rules as I understood, so Facebook should
release...)

Thanks Detlef

Am 10.01.2016 um 16:18 schrieb Nick Wedd:
> Congratulations to Zen19S, winner of the January KGS tournament!
> 
> It was closely-contested, with a group of strong players at the
> top. These included a newcomer to these events, darkfmcts3
> (Darkforest from the Facebook AI Project), which would probably
> have won with better time-keeping.
> 
> My report is at http://www.weddslist.com/kgs/past/119/index.html As
> usual, I will welcome your comments and corrections.
> 
> Nick
> 
> 
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWo0fOAAoJEInWdHg+Znf4SJ0P/0fHDWY1RQv7+AV93gwOBwFN
+W7ib1ksHO715tUT6PeQZ54WvOZGbNP8C9oenoqR1FzPHIAdMlUbX55zvfbwtlgT
1NB24qCsM8n4Y+D/KqtNM6UH/BSrjzBbmymwjZPoYnC45A6w/OJbNx8s/vjdSpda
35/8D9GWR6wn7pmNNmLSf0UH20+Ft1NxnB8liFuWwoF4cHY7nET07wdJlZf8nYQw
Gnjdy+d/nT47eJS6C/CVX+mXgAKJ4y6Ng5I9ubZwjFMjeeZohYJltJZh8q9+xVnm
YUCbptHWL4w9YvtgQtnb27VP++fANgDjgPmfqmK+y6v1B4+Dq8sGao4JjJvkpZOd
DaEOJelavlKVpGFrkaTCLAEKH/uvy4sTIIpK7a5dab0yijhskj7VZFLj3mcm7t6I
IR/1tclSm64aOTDsrRav1f7FP8wMCMckbgFVnn25IFv8Qz/ur+jqPexTKfTDrC+9
gGVVmDUfCtkeUG04BNtmZZ/143AsPidTvs4R0Wcao9CFt/6eEgbEoGE8OTLVjTd5
brk0sqaOfCRTSTLrO5wg+lH4e5jReQg/awYqWLx11AkhyktzCO4Qom9UF1hGW4Fm
DmwS4MzAYt+iVeQ96HjhozLdZ68QBfIlIYOXun27hWQKB318GtZg2ieXVD/KRIAC
g++8WfbFy5yFYcqVjjnb
=O9vR
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CNN with 54% prediction on KGS 6d+ data

2015-12-29 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

I am fighting with the problem most seem to have with the strong move
predictions at the moment, MCTS is not increasing the players a lot :)

I wonder, if somebody measured the performance of the pure CNN54
against pachi 10k (or 100k), to get a comparison with the darkforest CNN.

It is not too much work, but you probably did it already.

Thanks,

Detlef

Am 21.12.2015 um 12:42 schrieb Hiroshi Yamashita:
> Hi Detlef,
> 
> Thank you for publishing your data and latest oakform code! It was
> very helpful for me.
> 
> I tried your 54% data with Aya.
> 
> Aya with Detlef54% vs Aya with Detlef44%, 1 playout/move Aya
> with Detlef54%'s winrate is 0.569 (124wins / 218games).
> 
> CGOS BayseElo rating Aya with Detlef44%  (aya786n_Detlef_10k) 3040 
> Aya with Detlef54%  (Aya786m_Det54_10k ) 3036 
> http://www.yss-aya.com/cgos/19x19/bayes.html
> 
> Detlef54% is a bit stronger in selfplay, but they are similar on
> CGOS. Maybe Detlef54%'s prediction is strong, and Aya's playout
> strength is not enough.
> 
> Speed for a position on GTS 450. Detlef54%   21ms Detlef44%   17ms
> 
> Cumulative accuracy from 1000 pro games.
> 
> move rank  AyaDetlef54%  Mixture 1  40.8  47.6
> 48.0 2  53.5  62.4 62.7 3  60.2  70.7 71.0 
> 4  64.8  75.8 76.1 5  68.1  79.5 79.9 6
> 71.0  82.3 82.6 7  73.2  84.5 84.8 8  75.2
> 86.3 86.6 9  76.9  87.8 88.1 10  78.3  89.0
> 89.3 11  79.6  90.2 90.6 12  80.8  91.2
> 91.4 13  81.9  92.0 92.2 14  82.9  92.7
> 92.9 15  83.8  93.3 93.5 16  84.6  93.9
> 94.1 17  85.4  94.3 94.5 18  86.1  94.8
> 95.0 19  86.8  95.2 95.4 20  87.4  95.5
> 95.7
> 
> Mixture is pretty same as Detlef54%. I changed learning method from
> MM to LFR. Aya's own accuracy is from LFR rank, not MM gamma. So
> comparison is difficult.
> 
> Cumulative accuracy Detlef44% 
> http://computer-go.org/pipermail/computer-go/2015-October/008031.html
>
>  Regards, Hiroshi Yamashita
> 
> 
> - Original Message - From: "Detlef Schmicker"
> <d...@physik.de> To: <computer-go@computer-go.org> Sent: Wednesday,
> December 09, 2015 12:13 AM Subject: [Computer-go] CNN with 54%
> prediction on KGS 6d+ data
> 
> 
>> -BEGIN PGP SIGNED MESSAGE- Hash: SHA1
>> 
>> Hi,
>> 
>> as somebody ask I will offer my actual CNN for testing.
>> 
>> It has 54% prediction on KGS 6d+ data (which I thought would be
>> state of the art when I started training, but it is not
>> anymore:).
>> 
>> it has: 1 2 3
>>> 4 libs playing color
>> 1 2 3
>>> 4 libs opponent color
>> Empty points last move second last move third last move forth
>> last move
>> 
>> input layers, and it is fully convolutional, so with just editing
>> the golast19.prototxt file you can use it for 13x13 as well, as I
>> did on last sunday. It was used in November tournament as well.
>> 
>> You can find it http://physik.de/CNNlast.tar.gz
>> 
>> 
>> 
>> If you try here some points I like to get discussion:
>> 
>> - - it seems to me, that the playouts get much more important
>> with such a strong move prediction. Often the move prediction
>> seems better the playouts (I use 8000 at the moment against pachi
>> 32000 with about 70% winrate on 19x19, but with an extremely
>> focused progressive widening (a=400, a=20 was usual).
>> 
>> - - live and death becomes worse. My interpretation is, that the
>> strong CNN does not play moves, which obviously do not help to
>> get a group life, but would help the playouts to recognize the
>> group is dead. (http://physik.de/example.sgf top black group was
>> with weaker move prediction read very dead, with good CNN it was
>> 30% alive or so :(
>> 
>> 
>> OK, hope you try it, as you know our engine oakfoam is open
>> source :) We just merged all the CNN stuff into the main branch! 
>> https://bitbucket.org/francoisvn/oakfoam/wiki/Home 
>> http://oakfoam.com
>> 
>> 
>> Do the very best with the CNN
>> 
>> Detlef
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWglFcAAoJEInWdHg+Znf42q4P/AnMdgqhps4RSJG3NoLiwEUq
QmT4mQd58WbuxnXRO4xiyIKGTQq13+FOpqVu7RgFPXxQaKS+8Hi1qpGVjg8aE8Zh
bnHb3D+p30hv9lCT8e4xNQ2B1J

Re: [Computer-go] Those were the days ...

2015-12-29 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Yes, the published

http://computer-go.org/pipermail/computer-go/2015-December/008324.html

I think, you can not win this with "normal" good moves :)

You have to exploit mfgo

Detlef

Am 29.12.2015 um 15:18 schrieb "Ingo Althöfer":
> Hi Detlef,
> 
>> I gave pure DCNN 54% a try against the 15 kyu version:)
> 
> did I get it right, that these 54 % were from normal 6-dan games 
> ("normal" meaning small or no handicap)?
> 
> I think you need "Mueller High Handicap games" for feeding the
> CNN.
> 
> Cheers, Ingo. ___ 
> Computer-go mailing list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWgpnpAAoJEInWdHg+Znf4mlAQAImrF9kB8G8pSqUs0/0Dnv1b
FltaJ+6BImhjL2I5Gc6M0QsnBIbozARmKI4MshcNYfhxw9ZNSg4GxXYY/nEli55t
mcHLVPS1YhpQ/VTQb3Om900PXQpd16UR5YeaVnKmFBWvZZ19BghQsVKlCA3M3QrK
BnU6P1na8eRSB5ZXF6qj82e4EomRd2ul27P4r9RKb5+2gAjBCUj6yhRRbcCJq92R
uuecCh/AtvaZdem5dmL2b+CJi5tQHVaeye/xr0fyD4xSCIGgHuyWdJ0qQQaCtK8N
d61YeAnN9bgciSUMEps8J8DrNvLxkyZPzWJaReRAMeBjzrSfvlaPYsWv+Qr3OlcH
rpL3UWnvnom3rzyi7vvnEXaZf0+rpCUWjJLe81D2ouliRk/mO89F49MDFazjCVwF
rEcGv+Ilgb/PhngVSbq0J6rAqSF1M1tAoOfZ+MxHaUe42iamhu9DeNqQmIXTkqMn
cynn8O/FwdqLiCbx/FJgkkyWwq8Qz7xNvrr39CjcXICGFzg8u2RCyv1UXzyzVlCZ
qKB0hwr6ax2hHbuWJn7Bu9BdeK7CG7rIgW6PaCDRHPu/3xlq0Ot7dmfojh6LJmPV
GlQLAzsUZVH0flHNolRz1/b1Naq3BfCUTprMo+rTrX4itxrsA+KVQqBO83wQ7pyJ
CoKDYUbvZqUXkLm0Myoa
=ZP13
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Those were the days ...

2015-12-29 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I gave pure DCNN 54% a try against the 15 kyu version:)

http://files.gokgs.com/games/2015/12/29/mfgo15kyu0-NiceGo19N.sgf

There was no pass handling, therefore filled an eye, without would
have been 133.5 loss or so :)

Detlef

Am 29.12.2015 um 11:20 schrieb "Ingo Althöfer":
> Hello all,
> 
> 
>> Martin understood computer go weaknesses, so he played to exploit
>> them.  A modern program not specifically tuned for those
>> weaknesses would have a much more difficult time.
> 
> Agreed. Concerning CNN bots (like darkfores3), a natural way would
> be to find some players of Martin's calibre who play hundreds of
> games against the old MFoG at handicap 29. These games might become
> the training set for the CNN.
> 
>> I released version 10 in 1997, so it might be more accurate to
>> play against version 10 rather than against version 12 at 12 kyu
>> level.  I probably still have a version 10 CD somewhere.
> 
> I also still have one, I think.
> 
> Anyway, as soon as a serious contender shows up, I am willing to
> allow a match environment that may - in case of disputes - be
> helpful for the bot.
> 
> Ingo.
> 
> PS. I want to see my 1,000 Euros finding their way to a pocket of
> a talented go programmer! 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWgnXhAAoJEInWdHg+Znf4VIMP/RO4p3KIiY0e7ntcMqZWpH7V
3sHxFLz+YxO/CUTMxhserJmdgJFRWCdIamw5jYOF3AvLA2HaJIEbef49I7X+WEFC
bqzqkJCwei3YchisnAHeKUYB2FxNEg6YIgSwGOGvcx7rSJJsRtBz34L330AOo875
2hMe2zAAvPdDyKEUhg4BDKB9CA4WwU2q4DlVyYcoJ0VPHSKGvKbwaCK1Loy0Zreb
eJzBpLOOyT6ZdPGIlDdeUtHHZkSsffkdd/tA22KJS5ZzCtN+gw5gJoTjK5O2ZcJj
v7cjrL8ts8odclzf4JpRV+D8dFoUPTezUYYcd3Q0OBA9I/U//ahFIUPND1VH0A9d
LYBSz23PaQ7nl4mRBIDOConc0eV1Y5LvBEKiLQWxmh2LBhh8QQy5e9wc6QrhwVCL
lYvOPmBnb43eEMFq0glUfW34YYO8NwoMMB13c439thb3RwiShcNtDGIkpjx+a1is
zCJC3QNmpA693zummwqopRraNRuH3rhF51hz203pxcjJ/G2CPwrKmxFoGYhyNY1I
dXaw/75H3L4ITvWKfVjz7c0xo59yz5pBxj8e8wyTWk49DRyyZINmmmGQ/tU7smd9
hUq68GHi37rwyEp161wFhZU30vnco1isH/wL4rJOvw8CFgnZwbVY/skQE10z8M/G
sHmZKdNwF0D8UpuxYOTH
=AFfy
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] CNN with 54% prediction on KGS 6d+ data

2015-12-08 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

as somebody ask I will offer my actual CNN for testing.

It has 54% prediction on KGS 6d+ data (which I thought would be state
of the art when I started training, but it is not anymore:).

it has:
1
2
3
> 4 libs playing color
1
2
3
> 4 libs opponent color
Empty points
last move
second last move
third last move
forth last move

input layers, and it is fully convolutional, so with just editing the
golast19.prototxt file you can use it for 13x13 as well, as I did on
last sunday. It was used in November tournament as well.

You can find it
http://physik.de/CNNlast.tar.gz



If you try here some points I like to get discussion:

- - it seems to me, that the playouts get much more important with such
a strong move prediction. Often the move prediction seems better the
playouts (I use 8000 at the moment against pachi 32000 with about 70%
winrate on 19x19, but with an extremely focused progressive widening
(a=400, a=20 was usual).

- - live and death becomes worse. My interpretation is, that the strong
CNN does not play moves, which obviously do not help to get a group
life, but would help the playouts to recognize the group is dead.
(http://physik.de/example.sgf top black group was with weaker move
prediction read very dead, with good CNN it was 30% alive or so :(


OK, hope you try it, as you know our engine oakfoam is open source :)
We just merged all the CNN stuff into the main branch!
https://bitbucket.org/francoisvn/oakfoam/wiki/Home
http://oakfoam.com


Do the very best with the CNN

Detlef




code:
if (col==Go::BLACK) {
  for (int j=0;jinGroup(pos))
libs=board->getGroup(pos)->numRealLibs()-1;
if (libs>3) libs=3;
if (board->getColor(pos)==Go::BLACK)
  {
  data[(0+libs)*size*size + size*j + k]=1.0;
  //data[size*size+size*j+k]=0.0;
  }
  else if (board->getColor(pos)==Go::WHITE)
  {
  //data[j*size+k]=0.0;
  data[(4+libs)*size*size + size*j + k]=1.0;
  }
  else if
(board->getColor(Go::Position::xy2pos(j,k,size))==Go::EMPTY)
  {
data[8*size*size + size*j + k]=1.0;
  }
}
}
if (col==Go::WHITE) {
  for (int j=0;jinGroup(pos))
libs=board->getGroup(pos)->numRealLibs()-1;
if (libs>3) libs=3;
if (board->getColor(pos)==Go::BLACK)
  {
  data[(4+libs)*size*size + size*j + k]=1.0;
  //data[size*size+size*j+k]=0.0;
  }
  else if (board->getColor(pos)==Go::WHITE)
  {
  //data[j*size+k]=0.0;
  data[(0+libs)*size*size + size*j + k]=1.0;
  }
  else if (board->getColor(pos)==Go::EMPTY)
  {
data[8*size*size + size*j + k]=1.0;
  }
}
}
if (caffe_test_net_input_dim > 9) {
  if (board->getLastMove().isNormal()) {
int j=Go::Position::pos2x(board->getLastMove().getPosition(),size);
int k=Go::Position::pos2y(board->getLastMove().getPosition(),size);
data[9*size*size+size*j+k]=1.0;
  }
  if (board->getSecondLastMove().isNormal()) {
int
j=Go::Position::pos2x(board->getSecondLastMove().getPosition(),size);
int
k=Go::Position::pos2y(board->getSecondLastMove().getPosition(),size);
data[10*size*size+size*j+k]=1.0;
  }
  if (board->getThirdLastMove().isNormal()) {
int
j=Go::Position::pos2x(board->getThirdLastMove().getPosition(),size);
int
k=Go::Position::pos2y(board->getThirdLastMove().getPosition(),size);
data[11*size*size+size*j+k]=1.0;
  }
  if (board->getForthLastMove().isNormal()) {
int
j=Go::Position::pos2x(board->getForthLastMove().getPosition(),size);
int
k=Go::Position::pos2y(board->getForthLastMove().getPosition(),size);
data[12*size*size+size*j+k]=1.0;
  }
}

-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWZvOlAAoJEInWdHg+Znf4t8cP/2a9fE7rVb3Hz9wvdMkvVkFS
4Y3AomVx8i56jexVyXuzKihfizVRM7x6lBiwjYBhj4Rm9UFWjj2ZvDzBGCm3Sy4I
SpG8D01VnzVR6iC1YTu3ecv9Wo4pTjc7NL5pAxiZDB0V7OTRklfZAYsX4mWyHygn

Re: [Computer-go] Facebook Go AI

2015-12-06 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1



Am 06.12.2015 um 16:24 schrieb Petr Baudis:
> On Sat, Dec 05, 2015 at 02:47:50PM +0100, Detlef Schmicker wrote:
>> I understand the idea, that long term prediction might lead to a 
>> different optimum (but it should not lead to one with a higher
>> one step prediction rate: it might result in a stronger player
>> with the same prediction rate...)
> 
> I think the whole idea is that it should improve raw prediction
> rate on unseen samples too.

This would mean, we are overfitting, which should be seen by a bigger
difference between unseen and seen samples, which is normaly checked
by using a test database and compare the results with the train
database and small ?!


The motivation of the increased suprevision is
> to improve the hidden representation in the network, making it
> more suitable for longer-term tactical predictions and therefore
> "stronger" and better encompassing the board situation.  This
> should result in better one-move predictions in a situation where
> the followup is also important.
> 
> It sounds rather reasonable to me...?
> 

Yes, it sounds reasonable, but this not always helps in computer go :)

Detlef
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWZGoGAAoJEInWdHg+Znf40Z0P/3vHdeHIb9/5Fqp78hdT45IC
RizA7703fWMeU8JC6dZA1ziI/oKfGLTetFVFGIcPIx+T3lkRapZLYZNa8BXQGXZr
lnjSk2aEfsJdCZd+Y62ECLitXEOOosjvF9bHNoAj39MeuDOUMxBcjCSSUjNDbOTm
eYWAYC0UgTE7xzR629FHQ1PJ6+iw2RYsvfEmEXwCD2blT8gab4uNDZelk+R/l4KI
F73Sid8ULz3AE8zi6XX2qWHmupKa7bMI9ZWcDfAf78rSPMOx6SPYbrX/QXsCiQAD
fy3KNOro93HJyW1sDk4mJEERm+UjmOcGjxILQDkcRo/+D/SdLT2DbeXM1KuE3RWu
0tSsubBNvNv5begSLkhOvSrybKZYtsgTqycF+4dMzLVj3LO5+Iy957w1QfwHBjBZ
ATT4GFPidrnrGfdKcRL/mtQJi0+JZ/QVZoIMvxYHMVk6Vz9JRwGFdFZKL1mnX5oy
t9Y8tIzkkQnjJk4/XRVuDnngBKFFgBgbyLWy6WU1MmhEGNyHt7anUKDyxHHz4LU/
00vJUIGiMsaOTqT92yLPWOm39RHRP2J9Pfflz12+ysGpD+VKsSNZOm6wdyn8/q5c
A3zrVJM7tug2Iu7mjJ6BC10bF813EGKfgtjxrQ54TnlIucMDssv9rBajqsoRDIxG
14Iv+3P/YDY0hmRoI51S
=Bx1M
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CGOS again

2015-11-10 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Thanks a lot,

mine did not run stable, hope you have more luck! I think I had to
little RAM (128MB)

Detlef

Am 10.11.2015 um 14:11 schrieb Hiroshi Yamashita:
> Hi,
> 
> I have started CGOS on my VPS(Virtual Private Server). 19x19 and 
> 9x9 are running. In 9x9, komi is 7.0. Draw is 0.5 win in rating 
> calc. I'll keep it running for a while. 13x13 is running on 
> original server.
> 
> time   serverportkomi 9x9  5 minutes 
> yss-aya.com   68097.0 19x19   15 minutes
> yss-aya.com 68197.5 13x13   10 minutescgos.boardspace.net
> 6813 7.5
> 
> http://www.yss-aya.com/cgos/ 
> http://www.yss-aya.com/cgos/9x9/standings.html 
> http://www.yss-aya.com/cgos/19x19/standings.html
> 
> Regards, Hiroshi Yamashita
> 
> ___ Computer-go
> mailing list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWQfGTAAoJEInWdHg+Znf4WcEP/i0DDbfaFLYeChvDTuCZrcnV
UanfUVgupxizi0rNf8Tpbn+7ZuhA1nP/4Vpf7w8brgtrhH/GxbxymmOGDzrKCBgD
bFiS2FOyJaqkGKmzvs79qRUi5c7bsEEdnBm2RHxgDdBIrMNLPyLz7yJPSIZIYUf7
VpgzVs7E44uA1RA2Nlh2wlAj81AmaIDEm3Zx4E2b5nZ3864M7UbqNzIl7GtePGw+
k/Uwt7ezrPkIoBma+7enHz/6b1yH0jhjoApFE4JbUktA0tsrhE24x2vSGSKWlUj7
Yibt0VsaGDINZ8Qcq8VhAxfk81SRqzSBTtnlwAUzZ1xGnzgJGBiUg5voNReRHr+C
eY+B+lp+n3zcKC5ZhUe1J3Nxg775I/pwXiTmUGRX95uMTOkaU3NkpxEBIiKzDoCA
jImCNFXqgrYBxE5byDvluCrggpWz7YdhCUGFCzccL7LM1roajo/g1PyyTLoKLwVp
IvQyu/mC/HTfb8QnAOOCXYegOOOHABdjdWeX70l1kH+g+o2g+kFUpi/K7Y+tKl1P
BxSBBMtXgJaLNFH1bJff1u072Zw/ktpUzJjR/bAFkFDH/3Uu339yC0QRuJzOr80K
VMd0G1YrJT1ShgqW5QUdDJXIJi62LGY5ERaY8/KihGbR8nmSUhXOel2YZ4HcWtcF
CRBsuJV8LebVzHmQZJ66
=uFhn
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] Feature training with "offset"

2015-11-05 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

I would like to train features like in

http://www.remi-coulom.fr/Amsterdam2007/

but using DCNN probabilities as an additional not trained gamma, which
is always present. Did anybody try using an additional not trained
gamma (not necessarily DCNN)? Is there a reference?


Any suggestion would be great :)

Detlef
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWO5daAAoJEInWdHg+Znf4EVcP/3vy8u5mNXDyJU1Tml0Eo4WJ
5im4tsKAsELwDMiCGSpCv0Zkd8NCY7sIDtrDXOT/KFAzOBFwIln18alGha4GDSQp
WxGxDuIDdkOHfohxjSWSH6xmhI6eKaPM8QN4foYlNnxT89YMo4GSgBsP+CT34L6R
Af7T93GGKxg/peg7WZFDFmDW0MfHCSk+mu+MyCGMXmKoPPzbUkvGRMBVLZvHdOYq
LsEKrqc9CD0EWu6eEWJEQIwfdEN7Cyy1vSo1S+6P/xVeg5bTd5dviFnYNSRdjaiE
dIv/0+lNkVv/9Zquwy6SXHLBa0RN0L8+bfApUoly+jmysiPbSG89CY2AUA72d9a/
uaX3FAl705D/k1PJE+zJU3MCAKtVAtWaYZAM2PaHZwQTYtBlNXHyhkT7BIbxtnnb
6pN4B+nblEIJl3BsjND0zumZ3X9W2MQT3asj1J/KsYNo6aGJWm+PRRaEvBViPUtz
SL3te5E0ISJ17opPlp0kYQ62Dq91bdlvwp9JjZOEzTZN03S+ONvfQncCZAxnxcYg
hOD2uZ1g1wsAJHSs0owUSoHDSM1T3dgWMF+is1eZ5xgO+PPeNenWAUJtnOiUa6Z+
Z3EObyimaT7J4fG7vwbmLcfHX0GHKrPvCOR4+ZtOwdfDTi5fTvh5rLMIcyKXeRRv
ron/WOJWRsWh/hIB5Eko
=b31b
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Number of 3x3 patterns

2015-11-03 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Thanks, but I need them reduced by reflection and rotation symmetries
(and leave the center empty so 3^8 + 3^5 + 3^3 and than reduce)



Am 03.11.2015 um 19:32 schrieb Gonçalo Mendes Ferreira:
> If you are considering only black stone, white, empty and border, 
> ignoring symmetry, wouldn't it be
> 
> 3^9 + 3^6 + 3^4
> 
> 3^9 for patterns away from the border, 3^6 for near the sides and
> 3^4 near the corners, assuming you are also interested in the
> center value.
> 
> This makes 20493, then you need to take out illegal patterns
> (surrounded middle stone). So I'd hint it's close to 2.
> 
> On 03/11/2015 18:17, Detlef Schmicker wrote: I could not find the
> number of 3x3 patterns in Go, if used all symmetrie s.
> 
> Can anybody give me a hint, were to find. Harvesting 4 games I
> get 1093:)
> 
> Thanks, Detlef
>> ___ Computer-go
>> mailing list Computer-go@computer-go.org 
>> http://computer-go.org/mailman/listinfo/computer-go
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWOQRAAAoJEInWdHg+Znf4SCcP/1oQI5ZI4nIMym8c/1gg1OWh
IDqhO3Ie1Rd7nxgDxaXZDaYACZr9UYRsueyLCLNGignLw6FVpXZvfC9YRn+NKenw
fruwUt/lRzfZGhRyxS5R55pq1X8ux1F/+aa63sRo4/SZGFCVCKeX/hsv3RyM6nuw
iuO9BhFSr0upmarcAJiyt4qSg9JWVdrQI7CaWE6C5XdJI3v5zXxV6yG7mcrSl8oy
66fVbpLo++TjWRUZoChn+HSLuq1rTfp6fWPbMsV3Wwfk7Y3kGkBzMMjvjUpVRktv
5Nu3tcFEf2Hts1iudQ/lPTdA9UYPhkSKwD3l4Z8khR6HpxO8kKClbU6g/uXBwLP+
k/ORYx4imeXMhSfgbzBxZAljRqL86cpUT9A0F0+Llqq4lClCq7I+CjZDXamAH8P8
AqpDQhizni0wEy0KLizkwvJ8mZnqg310553wKaZs+BfP2kkAw+iUtrqvkQ6dsk2t
8fgwMT5Yuw/BfvXLHT/Bp2/GSw5Q97yMNXBBdMqvbIpWkann5bwqVmJljz1VV+TU
G8CA9xeIj8HjVETBUIMURZDxTs+RkxT3fIdlEhH0MvreKBEw0Xp/eSUffqINVSk4
pIcUdjTnutQgL/AtxePcnQuGksvB0KPK5zvUscNgXw9US3JUu1DXcJ6mn+y1PRKK
9xaazKCRBfJjAkhnBlzR
=j9hC
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Number of 3x3 patterns

2015-11-03 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1



Am 03.11.2015 um 20:24 schrieb Jim O'Flaherty:
> I don't see how "leave the center empty" works as a valid case,
> assuming this it just any valid 3x3 window on the board. Given bots
> playing each other, there can be 9x9 clumps of a stone of the same
> color. I can see it being argued there is no computational value in
> this specific pattern instance. But, then what are the conditions
> of the exceptions to the generalization? And how do you effectively
> iterate through the other +20,000 variations (not reduced by
> location or color symmetry)?
> 
> So, I'm curious, is there some other assumption about the 3x3
> window other than it be a view into any valid 3x3 space on a Go
> board?

Sorry, I did not explain the details, the assumption is:
I play in the middle, so it must be empty. I thought legal moves might
not really reduce the number of 3x3 patterns, as there can be no
suicide known from 3x3 patterns, as a capture is always possible.

Therefore I wonder, what 14 patterns did not appear in my 4 games
harvested:)


> 
> On Tue, Nov 3, 2015 at 1:04 PM, Álvaro Begué
> <alvaro.be...@gmail.com> wrote:
> 
>> I get 1107 (954 in the middle + 135 on the edge + 18 on a
>> corner).
>> 
>> Álvaro.
>> 
>> 
>> 
>> On Tue, Nov 3, 2015 at 2:00 PM, Detlef Schmicker <d...@physik.de>
>> wrote:
>> 
> Thanks, but I need them reduced by reflection and rotation
> symmetries (and leave the center empty so 3^8 + 3^5 + 3^3 and than
> reduce)
> 
> 
> 
> Am 03.11.2015 um 19:32 schrieb Gonçalo Mendes Ferreira:
>>>>> If you are considering only black stone, white, empty and
>>>>> border, ignoring symmetry, wouldn't it be
>>>>> 
>>>>> 3^9 + 3^6 + 3^4
>>>>> 
>>>>> 3^9 for patterns away from the border, 3^6 for near the
>>>>> sides and 3^4 near the corners, assuming you are also
>>>>> interested in the center value.
>>>>> 
>>>>> This makes 20493, then you need to take out illegal
>>>>> patterns (surrounded middle stone). So I'd hint it's close
>>>>> to 2.
>>>>> 
>>>>> On 03/11/2015 18:17, Detlef Schmicker wrote: I could not
>>>>> find the number of 3x3 patterns in Go, if used all
>>>>> symmetrie s.
>>>>> 
>>>>> Can anybody give me a hint, were to find. Harvesting 4
>>>>> games I get 1093:)
>>>>> 
>>>>> Thanks, Detlef
>>>>>> ___
>>>>>> Computer-go mailing list Computer-go@computer-go.org 
>>>>>> http://computer-go.org/mailman/listinfo/computer-go
>>>>> 
>>>>> ___ Computer-go
>>>>> mailing list Computer-go@computer-go.org 
>>>>> http://computer-go.org/mailman/listinfo/computer-go
>>>>> 
>>> ___ Computer-go
>>> mailing list Computer-go@computer-go.org 
>>> http://computer-go.org/mailman/listinfo/computer-go
>>> 
>> 
>> 
>> ___ Computer-go
>> mailing list Computer-go@computer-go.org 
>> http://computer-go.org/mailman/listinfo/computer-go
>> 
> 
> 
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWOQv1AAoJEInWdHg+Znf4GTYQAIATv45HU7fR1S4bfiygapDI
IOOnTtHTdjNoqHWGD07Y3MUy8rP24AcWHtEmlH+uwt42HBFXhCW9Hr2ul/Yreofl
e/lxcoawYYWs1tPuHEKV8TPQUVM3aHvPREoQgBMbkMlDpKQA1Jj3Q0Kv8T9cUVOW
S2URrTyOFrLiEbl4znYJwiH7hVI7q0HKom/XGFYWkfwhvJjDdKDrPbTUyl4IWo2Q
v/HdIXC/6WrPSnkeFnkc595w0qTUiXWj+B/0JYMnKvBml3aEsG8W6uT79SdDJ1MN
OJ4iW9L08p68Ovxt6Wp+eXopiPZSQ90PxPtI3cfmWrPWhs3/P95mLPg+u0CEt+PH
iuMaCM/XR68rWqQhMjRVbJkM+udo0f5iIGwN3xSDQiqfD1OO4Ks60Bdbj2qmKu/B
npEMGGeCqQmiyPftCYSdeMTHPScH+CvcL1nZaC4kdW7+aDfrC7JvU3L5nfKhVxMK
RfuXdNeX6mVAI2uL+MvFFea1B38qvdBS4y1XCQ8QObQxuxNJJupzQ8fixYGdOotj
UzuuXI4pyCzEcWWG+dr58pA35MbEpUWVsw/UMSA96RjevaqAUQ7nyFvNxcBahzE/
PGHRvcdp/c9AtlxKLDCqMd4+XMVWkSzj75jFhJuyRM4hkcszYdzVngWj6D4h9Npk
bzUgyQTYpesLVPNtVfjx
=Vqlf
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] Number of 3x3 patterns

2015-11-03 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I could not find the number of 3x3 patterns in Go, if used all symmetrie
s.

Can anybody give me a hint, were to find. Harvesting 4 games I get
1093:)

Thanks, Detlef
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWOPpEAAoJEInWdHg+Znf45nEP/j4Xy8lJTesaydsiEeuAUZGZ
vfmmOEOYoX6kH2wZ6BA9GqD2NpDlZrm9+DtobTMg3EyrIkIc2kXZVkSm/rS24Jo5
p18Rub2mzeYEw4Qj3/mlr0L4NsO88wNz+vuJdDswr+UqgpEFA8Mqzf8dbC22YegO
HFGOMKPF5BHzl8clZ899N1SUSCRbuZAM/YKmtmJvi56/+MoJCO7gYOWdmUC2KYGu
J72asmmzkGgOZhYPykDF1+nGeed4KiIdtG5b9J0b0/7oV0lu4D36mjhpwbXXA8f3
xCtDO9jm6OG4kIRamCibEmYwVrJFnriltSP5RumyMGYuv7Bv7Ret3+Fgdg7IQFGp
xpH3c4cTE+GJ9d7Afr+FCMJcbEPcYSz+xPDSWhLWfTqYfQj7fh/qepTw2SLs+eD4
bjMWa3dISGnwKhzJykwXUZTqWnL/h3MvXe95Fi8bvYNEMSXCWlG4ajoSDQwwdPT2
Fktv8RJwVmtOYl5EDZhCbXi7zxvTsATyYtAxNbLcGbSyerJVrKh7cFp07GaaKyy/
tbo9tw9AEmb3xZ+qBQNI5lzQqtbWYn3sbu4eLSjIs/njtp2A7cVVW/O1SUrW+wAQ
ePkehsPjQ7yB++op8VZmN/J3f5M2UmO6TgSPDKQNUSeyCeZHzjZDJjH7qoLDr1Mw
fXMMo7Pg5lc6GOOBL885
=LKxP
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] caffee

2015-10-09 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

all the guys which are using caffe for DCNN:

If you want to see how bad a bot is playing in CPU mode: have a look
at the last results of NiceGo :)


Obviously the caffe library changed between December 2014 and August
2015 an now every thread seems to need to call:

Caffe::set_mode(Caffe::GPU);


I only noticed that my GPU did not got warm anymore, I thought might
be optimizations :)


Detlef
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWF70RAAoJEInWdHg+Znf4q4sP/An6eFAOeyJcCPqax0AQ8GKu
K6CZ93n2b2m51Dgk5hxOA5842NNRtfWequHadm6skA5EU7HIov4v6v4DxTR6Bgsi
a6WGEhr+SE2CfaVExZMw4ttwLdSPUVCb7afEbrr3ydsDZ0enXre5Qx6hHbVcNOVl
DbUf6VAZ4iSFx6FV9GPLLc2eVnkuhNa31jdabcujhXqXCoSprgyYrAVyzftu0bCz
VYdQqmZ8/dqvEfD/5seRQdq7VsPSORVKDwGr0GyPeMPqysSeI8snidR3NjSELjjR
/T0ymV9MPwQXZZ02BoB52Jj8ql1ZIaLZlwwcJUf8+kKuxF71v3sJuD4ea9TzI5lW
HUIbOpPbDCQs2JH16KoAkU0Y2uTa0rUTuiGNOUKnQZx+sWVxx/bjjJDJMT8Rt5eb
OvMeQL1y2olCRzQoqtOr52l4wCZwZ3WPHg2uP4QlAbM2v0IAo0z8Iv2Xco1mavkt
E8UZDleDyddhkk0g5jX3HcLD66610tHMs9qLXtmV5J/OujgoZyB92wegHwpnF8Yc
DQrJ92OGlxewD+aBv4HpHVc37yZcsCI++hDTBNlb65poJfFd3a4lDyZAGJ0vWJWR
E8ywIDsfRsFnsSkg6pckXb1hIKxS96GbuJtLbs6iVkNZLTeVyEGxeISL32SAr2JY
3m2nMpkPx9q3VLhg08lU
=bye/
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] KGS bot tournaments - what are your opinions?

2015-10-07 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

If I compare hardware specs in the KGS tournaments I usually use

http://spec.org/cpu2006/results/rint2006.html

(Multithread Integer operations are the ones most important for
computer go programs I think)

Detlef

Am 08.10.2015 um 05:48 schrieb Hideki Kato:
> Petr Baudis: <20151007234420.gb9...@machine.or.cz>:
> 
>> (I might propose relaxing the requirement even further, from one 
>> desktop cpu to just one cpu - as in physical package.  Many
>> cloud providers might give you a Xeon instance that's about as
>> good as a regular i7.  You mainly want to exclude dedicated
>> multi-CPU servers and clusters.)
> 
> For Amazon Clusters, Amazon provides vCPU spec (for example, EC2: 
> https://aws.amazon.com/ec2/details/ and 
> http://aws.amazon.com/ec2/previous-generation/). Only such
> instances that vCPU spec number is 1 belong PC class.
> 
> The principle is simple; such computers with one physical CPU
> installed or equivalents can belong PC class.  The exceptions are
> Intel Xeon E7 series processors, AMD Opteron, and IBM Power, as
> they have so many physical cores in a socket and cannot be said
> "personal".  In other words, processors up-to regular Intel Core i7
> processor can belong PC class as Petr suggested.
> 
> Then, how about Core i7 5960X Extreme Edition?  5960X has eight
> cores and actually a Xeon processor.  Is this regular i7?>Petr I
> think, as there are many PC with 5960X are sold widely as a
> high-end desktop for gamers/enthusiasts, 5960X can belong PC class.
> Comments are welcome. #This definition could prefer Intel...
> 
> Hideki Hideki
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWFgTVAAoJEInWdHg+Znf4L8sQAIAiDAfhvT/Dn+08VMof1nCI
603p+IlHgP45P63coAsPed1TqIEC+fiskBvqObgC0A1InDIaAtOrlS3+BbG/kHKH
WCO5JqeWxNvbYQP/h3AsJQQMeqrfBglO5ZFBGkcglDYDzDwdP5zt/P8pmO85NSXs
6NaefFQRbqFlSk3MqdSWV4OVh9iDd4J2kFFk10iQnSVu/uULju2xXY6S8wCVAGED
yhpeZ346eUBQXNtgCBOve5vsFCDpggq/k0VVb/vF7VXjL9IvsIyP3+X6koNeaPcR
+jeRoS5ORXlByHpwX31DNxSqEZK8RkpWOmQtv63/GYc3wL1+Qy9TUPrEHQLEYSdt
FaLGYjsWcY+aZphTdTG9kfZt8QPB/ArBHP4QOdS8b3XP+gsNrEnZ+cSvAbdFtxrG
NyhjT4YzZ7nZpD41L9nXiXhDdnC/UBYX2ynjxtU9B65NjddTz7qAssVvsiaql73m
o8Wpo/GgW/pIljc2LtdbCemCjFmHU+5yHrYzsyWIzFkClF6RIOuMTQ3794PaArC9
qLAYLTzNke/a4blpzhzByF7DdD06FVNFi/NZluBgD42DGGW+bKzHN34lJ2/qTfXy
zy7R+uymV5MCMgcLovVYi7Pmi3HKs8m2+oiR96KOZ0jtZDwx3bfgR3KXk6jsmYj/
VHbg112fKssMPqV5kP3b
=48Gw
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] Fast pick from a probability list

2015-10-07 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

I have a probability table of all possible moves. What is the fastest
way to pick with probability, possibly with reducing the quality of
probability?!

I could not find any discussion on this on computer-go, but probably I
missed it :(

Thansk a lot

Detlef
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJWFPijAAoJEInWdHg+Znf4LZwQAKGuQBtxb6/v+zaczghStCcU
5/a6oKaMNM2CJgCPMzgR2JKKb0gJVS0EldsOCmtQIgS2ZiBn4yP7y6uMGdtvEdMp
ytod1uV/KnW0GaTbaQ2RItIifCX/ft8JMrcchgW9wFVFitZpaPierQZuSOAoqQL1
IWXyZqmr92IoarZWI1MABtGOfiP7OFT91X17tUvhCmJ+dxrx7bczxnhkLfczEK4l
shkuourEX3VYiS/55wE/RwNmyuUnUYXy08SK3Ii4Tp9e5p0lS+i7eCJI/+wzqnUZ
HfjNDF+rPa/EpBo3a9VrTsxN3O8gYkzbiQLmWAZBKYrhwlFikOZ196VFALyZMfXE
KWFrK1niRaubP+00iQb1sXlntGbaAHm6e6+cGOVgykNMjQwseCMrD7xnYpQtaa8q
OHHz72ybGUe9USOKGTtGp+WxNS5j++zcQG7BKoRvRMJHEXomX6z+YeI2a1q19fk4
c/aeSvHGZuVU/xbDp1gIJADz1OtZFv8zzL38wGWoEPBBVxCpGezEk4O/m3Aleksx
rHmyDtcHuWQb0cuFMdwrqW37x+ier6SvAM6XNCTz2bqN0S6/3udTZmhHoFDCxzZ0
h2fZBtlJXEsR8y7MZ1nY6YewWNErbLK1rnkIe7lW7nTDjelwelknAtDTaJVpSQfp
3gUSnNUC5ZZ9F2eJxpCY
=UuDw
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Detlef's DCNN data

2015-09-19 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Thanks for the very detailed report! SO good to see, that stronger
programs start using DCNN.

We should ask Nick, if he DCNN gets an exception from the KGS rules.
At the moment I would interpret them as not allowing multiple bots
using the same CNN, but of cause training this CNN is no magic and
only costs energy. For me it would be fine using it in tournaments!


Your factor: yes, I think this way it is nearly independent from the
factor (it just multiplies the final gamma with r, leaves the order
unchanged ...)

I use an aditive term gamma * (DCNN + z), but this was only a quick
shot too:)

Detlef

Am 18.09.2015 um 20:08 schrieb Hiroshi Yamashita:
> Hi,
> 
> I tried Detlef's DCNN learning data with Aya. 
> http://computer-go.org/pipermail/computer-go/2015-April/007573.html
>
> 
I tested 1 playout/move selfplay, and DCNN with Aya got around
> 90% winrate. DCNN returns each move probabilty. I multiply it by
> 1000, and multiply it by each move's rating. (r *= 1000 means
> multiply by 1000).
> 
> Test games are less than 100. But It seems muliply constant has no 
> effect. 90% winrate is about +400 Elo. But this is selfplay and 
> playout does not understand semeai(capture race). So I guess +50 or
> +100 Elo against human.
> 
> 
> 1 playout Aya with DCNN vs 1 playout Aya without DCNN. (1
> thread, selfplay, Xeon W3680 3.3GHz, GTS 450)
> 
> winrate  wins/games 0.943  83/88r *= 1000 0.897  78/87
> r *= 500 0.913  84/92r *= 200 0.932  82/88r *= 100 
> 0.914  85/93r *= 50
> 
> Select maximum uct_rave move. MM_gamma is each move's rating from
> Remi's Elo rating paper. 
> --- r =
> result_DCNN(pos(x,y)); if ( r < 0.001 ) r = 0.001; r *= 1000; 
> MM_gamma *= r;
> 
> C = 0.31 ucb   = moveWins/moveCount + C * sqrt( log(moveSum+1) /
> moveCount ); rave  = raveWins/raveCount + C * sqrt(
> log((moveSum+1)*175) / ((moveSum+1)*0.48) );
> 
> W1 = (1.0 / 0.9);  // from fuego W2 = (1.0 / 2); beta =
> raveCount / (raveCount + moveCount * (W1 + W2 * raveCount));
> 
> K = 1200; bias = 0.01 * log(1 + MM_gamma) * sqrt( K / (K +
> moveCount));
> 
> ucb_rave = beta * rave + (1 - beta) * ucb + bias; 
> ---
> 
> Aya calls DCNN when node is created. Aya makes 900 nodes in 1 
> playouts. GTS 450 needs 17.4ms for a position. 900*17.4 = 15.6 sec 
> is needed. Aya needs 5 sec for 1 playout without DCNN, and 20.6
> sec with DCNN. So 4 times slower.
> 
> I heard HiraBot jumped from 2d to 3d by using Detlef's data. He
> uses DCNN only in root node. HiraBot prediction rate without DCNN
> is 38.5%. MC_ark jumped from 2k to 1d by using Detlef's data.
> MC_ark uses DCNN only in root node and root's children. Aya's
> prediction rate is 38.8%, and Detlef's DCNN is 44%.
> 
> Time for one position
> 
> CUDA cores  clock GTS 450  17.4 ms 192 783MHz GTX 970   1.6
> ms   1,6641050MHz
> 
> *CPU235.0 ms  ... Xeon W3680 3.3GHz one thread.
> 
> GTX 970 is 11 times faster than GTS 450. Maybe it is equal CUDA
> cores ratio (8.6) x clock ratio(1.3). I also use caffe. Installing
> caffe was the most difficult part... And thank you Detlef for
> publishing your data!
> 
> My test code and Makefile. 
> http://yss-aya.com/20150907detlef_test.zip
> 
> Regards, Hiroshi Yamashita
> 
> ___ Computer-go mailing
> list Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBAgAGBQJV/RogAAoJEInWdHg+Znf438QQAKwoj67b0WPv2rf8dXXAiKz8
5UApBbXRQDzFoUywwJrnWIhV+7Uc8FJ5BBCrGq0XRxgOLzowrgu4lfWQ35Ma5vzg
mGfkYb+ybxdAymzhbYRvRW3klYFnTrgzutgNSe4pFi3/Zn4BfyqdxJKfJcoFawi1
4zdVPCoTDhDyny5XxKrxjSx0RrQzToFCoM6SthHlFDhAEnW3wmDgCkKdCfZKytSb
feneYUb6/Xf0UBcwfJrNLTXzQVXi9EkRDhhZDSDSl2GxTzg0roa5lgQ6Buhm5Ywz
c6qotSqXqvpIUQXl1MvfIzDQyIMgtYN79ktL8F5zL67Sm1Br18au7whHj1HDVTtW
sUL1gP6qxyMDrLQCM3Bk+pqU1R4rQjX2kCoxQYluH4hE43NAGS9VqS1KLKZqHOtH
U/VEGIxKJcSp+HsgVZCEm75ZgiEnGsmS6BaHNgwLNUV/7M38CWnleZZQdYC6h99J
D6J8kkacxgnz6Y9qoo1KcSZ67sdQqzD8Mi10dWMr6BfYpY06SoB5/gu2uo1PqiMg
lZktI0i7JU+qRgTGd+LnPSRtcHlJigzj9b1NYXxmlZ4TYDDZNuKjjqkh0H84PoQY
07rip+raNieGGGJCD3w7flVmSCCY/c7M59AChYZM5xM4yg/IAVNlQRb+tjPqbbzB
xy4UkLxTy792CbEl1u4I
=K7kr
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] cgos 9x9

2015-07-05 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

http://blog.physik.de/?page_id=788

should be up again, sorry when I took my comuter to the olympiad in
Leiden (which was a great event) it changed the dyndns ip to leiden :(

Detlef

Am 03.07.2015 um 20:33 schrieb Olivier Teytaud:
 Hello; we would like to come back to Computer-Go, starting with the
 small 9x9 board; is there any such Cgos running somewhere ? Thanks
 to people currently running the 13x13. Or maybe we should run a 9x9
 server ourselves if nobody has this in his/her agenda ? Best
 regards, Olivier ___ 
 Computer-go mailing list Computer-go@computer-go.org 
 http://computer-go.org/mailman/listinfo/computer-go
 
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQIcBAEBAgAGBQJVmOzyAAoJEInWdHg+Znf4Z8MP/1WgxpsklIoYTsJVDfvyy45I
Gca9RG0RG3XmcBGgL9Mm7c9zY69eiDPYbVHkZQYjohsRa499X/QAkZ7aUnH7del9
hrR7PDlmK31OtUwtOgIqedlgw8pq8BMAIQ5ZjoK7/w1Acm5vbwojlpB+Br6/RIAE
YYOAkX40EghuveTwTMrlb5IztQALN887sFJv0Y+GzTHcMRhJ6k7uhMb9p6q3BJ3/
+vJsVN8Wq7VEmHWTLU2F1LvkMcfnPZKyx+1ieb1kf3tw4nIilDYIZfNbTMNj1pVF
b02NhEyZEkHksO6ymVB0SZZRqVwcReNzOIs4REw+znw87SU8SXdrf0Sd57rZ82TA
mlrCi7hZmVIri48CY1kAFJgPWCl3ce6TYqksHb7XRbBMFy5gMNA2aLaxNFGWHFon
Cs8Frcdo5MkXBseUA8/0hEoj6eQX+J6nZfFQJnlp3JBeaAZXFaowJTtYKZq+Brbh
97cD4TXk2LneCyM/IGGs8iOc72On0cTFsTuRFwIa/a4Ql6BG1PAPfHrp5wJoih+E
dv1uuwNgDP4lqi6rewfd/qYbs/rZljzYi2ZXc/QcYnc3hqWCsXye3bBv+h9kxSie
ecGWp+vEMtsyriBo5LCMRNd0Pv38p3S8IoaNc+4NGW1M1L2BdpJ0orF+BLC+puq9
dVmkTwNi9tcSnU3aA2kr
=jtyS
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] NNGS go server

2015-06-29 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Thanks a lot,

will we play sudden death in Leiden?

Detlef

Am 29.06.2015 um 12:32 schrieb Hideki Kato:
 Sorry, it has three bugs.  Fixed version is attached.
 
 Hideki
 
 Hideki Kato: 55911375.6269%hideki_ka...@ybb.ne.jp:
 Dear Detlef,
 
 Yes, you are right.  If you have any trouble with this, please
 use attached script which will send time_settings TIME 0 0
 upon new game.
 
 Best Hideki
 
 Detlef Schmicker: 55910719.3050...@physik.de:
 Is it correct, that only time_left is deliverd to the gtp engine?
 
 No time setting before? (therefore my implementation ignores it at
 the moment:(
 
 Thanks Detlef
 
 Am 26.06.2015 um 15:22 schrieb Hideki Kato:
 Jago can. http://www.rene-grothmann.de/jago/
 
 Hideki
 
 Detlef Schmicker: 558d5159.1000...@physik.de: Is there a
 client to watch the games on NNGS go servers?
 
 
 Thanks a lot
 
 Detlef
 ___
 Computer-go mailing list Computer-go@computer-go.org 
 http://computer-go.org/mailman/listinfo/computer-go
 ___ Computer-go
 mailing list Computer-go@computer-go.org 
 http://computer-go.org/mailman/listinfo/computer-go
 
 
 ___ Computer-go
 mailing list Computer-go@computer-go.org 
 http://computer-go.org/mailman/listinfo/computer-go
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQIcBAEBAgAGBQJVkUzXAAoJEInWdHg+Znf4tfQQAImqG1UYZZZv5iTDaSIrz3O7
/5XbGxMNhJhLzy9jnGQJ0IeNEpBRjhHJdJ4o3/MeGRDVx0i5JYQpMbQIrgU3pQBD
Fgw2SZ1AmKl0s96HBX8rE5Ntoyj4JdIQwSM/UrMsBlINeFs8xprEWMAly+8jZHEd
qbdq5V1VkdhhwQoTwRYOv2JIJ0T89WZJoSmGeNSCmVQwu5L8KZg5bzQyj5vQjB4V
jjGjOruwe4+QeU5/97QNOGzGAMgJJ5wXT5aN+S6IGrTlW2hAsbHnsmiqRUB57S6n
0mS8rLh7Sr+UR4QLx0nDKwlRJR19caW0iJaNkPhpHUMdw2oaOhvzfxzs3WHkAFTN
O2EiZm9Kd60TgumRsqzHZtp8QKfjDzEbScVnby0P5c7qHybd9Y3rk7h/rNbs04GM
pOt3SNA0ICVLc5Uk0hjXPFwSWD6dhvzFyiFbRioMX/fJCOa4gmbei2rVIyUtvgRy
tHhXP3ngjsWgBmKlGhBOkfzL1wDbTbVxQTLboik0L7gPfwGs1ldVhq+AuCm8TcHO
Bs6uoMVh05fPVtFku1PjOJdnHYig8EYVonJQorGBECO16dPHTsdZ30sPm9fzkdKF
4sTDKA8A8VfoQke7trvCo4Jals3S83pJP/lFX6qsM3u/LQBIPrO8bKp3l6JjKn+3
VgBlIRGXRnB5cvZbZKnv
=juhz
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] NNGS go server

2015-06-29 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Is it correct, that only time_left is deliverd to the gtp engine?

No time setting before? (therefore my implementation ignores it at the
moment:(

Thanks Detlef

Am 26.06.2015 um 15:22 schrieb Hideki Kato:
 Jago can. http://www.rene-grothmann.de/jago/
 
 Hideki
 
 Detlef Schmicker: 558d5159.1000...@physik.de: Is there a client
 to watch the games on NNGS go servers?
 
 
 Thanks a lot
 
 Detlef
 ___ Computer-go
 mailing list Computer-go@computer-go.org 
 http://computer-go.org/mailman/listinfo/computer-go
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQIcBAEBAgAGBQJVkQcZAAoJEInWdHg+Znf4Uh0QAKtBgsXksUkgvtNbqgF3/hzb
qHzszd8HaAqmc6a44HlKDTvfsesF5E9fiRLodzHtoq2xVcFVBsL+Csl0slLd44bg
nYBxoRdLMESiPrYgtuzMIyXS/yuM/BhceaASSHeSZGox0k3vSGZCc2g+m4scDlfv
ZuJU0pCGlfWQ4O6Z3jeC6fijbvG1hrQ8q2GN/QBhhf/iYeXQB5PKB4LzJVCghFAH
pASnAnALa7HFOPsdAiiYka8/M42P2aQArHGrBwCZNq6LYC5ilM6VJI+3CLPfY3o4
FK0ABdmfFApw5nt9d/IjX40+twgRcUrU8TKUYR1d1AHQUc0LBdWIeWWi4JqVN40S
DmgP+v5QWHcx3KTyxuR1BcA7+0jr7T1CVzzmu3mHFjS2sVT9GzExXt9TLs5fyAwj
DlB3yJbNZxtDwxSVH4Vi0tV48/RloPVmoh2cnkmI2GTGORbUjmMdtmx/b/qsc0IJ
DVCNHuX9hizSjdozkJqh4R5ESuXE5FEEt3VvYr1T62yxwWKGiJIvTowzxH44bm5+
7aLhcADHJP+MYzZCRAfDMo+wTSisMHClXcpYOy1YOS+wyd4ZLY3JIeCee+3KYEzv
U3o2Of3vz4Ea2q9p+GHWrHYWhFhRtb7dkN9lajCmHyf0Z6a/DpZpJB0gXe0nbDhC
wdFbVQjptFwhsCJugHUF
=/gTm
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] NNGS go server

2015-06-26 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Is there a client to watch the games on NNGS go servers?


Thanks a lot

Detlef
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQIcBAEBAgAGBQJVjVFZAAoJEInWdHg+Znf4f3gP/0FwX4fu5g2AXJ6KO1fIMgQS
wMcuCWc1kE3f+kXAMY1ztCiWUAqtXFyLI39ozZpr2/oGUl4w2Fds0xnxCEbBmaK/
tAJjb0Cer0T6ct2ejTxtHofaitsOiKRL1cuUSc5grWb0i/EoE2pNdjwouIBiWLum
V3cpB4/rsxZeCj1I7HLgB7VaAzaVVmUgcAUOWucWj9YeRot8Md/B3PGZY5abl85k
Pksa5YFv7tLyzW2JDSzAHpDIOtL2428nNTbPA3WTuoJ3YYh5pW1iZaSxhnpebAXq
1cEn97FR0sbfy6j5WQX87T0BP+VyXYk4ztLHUMAZZGnvcMAywYfbDL5VEtePoCt6
rkFhQanRZGRFSeI51igQEz4xjkUhM6YnG3Hc3gSVoSWaMN9cp0wf5q4BFLXs5xX7
MfufhNkQFVYeN6iUIszbT5kQei9pnzBWqxY/JKv7zdmV3X02ubMF4vkt1z0zhzlN
rV0GQkDy0qj6BUNnGW/2deao3NK/+C08LNH/8yPlvdgDbZ2qhru/pFQSWVZTQa9D
Kvij/8RFzqEb1nMrY8Z0EoC+cTH2mqYidydDmWzUBGWc3q+FzDtPHND3++LWMgTp
dHghO4Ga/U5dfIZEFFYEHP3KADCR/vI7fTDoQQ7v09CYnOXVuxdAKgjfcfDCRpB6
nl45puHdkwJakQR7HXU5
=YxCJ
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Strange GNU Go ladder behavior

2015-06-19 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Ubunut 12.04 (64 bit) self compiled with

 gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.8/lto-wrapper
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu
4.8.2-19ubuntu1'
- --with-bugurl=file:///usr/share/doc/gcc-4.8/README.Bugs
- --enable-languages=c,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr
- --program-suffix=-4.8 --enable-shared --enable-linker-build-id
- --libexecdir=/usr/lib --without-included-gettext
- --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.8
- --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu
- --enable-libstdcxx-debug --enable-libstdcxx-time=yes
- --enable-gnu-unique-object --disable-libmudflap --enable-plugin
- --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk
- --enable-gtk-cairo
- --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-4.8-amd64/jre
- --enable-java-home
- --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-4.8-amd64
- --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-4.8-amd64
- --with-arch-directory=amd64
- --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --enable-objc-gc
- --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64
- --with-multilib-list=m32,m64,mx32 --with-tune=generic
- --enable-checking=release --build=x86_64-linux-gnu
- --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 4.8.2 (Ubuntu 4.8.2-19ubuntu1)




CRASHES

$gnugo --infile crashes-gnugo.sgf
Speicherzugriffsfehler (Speicherabzug geschrieben)


Am 19.06.2015 um 16:22 schrieb Peter Drake:
 CentOS Linux release 7.1.1503 64 bit
 
 I'm not sure which compiler the make script invoked, but I have
 these installed:
 
 gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9) cc (GCC) 4.8.3 20140911
 (Red Hat 4.8.3-9)
 
 
 
 On Fri, Jun 19, 2015 at 12:45 AM, Hiroshi Yamashita
 y...@bd.mbn.or.jp wrote:
 
 [drake@broadcast ~]$ gnugo --infile
 results/2015/06/19/03:21:15.950/instance17-b1-2015-06-19-03:21:31.287.sgf

 
Segmentation fault
 
 
 Your sgf is ok on theres 4 environments. It took about 1 minute
 though.
 
 GNU Go 3.9.1   gcc 4.1.2, Debian 4.1.1-21, 32bit GNU Go 3.8
 gcc 4.1.2, Debian 4.1.1-21, 32bit GNU Go 3.7.10  gcc 4.1.2,
 Debian 4.1.1-21, 32bit GNU Go 3.7.10  Visual C++6.0, Windows XP,
 32bit
 
 Result are same.
 
 yss@debian:~/gnugo-3.7.10/interface$ ./gnugo -l
 crashes-gnugo.sgf white (O) move J8
 
 What kind of OS, 32bit or 64bit, and compiler makes crash?
 
 Regards Hiroshi Yamashita
 
 
 ___ Computer-go
 mailing list Computer-go@computer-go.org 
 http://computer-go.org/mailman/listinfo/computer-go
 
 
 
 
 
 
 ___ Computer-go mailing
 list Computer-go@computer-go.org 
 http://computer-go.org/mailman/listinfo/computer-go
 
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQIcBAEBAgAGBQJVhCgQAAoJEInWdHg+Znf4XvsP/2l9y9aYaawdIXeK/3S5ygW3
bvUpOQ5ZN9ez/qg4HSRFVsekjaocEEf+KiJMFoIXVBB05WldQwTCqOU0LKRDhkk7
KhWw3ntpJNK29ACXqmADR7iEvtCh7sS3IksgZZ2GCheo9IV3SkcD/zWXsFvUKINq
aGWtSUN7VSAqi9ndGlhAhaeclo1ZCaN9Sq/MA1W4Od6K2CVw5sRLNhQDOVD+1dzV
AKqlXsnc49GWLSOHgRXBAlNaXcYRP/LUZ86aef1FTxhdSZOg7TwIAK7OB0c80LLo
R7Cz+uj8TCoa+o2RIeomN7xYIQfK6rJyRztOW+is10M4t19ggqoO0792DcSpE+1s
foAeFC3+EGF3fh8hNrnkzgF1r3GDMNrBEGzCxzAtyl8tDErRS0C8AHEW2pMAgUjN
Pb/+zgTAXbaTaWMfHxeMc36u+rAjLOHqPaSityTm68mB54X1unwPy4G5mI+DOwYB
38YfIdIt13GqsaJnEzpsPRe7nauld/CrPeJV3zYcxAPl1uBu1IwEOHHuEFMzC2Sn
1uk8xJtr5EIm5mbc/JXNTYxkd8phM9iGkIthWFYZPFqTijpLFrUMwhqdPCwp7+zu
nDZZRYdlIeTURmxxfNV/zsvx2sfQRhRfrtg8jHNoMF0I/XfLHn11ti59pLZhAlR4
k8SMp+guTXQ0AK8Rv1l/
=an2s
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CGOS

2015-06-10 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

After my ISP crashed, I do not get up 9x9 at the moment.
Immediatly myCtest tries to connect from within the middle of a game i
think and DODs the server



Am 26.05.2015 um 18:56 schrieb Christoph Birk:
 On 05/26/2015 02:41 AM, Detlef Schmicker wrote:
 -BEGIN PGP SIGNED MESSAGE- it should be up nearly 24/7 I
 hope and use less than 5W electrical power, until the sd card is
 full :)
 
 Thank you, Christoph
 
 
 ___ Computer-go mailing
 list Computer-go@computer-go.org 
 http://computer-go.org/mailman/listinfo/computer-go
 
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQIcBAEBAgAGBQJVeGtyAAoJEInWdHg+Znf47cYP/3xi8EFAHfTMPD6glKe9jt2k
XnMj40ffZKO/JmpQrZ1X7sfqESoCZW0qIEP1MNMQZYxzOPg4y+VLaMtxqUaqD9eb
tsywq/CW2NPeEy9RWl6M1Y3oiv3vrOaPdk4r6OyKCuht1xBHcY6qKT8dnHOvf4E6
qWBVtfcaA5sWblFK3gxoqqHCiU5ogOxzIDnZFTyYel3vk/pDjEzhol/9fBEjfiS9
Kk0GP2fN3A5p3+I6qkB9ht5kPlfB1suBmNF4iis8H4XKr2re9wOc22pPKDjuMt7y
okNY7JBTDi1l9INrd7UDVzmdL2tAvxMbbL9B0fUcjGG55rTx6elmFvLg5TtJvimp
89P1axN0bo95vjG7tbEeF0UzTQjYg1p781BfA+voT84OBmIhqag1Q1XxcM5oX/la
ok0iHUaTbzrk0pwA3DCUhxZMMpffPVmw9k+m9bHCitqq5XWCeACyi95rHIuaL0zK
el9Nl0OzfhAUIztJdd9gg0NnAsHcfd/yMQ5YZI1r1HtOG8GoSUV9pk5FwTnm55io
/JamFARlbe6cUmPLe9tnfy+/fEPoAB0/DdwjafI7t1tm2H5UmhNN8n7Gf4UNL5/A
mCFxjo63U98DinT/wqcPFS/Cj6tAV1ELV5jm15ooDF0eOb1TJJgFXaglyoTxkdhu
VsSk4h7XsmAk3cZ04wKV
=YTNi
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CGOS

2015-06-06 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

I'd like to add the bayes rating to 9x9 and 19x19 intermediate server
(physik.selfhost.eu:8080) and wonder, if the bayeselo scripts for go
are around some where?

I did not find them in the original cgos source code:(


Detlef

Am 23.05.2015 um 17:29 schrieb folkert:
 24/7 is only useful, if other than open source bots are run on
 the server, otherwise the author can run it simply on
 gomill...
 
 While I agree that it is not ideal having so few programs
 running, shutting down the server is even worse, or not?
 
 If it runs stable for a while, people will return.
 
 
 Folkert van Heusden
 
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQIcBAEBAgAGBQJVcyoYAAoJEInWdHg+Znf4wK4P/RsbAhhCB+uBdOt8ygXRAGOT
bzydm9i9Nh/N9rc8IVLPojEQaTW3QimolshQH0KTHnjk97S8qnfV30MuXz2x1c6C
2k0DOvHZoQTdztKFVO5P5j0Dw5KnfTXnrEtfSChWWnhk9N/+73/+07/OqW/OYQ21
3NFC/as3TFvkiOvIfopvrJ3ng5ZzV2Bb303wXOy/IfXiae+TqajV5NExYUPFXZmK
v+GDIrE9l53iowRJQTdk+ORNAsmUtl3mieiRSZCUGoY8Kk3Cv6oinixy7dzJRSJD
0FKg0VVMTwR1X8HyAPmcx0DUYLrp8gMaBQjXfh8N+6juoTASpJJmIWn//8OLi1md
gEAL19GZU5KFGZ+8hMtP4xzXzuKeOMshYCk/Bx5Px4bUJg+QayaFnBWeZL9MdPPw
2PBb3/vbI6kK77j4qjk+75HwoNUQEobbb3ogbrdmNuTlJTVaLbMbdTSDSRoXc/N8
mePZ+KUePWLObcfGgKCfC+baN+1alXHExTzvVYrbTTp6tdLXx5QLVFAjH1Ht+cnF
4FNaAqBK3beqjTA6xwNu3YSM767Fj2le7weFw9swMbiQtw4DrFlg1vm+uAHoPfCn
lF3tg2K3UMQpy2o4l7+7dQYeMSG8gdhllC2ZmDQt4IgxpUDCzmlNKudnh8X4ixM4
EpNo4XpaeO05cGAu+QjV
=T1G3
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] KGS tournament rules

2015-06-04 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Oakfoam uses caffe library.I did not ask, as I considered it the same
as using e.g. boost lihrary to not write special kind of maps, you do
not want to write your self.

Of cause the net definition and training is our own. Most of the code
would be linear algebra, if you would write it yourself, and you would
most probably use a library for this

Detlef

Am 04.06.2015 um 22:56 schrieb Nick Wedd:
 I have been asked
 
 Your page [ http://www.weddslist.com/kgs/rules.html ] says:
 
 All the code in it that is in any way involved in move-generation
 (i.e.
 anything that causes the program to prefer one move to another)
 or position evaluation must be unique among the entrants. Code
 that is involved only in non-essential parts of the program,
 such as input/output, or scoring the position after the game is
 over, need not be unique. If two or more people want to submit
 programs containing the same code, then the author of that code
 shall decide which may enter.
 
 
 Would it be acceptable for me to use a (non-Go-specific) neural
 network package that I didn't write?
 
 
 My immediate inclination is to say Yes. It's like using a compiler
 that you didn't write.  But I fear it may be more complicated than
 that.
 
 For now, the rule is that if you enter a KGS bot tournament using a
 neural net that you did not write, your entry will be accepted, buy
 you must specify what neural net you are using.
 
 But I would like to discuss the issue, and accept the consensus of
 this list.  I have never used a neural net, and my understanding of
 how they work is close to zero.  I naively imagine it goes like
 this: 1.  You obtain a neural net, by buying one, downloading a
 free one, or getting one from a colleague. 2.  You install it on
 your computer. 3.  You configure it by setting some parameters. 4.
 You specify how its board state representation will work (I have
 very little idea about this). 5.  You train it, maybe by feeding it
 a large database of professional games. 6.  You test the results.
 Quite likely you realise it hasn't gone well, and redo from step
 3. 7. You add a harness that attaches it to kgsGtp, and maybe to
 some other programs.
 
 I look forward to becoming better informed.  I know that if someone
 writes a praiseworthy program in say C, the creator of his C
 compiler will deserve and expect none of the credit. I suspect
 things may be different with neural nets.
 
 Nick
 
 
 
 ___ Computer-go mailing
 list Computer-go@computer-go.org 
 http://computer-go.org/mailman/listinfo/computer-go
 
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQIcBAEBAgAGBQJVcMEYAAoJEInWdHg+Znf4EPEP/0fCweS1fh9lb+O0IulhdfDW
DVHNo93/eYibw2z/1uPEypE5/gk740aYO71F9GHLXuh/QQcqRVAEZ6FLuUJoBTsl
91B05sNzxlcgCxz4YV8kUscq5fL/v16gkPWl1QVZnVOICX40N20pc+CgSd/34pFj
VP1K7xitWZllnJ+6j0utk+hCzVC0OY5H9WYQUW0dAuyAVY2+sOwSx8AbgqMM9qeB
cZzW2PVZpXc74n0VWLlTeMevMQASSMH5qg96J7hOp13WE3n4rGPS/+IiANffzPDD
fQRwH4n4cTV8C9ynYb4tMYmuRfN+HdWhu5GVTIHJ6SMpix+QRHA1epY4v8vscsHQ
MVEUEI1uXEM5cQwbTNs6+TR/5lEOQ22HW3S4+Ukfu/pI1WCHLNyyb3YxXlm9uOqg
7fNUFfNo3ko59oNPg+FsMoDr2dG2ZH79b4iMhY5z28vYznZVaFdf0xK7Xxnkkj0s
K2s3piu+sIOnRNJhZR2Ran1aeVCzTAr4FQ2td/cn6ltWG7HCJwR5d5u7E0XW/8th
9bkZUasS45QqH+njtV+ADYhYcFRM1K3u/cNiDfiOyD+Ekje9c5HTbhjCNK29foYw
0VbiEnYk2r03SzlZqEVsOYbCpHQ7X/SY9zlhBPm+Suz0CZcgRNPj1Z3O8+guYXnX
ZXYHlqdRbRQVDGZToIqh
=IemF
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CGOS

2015-05-26 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

OK,

I moved it to an old Nokia N800 tablet (I was bored, and did not have
an idea to improve my bot:)

The anchor is not running on the tablet, since I was afraid of it
loosing on time, if a opponent plays a lot of moves (gnugo on 9x9
takes about 10s per move on this tablet). So anchor will be up
everytime my usual computer is up...

http://blog.physik.de/?page_id=788

it should be up nearly 24/7 I hope and use less than 5W electrical
power, until the sd card is full :)

Detlef

Am 23.05.2015 um 17:29 schrieb folkert:
 24/7 is only useful, if other than open source bots are run on
 the server, otherwise the author can run it simply on
 gomill...
 
 While I agree that it is not ideal having so few programs
 running, shutting down the server is even worse, or not?
 
 If it runs stable for a while, people will return.
 
 
 Folkert van Heusden
 
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQIcBAEBAgAGBQJVZD/PAAoJEInWdHg+Znf4c28P/0899wo0dAxnJY8HCKpjYJHy
rZEMr0k3fHu5ijsElcINZBKIgl79sB4CD5TRtvMo7SZcpqYUh7ccyKN3QXc/madA
cCA6GQBs1Fm7tgKckUFp3Ni5WtiEXpPozMFNl0VHVp7EArO4Z01KmNmatwGl3VE4
LsT+zN68U862VsAK5vcLlq269vtTfNaBrDPtENDIRwCUUtuFK+fo23syguYoU7Mh
y77v4AIvcvMb+BzdkLFR9+XWWvoqGEQF5657ny4CVC9YbukpGbGHb2RMTEoJH46s
HBUlowUIf1KdEIBWtA7b099bSdJBQmvLiC91qghV1iUD77Gap5p0oepEFvDeCBzU
4aFFRvZuQ9+O2Dj7x9b85HE9VQd/O0aHCEr268BPNCay6HITMUEVQx9dMfefjaAz
Kd8GEGIrlIVs66iRUIsxv4tJJvwxsghAUEAPXGJnmseoa6/3SvxCeE5NymJVZfvB
a+uTYlMY7Ql1NZreePkGbR8VaRwMCqgHHWuNIZdAnAmwtoHEB5R1ojTCANsa2QxO
JCKhv6NUwz2qgcNvzQTZ+hRJvx7d+Fqxsb2H1RWokz+rlzr6UUnPdjUgylzP3KyO
VVayYoImkhPoGc25OKp1giqaWO0bfTG8E9DrGjfzvEtbSTctyyAGvsPE8pvyd5ta
qcW/MpplyuHKtAZoGBNY
=S7WJ
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CGOS

2015-05-23 Thread Detlef Schmicker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

24/7 is only useful, if other than open source bots are run on the
server, otherwise the author can run it simply on gomill...

Last week only Aya_50 and Amigo was run on 13x13 24/7
cgos.boardspace.net

Am 22.05.2015 um 23:14 schrieb Christoph Birk:
 
 On May 22, 2015, at 10:46 AM, Detlef Schmicker d...@physik.de
 wrote:
 I wonder, if it would help to put it up once a week or so, with
 announcement, and take it down again, if the number of bots falls
 below 5 or so?
 
 I am not actively developing a bot, but IMHO without being up 24/7
 CGOS is not very useful.
 
 Christoph
 
 --- Science advances one funeral at a time -- Max Planck
 
 
 
 ___ Computer-go mailing
 list Computer-go@computer-go.org 
 http://computer-go.org/mailman/listinfo/computer-go
 
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQIcBAEBAgAGBQJVYC77AAoJEInWdHg+Znf409QP/RoWpQi9AzSDChKGiEz4ncaj
Ir74Dr5yw5eKu80QjMWBLy2FkZ2BIo6ebloQ0zDaXeDayuldZrDIC1+l4DXHKSnW
Q8lTjtYYQowMS2/JZwqzaXaqxHneEsxKk5uEypqhdy5Pv5XiA1qmRgr3bx8BtBDG
Q5rF7tp8FtuENoY99ZiaCIlX5ZLCb6dHOqkSHpuTWp6UA362An2LpvTO28CCnjFD
0vnbbpanrbb1RBNdKCW5CKYNoN7cemPSjuFc+CznyxCLBBMrK3wqxx6nGGqLEK/p
jNCQif18+I/5SLIrxXV9L6O88q/Yedvvr2IyUoDxQVaxOQA7cO97E8w7YA6hr7St
kbhPfbBkVtNJ4lXS8lLJPvZCYZqDulYw+PVpfFNnA++fFatJjlrnYaie/RlaEgkQ
1ERfrYz08om5/2GiZrDcGeZfQaUkpD4T43B3ZR+q0J8CS1uIRsSlKmBwryyHzQIP
6dnyg+jd7AnIkXKVgDtte+cssMMNz6iRWT/9DG1bqRywq6r7mJa7cvrXTJTLQBtx
0lBp81ldE81+HRVv9a5foJW2s/z5bTdA16KfzJC9ew5kuSkYe7j97JyWL476Ky80
nOXUXJ487bEipReDjHVsmdE/9Otnca/PLRbQGpyDspJ9Ph5CWRTWcOXM/5k6Ju7A
a9rrtXrejTHsoOvqkkSI
=LJn/
-END PGP SIGNATURE-
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CGOS

2015-05-22 Thread Detlef Schmicker

OK,

it was up for about 3 weeks now for 9x9, 19x19 and 25x25 (with a gnugo 
anchor running all the time on every boardsize), with very few short 
breaks (I had to reboot my system due to using a MIDI keyboard:) I am 
not sure, if the bots reconnected automatically, or if the authors 
reconnected the manually, maybe somebody has a feed back on this.


Not too many bots connected, but 13x13 on cgos.boardspace.net is quite 
empty too.


I wonder, if it would help to put it up once a week or so, with 
announcement, and take it down again, if the number of bots falls below 
5 or so?

Maybe this would bring more bots connect at the same time?!
I do not have the resource (and do not want to spend the energy costs) 
to keep a number of strong open source bots up, so bots from 900 to 
2700ELO could use cgos with some sense.


Detlef


Am 02.05.2015 um 18:18 schrieb Detlef Schmicker:
8084 for 25x25 with GnuGo 3.8 as ELO 1800 anachor is up for a while 
now :)


9x9 too, (13x13 I will not set up, original cgos is running fine on 
13x13)


Detlef

Am 02.05.2015 um 07:21 schrieb Detlef Schmicker:

Hi,

I set up a CGOS server at home. It is connected via dyndns, which is 
not optimal of cause :(


physik.selfhost.eu

Ports:
8080 (webinterface)
8083 (19x19, GnuGo 3.8 set to ELO 1800 as anachor)

This is mainly for testing, if I get CGOS up correctly, what to do to 
have it permanently running still to be seen.
I am not able to test the connection from the outside, hopefully I 
set up everything correctly.
I might stop the server for the tournament on sunday, as it is the 
same machine


future plan is:
8081 for 9x9 8082 for 13x13 and 8084 for 25x25.
(you will see on the web interface, as soon as the other boardsizes 
are switched on.



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CGOS

2015-05-03 Thread Detlef Schmicker

Hi,

I really did not do a lot:

mainly I run make in the server directory, and got an error message that 
the sqlite version did not match (after trying to start or imediatly, I 
do not remember)


cgos.vfs/lib/sqlite3/pkgIndex.tcl:package ifneeded sqlite3 3.3.5

this line had 3.3.4

now I start with
gnome-terminal --tab -e ./cgos-linux-x86_64 cgos19.cfg --tab -e 
./webuild-linux-x86_64 cgos19.cfg --tab -e ./http_server.sh --tab -e 
./cgosGtp-linux-x86_64 -c gnugo-3.8-a0.cfg


with for web serving

detlef@ubuntu-i7:~/cgosboar$ cat http_server.sh
#!/bin/bash
cd public_html
python -m SimpleHTTPServer 8080

to have all 19x19 in one gnome-terminal

As you see I run the binaries produced, not the way in the 
install.readme file.


This is what my sources looks like, it is some months ago that I 
downloaded them..


http://physik.de/cgossource.tar.gz


My 9x9 webbuild sometimes crashes with database locked, I think this is 
because at the moment the games are very fast, but we will see



Detlef

P.S.: I do not have tcl experience, so repacking would be try and error


Am 03.05.2015 um 07:35 schrieb Joshua Shriver:

Would you be willing to help me revitalize cgos.computergo.org?  I can
even give you ssh access.

This we your server can sit on a dedicated IP/domain  and we can kinda
refresh things a bit and have a clean work/server space.

Just let me know.
-Josh

I'm still curious how you got around the myriad of SQLite issues. I
know when I tried running local, I had to repackage the tclkit server
by copying a local built sqlite.so. file
Lots of kinks and issues, but to be honest you are doing a better job
at me at understanding/running it.

So I can at least offer dedicated space.

On Sat, May 2, 2015 at 12:18 PM, Detlef Schmicker d...@physik.de wrote:

8084 for 25x25 with GnuGo 3.8 as ELO 1800 anachor is up for a while now :)

9x9 too, (13x13 I will not set up, original cgos is running fine on 13x13)

Detlef

Am 02.05.2015 um 07:21 schrieb Detlef Schmicker:

Hi,

I set up a CGOS server at home. It is connected via dyndns, which is not
optimal of cause :(

physik.selfhost.eu

Ports:
8080 (webinterface)
8083 (19x19, GnuGo 3.8 set to ELO 1800 as anachor)

This is mainly for testing, if I get CGOS up correctly, what to do to have
it permanently running still to be seen.
I am not able to test the connection from the outside, hopefully I set up
everything correctly.
I might stop the server for the tournament on sunday, as it is the same
machine

future plan is:
8081 for 9x9 8082 for 13x13 and 8084 for 25x25.
(you will see on the web interface, as soon as the other boardsizes are
switched on.


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CGOS

2015-05-02 Thread Detlef Schmicker

Great,

for the mean time: Do you have an idea, how the bayeselo rating is 
produced? My cgos source does not seem to include it?


The archives are probably a hand made script?!

Detlef

Am 02.05.2015 um 09:25 schrieb Joshua Shriver:

Working on it :)

On Sat, May 2, 2015 at 2:25 AM, Detlef Schmicker d...@physik.de wrote:

Thanks for connecting!

Am 02.05.2015 um 08:11 schrieb remi.cou...@free.fr:

Great! Thanks for your efforts. I have just connected Crazy Stone, and it
seems to be working.

My favorite setting would be 9x9 with 7-point komi. But unfortunately, I
believe CGOS does not support jigo. Would be great if it did.

I think you are right :(

set sc [expr $sc - $komi]
 if { $sc  0.0 } {
 set sc [expr -$sc]
 set over W+$sc
 gameover $gid $over 
 return
 } else {
 set over B+$sc
 gameover $gid $over 
 return
 }

This definitly looks like a jigo is not possible. I am afraid, I will
probably not go into this. I still hope for a future CGOS replacement :)

Detlef


Rémi


- Mail original -
De: Detlef Schmicker d...@physik.de
À: computer-go@computer-go.org
Envoyé: Samedi 2 Mai 2015 14:21:05
Objet: [Computer-go] CGOS

Hi,

I set up a CGOS server at home. It is connected via dyndns, which is not
optimal of cause :(

physik.selfhost.eu

Ports:
8080 (webinterface)
8083 (19x19, GnuGo 3.8 set to ELO 1800 as anachor)

This is mainly for testing, if I get CGOS up correctly, what to do to
have it permanently running still to be seen.
I am not able to test the connection from the outside, hopefully I set
up everything correctly.
I might stop the server for the tournament on sunday, as it is the same
machine

future plan is:
8081 for 9x9 8082 for 13x13 and 8084 for 25x25.
(you will see on the web interface, as soon as the other boardsizes are
switched on.


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CGOS

2015-05-02 Thread Detlef Schmicker

Thanks for connecting!

Am 02.05.2015 um 08:11 schrieb remi.cou...@free.fr:

Great! Thanks for your efforts. I have just connected Crazy Stone, and it seems 
to be working.

My favorite setting would be 9x9 with 7-point komi. But unfortunately, I 
believe CGOS does not support jigo. Would be great if it did.

I think you are right :(

set sc [expr $sc - $komi]
if { $sc  0.0 } {
set sc [expr -$sc]
set over W+$sc
gameover $gid $over 
return
} else {
set over B+$sc
gameover $gid $over 
return
}

This definitly looks like a jigo is not possible. I am afraid, I will 
probably not go into this. I still hope for a future CGOS replacement :)


Detlef



Rémi


- Mail original -
De: Detlef Schmicker d...@physik.de
À: computer-go@computer-go.org
Envoyé: Samedi 2 Mai 2015 14:21:05
Objet: [Computer-go] CGOS

Hi,

I set up a CGOS server at home. It is connected via dyndns, which is not
optimal of cause :(

physik.selfhost.eu

Ports:
8080 (webinterface)
8083 (19x19, GnuGo 3.8 set to ELO 1800 as anachor)

This is mainly for testing, if I get CGOS up correctly, what to do to
have it permanently running still to be seen.
I am not able to test the connection from the outside, hopefully I set
up everything correctly.
I might stop the server for the tournament on sunday, as it is the same
machine

future plan is:
8081 for 9x9 8082 for 13x13 and 8084 for 25x25.
(you will see on the web interface, as soon as the other boardsizes are
switched on.


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CGOS

2015-05-02 Thread Detlef Schmicker

8084 for 25x25 with GnuGo 3.8 as ELO 1800 anachor is up for a while now :)

9x9 too, (13x13 I will not set up, original cgos is running fine on 13x13)

Detlef

Am 02.05.2015 um 07:21 schrieb Detlef Schmicker:

Hi,

I set up a CGOS server at home. It is connected via dyndns, which is 
not optimal of cause :(


physik.selfhost.eu

Ports:
8080 (webinterface)
8083 (19x19, GnuGo 3.8 set to ELO 1800 as anachor)

This is mainly for testing, if I get CGOS up correctly, what to do to 
have it permanently running still to be seen.
I am not able to test the connection from the outside, hopefully I set 
up everything correctly.
I might stop the server for the tournament on sunday, as it is the 
same machine


future plan is:
8081 for 9x9 8082 for 13x13 and 8084 for 25x25.
(you will see on the web interface, as soon as the other boardsizes 
are switched on.



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CGOS

2015-05-02 Thread Detlef Schmicker

OK, I will not set bayeselo for now, happy if basics are working:)

No, I did not change anything but the version number of sqlite3 (from 
3.3.4 to 3.3.5 or so), the version offered by ubuntu 12.04


Only changed the directories in the cgos19.cfg and  used SQLite Browser 
to open the cgos.state database (produced after the first start) and 
added the anachor in the anachor table.


Now I have 4 processes up: cgos, webbuild, anachor gnugo and a pyhton -m 
SimpleHTTPServer



Detlef

Am 02.05.2015 um 09:33 schrieb Joshua Shriver:

If you're using the github source for CGOS, (Don's)  CGOS uses an
external program. Basically dumps all games from the database, runs
bayeselo takes results and pushes back to the sqlite for the official
rating. But it doesn't seem to always work or update properly and if
the DB gets messed up on update it seems to mess up the cgos server
itself. This was one big reason I've had trouble getting 9x9 and 19x19
back up.

I'll have to check the crontab but it should show a tcl script that
fires off a sqlite3 dump and feeds bayeselo.
Did you make any changes to the git source to get it to run ok?
-Josh

On Sat, May 2, 2015 at 3:28 AM, Detlef Schmicker d...@physik.de wrote:

Great,

for the mean time: Do you have an idea, how the bayeselo rating is produced?
My cgos source does not seem to include it?

The archives are probably a hand made script?!

Detlef

Am 02.05.2015 um 09:25 schrieb Joshua Shriver:

Working on it :)

On Sat, May 2, 2015 at 2:25 AM, Detlef Schmicker d...@physik.de wrote:

Thanks for connecting!

Am 02.05.2015 um 08:11 schrieb remi.cou...@free.fr:

Great! Thanks for your efforts. I have just connected Crazy Stone, and
it
seems to be working.

My favorite setting would be 9x9 with 7-point komi. But unfortunately, I
believe CGOS does not support jigo. Would be great if it did.

I think you are right :(

set sc [expr $sc - $komi]
  if { $sc  0.0 } {
  set sc [expr -$sc]
  set over W+$sc
  gameover $gid $over 
  return
  } else {
  set over B+$sc
  gameover $gid $over 
  return
  }

This definitly looks like a jigo is not possible. I am afraid, I will
probably not go into this. I still hope for a future CGOS replacement
:)

Detlef


Rémi


- Mail original -
De: Detlef Schmicker d...@physik.de
À: computer-go@computer-go.org
Envoyé: Samedi 2 Mai 2015 14:21:05
Objet: [Computer-go] CGOS

Hi,

I set up a CGOS server at home. It is connected via dyndns, which is not
optimal of cause :(

physik.selfhost.eu

Ports:
8080 (webinterface)
8083 (19x19, GnuGo 3.8 set to ELO 1800 as anachor)

This is mainly for testing, if I get CGOS up correctly, what to do to
have it permanently running still to be seen.
I am not able to test the connection from the outside, hopefully I set
up everything correctly.
I might stop the server for the tournament on sunday, as it is the same
machine

future plan is:
8081 for 9x9 8082 for 13x13 and 8084 for 25x25.
(you will see on the web interface, as soon as the other boardsizes are
switched on.


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] CGOS

2015-05-01 Thread Detlef Schmicker

Hi,

I set up a CGOS server at home. It is connected via dyndns, which is not 
optimal of cause :(


physik.selfhost.eu

Ports:
8080 (webinterface)
8083 (19x19, GnuGo 3.8 set to ELO 1800 as anachor)

This is mainly for testing, if I get CGOS up correctly, what to do to 
have it permanently running still to be seen.
I am not able to test the connection from the outside, hopefully I set 
up everything correctly.
I might stop the server for the tournament on sunday, as it is the same 
machine


future plan is:
8081 for 9x9 8082 for 13x13 and 8084 for 25x25.
(you will see on the web interface, as soon as the other boardsizes are 
switched on.



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] 25x25 experiment

2015-04-29 Thread Detlef Schmicker
My net should behave very similar to Christopher Clark 
http://arxiv.org/find/cs/1/au:+Clark_C/0/1/0/all/0/1, Amos Storkey 
http://arxiv.org/find/cs/1/au:+Storkey_A/0/1/0/all/0/1, but I will not 
do a standalone player, sorry. I have integrated it into our MC tree 
search...
In our way of integrating the net into MC Christopher Clark 
http://arxiv.org/find/cs/1/au:+Clark_C/0/1/0/all/0/1, Amos Storkey 
http://arxiv.org/find/cs/1/au:+Storkey_A/0/1/0/all/0/1 is 
significantly (60ELO) stronger than your net (I did a net similar to 
yours (not all features, but last move features), with a little  above 
50% prediction rate), therefore I am using the net without last move 
feature.


But it may be, that last move features come from our original gammas 
anyway, which are mixed with CNN values...


Am 28.04.2015 um 18:26 schrieb Aja Huang:


On Mon, Apr 27, 2015 at 2:12 PM, Detlef Schmicker d...@physik.de 
mailto:d...@physik.de wrote:


I did not do any playing tests without MC! On CGOS 13x13 I have
two players (NiceGo) with 50 and 1k playouts running at the moment...


Maybe you can play your CNN of 44% prediction rate on KGS and compare 
it with DCNNigo which is solid KGS 5k at the moment. Our 12-layer CNN 
described in the paper is 600 Elo stronger than GnuGo, about 2k on KGS.


Aja



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] 25x25 experiment

2015-04-27 Thread Detlef Schmicker
oakfoam has BOARDSIZE_MAX set to 25, but it seems it is only used to say 
unsupported board size at the moment :)
I think the reason was gtp, but it was set long before I joined the 
project 



I dont see a reason, why there should be any problems using it with DNN 
on 19x19 trained network. If a 25x25 will be sheduled, I will take part :)



Detlef


Am 27.04.2015 um 12:00 schrieb Petr Baudis:

On Sun, Apr 26, 2015 at 11:26:42PM +0200, Petr Baudis wrote:

On Sun, Apr 26, 2015 at 12:17:01PM +0200, remi.cou...@free.fr wrote:

Hi,

I thought it might be fun to have a tournament on a very large board. It might 
also motivate research into more clever adaptive playouts. Maybe a KGS 
tournament? What do you think?

That's a cool idea - even though I wonder if 39x39 is maybe too extreme
(I guess the motivation is maximum size KGS allows).

I think that actually, GNUGo could become stronger than even the top
MCTS programs at some point when expanding the board size, but it's hard
for me to estimate exactly when - if at 25x25 or at 49x49...

I've let play Pachi (in the same configuration that ranks it as 2d on
KGS, but with 15s/move) to play GNUGo for a few games on 25x25 just to
see how it would go.  I'm attaching three SGFs if anyone would like to
take a look, Pachi never had trouble beating GNUGo.

Couple of observations:

(i) The speed is only about 60% playouts in the same time compared
to 19x19.

(ii) GNUGo needs to be recompiled to work on larger boards, modify
the MAX_BOARD #define in engine/board.h.  (Same with Pachi.)

(iii) As-is, Pachi might get into stack overflow trouble if ran on
larger boards than 25x25.

(iv) 25x25 is the last board size where columns can be represented by
single English alphabet letters.  This is the reason for the GTP
limitation, but might trigger other limitations in debug routines etc.

(v) The very first game (not included), Pachi lost completely.
I discovered that my max playout length was 600 moves; bumping that
to 1200 made things boring again.

(vi) Some typically fast operations take much longer on large boards,
e.g. tree pruning (because much wider tree breadth; on 19x19 it's
rarely more than 100ms but it can take seconds on 25x25 for some
reason); this would actually make Pachi occassionally lose byoyomi
periods by a second or two without a manual time allocation adjustment.

And a conjencture:

(vii) (Even) games against GNUGo still aren't interesting on 25x25.
The same factors that might benefit GNUGo compared to MCTS programs
should also benefit DNN players and the difference might be more
visible because a DNN should be much stronger than GNUGo.  I wonder
if the oakfoam or any other effort on building an open source DNN
implementation can already play standalone games and how well it works?



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] 25x25 experiment

2015-04-27 Thread Detlef Schmicker

Sorry, it is not,


but I offer my trained DCNN http://physik.de/net.tgz

it has about 44% prediction rate and uses only the position

it a a caffe file which is quite easy to use:

void Engine::getCNN(Go::Board *board,Go::Color col, float result[])
{
  int size=board-getSize();
float *data;
data= new float[2*size*size];
//fprintf(stderr,2\n);
if (col==Go::BLACK) {
  for (int j=0;jsize;j++)
for (int k=0;ksize;k++)
  {//fprintf(stderr,%d %d %d\n,i,j,k);
  if (board-getColor(Go::Position::xy2pos(j,k,size))==Go::BLACK)
  {
  data[j*size+k]=1.0;
  data[size*size+size*j+k]=0.0;
  }
  else if 
(board-getColor(Go::Position::xy2pos(j,k,size))==Go::WHITE)

  {
  data[j*size+k]=0.0;
  data[size*size+size*j+k]=1.0;
  }
  else
  {
  data[j*size+k]=0.0;
  data[size*size+size*j+k]=0.0;
  }
}
}
if (col==Go::WHITE) {
  for (int j=0;jsize;j++)
for (int k=0;ksize;k++)
  {//fprintf(stderr,%d %d %d\n,i,j,k);
  if (board-getColor(Go::Position::xy2pos(j,k,size))==Go::BLACK)
  {
  data[j*size+k]=0.0;
  data[size*size+size*j+k]=1.0;
  }
  else if 
(board-getColor(Go::Position::xy2pos(j,k,size))==Go::WHITE)

  {
  data[j*size+k]=1.0;
  data[size*size+size*j+k]=0.0;
  }
  else
  {
  data[j*size+k]=0.0;
  data[size*size+size*j+k]=0.0;
  }
}
}


  Blobfloat *b=new Blobfloat(1,2,size,size);
  b-set_cpu_data(data);
  vectorBlobfloat* bottom;
  bottom.push_back(b);
  const vectorBlobfloat* rr = caffe_test_net-Forward(bottom);
  //for (int j=0;j19;j++)
//{
//for (int k=0;k19;k++)
//{
//fprintf(stderr,%5.3f ,rr[0]-cpu_data()[j*19+k]);
//}
//fprintf(stderr,\n);
//}
  for (int i=0;isize*size;i++) {
  result[i]=rr[0]-cpu_data()[i];
if (result[i]0.1) result[i]=0.1;
  }
  delete[] data;
  delete b;
}



Am 27.04.2015 um 13:44 schrieb Petr Baudis:

On Mon, Apr 27, 2015 at 12:35:05PM +0200, Detlef Schmicker wrote:

I dont see a reason, why there should be any problems using it with
DNN on 19x19 trained network. If a 25x25 will be sheduled, I will
take part :)

I'm sorry for being unclear!  I actually meant DCNN as a standalone
player, not part of MCTS.  Is it possible to run oakfoam's DCNN
implementation like that?  (Have you measured its stength?)

Thanks,

Petr Baudis
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] Liberty races in playouts

2015-04-26 Thread Detlef Schmicker

Hi,

I wonder which ideas are around for liberty races in playouts.

What nicego does: it reweights the random moves in the playout to make 
sure, that each point is played with roughly the same probability.


This approach tries to solve the problem, that local playout rules 
modify this probability, as this leads to more than one point ending in 
playing at a liberty in a liberty race.


But this just fixes the problem that were introduced due to local 
playout rules



What are you doing, and does it work?

Detlef
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CGOS future

2015-04-09 Thread Detlef Schmicker

Hi, I just started playing on 13x13 again, very busy :)

One feature request:
give the programmer the chance, to add some information to the bot (in 
the configuration file?!):


e.g. Aya783a_50 (is it 50 playouts?! seems to slow for that or running 
on an old mobile :)


Detlef

Am 06.04.2015 um 14:21 schrieb Joshua Shriver:

This sounds like a good idea and I haven't ruled out Java, but not the
biggest fan. But would rather do it in Java than say C.

I was leaning toward C# since it's a very popular and portable
language. The code would be portable among Win/Lin/OS X heck even
Android/iOS (for viewer)  due to Xamarin.

I'm working on the first baby-steps of just getting a very basic
server/client model up and running and as you said relying more on GTP
and the engines to do more self-checking and use cgos only as a
gateway (till next update).

Going to use mysql/mariasql instead of lightsql. Then I can use php
for the backend webwork.

I'm going to re-use some of the webcode I wrote years ago for OICS as
well so that will kickstart that portion a bit.

Will update more as things progress.
-Josh

On Mon, Apr 6, 2015 at 6:13 AM, Detlef Schmicker d...@physik.de wrote:

What about just start the project on github or https://bitbucket.org/ (is
not bad at forking and merging)

Open an issue for the discussion and off we go:)

When I was thinking of a quick solution I was thinking about gogui, which
supports most of the game handling already.

http://gogui.sourceforge.net/doc/reference-twogtp.html

GoGui is well tested and widely used, as far as I know. The twogtp tool is
used e.g. in CLOP and works really great. A observer program may be added to
judge the resulting position, e.g.
gnugo...

The first server / client would not have to do much more than authorization
and afterwords tunneling gtp to get it playing and logging the games and
results?!

The result logs would be used to compute the ratings (possibly by a
independend process)?!

I have no special wishes about programming language, but as kgsclient
requires java anyway, java dependency is no additional dependency for new go
programmer...

I do not love java, but if one thinks of integrating the server into gogui
it might be a good idea to use?!

I would just start and try connecting Markus Enzenberger (author of GoGui
later, if he is interested in merging?!)

I would definitely work on the project, but it should have a quick start and
room for improvement later:)

Detlef



Am 03.04.2015 um 16:11 schrieb Joshua Shriver:

Agree as well.  But would like to offer both options.   Planning to
use github and make it 100% open source.

-Josh

On Fri, Apr 3, 2015 at 10:05 AM, Christoph Birk
b...@obs.carnegiescience.edu wrote:

On Apr 3, 2015, at 5:40 AM, folkert folk...@vanheusden.com wrote:

My goal is to move away from interpreted languages and release SOLID
.exe or bin for unices.

Are you talking about servers or clients there?

For clients, PLEASE do not release binaries, release sources. No sane
linux user installs random binaries.

I 100% agree,
Christoph

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CGOS future

2015-04-06 Thread Detlef Schmicker
What about just start the project on github or https://bitbucket.org/ 
(is not bad at forking and merging)


Open an issue for the discussion and off we go:)

When I was thinking of a quick solution I was thinking about gogui, 
which supports most of the game handling already.


http://gogui.sourceforge.net/doc/reference-twogtp.html

GoGui is well tested and widely used, as far as I know. The twogtp tool 
is used e.g. in CLOP and works really great. A observer program may be 
added to judge the resulting position, e.g.

gnugo...

The first server / client would not have to do much more than 
authorization and afterwords tunneling gtp to get it playing and logging 
the games and results?!


The result logs would be used to compute the ratings (possibly by a 
independend process)?!


I have no special wishes about programming language, but as kgsclient 
requires java anyway, java dependency is no additional dependency for 
new go programmer...


I do not love java, but if one thinks of integrating the server into 
gogui it might be a good idea to use?!


I would just start and try connecting Markus Enzenberger (author of 
GoGui later, if he is interested in merging?!)


I would definitely work on the project, but it should have a quick start 
and room for improvement later:)


Detlef



Am 03.04.2015 um 16:11 schrieb Joshua Shriver:

Agree as well.  But would like to offer both options.   Planning to
use github and make it 100% open source.

-Josh

On Fri, Apr 3, 2015 at 10:05 AM, Christoph Birk
b...@obs.carnegiescience.edu wrote:

On Apr 3, 2015, at 5:40 AM, folkert folk...@vanheusden.com wrote:

My goal is to move away from interpreted languages and release SOLID
.exe or bin for unices.

Are you talking about servers or clients there?

For clients, PLEASE do not release binaries, release sources. No sane
linux user installs random binaries.

I 100% agree,
Christoph

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] cgos.computergo.org down?

2015-03-07 Thread Detlef Schmicker

This would be a dream of mine :)

Detlef

Am 06.03.2015 um 19:53 schrieb Joshua Shriver:

Mean the archive of .sgf games?  I have a couple backups.  If people
want a mega zip file I could get them all together and host it on the
web.
As for the live archive post game play it's kinda broken right now.
Same with 9x9 and 19x19, sadly.

-Josh

On Fri, Mar 6, 2015 at 1:47 PM, Detlef Schmicker d...@physik.de wrote:

This is great, especially as the next KGS tournament is 13x13 :)

Is there a way to get the CGOS archives back online? Or does anybody have a
copy which he can offer?

Would be really great

Thanks Detlef


Am 05.03.2015 um 08:23 schrieb valky...@phmp.se:

At least the 13x13 server is working now.

-Magnus

On 2015-03-04 09:18, Urban Hafner wrote:

On Tue, Mar 3, 2015 at 9:57 PM, Joshua Shriver jshri...@gmail.com
wrote:


Thanks for the heads up *sigh*

I really am contemplating re-writing CGOS from scratch. TCL is
just
quirky as hell. Plus would make updates easier and attract other
developers or contributions.


If you could find the time to do that then it would be pretty awesome!
I cannot guarantee that I will have time to contribute, but I will
certainly keep an eye on it. Personally, I'm really missing CGOS. The
time it was up recently really helped me debug some errors in my bot.

Urban--

Blog: http://bettong.net/ [1]
Twitter: https://twitter.com/ujh [2]
Homepage: http://www.urbanhafner.com/ [3]

Links:
--
[1] http://bettong.net/
[2] https://twitter.com/ujh
[3] http://www.urbanhafner.com/

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] cgos.computergo.org down?

2015-03-06 Thread Detlef Schmicker

This is great, especially as the next KGS tournament is 13x13 :)

Is there a way to get the CGOS archives back online? Or does anybody 
have a copy which he can offer?


Would be really great

Thanks Detlef


Am 05.03.2015 um 08:23 schrieb valky...@phmp.se:

At least the 13x13 server is working now.

-Magnus

On 2015-03-04 09:18, Urban Hafner wrote:

On Tue, Mar 3, 2015 at 9:57 PM, Joshua Shriver jshri...@gmail.com
wrote:


Thanks for the heads up *sigh*

I really am contemplating re-writing CGOS from scratch. TCL is
just
quirky as hell. Plus would make updates easier and attract other
developers or contributions.


If you could find the time to do that then it would be pretty awesome!
I cannot guarantee that I will have time to contribute, but I will
certainly keep an eye on it. Personally, I'm really missing CGOS. The
time it was up recently really helped me debug some errors in my bot.

Urban--

Blog: http://bettong.net/ [1]
Twitter: https://twitter.com/ujh [2]
Homepage: http://www.urbanhafner.com/ [3]

Links:
--
[1] http://bettong.net/
[2] https://twitter.com/ujh
[3] http://www.urbanhafner.com/

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to Zen!

2015-02-17 Thread Detlef Schmicker

What driver is loaded before suspend?

My guess: your distro does not reload the corresponing kernel module 
after suspend...


On Ubuntu the driver does not seem to be loaded as module, therefore I 
can not check...


if you know what module, check lsmod to see if it is loaded

after suspend I would modprobe the module and hope it works?!

Am 17.02.2015 um 08:46 schrieb Rémi Coulom:

Thanks Nick,

It was fun to participate again.

After the tournament, I noticed that my CPU was about 3x slower than 
usual. It turns out it is because of a bug in my Linux distro (Mint 
17). After suspend, the CPU becomes slow.


I have not yet found a proper way to fix this. Maybe someone on this 
list can help?


cpufreq-info returns this:
cpufrequtils 008: cpufreq-info (C) Dominik Brodowski 2004-2009
Report errors and bugs to cpuf...@vger.kernel.org, please.
analyzing CPU 0:
  no or unknown cpufreq driver is active on this CPU
  maximum transition latency: 4294.55 ms.

So none of the cpufreq-related workarounds I found on the web works.

This is a bug report of the same problem:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1102318
But I can't get indicator-cpufreq to work, maybe for similar reasons: 
whenever I drag the icon to the panel, nothing happens.


This also suggests indicator-cpufreq:
http://ubuntuforums.org/showthread.php?t=1377382

I'd be grateful for any help.

Thanks,

Rémi

On 02/17/2015 12:01 AM, Nick Wedd wrote:

Congratulations to Zen19S, winner of yesterday's KGS bot tournament!

My report is at http://www.weddslist.com/kgs/past/110/index.html
As usual, I hope you will send me your comments and corrections.

Nick
--
Nick Wedd mapr...@gmail.com mailto:mapr...@gmail.com


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] CNN for winrate and territory

2015-02-08 Thread Detlef Schmicker

Hi,

I am working on a CNN for winrate and territory:

approach:
 - input 2 layers for b and w stones
 - 1. output: 1 layer territory (0.0 for owned by white, 1.0 for owned 
by black (because I missed TANH in the first place I used SIGMOID))

 - 2. output: label for -60 to +60 territory leading by black
the loss of both outputs is trained

the idea is, that this way I do not have to put komi into input and make 
the winrate from the statistics of the trained label:


e.g. komi 6.5: I sum the probabilites from +7 to +60 and get something 
like a winrate


I trained with 80 positions with territory information through 500 
playouts from oakfoam, which I symmetrized by the 8 transformation 
leading to 600 positions. (It is expensive to produce the positions 
due to the playouts)


The layers are the same as the large network from Christopher Clark 
http://arxiv.org/find/cs/1/au:+Clark_C/0/1/0/all/0/1, Amos Storkey 
http://arxiv.org/find/cs/1/au:+Storkey_A/0/1/0/all/0/1 : 
http://arxiv.org/abs/1412.3409



I get reasonable territory predictions from this network (compared to 
500 playouts of oakfoam), the winrates seems to be overestimated. But 
anyway, it looks as it is worth to do some more work on it.


The idea is, I can do the equivalent of lets say 1000 playouts with a 
call to the CNN for the cost of 2 playouts some time...



Now I try to do a soft turnover from conventional playouts to CNN 
predicted winrates within the framework of MC.


I do have some ideas, but I am not happy with them.

Maybe you have better ones :)


Thanks a lot

Detlef

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CNN for winrate and territory

2015-02-08 Thread Detlef Schmicker

Exactly the one from the cited paper:


The best network had one convolutional layer with 64 7x7
filters, two convolutional layers with 64 5x5 filters, two lay-
ers with 48 5x5 filters, two layers with 32 5x5 filters, and
one fully connected layer.


I use caffe and the definition of the training network is:

name: LogReg
layers {
  name: mnist
  type: DATA
  top: data_orig
  top: label
  data_param {
source: train_result_leveldb/
batch_size: 256
  }
  include: { phase: TRAIN }
}
layers {
  name: mnist
  type: DATA
  top: data_orig
  top: label
  data_param {
source: test_result_leveldb/
batch_size: 256
  }
  include: { phase: TEST }
}

layers {
  name: slice
  type: SLICE
  bottom: data_orig
  top: data
  top: data_territory
  slice_param {
slice_dim: 1
slice_point : 2
  }
}

#this part should be the same in learning and prediction network
layers {
  name: conv1_7x7_64
  type: CONVOLUTION
  blobs_lr: 1.
  blobs_lr: 2.
  bottom: data
  top: conv2
  convolution_param {
num_output: 64
kernel_size: 7
pad: 3
weight_filler {
  type: xavier
  }
  bias_filler {
  type: constant
  }
}
}

layers {
  name: relu2
  type: TANH
  bottom: conv2
  top: conv2u
}

layers {
  name: conv2_5x5_64
  type: CONVOLUTION
  blobs_lr: 1.
  blobs_lr: 2.
  bottom: conv2u
  top: conv3
  convolution_param {
num_output: 64
kernel_size: 5
pad: 2
weight_filler {
  type: xavier
  }
  bias_filler {
  type: constant
  }
}
}

layers {
  name: relu3
  type: TANH
  bottom: conv3
  top: conv3u
}

layers {
  name: conv3_5x5_64
  type: CONVOLUTION
  blobs_lr: 1.
  blobs_lr: 2.
  bottom: conv3u
  top: conv4
  convolution_param {
num_output: 64
kernel_size: 5
pad: 2
weight_filler {
  type: xavier
  }
  bias_filler {
  type: constant
  }
}
}

layers {
  name: relu4
  type: TANH
  bottom: conv4
  top: conv4u
}

layers {
  name: conv4_5x5_48
  type: CONVOLUTION
  blobs_lr: 1.
  blobs_lr: 2.
  bottom: conv4u
  top: conv5
  convolution_param {
num_output: 48
kernel_size: 5
pad: 2
weight_filler {
  type: xavier
  }
  bias_filler {
  type: constant
  }
}
}

layers {
  name: relu5
  type: TANH
  bottom: conv5
  top: conv5u
}

layers {
  name: conv5_5x5_48
  type: CONVOLUTION
  blobs_lr: 1.
  blobs_lr: 2.
  bottom: conv5u
  top: conv6
  convolution_param {
num_output: 48
kernel_size: 5
pad: 2
weight_filler {
  type: xavier
  }
  bias_filler {
  type: constant
  }
}
}

layers {
  name: relu6
  type: TANH
  bottom: conv6
  top: conv6u
}


layers {
  name: conv6_5x5_32
  type: CONVOLUTION
  blobs_lr: 1.
  blobs_lr: 2.
  bottom: conv6u
  top: conv7
  convolution_param {
num_output: 32
kernel_size: 5
pad: 2
weight_filler {
  type: xavier
  }
  bias_filler {
  type: constant
  }
}
}

layers {
  name: relu7
  type: TANH
  bottom: conv7
  top: conv7u
}


layers {
  name: conv7_5x5_32
  type: CONVOLUTION
  blobs_lr: 1.
  blobs_lr: 2.
  bottom: conv7u
  top: conv8
  convolution_param {
num_output: 32
kernel_size: 5
pad: 2
weight_filler {
  type: xavier
  }
  bias_filler {
  type: constant
  }
}
}

layers {
  name: relu8
  type: TANH
  bottom: conv8
  top: conv8u
}

layers {
  name: flat
  type: FLATTEN
  bottom: conv8u
  top: conv8_flat
}

layers {
  name: split
  type: SPLIT
  bottom: conv8_flat
  top: conv8_flata
  top: conv8_flatb
}

layers {
  name: ip
  type: INNER_PRODUCT
  bottom: conv8_flata
  top: ip_zw
  inner_product_param {
num_output: 361
weight_filler {
  type: xavier
  }
bias_filler {
  type: constant
  }
   }
}

layers {
  name: sigmoid
  type: SIGMOID
  bottom: ip_zw
  top: ip_zws
}

layers {
  name: ip2
  type: INNER_PRODUCT
  bottom: conv8_flatb
  top: ip_label
  inner_product_param {
num_output: 121
weight_filler {
  type: xavier
  }
bias_filler {
  type: constant
  }
   }
}


#only learning framework
layers {
  name: flat
  type: FLATTEN
  bottom: data_territory
  top: flat
}
layers {
  name: loss
  type: EUCLIDEAN_LOSS
  bottom: ip_zws
  bottom: flat
  top: lossa
}
layers {
name: accuracy
type: ACCURACY
bottom: ip_label
bottom: label
top: accuracy
}

layers {
  name: loss
  type: SOFTMAX_LOSS
  bottom: ip_label
  bottom: label
  top: lossb
}



Am 08.02.2015 um 11:43 schrieb Álvaro Begué:

What network architecture did you use? Can you give us some details?



On Sun, Feb 8, 2015 at 5:22 AM, Detlef Schmicker d...@physik.de 
mailto:d...@physik.de wrote:


Hi,

I am working on a CNN for winrate and territory:

approach:
 - input 2 layers for b and w stones
 - 1. output: 1 layer territory (0.0 for owned by white, 1.0 for
owned by black (because I missed TANH in the first place I used
SIGMOID))
 - 2. output: label for -60 to +60 territory leading by black

Re: [Computer-go] CGOS back online

2015-01-17 Thread Detlef Schmicker
Seems a good idea to me. It is a quasi standard in publishing, so why 
not set Gnugo-3.7.10 at level 10 to 1800ELO on every board size?!



Am 16.01.2015 um 23:17 schrieb Christoph Birk:

On 01/16/2015 12:03 PM, David Doshay wrote:

cgos.boardspace.net http://cgos.boardspace.net says:
At the current time there is one player called FatMan with a fixed ELO
of 1800 on the 9x9 server and Gnugo-3.7.10 at level 10 serves as the
anchor player on the 13x13 and 19x19 server, also with a fixed ELO of 
1800.


Should we use Gnugo-3.7.10 as the anchor for 9x9 too?
It was rated 1858.6 by 'bayeselo'.

Christoph

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CGOS back online

2015-01-17 Thread Detlef Schmicker
You are right, I too often read 3.7 in the past, but actually the papers 
using 3.8 now:)



Am 17.01.2015 um 12:08 schrieb Urban Hafner:
On Sat, Jan 17, 2015 at 10:38 AM, Detlef Schmicker d...@physik.de 
mailto:d...@physik.de wrote:


Seems a good idea to me. It is a quasi standard in publishing, so
why not set Gnugo-3.7.10 at level 10 to 1800ELO on every board size?!


Why 3.7.10 and not 3.8? IIRC all Gnu projects use the odd numbered 
point releases as unstable releases. BTW, I've configured my computer 
to run GnuGo 3.8 Level 10 on the 13x13 server. It should run most of 
the time, but I'd be happy to hand over the account to someone why can 
actually run it 24/7.


Urban
--
Blog: http://bettong.net/
Twitter: https://twitter.com/ujh
Homepage: http://www.urbanhafner.com/


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CGOS back online

2015-01-17 Thread Detlef Schmicker

I had a look into the sourced of cgos server.

If I understand them correctly, one must put the anchors directly into 
the database. The sources do not seem to have a configuration

option for this?!

As the old database is used, as far as I understood it, person running 
the anchors years ago could just reconnect their anchors, and they 
should be recognized. It might even be possible for somebody, who can 
make it sure to run the anchor in the future to just connect with the 
old anchor name and choose a pw, but I did not try, as I can not make 
sure to run it all time. There is a 6-month remark in the source, but I 
am not sure if this also removes passwords...


This is from the source:
if {![file exists $database_state_file]} {

sqlite3 db $database_state_file

db eval {
create table gameid(gid int);
create table password(name, pass, games int, rating, K, last_game, 
primary key(name) );
create table games(gid int, w, wr, b, br, dte, wtu, btu, res, 
final, primary key(gid));

create table anchors(name, rating, primary key(name));
create table clients( name, count );
INSERT into gameid VALUES(1);
}

db close
}


Am 17.01.2015 um 15:24 schrieb Detlef Schmicker:
You are right, I too often read 3.7 in the past, but actually the 
papers using 3.8 now:)



Am 17.01.2015 um 12:08 schrieb Urban Hafner:
On Sat, Jan 17, 2015 at 10:38 AM, Detlef Schmicker d...@physik.de 
mailto:d...@physik.de wrote:


Seems a good idea to me. It is a quasi standard in publishing, so
why not set Gnugo-3.7.10 at level 10 to 1800ELO on every board size?!


Why 3.7.10 and not 3.8? IIRC all Gnu projects use the odd numbered 
point releases as unstable releases. BTW, I've configured my computer 
to run GnuGo 3.8 Level 10 on the 13x13 server. It should run most of 
the time, but I'd be happy to hand over the account to someone why 
can actually run it 24/7.


Urban
--
Blog: http://bettong.net/
Twitter: https://twitter.com/ujh
Homepage: http://www.urbanhafner.com/


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go




___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CGOS back online

2015-01-16 Thread Detlef Schmicker

I also set up a 13x13 client. Seems to work fine, but rating is off I think.

I will let it up for a while, hopefully some anchors coming up:)

Thanks a lot for setting it up again, Detlef


Am 16.01.2015 um 17:21 schrieb Christoph Birk:

On Jan 16, 2015, at 1:51 AM, valky...@phmp.se wrote:

I forgot to turn of automatic Power off in Windows so after an hour my computer 
hibernated. I had started Valkyria again this morning (now using 6 threads) and 
then CGOS seemed to recover.

Maybe CGOS froze because of this?

No, CGOS kept running fine after Valkyra disconnected,
Christoph

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CNN tool applied in practice

2015-01-12 Thread Detlef Schmicker

Thanks,

I will, but it will take some time.

It is a problem of resources:
As I have holidays in the summer I think of visiting Advances in 
Computer Games 2015 in Leiden.

And I do not want to go there with nothing!
So I need my graphic card for CNN training as I think of doing some 
research on CNN for position evaluation. Hopefully there will be some 
computer go people:)


On the other hand, I did not tune any of the important progressive 
widening parameters for the bot, which would take another week at least...





Am 12.01.2015 um 17:30 schrieb Ingo Althöfer:

Dear Detlef,



Todays bot tournament nicego19n (oakfoam) played with a CNN for move
prediction.

congratulation to the performance of your bot, and thanks for
letting us know.

Will you let NiceGo play in the KGS computer room against humans
to see how it performs?

Thumbs pressed, Ingo.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Representing Komi for neural network

2015-01-12 Thread Detlef Schmicker

Hi Hiroshi,

I try to layout my approach:
- expandLeaf: expands a leaf node, after being visited 10 times (a 
parameter, but increasing it usually did not harm playing strength 
significantly)


it contains a
if (!expandmutex.try_lock())
return false;

which was in the code anyway to avoid, that the same node was expanded 
by two different threads.
The second thread just works with the not expanded node, if the first 
thread is busy expanding it and goes on with the playout. In this case 
we had some delayed expansion anyway, as this thread is doing the 
playout with the node, which should have been expanded before.


The only thing I changed: expandmutex was a mutex local to the node 
before, now it is a global mutex. Now no node is being expanded, if one 
expansion is in progress by another thread, but the playouts go on with 
the not expanded nodes as usual.


I did not measure how often a node is visited on average now before 
being expanded, but with our relatively slow playouts and the not too 
big CNN I do not think it goes a lot above 15 or so. Tests indicated 
that even 30 for the parameter was not a problem for the playing 
strength before. I think including the CNN does not reduce the 
playouts/move significantly. I did no exact measurements, just was happy 
to see my usual numbers:) Even the 1.6ms used by the CNN will be on 
resources (graphic card) different from the cpu, therefore other threads 
(8 threads on 4 cores) might take the time to use the cpu resources.



Am 12.01.2015 um 13:24 schrieb Hiroshi Yamashita:
Todays bot tournament nicego19n (oakfoam) played with a CNN for move 


Great! oakfoam had played with CNN already.
Second game vs Aya was difficult semeai with ko.
http://files.gokgs.com/games/2015/1/11/NiceGo19N-AyaMC.sgf


one position taking about 1.6ms on the GTX-970.


If C++ one thread do same thing, how slow is it? 10 times slower?

Hiroshi Yamashita

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] Representing Komi for neural network

2015-01-11 Thread Detlef Schmicker

Hi,

I am planing to play around a little with CNN for learning who is 
leading in a board position.


What would you suggest to represent the komi?

I would try an additional layer with every point having the value of komi.

Any better suggestions:)


By the way:
Todays bot tournament nicego19n (oakfoam) played with a CNN for move 
prediction.
It was mixed into the original gamma with some quickly optimized 
parameter leading to 100ELO improvement for selfplay with 2000 
playouts/move. I used the Clark and Storkey Network, but with no 
additional features (only a black and a white layer). I trained it on 
6 kgs games and reached about 41% prediction rate. I have no delayed 
evaluation, as I evaluate no mini-batch but only one position taking 
about 1.6ms on the GTX-970. A little delay might happen anyway, as only 
one evaluation is done at once and other threads might go on playing 
while one thread is doing CNN. We have quite slow playouts anyway, so I 
had around 7 playouts/move during the game.


If you want to get an impression, how such a bot plays, have a look at 
the games :)


Detlef
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Representing Komi for neural network

2015-01-11 Thread Detlef Schmicker

Sure,

https://bitbucket.org/dsmic/oakfoam

is my bench, but it is not as clean as the original bench (e.g. the 
directory of the cnn file is hard coded and the autotools are not 
preparing for caffe at the moment:(


But there should be all tools I use to train in script/CNN, I use caffe


Am 11.01.2015 um 22:41 schrieb Aja Huang:



2015-01-11 15:59 GMT+00:00 Detlef Schmicker d...@physik.de 
mailto:d...@physik.de:


By the way:
Todays bot tournament nicego19n (oakfoam) played with a CNN for
move prediction.
It was mixed into the original gamma with some quickly optimized
parameter leading to 100ELO improvement for selfplay with 2000
playouts/move. I used the Clark and Storkey Network, but with no
additional features (only a black and a white layer). I trained it
on 6 kgs games and reached about 41% prediction rate. I have
no delayed evaluation, as I evaluate no mini-batch but only one
position taking about 1.6ms on the GTX-970. A little delay might
happen anyway, as only one evaluation is done at once and other
threads might go on playing while one thread is doing CNN. We have
quite slow playouts anyway, so I had around 7 playouts/move
during the game.

If you want to get an impression, how such a bot plays, have a
look at the games :)


Congrats on oakfoam's significant improvement with the CNN. The game 
oakfoam beat HiroBot[1d] is very nice


http://files.gokgs.com/games/2015/1/11/HiraBot-NiceGo19N.sgf

Would you release the newest version of oakfoam and the CNN? I 
couldn't find your git or svn repository at


http://oakfoam.com/#downloads
Aja


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-31 Thread Detlef Schmicker

Hi,

I am just trying to reproduce the data from page 7 with all features 
disabled. I do not reach the accuracy (I stay below 20%).


Now I wonder about a short statement in the paper, I did not really 
understand:
On page 4 top right they state In our experience using the rectifier 
function was slightly more effective then using the tanh function


Where do they put this functions in? I use caffe, and as far as I 
understood it, I would have to add extra layers to get a function like 
this. Does this mean: before every layer there should be a tanh or 
rectifier layer?


I would be glad to share my sources if somebody is trying the same,

Detlef

Am 15.12.2014 um 00:53 schrieb Hiroshi Yamashita:

Hi,

This paper looks very cool.

Teaching Deep Convolutional Neural Networks to Play Go
http://arxiv.org/pdf/1412.3409v1.pdf

Thier move prediction got 91% winrate against GNU Go and 14%
against Fuego in 19x19.

Regards,
Hiroshi Yamashita

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-31 Thread Detlef Schmicker


Am 31.12.2014 um 14:05 schrieb Petr Baudis:

   Hi!

On Wed, Dec 31, 2014 at 11:16:57AM +0100, Detlef Schmicker wrote:

I am just trying to reproduce the data from page 7 with all features
disabled. I do not reach the accuracy (I stay below 20%).

Now I wonder about a short statement in the paper, I did not really
understand:
On page 4 top right they state In our experience using the
rectifier function was slightly more effective then using the tanh
function

Where do they put this functions in? I use caffe, and as far as I
understood it, I would have to add extra layers to get a function
like this. Does this mean: before every layer there should be a tanh
or rectifier layer?

   I think this is talking about the non-linear transformation function.
Basically, each neuron output y = f(wx) for weight vector w and input
vector x and transfer function f.  Traditionally, f is a sigmoid (the
logistic function 1/(1+e^-x)), but tanh is also popular and with deep
learning, rectifier and such functions are very popular IIRC because
they allow much better propagation of error to deep layers.
Thanks a lot. I was struggling with the traditionally, and expected 
this to be the case for the standard convolutional layers in caffe. This 
seems not to be the case, so now I added layers for f(x): Now I reach 
50% accuracy for a small dataset (285000 positions). Of cause this 
data set is too small (therefore the number is overestimated), but I 
only had 15% on this before introducing f(x) :)






I would be glad to share my sources if somebody is trying the same,

   I hope to be able to start dedicating time to this starting the end of
January (when I'll be moving to Japan for three months! I'll be glad to
meet up with fellow Go developers some time, and see you at the UEC if
it's in 2015 too :-).

   I would very much appreciate an open source implementation of this
- or rather, I'd rather spend my time using one to do interesting things
rather than building one, I do plan to open source my implementation if
I have to make one and can bring myself to build one from scratch...

oakfoam is open source anyway. In my branch my caffe based 
implementation is available. My branch is not so clean as Francois's 
one, and we did not merge for quite a time:(


At the moment the CNN part is in a very early state, you have to produce 
the database by different scripts...

But I would be happy to assist!

Detlef
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Move Evaluation in Go Using Deep Convolutional Neural Networks

2014-12-25 Thread Detlef Schmicker
Hi,

as I want to by graphic card for CNN: do I need double precision
performance? I give caffe (http://caffe.berkeleyvision.org/) a try, and
as far as I understood most is done in single precision?!

You get comparable single precision performance NVIDA (as caffe uses
CUDA I look for NVIDA) for about 340$ but the double precision
performance is 10x smaller than the 1000$ cards

thanks a lot

Detlef

Am Mittwoch, den 24.12.2014, 12:14 +0800 schrieb hughperkins2:
 Whilst its technically true that you can use an nn with one hidden
 layer to learn the same function as a deeper net, you might need a
 combinatorally large number of nodes :-)
 
 
 scaling learning algorithms towards ai, by bengio and lecunn, 2007,
 makes a convincing case along these lines. 
 
 
 
 ___
 Computer-go mailing list
 Computer-go@computer-go.org
 http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Move Evaluation in Go Using Deep Convolutional Neural Networks

2014-12-20 Thread Detlef Schmicker
Am Samstag, den 20.12.2014, 09:43 +0100 schrieb Stefan Kaitschick:
 Great work. Looks like the age of nn is here.
 
 How does this compare in computation time to a heavy MC move
 generator?
 
 
 One very minor quibble, I feel like a nag for even mentioning it:  You
 write
 The most frequently cited reason for the difficulty of Go, compared
 to games such as Chess, Scrabble
 or Shogi, is the difficulty of constructing an evaluation function
 that can differentiate good moves
 from bad in a given position.
 
 
 If MC has shown anything, it's that computationally, it's much easier
 to suggest a good move, than to evaluate the position.
 
 This is still true with your paper, it's just that the move suggestion
 has become even better.

It is, but I do not think, that this is necessarily a feature of NN.
NNs might be a good evaluators, but it is much easier to train them for
a move predictor, as it is not easy to get training data sets for an
evaluation function?!

Detlef

P.S.: As we all might be trying to start incorporating NN into our
engines, we might bundle our resources, at least for the first start?!
Maybe exchanging open source software links for NN. I personally would
have started trying NN some time ago, if iOS had OpenCL support, as my
aim is to get a strong iPad go program

 
 
 Stefan
 
 
 
 
 ___
 Computer-go mailing list
 Computer-go@computer-go.org
 http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Move Evaluation in Go Using Deep Convolutional Neural Networks

2014-12-20 Thread Detlef Schmicker
Hi,

I am still fighting with the NN slang, but why do you zero-padd the
output (page 3: 4 Architecture  Training)?

From all I read up to now, most are zero-padding the input to make the
output fit 19x19?!

Thanks for the great work

Detlef

Am Freitag, den 19.12.2014, 23:17 + schrieb Aja Huang:
 Hi all,
 
 
 We've just submitted our paper to ICLR. We made the draft available at
 http://www.cs.toronto.edu/~cmaddis/pubs/deepgo.pdf
 
 
 
 I hope you enjoy our work. Comments and questions are welcome.
 
 
 Regards,
 Aja
 ___
 Computer-go mailing list
 Computer-go@computer-go.org
 http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go