Re: [Computer-go] CGOS source on github

2021-01-23 Thread Darren Cook
> ladders, not just liberties. In that case, yes! If you outright tell the
> neural net as an input whether each ladder works or not (doing a short
> tactical search to determine this), or something equivalent to it, then the
> net will definitely make use of that information, ...

Each convolutional layer should spread the information across the board.
I think AlphaZero used 20 layers? So even 3x3 filters would tell you
about the whole board - though the signal from the opposite corner of
the board might end up a bit weak.
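
As a toy sanity check (my own arithmetic, not from any paper): the
receptive field of stacked 3x3 convolutions grows by 2 points per layer,
so 20 layers already see well past a 19x19 board.

```python
# Receptive field of L stacked 3x3 convolutions: it widens by
# (kernel - 1) = 2 points per layer, starting from a single point.
def receptive_field(layers, kernel=3):
    return 1 + layers * (kernel - 1)

# With the 20 layers mentioned above, each output point can "see" a
# 41x41 area - more than the whole 19x19 board.
print(receptive_field(20))  # 41
```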

I think we can assume it is doing that successfully, because otherwise
we'd hear about it losing lots of games in ladders.

> something the first version of AlphaGo did (before they tried to make it
> "zero") and something that many other bots do as well. But Leela Zero and
> ELF do not do this, because of attempting to remain "zero", ...

I know that zero-ness was very important to DeepMind, but I thought the
open source dedicated go bots that have copied it did so because AlphaGo
Zero was stronger than AlphaGo Master after 21-40 days of training.
I.e. in the rarefied atmosphere of super-human play that starter package
of human expert knowledge was considered a weight around its neck.

BTW, I agree that feeding the results of tactical search in would make
stronger programs, all else being equal. But it is branching code, so
much slower to parallelize.

Darren
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


Re: [Computer-go] Accelerating Self-Play Learning in Go

2019-03-08 Thread Darren Cook
> Blog post:
> https://blog.janestreet.com/accelerating-self-play-learning-in-go/ 
> Paper: https://arxiv.org/abs/1902.10565

I read the paper, and really enjoyed it: lots of different ideas being
tried. I was especially satisfied to see figure 12, and the big
difference that giving it some go features made.

Though it would be good to see figure 8 shown in terms of wall clock
time, on equivalent hardware. How much extra computation do all the
extra ideas add? (Maybe it is in the paper, and I missed it?)

> I found some other interesting results, too - for example contrary to
> intuition built up from earlier-generation MCTS programs in Go,
> putting significant weight on score maximization rather than only
> win/loss seems to help.

Score maximization in self-play means it is encouraged to play more
aggressively/dangerously, by creating life/death problems on the board.
A player of similar strength doesn't know how to exploit the weaknesses
left behind. (One of the asymmetries of go?)

I hope you are able to continue the experiment, with more training time,
to see if it flattens out or keeps improving.

Darren

Re: [Computer-go] Paper “Complexity of Go” by Robson

2018-06-22 Thread Darren Cook
> I also think that what makes real go that hard is ko, but you've
> shown that it's equivalent to ladder, which frankly baffles me. I'd
> love to understand that.

Just different definitions of "hard"? Ko is still way harder (more
confusing, harder to discover a winning move when one exists) than
ladders in the way they each appear in typical games.

> http://tromp.github.io/img/WO5lives.png
> 
> Might be useful for go event organizers in need of arrow signs...

:-)

...though it might end up with blocked corridors, as people stand around
the signs and argue about the best move, instead of going where the
organizers want them to go!

Darren


Re: [Computer-go] On proper naming

2018-03-08 Thread Darren Cook
> but then it does not make sense to call that algorithm "rollout".
> 
> In general: when introducing a new name, care should
> be taken that the name describes properly what is going on.

Speaking of which, why did people start calling them rollouts instead of
playouts?

Darren

P.S. And don't get me started on "chains": at one point this seemed to
be the standard term for a solidly connected set of stones, the basic
unit of tactical search (as distinguished from a "group", which is made
up of 1+ chains). But then somewhere along the way people started
calling them strings.

Re: [Computer-go] Crazy Stone is back

2018-02-28 Thread Darren Cook
> Weights_31_3200 is 20 layers of 192, 3200 board evaluations per move
> (no random playout). But it still has difficulties with very long
> strings. My next network will be 40 layers of 256, like Master. 

"long strings" here means solidly connected stones?

The 192 vs. 256 is the number of 3x3 convolution filters?

Has anyone been doing experiments with, say, 5x5 filters (and fewer
layers), and/or putting more raw information in (e.g. liberty counts -
which makes the long string problem go away, if I've understood
correctly what that is)?

Darren

Re: [Computer-go] Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm

2017-12-06 Thread Darren Cook
>> One of the changes they made (bottom of p.3) was to continuously 
>> update the neural net, rather than require a new network to beat
>> it 55% of the time to be used. (That struck me as strange at the
>> time, when reading the AlphaGoZero paper - why not just >50%?)

Gian wrote:
> I read that as a simple way of establishing confidence that the 
> result was statistically significant > 0. (+35 Elo over 400 games...

Brian Sheppard also:
> Requiring a margin > 55% is a defense against a random result. A 55% 
> score in a 400-game match is 2 sigma.

Good point. That makes sense.

But (where A is the best so far, and B is the newer network) in
A vs. B, if B wins 50.1%, there is a slightly greater than 50-50 chance
that B is better than A. In the extreme case of a 54.9% win rate there
is something like a 97%-3% chance (under a normal approximation) that B
is better, but they still throw B away.
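
A quick check of those numbers (a normal-approximation sketch, my own
arithmetic, not from the paper):

```python
from math import erf, sqrt

# Under the null hypothesis that A and B are equally strong, wins in a
# 400-game match follow Binomial(400, 0.5): mean 200, sigma sqrt(100) = 10.
sigma = sqrt(400 * 0.5 * 0.5)   # 10 games, i.e. 2.5 percentage points
z_55 = (220 - 200) / sigma      # a 55% score is 220 wins
print(z_55)  # 2.0

# Normal-approximation chance that B really is the better network after
# a 54.9% score (219.6 "wins"), assuming a flat prior on true strength:
z_549 = (219.6 - 200) / sigma
p_b_better = 0.5 * (1 + erf(z_549 / sqrt(2)))
print(round(p_b_better, 3))  # 0.975
```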

If B just got lucky, and A was better, well the next generation is just
more likely to de-throne B, so long-term you won't lose much.

On the other hand, at very strong levels, this might prevent
improvement, as a jump to 55% win rate in just one generation sounds
unlikely to happen. (Did I understand that right? As B is thrown away,
and A continues to be used, there is only that one generation within
which to improve on it, each time?)

Darren

Re: [Computer-go] Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm

2017-12-06 Thread Darren Cook
> Mastering Chess and Shogi by Self-Play with a General Reinforcement
> Learning Algorithm
> https://arxiv.org/pdf/1712.01815.pdf

One of the changes they made (bottom of p.3) was to continuously update
the neural net, rather than require a new network to beat it 55% of the
time to be used. (That struck me as strange at the time, when reading
the AlphaGoZero paper - why not just >50%?)

The AlphaZero paper shows it out-performs AlphaGo Zero, but they are
comparing to the 20-block, 3-day version, not the 40-block, 40-day
version that was even stronger.

As papers rarely show failures, can we take it to mean they couldn't
out-perform their best go bot, do you think? If so, I wonder how hard
they tried?

In other words, do you think the changes they made from AlphaGo Zero to
AlphaZero have made it weaker (when viewed purely from the point of view
of making the strongest possible go program)?

Darren

Re: [Computer-go] Learning related stuff

2017-11-29 Thread Darren Cook
> My question is this; people have been messing around with neural nets
> and machine learning for 40 years; what was the breakthrough that made
> alphago succeed so spectacularly.

5 or 6 orders of magnitude more CPU power (relative to the late 90s) (*).

This means you can try out ideas to see if they work, and get the answer
back in hours, rather than years.

After 10 hrs it was playing with an Elo rating somewhere between 0 and
1000 (Figure 3 in the AlphaGo Zero paper), i.e. idiot level. That is
something like 1,100 years of effort on 1995 hardware.
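
The back-of-envelope version of that claim (my own arithmetic, using the
6-orders-of-magnitude figure from above):

```python
# 10 hours of training on a machine roughly 10**6 times faster than a
# mid-1990s PC corresponds to about a millennium of 1995-era compute.
hours_on_1995_hw = 10 * 10**6
years_on_1995_hw = hours_on_1995_hw / (24 * 365)
print(round(years_on_1995_hw))  # 1142
```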

They put together a large team (by hobbyist computer go standards) of
top people, at least two of whom had made strong go programs before.

I'd name a few other things: dropout (and other regularization
techniques) allowed deeper networks; the work on image recognition gave
you production-ready CNNs, without having to work through all the dead
ends yourself; and better optimization techniques. Taken together, maybe
algorithmic advances are worth another order of magnitude.

Darren

*: The source is the intro to my own book ;-) From memory, I made the
estimate as the average of the top supercomputers 20 years apart, and of
a typical high-end PC 20 years apart.
https://en.wikipedia.org/wiki/History_of_supercomputing#Historical_TOP500_table

-- 
Darren Cook, Software Researcher/Developer
My New Book: Practical Machine Learning with H2O:
  http://shop.oreilly.com/product/0636920053170.do

Re: [Computer-go] Learning related stuff

2017-11-21 Thread Darren Cook
> Would it typically help or disrupt to start
> instead with values that are non-random?
> What I have in mind concretely:

Can I correctly rephrase your question as: if you take a well-trained
komi 7.5 network, then give it komi 5.5 training data, will it adapt
quickly, or would it be faster/better to start over from scratch? (From
the point of view of creating a strong komi 5.5 program.) (?)


Surely it would train much more quickly: all the early layers are about
learning liberty counting, atari and then life/death, good shape, etc.
(But, it would be fascinating if an experiment showed that wasn't the
case, and starting from a fresh random network trained more quickly!)

Darren

Re: [Computer-go] what is reachable with normal HW

2017-11-15 Thread Darren Cook
> Zero was reportedly very strong with 4 TPU. If we say 1 TPU = 1 GTX 1080
> Ti...

4 TPUs are 180 TFLOPS, or 45 TFLOPS each [1].

A GTX 1080 Ti is 11.3 TFLOPS [2], or 9 TFLOPS for the normal 1080.

So 4 TPUs are more like 15-20 times faster than a high-end gaming notebook.
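
The arithmetic, using the figures from [1] and [2]:

```python
tpu_4x = 180.0       # TFLOPS for 4 first-generation TPUs [1]
gtx_1080ti = 11.3    # TFLOPS [2]
gtx_1080 = 9.0       # TFLOPS, the "normal" 1080

# So "15-20 times faster" brackets the two cards:
print(round(tpu_4x / gtx_1080ti, 1))  # 15.9
print(round(tpu_4x / gtx_1080, 1))    # 20.0
```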

(I'm being pedantic; I expect your main point still stands even if it is
20x. And if not, the AlphaZero results were at 5s/move - people wanting
a world-class game could give it 30s or even 60s/move.)

Darren


[1]: https://en.wikipedia.org/wiki/Tensor_processing_unit
[2]:
https://www.anandtech.com/show/11180/the-nvidia-geforce-gtx-1080-ti-review

Re: [Computer-go] Nochi: Slightly successful AlphaGo Zero replication

2017-11-10 Thread Darren Cook
> You make me really curious, what is a Keras model ?

When I was a lad, you had to bike 3 miles (uphill in both directions) to
the library to satisfy curiosity. Nowadays you just type "keras" into
Google ;-)

https://keras.io/

Darren

Re: [Computer-go] November KGS bot tournament

2017-10-27 Thread Darren Cook
> Since AlphaGo, almost all academic organizations have 
> stopped development but, ...

In Japan, or globally? Either way, what domain(s)/problem(s) have they
switched into studying?

Darren

Re: [Computer-go] AlphaGo Zero SGF - Free Use or Copyright?

2017-10-25 Thread Darren Cook
> What do you want to evaluate the software for? Corner cases which
> never happen in a real game?

If the purpose of this mailing list is a community to work out how to
make a 19x19 go program that can beat any human, then AlphaGo has
finished the job, and we can shut it down.

But this list has always been for anything related to computers and the
game of go. Right from John Tromp counting the number of games through
to tips and hints on the best compiler flags to use.

BTW, I noticed in the paper that it showed 3 games AlphaGo Zero lost to
AlphaGo Master: in game 11 Zero had white, in games 14 and 16 Zero had
black. An opponent that can only win 11% of games against it was able
to win on both sides of the komi, suggesting there is still quite a bit
of room for improvement.

Darren

Re: [Computer-go] AlphaGo Zero SGF - Free Use or Copyright?

2017-10-24 Thread Darren Cook
Could we PLEASE take this off-list? If you don't like someone, or what
they post, filter them. If you think someone should be banned, present
your case to the list owner(s).

Darren


Re: [Computer-go] Deep Blue the end, AlphaGo the beginning?

2017-08-19 Thread Darren Cook
I enjoyed that long read, GCP!

> There is a secondary argument whether the methods used for Deep Blue
> generalize as well as the methods used for AlphaGo. I think that 
> argument may not be as simple and clear-cut as Kasparov implied,
> because for one, there are similarities and crossover in which
> methods both programs used.
> 
> But I understand where it comes from. SL/RL and DCNN's (more
> associated with AlphaGo) seem like a broader hammer than tree search
> (more associated with Deep Blue).

That was my main thought, on reading the Kasparov interview, too. I'd
include MCTS under tree search. Most interesting AI problems cannot be
phrased in a way that all the wonderfully clever ways we have for
searching 2-player trees can be used.

The really clever bit of both Deep Blue and AlphaGo was taking an order
of magnitude more computing power than what had gone before, and
actually getting it to work. Not crashing into the wall of diminishing
returns.

> My objection was to the claim that making Deep Blue didn't require any
> innovation or new methods at all...

The other thing that struck me: Deep Blue has been regarded as
"brute-force" (often in a derogatory sense) all these years, dumb
algorithms that were just hardware-accelerated. But I remember an
interview with Hsu (which sadly I cannot find a link to) where he was
saying that Deep Blue contained most of the sophisticated chess
knowledge and algorithms of the day: it wasn't just alpha-beta in there.

So seeing Deep Blue described as "chess algorithms" rather than as
"brute-force" was interesting.

Darren

P.S. I feel I didn't quote enough of the interview - do try and find it.
E.g. Saying how he didn't realize how badly Deep Blue had played until
he analyzed the games with modern chess computers. That is an amazing
thing to hear from the mouth of Kasparov! The book he is plugging is
here - I just skimmed the reviews, and it actually sounds rather good:
https://www.amazon.co.uk/Deep-Thinking-Machine-Intelligence-Creativity/dp/1473653517



[Computer-go] Deep Blue the end, AlphaGo the beginning?

2017-08-17 Thread Darren Cook
In New Scientist, 3 June 2017, p.42-43, there is an interview with
Kasparov (who has just published a book about his defeat by Deep Blue),
where he says "Deep Blue is the end and AlphaGo the beginning", and
explains:

"I'm sure some things were learned about parallel processing... but the
real science was known by the 1997 rematch... but AlphaGo is an entirely
different thing. Deep Blue's chess algorithms were good for playing
chess very well. The machine-learning methods AlphaGo uses are
applicable to practically anything."

Agree or disagree?

Darren






Re: [Computer-go] Alphago and solving Go

2017-08-07 Thread Darren Cook
> https://en.wikipedia.org/wiki/Brute-force_search explains it as 
> "systematically enumerating all possible candidates for the
> solution".
> 
> There is nothing systematic about the pseudo random variation 
> selection in MCTS;

More semantics, but as it is pseudo-random, isn't that systematic? It
only looks like it is jumping around because we are looking at it from
the wrong angle.

(Systematic pseudo-random generation gets very hard over a cluster, of
course...)

> it may not even have sufficient entropy to guarantee full
> enumeration...

That is the most interesting idea in this thread. Is there any way to
prove it one way or the other? I'm looking at you here, John - sounds
right up your street :-)

Darren

Re: [Computer-go] Mailing list working?

2017-06-08 Thread Darren Cook
> It has achieved /that/ goal, but the further goal of getting AIs to
> explain and teach is still out there.

Or just to reach the same level with less computational effort (in
either the training stage or the playing engine) is an interesting
challenge, one that is likely to teach us something that can be applied
to other problem domains.

>> So Computer-go achieved its goal and we can close this list?

For me, the big milestone in computer go was reached years ago: it was
when I could no longer beat the go app on my phone. (I think it was the
first Android release of Crazy Stone.)

Darren





Re: [Computer-go] Zen lost to Mi Yu Ting

2017-03-22 Thread Darren Cook
>>> The issue with Japanese rules is easily solved by refusing to play
>>> under ridiculous rules. Yes, I do have strong opinions. :)
>>
>> And the problem with driver-less cars is easily "solved" by banning
>> all road users that are not also driver-less cars (including all 
>> pedestrians, bikes and wild animals).
> 
> I think you misunderstand the sentiment completely. It is not: Japanese
> rules are difficult for computers, so we don't like them.
> 
> It is: Japanese rules are problematic on many levels, ...

Yes, that was the sentiment I understood. Chinese rules (Tromp-Taylor,
etc.) are nice and clean, so easy to implement. They were useful props
to make the progress up until now. The real world is messy and
illogical, as are the corner cases in Japanese rules. Assuming you are
in this for what it teaches us about AI, not just to make a strong
Chinese-rules go program, why not embrace the messiness!

(Japanese rules are not *that* hard. IIRC, Many Faces, and all other
programs, including my own, scored in them, before MCTS took hold and
being able to shave milliseconds off scoring became the main decider of
a program's strength.)

Darren



Re: [Computer-go] Zen lost to Mi Yu Ting

2017-03-22 Thread Darren Cook
> The issue with Japanese rules is easily solved by refusing to play under 
> ridiculous rules. Yes, I do have strong opinions. :)

And the problem with driver-less cars is easily "solved" by banning all
road users that are not also driver-less cars (including all
pedestrians, bikes and wild animals).

Or how about this angle: humans are still better than the programs at
Japanese rules. Therefore this is an interesting area of study.

Darren





Re: [Computer-go] UEC cup 1st day result

2017-03-18 Thread Darren Cook
> Can you say something more on "Fine Art"?
> From which country is it? Who is Tencent?

Tencent is a very big Chinese Internet company; it is described here as
the largest gaming company in the world:
  https://en.wikipedia.org/wiki/Tencent

Darren


Re: [Computer-go] Question: Time Table World Championships

2017-03-09 Thread Darren Cook
> English official page has the info.
> http://www.worldgochampionship.net/english/

Thanks. Is it three hours, with sudden death? It says there is byo-yomi
from 5 minutes left, but didn't mention seconds per move, so it is just
a 300, 299, 298, 297, ... kind of countdown?

Darren


Re: [Computer-go] dealing with multiple local optima

2017-02-27 Thread Darren Cook
> But those video games have a very simple optimal policy. Consider
> Super Mario: if you see an enemy, step on it; if you see a hole, jump
> over it; if you see a pipe sticking up, also jump over it; etc.

A bit like go? If you see an unsettled group, make it live. If you have
a ko, play a ko threat. If you see have two 1-eye groups near each
other, join them together. :-)

Okay, those could be considered higher-level concepts, but I still
thought it was impressive to learn to play arcade games with no hints at
all.

Darren


> 
> On Sat, Feb 25, 2017 at 12:36 AM, Darren Cook <dar...@dcook.org> wrote:
> 
> > ...if it is hard to have "the good starting point" such as a trained
> > policy from human expert game records, what is a way to devise one.
> 
> My first thought was to look at the DeepMind research on learning to
> play video games (which I think either pre-dates the AlphaGo research,
> or was done in parallel with it): https://deepmind.com/research/dqn/
> 
> It just learns from trial and error, no expert game records:
> 
> http://www.theverge.com/2016/6/9/11893002/google-ai-deepmind-atari-montezumas-revenge

Re: [Computer-go] dealing with multiple local optima

2017-02-24 Thread Darren Cook
> ...if it is hard to have "the good starting point" such as a trained
> policy from human expert game records, what is a way to devise one.

My first thought was to look at the DeepMind research on learning to
play video games (which I think either pre-dates the AlphaGo research,
or was done in parallel with it): https://deepmind.com/research/dqn/

It just learns from trial and error, no expert game records:

http://www.theverge.com/2016/6/9/11893002/google-ai-deepmind-atari-montezumas-revenge

Darren




Re: [Computer-go] it's alphago

2017-01-11 Thread Darren Cook
> https://games.slashdot.org/story/17/01/04/2022236/googles-alphago-ai-secretively-won-more-than-50-straight-games-against-worlds-top-go-players

The five Lee Sedol games last year never felt like they were probing
AlphaGo's potential weaknesses. E.g. things like whole-board semeai,
complex whole-board ko fights, obscure under-the-stones tesuji, etc.

I wondered if anyone here had studied those 50 games and found anything
interesting or impressive, along those lines? I.e. if I was going to
look at just one game, which one should it be?

Thanks,
Darren



Re: [Computer-go] Go Tournament with hinteresting rules

2016-12-15 Thread Darren Cook
> I have been told that bots that are based on MC play better when they only 
> record the result of each roll out (W or L)
> rather than the margin of victory.
> 
> To me this is counter-intuitive.
> 
> Does anyone have an intelligible reason why it should be so?

The search then optimizes for the probability of winning, rather than
optimizing for the largest margin of victory.

Imagine two stock traders. Their goal is to beat the market over the
next 12 months, and they both choose 10 companies from the FTSE100.
Trader A randomly chooses 10 companies with dividends that are paying
over the average for the FTSE100. Trader B chooses the 10 companies with
the highest dividends. Intuitively trader B should have earned more by
the end of the year, but there is a good chance that at least one
company will go bankrupt, and another will cut its dividend. Maybe the
other 8 choices will do well enough to keep him ahead overall, but
chances are that trader A will come out ahead.

Games of go tend to be dominated by life and death battles. There may be
a way for black to kill white's group, and win big, but it is awfully
complicated and we don't have time for exhaustive search. If we can
still win by letting white's group live small, that is a much safer path.

There is also a pragmatic reason: it is just one bit of information to
pass up the tree, so very easy to make a single number for chance of
win. With margin of victory you end up with the problem of how to pass a
probability distribution up the tree, and then what to do with it at the
top.
(The presence of the life/death battles means the distribution tends to
have multiple peaks, not be nice and Gaussian.)
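
The trader analogy can be put in go terms with a toy simulation (all
numbers made up, purely to illustrate the point): a kill attempt with
the bigger average margin can still have the worse win rate.

```python
import random

random.seed(1)
N = 100_000

# Move A: try to kill - big average margin, but it sometimes backfires.
# Move B: let the group live small - tiny margin, almost never loses.
margins_a = [random.gauss(20, 30) for _ in range(N)]
margins_b = [random.gauss(5, 2) for _ in range(N)]

mean_a = sum(margins_a) / N                  # ~20 points
mean_b = sum(margins_b) / N                  # ~5 points
win_a = sum(m > 0 for m in margins_a) / N    # ~0.75
win_b = sum(m > 0 for m in margins_b) / N    # ~0.99

# Optimizing margin of victory picks A; optimizing win rate picks B.
print(f"A: margin {mean_a:.1f}, win rate {win_a:.3f}")
print(f"B: margin {mean_b:.1f}, win rate {win_b:.3f}")
```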

Darren




Re: [Computer-go] AlphaGo selfplay 3 games

2016-09-15 Thread Darren Cook
> DeepMind published 3 AlphaGo self-play games with commentary.

I've just been playing through the first AlphaGo-Lee game. When it
shows a variation, is this what AlphaGo was expecting, i.e. its
principal variation? Or is the follow-up "just" the opinion of the pro
commentators?

(E.g. game 1, move 13, the keima; the commentary says "White will
attach..." Can I read that as meaning this is the move AlphaGo would
have chosen if black had played there?)

Thanks,
Darren


Re: [Computer-go] Video of Aja Huang's presentation

2016-07-06 Thread Darren Cook
> Any chance someone has put this on Youtube for those of us who
> primarily consume videos on phones or tablets (where a 2.0GB file is
> very large to store locally)? And if so, replying with a link here
> would be deeply appreciated.

+1. It is actually 3GB, for a 40 minute video! I had to start rapidly
clearing disk space as it was downloading.

(If you cannot do it, is there any legal restriction on someone else
doing the conversion and uploading it?)

Darren


Re: [Computer-go] DarkForest is open-source now.

2016-06-10 Thread Darren Cook
>> At 5d KGS, is this the world's strongest MIT/BSD licensed program? ...
>> actually, is there any other MIT/BSD go program out there? (I thought
>> Pachi was, but it is GPLv2)
> 
> Huh, that's interesting, because Darkforest seems to have copy-pasted
> the pachi playout policy:
> 
> https://github.com/facebookresearch/darkforestGo/blob/master/board/pattern.c#L36
> 
> https://github.com/pasky/pachi/blob/master/playout/moggy.c#L101

Uh-oh. Though it does say "inspired by" at the top, and also that it is
not used by the main engine:

// This file is inspired by Pachi's engine
//   (https://github.com/pasky/pachi).
// The main DarkForest engine (when specified
//   with `--playout_policy v2`) does not depend on it.
// However, the simple policy opened with
//   `--playout_policy simple` will use this library.


Darren


Re: [Computer-go] DarkForest is open-source now.

2016-06-10 Thread Darren Cook
> DarkForest Go engine is now public on the Github (pre-trained CNN models are 
> also public). Hopefully it will help the community.
> 
> https://github.com/facebookresearch/darkforestGo

Ooh, BSD license (i.e. very liberal, no GPL virus). Well done! :-)

At 5d KGS, is this the world's strongest MIT/BSD licensed program? ...
actually, is there any other MIT/BSD go program out there? (I thought
Pachi was, but it is GPLv2)

Darren


Re: [Computer-go] Google used TPUs for AlphaGo

2016-05-21 Thread Darren Cook
> http://itpro.nikkeibp.co.jp/atcl/column/15/061500148/051900060/ 
> (in Japanese).  The performance/watt is about 13 times better, 
> a photo in the article shows.

Has anyone found out exactly what the "Other" in the photo is? The
Google blog was also rather vague on this.

(If you didn't click through, the chart just says "Relative TPU
Performance/Watt", with Other being between 0 and 4, and TPU being
between 11 and 14.)

Darren


Re: [Computer-go] Google used TPUs for AlphaGo

2016-05-19 Thread Darren Cook
> It'd be interesting to know what the speedup factor against, say,
> Tesla K40 is.

Or against the P100 chip [1], which claims the same "order of magnitude"
speed-up on neural nets by doing the same thing (half-precision floating
point).

Darren

[1]:
http://nvidianews.nvidia.com/news/nvidia-delivers-massive-performance-leap-for-deep-learning-hpc-applications-with-nvidia-tesla-p100-accelerators


Re: [Computer-go] scoring

2016-03-26 Thread Darren Cook
> I've implemented the Tromp Taylor algorithm. As a comparison I use
> gnugo.
> Now something odd is happening. If I setup a board of size 11
> (boardsize 11), then put a stone (play b a1) and then ask it to run the
> scoring (final_score), then it takes minutes before it finishes. That
> alone is kind of strange with only 1 stone? Furthermore it returns a
> score of "B+9.0". I would expect a score of B+121.0 there. Where am I
> going wrong here?

It sounds like GnuGo's final_score is doing "--score aftermath", rather
than "--score end", as described at the end of section 3.9.6 here:
   http://www.gnu.org/software/gnugo/gnugo_3.html#SEC396

Using estimate_score instead of final_score should be quicker. (see
http://www.gnu.org/software/gnugo/gnugo_19.html)

Darren


Re: [Computer-go] UEC cup 2nd day

2016-03-24 Thread Darren Cook
Thanks for the very interesting replies, David, and Remi.

No-one is using TensorFlow, then? Any reason not to? (I'm just curious
because there looks to be a good Udacity DNN course
(https://www.udacity.com/course/deep-learning--ud730), which I was
considering, but it is using TensorFlow.)


Remi wrote:
> programming back-propagation efficiently on the GPU. We did get a GPU
> version working, but it took a lot of time to program it, and was not
> so efficient. So the current DCNN of Crazy Stone is 100% trained on
> the CPU, and 100% running on the CPU. My CPU code is efficient,
> though. It is considerably faster than Caffe. My impression is that
> Caffe is inefficient because it uses the GEMM approach, which may be
> good for high-resolution pictures, but is not for small 19x19
> boards.

I did a bit of study on what GEMM is, and found this article and the 2nd
comment on it quite interesting:


http://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/

The comment, by Scott Gray, mentioned:

  So instead of thinking of convolution as a problem of one large gemm
operation, it’s actually much more efficient as many small gemms. To
compute a large gemm on a GPU you need to break it up into many small
tiles anyway. So rather than waste time duplicating your data into a
large matrix, you can just start doing small gemms right away directly
on the data. Let the L2 cache do the duplication for you.


He doesn't quantify large vs. small; though I doubt anyone is doing
image recognition on 19x19 pixel images :-)
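
Scott Gray's point is easy to make concrete. Here is a toy sketch (my
own illustration, not from the article) of the GEMM approach on a
single-channel 19x19 "image" with one 3x3 filter; note how the im2col
step blows the 361-value board up into a 2601-value matrix before the
matrix multiply:

```python
import numpy as np

def im2col_conv(board, kernel):
    """Convolution as one large GEMM: unroll every 3x3 patch into a row.
    The im2col step duplicates each interior cell up to 9 times."""
    h, w = board.shape
    kh, kw = kernel.shape
    oh, ow = h - kh + 1, w - kw + 1
    cols = np.empty((oh * ow, kh * kw))
    for i in range(oh):
        for j in range(ow):
            cols[i * ow + j] = board[i:i+kh, j:j+kw].ravel()
    return (cols @ kernel.ravel()).reshape(oh, ow)

def direct_conv(board, kernel):
    """Direct (valid) cross-correlation, with no data duplication."""
    kh, kw = kernel.shape
    oh, ow = board.shape[0] - kh + 1, board.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for di in range(kh):
        for dj in range(kw):
            out += kernel[di, dj] * board[di:di+oh, dj:dj+ow]
    return out

rng = np.random.default_rng(0)
board = rng.standard_normal((19, 19))
kernel = rng.standard_normal((3, 3))
assert np.allclose(im2col_conv(board, kernel), direct_conv(board, kernel))
# For a 19x19 board the im2col matrix is (17*17) x 9 = 2601 floats,
# versus 361 in the original board: roughly a 7x memory blow-up.
```

On a 4096x4096 photo the same blow-up is amortised over far more
arithmetic, which is presumably why the trade-off differs for small
boards.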

Darren

Re: [Computer-go] UEC cup 2nd day

2016-03-23 Thread Darren Cook
David Fotland wrote:
> There are 12 programs here that have deep neural nets.  2 were not
> qualified for the second day, and six of them made the final 8.  Many
> Faces has very basic DNN support, but it’s turned off because it
> isn’t making the program stronger yet.  Only Dolburam and Many Faces
> don’t have DNN in the final 8.  Dolburam won in Beijing, but the DNN
> programs are stronger and it didn’t make the final 4.

Are all the DNN programs (or, at least, all 6 in the top 8) also using MCTS?
(Re-phrased: is there any currently strong program not using MCTS?)

Darren

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Darren Cook
> ...
> Pro players who are not familiar with MCTS bot behavior will not see this.

I stand by this:

>> If you want to argue that "their opinion" was wrong because they don't
>> understand the game at the level AlphaGo was playing at, then you can't
>> use their opinion in a positive way either.

Darren


Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Darren Cook
> ... we witnessed hundreds of moves vetted by 9dan players, especially
> Michael Redmond's, where each move was vetted. 

This is a promising approach. But, there were also numerous moves where
the 9-dan pros said, that in *their* opinion, the moves were weak/wrong.
E.g. wasting ko threats for no reason. Moves even a 1p would never make.

If you want to argue that "their opinion" was wrong because they don't
understand the game at the level AlphaGo was playing at, then you can't
use their opinion in a positive way either.

> nearly all sporting events, given the small sample size involved) of
> statistical significance - suggesting that on another week the result
> might have been 4-1 to Lee Sedol.

If his 2nd game had been the one where he created vaguely alive/dead
groups and forced a mistake, and given that we were told the computer
was not being changed during the match, he might have created 2 wins
just by playing exactly the same.

And if he had known this in advance he might then have realized that
creating multiple weak groups and some large complicated kos are the way
to beat it, and so it could well have gone 4-1 to Lee Sedol in "another
week".

C'mon DeepMind, put that same version on KGS, set to only play 9p
players, with the same time controls, and let's get 40 games to give it
a proper ranking. (If 5 games against Lee Sedol are useful, 40 games
against a range of players with little to lose, who are systematically
trying to find its weaknesses, are going to be amazing.)

Darren

Re: [Computer-go] Go Bot for the Browser?

2016-03-19 Thread Darren Cook
> If I remember correctly, it is not a browser implementation, but rather a
> frontend. The actual computation runs on a server; the browser only
> communicates the moves and shows the results.

No: a quick test shows that once loaded it makes no server calls.
It has a 14MB file which looks like this [1]:

Darren


[1]:
// 8 layer network trained on GoGoD data, truncated to 6 decimal places
to reduce size
var json_net = {"layers": [{"layer_type": "input", "out_sy": 25,
"out_depth": 8, "out_sx": 25}, {"layer_type"
: "conv", "sy": 25, "sx": 25, "out_sx": 19, "out_sy": 19, "stride": 1,
"pad": 0, "biases": {"depth":
 64, "sx": 1, "sy": 1, "w": [0.519023, -1.379795, -0.495255, -0.051380,
-0.466160, -1.380873, -0.630742
, -0.174662, -0.743714, -1.288785, -0.607110, -0.536119, -0.819585,
-0.248130, -0.629681, -0.004683,
 -0.408890, -1.701742, -0.011255, -0.833270, -0.665327, -0.127002,
-0.793772, -0.518614, -1.390844, -1
.982825, -0.012530, -0.140848, -1.255086, -0.761665, -0.077154,
-0.748323, -0.086952, -0.175683, -1.526860
, 0.098685, -0.030402, -0.903232, -
...


Re: [Computer-go] Go Bot for the Browser?

2016-03-19 Thread Darren Cook
BTW, if anyone is pursuing this further, JavaScript supports binary
arrays (I've used them in some WebGL work I've been doing), and browser
coverage is rather good:  http://caniuse.com/#feat=typedarrays

What that means (in rough order of usefulness):
  1. It loads into memory directly - the alternative is that 14MB of
JavaScript needs to be parsed;

  2. Minor file size saving (it is only minor, because the text file
version would almost certainly be served gzipped, and text files
compress really well);

  3. You don't need to truncate the precision;

  4. Minor web server CPU saving (point 2: no need to gzip).

Darren

> It has a 14MB file which looks like:
>
> // 8 layer network trained on GoGoD data, truncated to 6 decimal places
> to reduce size
> var json_net = {"layers": [{"layer_type": "input", "out_sy": 25,
> "out_depth": 8, "out_sx": 25}, {"layer_type"
> : "conv", "sy": 25, "sx": 25, "out_sx": 19, "out_sy": 19, "stride": 1,
> "pad": 0, "biases": {"depth":
>  64, "sx": 1, "sy": 1, "w": [0.519023, -1.379795, -0.495255, -0.051380,
> -0.466160, -1.380873, -0.630742
> , -0.174662, -0.743714, -1.288785, -0.607110, -0.536119, -0.819585,
> -0.248130, -0.629681, -0.004683,
>  -0.408890, -1.701742, -0.011255, -0.833270, -0.665327, -0.127002,
> -0.793772, -0.518614, -1.390844, -1
> .982825, -0.012530, -0.140848, -1.255086, -0.761665, -0.077154,
> -0.748323, -0.086952, -0.175683, -1.526860
> , 0.098685, -0.030402, -0.903232, -
> ...


-- 
Darren Cook, Software Researcher/Developer
My new book: Data Push Apps with HTML5 SSE
Published by O'Reilly: (ask me for a discount code!)
  http://shop.oreilly.com/product/0636920030928.do
Also on Amazon and at all good booksellers!

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Darren Cook
> You can also look at the score differentials. If the game is perfect,
> then the game ends up on 7 points every time. If players made one
> small error (2 points), then the distribution would be much narrower
> than it is.

I was with you up to this point, but players (computer and strong
humans) play to win, not to maximize the score. So a small error in the
opening or middle game can literally be worth anything by the time the
game ends.

> I am certain that there is a vast gap between humans and perfect
> play. Maybe 24 points? Four stones??

24pts would be about two stones (if each handicap stone is worth twice komi,
e.g. see http://senseis.xmp.net/?topic=2464).

The old saying is that a pro would need to take 3 to 4 stones against
god (i.e. perfect play).

Darren

Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Darren Cook
> You are right, but from fig 2 of the paper can see, that mc and value
> network should give similar results:
> 
> 70% value network should be comparable to 60-65% MC winrate from this
> paper, usually expected around move 140 in a "human expert game" (what
> ever this means in this figure :)

Thanks, that makes sense.

>>> Assuming that is an MCTS estimate of winning probability, that
>>> 70% sounds high (i.e. very confident);
> 
>> That tweet says 70% is from value net, not from MCTS estimate.

I guess I need to go back and read the AlphaGo papers again; I thought
it was still an MCTS program at the top-level, and the value network was
being used to influence the moves the tree explores. But from this, and
some other comments I've seen, I have the feeling I've misunderstood.

Darren






[Computer-go] Game 4: a rare insight

2016-03-13 Thread Darren Cook
From Demis Hassabis:
  When I say 'thought' and 'realisation' I just mean the output of
  #AlphaGo value net. It was around 70% at move 79 and then dived
  on move 87

  https://twitter.com/demishassabis/status/708934687926804482

Assuming that is an MCTS estimate of winning probability, that 70%
sounds high (i.e. very confident); when I was doing the computer-human
team experiments, on 9x9, with three MCTS programs, I generally knew I'd
found a winning move when the percentages moved from the 48-52% range
to, say, 55%.

I really hope they reveal the win estimates for each move of the 5
games. It will especially be interesting to then compare that to the
other leading MCTS programs.

Darren


Re: [Computer-go] Congratulations to AlphaGo

2016-03-12 Thread Darren Cook
Well done, Aja and all the DeepMind team (including all the "backroom
boys" who've given the reliability on the hardware side).

BTW, I've gained great pleasure seeing you sitting there with the union
jack, representing queen and country; you'll probably receive a
knighthood. :-)

> Thanks all. AlphaGo has won the match against Lee Sedol. But there
> are still 2 games to play.

I love your focus! The mainstream media might start to lose interest
now, but at least the people on this list appreciate the implications of
the difference between 5-0 and 3-2. Best of luck in the last two games!

(And just when you thought you were almost back to a nice quiet studious
life, I heard rumour (*) that a Ke Jie match is coming soon.)

Darren

*: I say rumour, as the source was an interview with Demis Hassabis, but
only published in Chinese.


Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-11 Thread Darren Cook
>> global, more long-term planning. A rumour so far suggests to have used the
>> time for more learning, but I'd be surprised if this should have sufficed.
> 
> My personal hypothesis so far is that it might - the REINFORCE might
> scale amazingly well and just continuous application of it...

Agreed. What they have built is a training data generator, that can
churn out 9-dan level moves, 24 hours a day. Over the years I've had to
throw away so many promising ideas because they came down to needing a
9-dan pro to, say, do the tedious job of ranking all legal moves in each
test position.

What I'm hoping Deep Mind will do next is study how to maintain the same
level but using less hardware, until they can shrink it down to run on,
say, a high-end desktop computer. The knowledge gained obviously has a
clear financial benefit just in running costs, and computer-go is a nice
objective domain to measure progress. (But the cynic in me suspects
they'll just move to the next bright and shiny AI problem.)

Darren


Re: [Computer-go] Finding Alphago's Weaknesses

2016-03-10 Thread Darren Cook
> In fact in game 2, white 172 was described [1] as the losing move,
> because it would have started a ko. ...

"would have started a ko" --> "should have instead started a ko"


Re: [Computer-go] AlphaGo won first game!

2016-03-09 Thread Darren Cook
Wow - didn't expect that. Congratulations to the AlphaGo team!

Ingo wrote:
> Similar with CrazyStone. After move 26 CS gave 56 % for AlphaGo
> and never went below this value. Soon later it were 60+ %, and
> never went lower, too.

Did it show jumps at some of the key moves the human experts thought
were decisive? (E.g. white 80, then 84-88 as poor, then either black 119
or black 123 as the losing move):
  https://gogameguru.com/alphago-defeats-lee-sedol-game-1/

(I guess the "reversal" at white 136 was just an MCTS program knowing it
is ahead, and playing for probability, not winning margin.)

I.e. is it fair to say that other computer programs will appreciate and
understand the computer's moves better than the human's moves, so saying
it is ahead is to be expected? (confirmation bias)

Darren

P.S. Lee Sedol says he was shocked, and never expected to lose, even
when he was behind. I wonder if he did any special preparation for this
match? (E.g. playing handicap games against other strong MCTS program,
to appreciate how they behave when they have a lead.)



[Computer-go] Kasparov on AlphaGo

2016-03-07 Thread Darren Cook
Current edition of New Scientist has an article (p.26) by Gary Kasparov
on the AlphaGo vs. Lee Sedol match. (Just a page, no deep analysis;
though the facing page is also interesting: about Facebook applying AI
to map-making.)

Darren

P.S. I think you can view online with a free subscription:

https://www.newscientist.com/article/2079162-garry-kasparov-weighs-up-ai-challenge-to-worlds-best-go-player/

[Computer-go] Deep Learning learning resources?

2016-03-02 Thread Darren Cook
I'm sure quite a few people here have suddenly taken a look at neural
nets the past few months. With hindsight where have you learnt most?
Which is the most useful book you've read? Is there a Udacity (or
similar) course that you recommend? Or perhaps a blog or youtube series
that was so good you went back and read/viewed all the archives?

Thanks!
Darren

P.S. I was thinking pragmatic, and general, how-to guides for people
dealing with challenging problems similar to computer go, but if you
have recommendations for latest academic theories, or for a very
specific field, I'm sure someone would appreciate hearing it.

Re: [Computer-go] longest 3x3 game

2016-02-20 Thread Darren Cook
>> The longest I've been able to find, by more or less random sampling,
>> is only 521 moves,
> 
> Found a 582 move 3x3 game...

Again by random sampling?

Are there certain moves(*) that bring games to an end earlier, or
certain moves(*) that make games go on longer? Would weighting them
appropriately in your random playouts help?

*: or move sequences. E.g. playing a stone in atari that the opponent
then does not capture. (No idea if that makes a game longer or shorter,
just meaning it as an example.)

Darren


Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-02-01 Thread Darren Cook
> someone cracked Go right before that started. Then I'd have plenty of
> time to pick a new research topic." It looks like AlphaGo has 
> provided.

It seems [1] the smart money might be on Lee Sedol:

1. Ke Jie (world champ) – limited strength…but still amazing… Less than
5% chance against Lee Sedol now. But as it can go stronger, who knows
its future…
2. Mi Yuting (world champ) – appears to be a ‘chong-duan-shao-nian (kids
on the path to pros)’, ~high-level amateur.
3. Li Jie (former national team player) – appears to be pro-level. One
of the games is almost perfect (for AlphaGo).


On the other hand, AlphaGo got its jump in level very quickly [2], so it
is hard to know if they just got lucky (i.e. with ideas working
first time) or if there is still some significant tweaking possible in
these 5 months of extra development (October 2015 to March 2016).

Have the informal game SGFs been uploaded anywhere? I noticed (Extended
Data Table 1) they were played *after* the official game each day, so
the poor pro should have been tired, but instead he won 2 of the 5 (day
1 and day 5). Was this just due to the short time limits, or did Fan Hui
play a different style (e.g. more aggressively)?


Darren



[1]: Comment by xli199 at
http://gooften.net/2016/01/28/the-future-is-here-a-professional-level-go-ai/

[2]: When did DeepMind start working on go? I suspect it might only
have been after the video games project started to wind down,
which would've been Feb 2015? If so, that is only 6-8 months (albeit with a
fairly large team).

Re: [Computer-go] Replicating AlphaGo results

2016-01-28 Thread Darren Cook
>   I'd propose these as the major technical points to consider when
> bringing a Go program (or a new one) to an Alpha-Go analog:
> ...
>   * Are RL Policy Networks essential?  ...

Figure 4b was really interesting (see also Extended Tables 7 and 9): any
2 of their 3 components, on a single machine, are stronger than Crazy
Stone and Zen. And the value of the missing component:

   Policy Network: +813 elo
         Rollouts: +713 elo
    Value Network: +474 elo

Darren


Re: [Computer-go] Game Over

2016-01-28 Thread Darren Cook
> If you want to view them in the browser, I've also put them on my blog:
> http://www.furidamu.org/blog/2016/01/26/mastering-the-game-of-go-with-deep-neural-networks-and-tree-search/
> (scroll down)

Thanks. Has anyone (strong) made commented versions yet? I played
through the first game, but it just looks like a game between two
players much stronger than me :-)

(Ingo, are you analyzing them with e.g. CrazyStone? Is there a
particular point where it adjusts who it thinks is winning?)

Darren


Re: [Computer-go] Game Over

2016-01-28 Thread Darren Cook
> here a comment by Antti Törmänen
> http://gooften.net/2016/01/28/the-future-is-here-a-professional-level-go-ai/

Thanks, exactly what I was looking for. He points out black 85 and 95
might be mistakes, but didn't point out any dubious white (computer)
moves. He picks out a couple of white moves as particularly good, e.g.
108, which is also an empty triangle: obviously AlphaGo isn't being held
back by any "good shape" heuristics ;-)

I hope he comments the other four games!

Darren



Re: [Computer-go] Game Over

2016-01-27 Thread Darren Cook
> Google beats Fan Hui, 2 dan pro, 5-0 (19x19, no handicap)!
> ...
> I read the paper...

Is it available online anywhere, or only in Nature?

I just watched the video, which was very professionally done, but didn't
come with the SGFs, information on time limits, number of CPUs, etc.
Aja, David - surely the NDAs no longer apply, and you can now tell us
all the details?

Darren

P.S. Curiously the BBC ran an article today on how Facebook is getting
close to top pro level too: http://www.bbc.co.uk/news/technology-35419141


Re: [Computer-go] Strong engine that maximizes score

2015-11-17 Thread Darren Cook
> I am trying to create a database of games to do some machine-learning
> experiments. My requirements are:
>  * that all games be played by the same strong engine on both sides,
>  * that all games be played to the bitter end (so everything on the board
> is alive at the end), and
>  * that both sides play trying to maximize score, not winning probability.

GnuGo might fit the bill, for some definition of strong. Or Many Faces,
on the level that does not use MCTS.

Sticking with MCTS, you'd have to use komi adjustments: first find two
extreme values that give each side a win, then use a binary-search-like
algorithm to narrow it down until you find the correct value for komi
for that position. This will take approx 10 times longer than normal
MCTS, for the same strength level.

(I'm not sure if this is what Pachi is doing?)
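
The bisection could be sketched as below (my own illustration;
`win_prob(komi)` is a stand-in for launching an MCTS search from the
position at that komi, and is assumed to be weakly decreasing in komi):

```python
import math

def estimate_score(win_prob, lo=-361.0, hi=361.0, tol=1.0):
    """Bisect on komi until Black's win probability crosses 50%.
    The crossing point is the MCTS program's estimate of the score."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if win_prob(mid) > 0.5:
            lo = mid   # Black still winning: the fair komi is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Toy stand-in: pretend Black is 31 points ahead, with a soft
# transition in winrate around the fair komi.
toy = lambda komi: 1.0 / (1.0 + math.exp(komi - 31.0))
print(round(estimate_score(toy)))  # -> 31
```

With the default bracket of [-361, 361] and a 1-point tolerance this is
10 searches, which matches the "approx 10 times longer" figure above.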

Darren


Re: [Computer-go] Strong engine that maximizes score

2015-11-17 Thread Darren Cook
> Attempting to maximize the score is not compatible with being a
> strong engine.  If you want a dan level engine it is maximizing
> win-probability.

If you narrow it down such that komi 25.5, 27.5, and 29.5 give a black
win with 63% to 67% probability, but komi 31.5 jumps to black only
winning 46%, and komi 33.5 gives black only 43%, then I think we can
score the position as black is ahead by 31pts.

If, on this number of playouts, the MCTS program is 3d, then I think it
is reasonable to say this is a 3-dan estimate of the score.  (Of course,
we used perhaps 10 times the number of playouts to turn a 3-dan player
into a 3-dan score-estimator.)

Darren

>> Sticking with MCTS, you'd have to use komi adjustments: first find
>> two extreme values that give each side a win, then use a
>> binary-search-like algorithm to narrow it down until you find the
>> correct value for komi for that position. This will take approx 10
>> times longer than normal MCTS, for the same strength level.
>> 
>> (I'm not sure if this is what Pachi is doing?)

Re: [Computer-go] Standard Computer Go Datasets - Proposal

2015-11-13 Thread Darren Cook
> standard public fixed dataset of Go games, mainly to ease comparison of
> different methods, to make results more reproducible and maybe free the
> authors of the burden of composing a dataset. 

Maybe the first question should be is if people want a database of
*positions* or *games*.

I imagine a position database to be a set of board descriptions, with
each pro move marked on it. Ideally each move would say not just the
number of times it was chosen, but break it down by rank of player.

Each would have a zobrist hash calculated, in all 8 symmetries, and
the lowest chosen. This handles rotations and duplicates. If there is
a ko-illegal point on the board, that needs to be stored too, and also be
part of the zobrist hash.
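
That canonicalisation could be sketched as follows (illustrative only:
the key table, the 0/1/2 board encoding, and the omission of the
ko point are my own assumptions):

```python
import numpy as np

N = 19
rng = np.random.default_rng(42)
# One random 63-bit key per (point, colour); colour 0 = empty is never XORed in.
ZOBRIST = rng.integers(1, 2**63, size=(N, N, 3), dtype=np.int64)

def zobrist(board):
    """board: N x N ints (0 empty, 1 black, 2 white)."""
    h = np.int64(0)
    for i in range(N):
        for j in range(N):
            c = board[i, j]
            if c:
                h ^= ZOBRIST[i, j, c]
    return int(h)

def canonical_hash(board):
    """Minimum hash over the 8 rotations/reflections, so positions that
    differ only by symmetry collapse to a single key."""
    variants = []
    b = board
    for _ in range(4):
        variants.append(b)
        variants.append(np.fliplr(b))
        b = np.rot90(b)
    return min(zobrist(v) for v in variants)

b = np.zeros((N, N), dtype=int)
b[3, 15] = 1                      # a lone black stone
assert canonical_hash(b) == canonical_hash(np.rot90(b))
assert canonical_hash(b) == canonical_hash(np.fliplr(b))
```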


A database of positions has some advantages:
  * No licensing issues (*)
  * Rotational duplicates already removed
  * Ready-to-go with the information (most) programs want to learn.


The advantages of storing games:
  * accountability/traceability
  * for programs who want to learn sequences of moves.

Darren


*: At least that was my conclusion when I looked into this before. Game
collections can be copyrighted; moves cannot. A database of moves can be
freely distributed, even it was generated from copyrighted game
collections, as long as there exists no way to regenerate the game
collection from it.

Text corpora (used in machine translation studies, for instance) follow
the same idea: if you split the corpora into sentences, then shuffle
them up randomly, you can distribute the set of sentences.

(I did wonder about storing player ranks, e.g. if a given position has a
move chosen by only a single 9p, and you can then extract each follow-up
position, you could extract a game. But, IMHO, you cannot regenerate any
particular game collection this way. If it is a concern, it can be
solved by only using a random 80% of moves from games.)


Re: [Computer-go] Frisbee Go Simulation

2015-11-12 Thread Darren Cook
> If one or two of these cells are outside the board the
> move will count as a pass. If the landing cell is occupied by another
> stone the move is also counted as a pass. Illegal moves are also counted
> as pass moves. 

Alternatively, the probability could be adjusted for the number of legal
moves.  (E.g. taking the easy example of (1,1) on an empty board, and eps
of 0.2, you'd adjust (1,1), (2,1) and (1,2) to each be 1/3 probability).

This does away with the involuntary pass concept. (But if you keep it, I
agree with John Tromp that it is just a wasted move, not able to cause
early game termination.)
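
A sketch of that renormalisation (my own illustration of the proposal;
the weighting model, aim point plus four neighbours at eps each, follows
the (1,1) example above):

```python
def landing_distribution(aim, eps, size, occupied=frozenset()):
    """Frisbee-go landing distribution when aiming at `aim` on a
    size x size board: the aim point and its 4 neighbours get the usual
    weights, illegal landing cells are dropped, and the remainder is
    renormalised (instead of counting illegal landings as passes)."""
    x, y = aim
    cells = {(x, y): 1.0 - 4 * eps}
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        cells[(x + dx, y + dy)] = eps
    legal = {c: w for c, w in cells.items()
             if 0 <= c[0] < size and 0 <= c[1] < size and c not in occupied}
    total = sum(legal.values())
    return {c: w / total for c, w in legal.items()}

# Aiming at the corner (1,1), i.e. 0-indexed (0,0), with eps = 0.2:
# two neighbours are off the board, so the three legal cells get 1/3 each.
d = landing_distribution((0, 0), 0.2, 19)
assert len(d) == 3
assert all(abs(w - 1/3) < 1e-12 for w in d.values())
```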

Darren


Re: [Computer-go] Komi 6.5/7.5

2015-11-05 Thread Darren Cook
> Of course, that's anecdata...anyone is welcome to prove or disprove this
> old claim by analyzing the stats on KGS, or Tygem or wherever else.

Don't forget the distortion due to people knowing the komi, and playing
to win, rather than playing to maximize their score.

Darren

Re: [Computer-go] Fast pick from a probability list

2015-10-07 Thread Darren Cook
> I have a probability table of all possible moves. What is the
> fastest way to pick with probability, possibly with reducing the
> quality of probability?!
> 
> I could not find any discussion on this on computer-go, but probably
> I missed it :(

I may have misunderstood the question, but there was some discussion on
how to pick from moves which have different weights. ... the one I found
is from May 2009, and it seems the online archives only go back to 2010!
So I've pasted together the thread, below.

Darren


Question by Isaac:

> I'm about to work on heavy playouts, and I'm not sure how to choose a
> move during the playout. I intend to have weights for various
> features. I thought about 3 versions:
> 
> 1. In a position, calculate all the weights and the total weight.
> Then, play one move i with the probability weight_i/total_weight.
> 
> 2. Select a move randomly. Calculate the weight of it, then squash
> that weight in the [0,1] range. Play that move with that
> "probability".
> 
> 3. Same as 2., but play that move if the "probability" is higher than
> a certain treshold.
> 
> Which one do you think works best? I'm looking forward to other
> ideas, too. :)

Álvaro replies:


You have the most control with option 1. You can implement this fast
by keeping the sum of the weights for each row and for the total
board. You then "roll" a number between 0 and total_weight, and
advance through the rows subtracting the probability of each row until
you would cross 0, then go along the row subtracting the probability
of each point, until you would cross zero. Pick the point where the
process ends.

I initially implemented a similar scheme using a binary tree, and I
think it was Rémi who told me about this method, which is simpler and
faster in practice.

You may have problems with floating-point precision doing this. The
easy solution is using integers for weights, but perhaps there are
ways to make the code robust while keeping the more natural
floating-point values.


Bill Spight also replied:

Keeping cumulative weights, as Alvaro suggested, is one way to go. You
can improve #1 by choosing a possible play randomly, and then making the
play with the probability weight/maximum_weight.
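
Both suggestions can be sketched as below (my own illustration, using
integer weights to sidestep the floating-point precision issue Álvaro
mentions):

```python
import random

def pick_cumulative(weights, rng=random):
    """Option 1: roll in [0, total) and walk the list, subtracting
    each weight until the roll would cross zero."""
    total = sum(weights)
    roll = rng.randrange(total)
    for i, w in enumerate(weights):
        if roll < w:
            return i
        roll -= w

def pick_rejection(weights, rng=random):
    """Bill Spight's variant: pick a move uniformly, accept it with
    probability weight / max_weight, and retry on rejection."""
    max_w = max(weights)
    while True:
        i = rng.randrange(len(weights))
        if rng.randrange(max_w) < weights[i]:
            return i

weights = [1, 5, 10, 4]
random.seed(7)
counts = [0] * 4
for _ in range(20000):
    counts[pick_cumulative(weights)] += 1
# Expected proportions are 1/20, 5/20, 10/20, 4/20 of the 20000 draws.
```

Keeping per-row weight sums, as described above, turns the walk into at
most 19 row subtractions plus one walk along a single row.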


Re: [Computer-go] KGS access problem

2015-09-10 Thread Darren Cook
> I have problems to access the KGS server. My Firefox 40.0.3
> (under Windows 8.1) is even not allowing me to visit the website
> www.gokgs.com.
> Argument: "Diffie-Hellman key is too weak"

Here is how to have Firefox not be so fussy:


http://letusexplain.blogspot.co.uk/2015/08/solved-server-has-weak-ephemeral-diffie.html

There seems to be no workaround for Chrome, so Chrome users will still
provide the consumer pressure on server operators to install a more
secure key.

Here is background of what it is defending against:

http://arstechnica.com/security/2015/05/https-crippling-attack-threatens-tens-of-thousands-of-web-and-mail-servers/

I.e. my understanding is that it allows a hacker to use a
man-in-the-middle attack, so effectively https with a 512-bit key is as
secure as http... but only if you believe someone is actively trying to
eavesdrop on your browser session. In the case of KGS, it could be that
the NSA is trying to sniff out people using the Chinese Opening, as a
way to build up a list of potential commie activists, so let's hope the
SSL certificate is fixed soon...

Darren


Re: [Computer-go] re comments on Life and Death

2015-09-04 Thread Darren Cook
> Robert, David Fotland has...
> I find your critique a little painful. 

I don't think Robert was critiquing - he was asking for David's
definition of group strength and connection strength.

> the "stupid" monte carlo works so much better.

Does it? I thought "stupid" monte carlo (i.e. light playouts MCTS) hits
a scalability wall at around 5kyu.

BTW, David, if you choose the CPU to suit it, can the traditional Many
Faces beat the best MCTS programs?


I find it interesting that monte-carlo needed a jump in CPU speed to
reach the point where we could see its usefulness. I wonder if there are
more traditional (*) techniques that got overlooked or abandoned 10
years ago because they were just too slow, which might now start to
become reasonable.

*: By "traditional" I guess I mean closer to the way humans approach the
game of go, thinking in terms of eyes, groups, connections, influence, etc.

Darren


Re: [Computer-go] Mental Imagery in Go - playlist

2015-08-05 Thread Darren Cook
 However, i have to admit that in 1979 i was a false prophet when i claimed
 the brute-force approach is a no-hoper for Go, even if computers become a
 hundred times more powerful than they are now ...

I think you are okay: at the point where computers were 100 times
quicker than in 1979, monte-carlo was still too slow for anyone to
realize its potential.

(Fastest supercomputer now is 33.86 petaflop/s, which is approx 10^8 to
10^9 quicker than the fastest in 1979! I think it is about the same
ratio for a typical desktop.) :-)

Darren


Re: [Computer-go] Mental Imagery in Go - playlist

2015-08-05 Thread Darren Cook
 I think you are right, though.  In my opinion, calling MCTS brute 
 force isn't really fair, the brute force portion really doesn't
 work and you need to add a lot of smarts both to the simulations and
 to the way you pick situations to simulate to make things work.

In chess, basic min-max, with an evaluation function that is just the
point values for pieces I learnt as a lad (9 for queen, 5 for rook, 3
for knight/bishop, 1 for pawn) would never have beaten Kasparov.
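For concreteness, that evaluation function is just a raw material count, something like this toy sketch (FEN board strings used purely for illustration):

```python
# Classic piece values; the king is priceless, so it scores 0 here.
PIECE_VALUES = {"q": 9, "r": 5, "b": 3, "n": 3, "p": 1, "k": 0}

def material(fen_board):
    """Material balance (White minus Black) of a FEN board field."""
    score = 0
    for ch in fen_board:
        v = PIECE_VALUES.get(ch.lower())
        if v is not None:
            score += v if ch.isupper() else -v
    return score

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR"
print(material(start))                    # 0: the starting position is level
print(material(start.replace("q", "1")))  # 9: White is up a queen
```

Min-max over an evaluation this crude sees nothing positional at all, which is the point of the question below.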

(Or could it? I've not followed computer chess closely enough to be
sure, but I did hear that Deep Blue was fairly sophisticated software,
not just a lot of hardware.)

Darren

P.S. Isn't brute force the term used to mean that you can see
measurable improvements in playing strength just by doubling the CPU
speed (and/or memory, or some other hardware constraint)? Alpha-beta with
all the trimmings, and MCTS with a good pattern library, both seem to qualify.



Re: [Computer-go] CGT endgame solver

2015-07-13 Thread Darren Cook
 performance at endgames is worse than middle, because IMHO MC 
 simulations don't evaluate the values (due to execution speed) of 
 yose-moves and play such moves in random orders.  Assuming there are 7, 
 3, 1 pts moves left at a end position, for example.

(Sorry for two messages). I just thought, don't pattern libraries
capture this? (I may be wrong on this, but I thought all the strong
programs were using patterns to influence search.) The yose patterns
come up multiple times in every game, so shouldn't stopping a monkey
jump get searched before a hane-connect move on the 1st line?

Darren


Re: [Computer-go] CGT endgame solver

2015-07-13 Thread Darren Cook
 yose-moves and play such moves in random orders.  Assuming there are 7, 
 3, 1 pts moves left at a end position, for example.  Correct order is 
 obviously 7, 3 and 1 (sente gets +5 pts) but all combinations are played 
 at the same probability in MC simulations now.  The average of the 
 scores has, thus, a big error.  

Does the MCTS program just play bad moves, or does it ever play
game-losing moves? (E.g. at the point in the game where 2pt gote is the
biggest move, which is still before CGT kicks in, isn't it?)
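Hiroshi's 7/3/1 example is easy to quantify with a toy model (my own sketch, not any engine's actual playout code): best play takes the biggest move first, while uniformly random playouts average over all six orderings.

```python
import itertools

def alternating_result(values):
    """Net score for the player to move when both sides take the
    biggest remaining endgame play (all gote, no ko)."""
    return sum(v if i % 2 == 0 else -v
               for i, v in enumerate(sorted(values, reverse=True)))

def random_order_average(values):
    """Average net score when every move order is equally likely,
    as in uniformly random playouts."""
    perms = list(itertools.permutations(values))
    total = sum(sum(v if i % 2 == 0 else -v for i, v in enumerate(p))
                for p in perms)
    return total / len(perms)

print(alternating_result([7, 3, 1]))    # 7 - 3 + 1 = 5, the correct result
print(random_order_average([7, 3, 1]))  # ~3.67, so the estimate is biased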

(I'm quite curious how many playouts are needed for correct play on
http://senseis.xmp.net/diagrams/44/bc3b65c106c9db92bec3a13d0e713718.sgf
- or do all the strong programs get it wrong and lose the game?)

Darren


Re: [Computer-go] CGT endgame solver

2015-07-13 Thread Darren Cook
 I imagine it would be fairly easy to swap from MCTS to a CGT solver once it
 could be applied.. Or is this not interesting for some reason?

It only becomes usable once the game is pretty much decided. (Though
you can construct artificial positions where the correct move it gives
you is non-obvious (*).) IIRC Martin's papers tried to extend it back a bit
earlier in the game, and that is the state of the art as far as I know.

In contrast MCTS, while theoretically only giving you a rough estimate,
is in fact giving you a Good Enough estimate of the score by the time
CGT could be used.

Darren

*: http://senseis.xmp.net/?MathematicalGo

I guess you've already read Chilling Gets the Last Point?  There is
also a lot on Sensei's Library:  http://senseis.xmp.net/?CGTPath

I spent some time trying to extend the ideas, when first learning of
CGT, and got nothing worth reporting. They did inspire some of the work
I did on life and death analysis; it was intellectually stimulating, but
I kept coming back to the same core issue: you need to do search, to
handle the kind of situations that come up in real games.



Re: [Computer-go] Using GPUs?

2015-06-26 Thread Darren Cook
Steven wrote:
 http://arxiv.org/abs/1412.6564 (nvidia gtx titan black)
 http://arxiv.org/abs/1412.3409 (nvidia gtx 780)

Thanks - I had read those papers but hadn't realized the neural nets
were run on GPUs.

Nikos wrote:
 https://timdettmers.wordpress.com/2015/03/09/deep-learning-hardware-guide/

This was very useful, thanks!

 As for hardware breakthroughs, Nvidia has announced that its next
 generation GPUs (codenamed Pascal) will offer 10x the performance in 2016,
 so you might want to wait a little more.

One of the comments, on the above blog, questions that 10x speed-up:

https://timdettmers.wordpress.com/2015/03/09/deep-learning-hardware-guide/comment-page-1/#comment-336

Confirmed here:
  http://blogs.nvidia.com/blog/2015/03/17/pascal/

So, currently they use a 32-bit float, rather than a 64-bit double, but
will reduce that to 16-bit to get a double speed-up. Assuming they've
been listening to customers properly, that must mean 16-bit floats are
good enough for neural nets?
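Python's struct module can round-trip a value through IEEE 754 half precision (format 'e'), which gives a feel for how much accuracy fp16 keeps. This is a sketch of the number format only; it says nothing about Nvidia's actual hardware:

```python
import struct

def to_fp16(x):
    """Round a Python float (fp64) to the nearest IEEE 754 half-precision value."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

w = 0.123456789             # a typical small weight
w16 = to_fp16(w)
rel_err = abs(w16 - w) / abs(w)
print(w16, rel_err)         # relative rounding error is bounded by 2**-11 ~ 5e-4
```

With only 11 significant bits the worst-case relative rounding error is about 5e-4, which is apparently tolerable for net weights and activations; the narrow exponent range (roughly 6e-5 to 65504) is usually the bigger worry.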

Darren


Re: [Computer-go] Using GPUs?

2015-06-26 Thread Darren Cook
 It is not exactly Go, but i have a monte-carlo tree searcher on the GPU for
 the game of Hex 8x8
 Here is a github link https://github.com/dshawul/GpuHex

The engine looks to be just the middle 450 lines of code; quite compact!
So running playouts on a GPU worked out well?

Would doing the same thing for go be just a matter of writing more lines
of code, or needing more memory on the GPU, or is there some more
fundamental difference between hex and go that makes the latter less
suitable? (e.g. in hex pieces are only added to the board, whereas in go
they can be removed and loops can happen - does that make GPU-ing
algorithms harder?)

Darren


[Computer-go] Using GPUs?

2015-06-25 Thread Darren Cook
I wondered if any of the current go programs are using GPUs.

If yes, what is good to look for in a GPU? Links to essential reading on
this topic would be welcome. (*)

If not, is there some hardware breakthrough being waited for, or some
algorithmic one?

Darren

*: After many years of being happy with built-in graphics, I'm now
thinking to get a gaming PC, to show off some WebGL data
visualizations. Assuming the cost is in the same ballpark, I thought I'd
get one that would allow some scientific computing experiments too.

[Computer-go] OT (maybe): Arimaa bot notably stronger

2015-04-22 Thread Darren Cook
The Slashdot article was low on info, but an Arimaa program, Sharp,
apparently beat the humans to win the $12000 prize described here:
  http://arimaa.com/arimaa/challenge/2015/

There is some description of what it did to improve here:

http://arimaa.com/arimaa/forum/cgi/YaBB.cgi?board=devTalk;action=display;num=1429402345;start=1#1

To my untrained eye it looks like they are all game-specific, rather
than something we could steal from to use in other games and other
domains :-)

Darren

-- 
Darren Cook, Software Researcher/Developer
My new book: Data Push Apps with HTML5 SSE
Published by O'Reilly: (ask me for a discount code!)
  http://shop.oreilly.com/product/0636920030928.do
Also on Amazon and at all good booksellers!

Re: [Computer-go] CGOS future

2015-04-03 Thread Darren Cook
 BTW I am a Linux guy true and true since 1994.  But I am DAMN tempted
 to write it in C#.

I use mono on linux [1], and c# is an OK language for this kind of
thing. RestSharp is an interesting library for web service *clients*,
but of course you are writing a server.

Lots of C++ programmers on this list, and C++11 is now quite a safe and
productive language. But socket support sucks. Asio is the best choice,
and it is not at all easy to use or debug; I feel I might just go for
low-level BSD sockets next time. At least then there is a huge body of
working code.

If it was me, personally, I'd hack together a first version in PHP,
using mysql as a DB. You would not believe how many
hacked-together-first-versions in PHP are still running 5 or 10 years
later, because it is good enough.

But node.js might be a better choice. It handles multiple
connections/threads better than PHP, it has ready to go web server
modules and tons of example code, it is fashionable, and there are lots
of jobs for people who can show they've worked on a node.js
server/client project.

 My goal is to move away from interpreted languages and release SOLID
 .exe or bin for unices.

Are you talking about servers or clients there?

Always go for a well-known interpreted language for a server-side
application, unless you are CPU or memory bound. (And if you are, you've
done something very wrong: Facebook reached half a billion active
users running a PHP back-end; Wikipedia too...)

And use a clearly defined HTTP web-service protocol, probably built
around JSON, and then the clients will take care of themselves: all
modern languages, except C/C++, come with a good web client library and
a good JSON library. Just hand out a helper C library, perhaps with a
C++ wrapper API.

Darren

[1]: With MonoDevelop, which is actually rather good, but which has a
few minor irritating bugs that it seems will never get fixed.


Re: [Computer-go] CGOS future

2015-04-03 Thread Darren Cook
 I disagree with that. Why does it suck?

(Getting a bit OT for computer-go, so I replied off-list; if anyone was
following the conversation, and wants to be CC-ed let me know.)

Darren


Re: [Computer-go] fast + good RNG

2015-03-29 Thread Darren Cook
 I measured that std::mt19937_64 (the mersenne twister from the standard
 c++ libraries) uses about 40% cpu time during playouts.
 
 So I wonder: is there a faster prng while still generating good enough
 random?

This question suggests some:
  http://stackoverflow.com/q/1640258/841830

This question is not that useful, but contains links to more information
on alternative random algorithms, and a link to a video presentation:
  http://stackoverflow.com/q/19665818/841830

Random number generation in multi-threaded programs is an interesting
topic, and worth being aware of. (E.g. you don't want to use 16 threads,
only to find all 16 generate the same random sequence, so each thread
generates the same playout as all the other threads.)
(See the example of using thread_local in
http://codereview.stackexchange.com/a/84112 )

40% sounds high. Are you re-initializing the generator each time you
need a new random number?

Darren




Re: [Computer-go] Fwd: Teaching Deep Convolutional Neural Networks to Play Go

2015-03-16 Thread Darren Cook
 To be honest, what I really want is for it to self-learn,...

I wonder if even the world's most powerful AI (i.e. the human brain)
could self-learn go to, say, strong dan level? I.e. Give a boy genius a
go board, the rules, and two years, but don't give him any books, hints,
or the chance to play against anyone who has had access to books/teaching.

Darren


[Computer-go] Fwd: Announcement Call for Papers ACG 2015 -- deadline 1 March 2015

2015-01-27 Thread Darren Cook
(I didn't see it, but my apologies if someone already posted this.)


 Forwarded Message 
Subject: Announcement Call for Papers ACG 2015 -- deadline 1 March 2015
Date: Tue, 23 Dec 2014 16:40:17 +0100

Dear all,

With great pleasure we announce the 14th International Conference Advances
in Computer Games 2015, to be held early July 2015, at the University in
Leiden, the Netherlands. (Exact date in early July to be confirmed shortly.)

Papers to the conference will undergo a full peer review process, and
Proceedings are expected to be published in the Springer LNCS Lecture Notes
of Computer Science series.

The ACG2015 conference will be organized, as usual, as part of the ICGA
events, the World Computer Chess Championships and the Computer Olympiad.

Important dates:

Paper submission deadline: 1 March 2015
Notification of acceptance: 25 March 2015
Camera ready papers due: 1 May 2015

Conference website: http://acg2015.wordpress.com
Submissions website: https://www.conftool.net/acg2015  (to be operational
shortly)

Please find attached the Call for Papers including the Program Committee
and further details.

Looking forward to welcoming you in Leiden,
Aske Plaat
Jaap van den Herik
Walter Kosters

Leiden University
Aske

Aske Plaat  -  +31.6.46467007  -  aske.pl...@gmail.com  -  http://plaat.nl






Re: [Computer-go] Move Evaluation in Go Using Deep Convolutional NeuralNetworks

2015-01-11 Thread Darren Cook
 Is KGS rank set 9 dan when it plays against Fuego?

Aja replied:
 Yes.

I'm wondering if I've misunderstood, but does this mean it is the same
as just training your CNN on the 9-dan games, and ignoring all the 8-dan
and weaker games? (Surely the benefit of seeing more positions outweighs
the relatively minor difference in pro player strength??)

Darren

P.S.

 I did answer Hiroshi's questions.
 
 http://computer-go.org/pipermail/computer-go/2014-December/007063.html

Thanks Aja! It seems you wrote three in a row, and I only got the first
one. I did a side-by-side check from Dec 15 to Dec 31, and I got every
other message. So perhaps it was just a problem on my side, for those
two messages.


Re: [Computer-go] Evaluation function through Deep Convolutional Neural Networks

2015-01-09 Thread Darren Cook
 The discussion on move evaluation via CNNs got me wondering: has anyone
 tried to make an evaluation function with CNNs ?

My first thought was that a human can find good moves with a glance at a
board position, but even the best pros need to both count and use search
to work out the score. So NNs are good for move candidate generation, and
MCTS good for scoring?

Darren



Re: [Computer-go] Move Evaluation in Go Using Deep Convolutional NeuralNetworks

2015-01-09 Thread Darren Cook
Aja wrote:
 I hope you enjoy our work. Comments and questions are welcome.

I've just been catching up on the last few weeks, and its papers. Very
interesting :-)

I think Hiroshi's questions got missed?

Hiroshi Yamashita asked on 2014-12-20:
 I have three questions.
 
 I don't understand minibatch. Does CNN need 0.15sec for a position, or
 0.15sec for 128 positions?

I also wasn't sure what minibatch meant. Why not just say batch?

 Is KGS rank set 9 dan when it plays against Fuego?

For me, the improvement from just using a subset of the training data
was one of the most surprising results.

Darren



Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2015-01-09 Thread Darren Cook
On 2014-12-19 15:25, Hiroshi Yamashita wrote:
 Ko fight is weak. Ko threat is simply a good pattern move.

I suppose you could train on a subset of data: only positions where
there was a ko-illegal move on the board. Then you could learn ko
threats. And then use this alternative NN when meeting a ko-illegal
position in a game.

But, I imagine this is more fuss than it is worth; the NN will be
integrated into MCTS search, and I think the strong programs already
have ways to generate ko threat candidates.

Darren

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Darren Cook
 When I had an opportunity to talk to Yann LeCun about a month ago, I asked
 him if anybody had used convolutional neural networks to play go and he
 wasn't aware of any efforts in that direction. 

There was work using neural networks in the mid 1990s, when I first
started with computer go. I think the problem, at that time, came down
to this: with just a few features the quality was terrible, but with
more interesting inputs the training times increased exponentially,
so much so that it became utterly impractical.

I suppose this might be another idea, like monte carlo, that just needed
enough computing power for it to become practical for go; it'll be
interesting to see how their attempts to scale it turn out. I've added
the paper in my Christmas Reading list :-)

Darren

 Teaching Deep Convolutional Neural Networks to Play Go
 http://arxiv.org/pdf/1412.3409v1.pdf

 Their move prediction got 91% winrate against GNU Go and 14%
 against Fuego in 19x19.





Re: [computer-go] Rank on servers at 9x9

2010-02-13 Thread Darren Cook
 The difference between ManyFaces blitz and slow is also a lovely gem of
 information. A quick check shows both accounts are playing a lot of
 
 I have a data about this.
 AyaMC4 and AyaMC play on same condition, but it is not for human.
  AyaMC4 is blitz.
 
 Human is about 1 rank weaker in blitz.

Hi Hiroshi,
That is interesting; can I just check I've not misunderstood? You are
saying that humans play about 1 rank (more precisely, 0.7 ranks) weaker
when playing blitz against a fixed strength opponent?

Going back to the Many Faces blitz (1.8d) and slow (0.6d) figures, that
suggests ManyFaces playing blitz against a human playing at slow time
levels would be 1.8d - 0.7 = 1.1d. I.e. ManyFaces *loses* half a rank by
thinking more :-)

That is unlikely to be the case, which implies people lose more strength
playing blitz than the Aya data suggests.
against a fast opponent tend to speed up and not use all their thinking
time?

Darren

P.S. Using your last year's data, the difference is 1.2 ranks, which
exactly matches the ManyFaces data, but still suggests ManyFaces gets no
benefit from thinking more.


 AyaMC4  1k (0.1k) 10sec/mov (8cores)  1minute  + 15sec byoyomi (x10)
 AyaMC   1k (0.8k) 10sec/mov (8cores) 10minutes + 30sec byoyomi (x5)
 AyaMC2  3k (2.6k) 10000 playouts 10minutes + 30sec byoyomi (x5)
 AyaBot2 4k (3.6k)  5000 playouts 10minutes + 30sec byoyomi (x5)
 AyaBot4 5k (4.6k)  5000 playouts 10minutes + 30sec byoyomi (x5)
 
 These are last year's data,
 
 AyaMC4  2k (1.4k) 10sec/mov (8cores)  1minute  + 15sec byoyomi (x10)
 AyaMC   3k (2.6k) 10sec/mov (8cores) 10minutes + 30sec byoyomi (x5)
 (about 160000 playouts/mov)


-- 
Darren Cook, Software Researcher/Developer

Specializing in intelligent search (in multiple languages), discovery
of context, aiding communication, and basically helping people find
and make good use of their data.

http://dcook.org/gobet/  (Shodan Go Bet - who will win?)
http://dcook.org/mlsn/ (Multilingual open source semantic network)
http://dcook.org/work/ (About me and my work)
http://dcook.org/blogs.html (My blogs and articles)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Rank on servers at 9x9

2010-02-12 Thread Darren Cook
Thanks for all the replies! It is much appreciated.

Hiroshi Yamashita wrote:
 I think we can guess something from CGOS rating.
 I made a table about CGOS and KGS bot rating.
 At least in 19x19, CGOS and KGS rank is similar.

That is a very interesting table. It is curious that Zen is only 2 ranks
higher on 9x9. However Aya is 4 ranks higher, and Fuego [1] is about 2 kyu
on KGS, so (if KGS and CGOS 19x19 correspond for it too) it is also 4
ranks higher.

The difference between ManyFaces blitz and slow is also a lovely gem of
information. A quick check shows both accounts are playing a lot of
games, so this should be statistically significant; the ranking charts
are hard to read, but it looks like 1 to 1.5 ranks difference. (Is there
a way on KGS to get the underlying rating number?)

Darren



 
 
 CGOS  KGS? CGOS 9x9  CGOS 19x19 KGS rank 19x19
 
 1800  6k   gnugo3.7.10   gnugo3.7.106k GnuGo  (postneo, etc)
 1900  5k
 2000  4k
 2100  3k Aya693c_1c 3k AyaMC2 (10k playouts)
 2200  2k   Aya693a_10k  2k pachi2
 2300  1k pachi-c919f1k ManyFaces  (30minutes)
 2400  1d mfgo12-610-2c  1d ManyFaces1 (10sec blitz)
 2500  2d   Aya693_1c Zen-4.9-1c 2d Zen19 Zen  (15sec/move)
 2600  3d   Fuego-1095-1c
 2700  4d   Zen-4.9-1c
 2800  5d   Zengg9-4x4c
 2900  6d
 
 Hiroshi Yamashita


[1]: http://www.gokgs.com/graphPage.jsp?user=Fuego




Re: [computer-go] Rank on servers at 9x9

2010-02-12 Thread Darren Cook
 The difference between ManyFaces blitz and slow is also a lovely gem of
 information.
 2300  1k pachi-c919f1k ManyFaces  (30minutes)
 2400  1d mfgo12-610-2c  1d ManyFaces1 (10sec blitz)
 
 My guess is that this is caused by huge amount of domain-knowledge from
 classical ManyFaces used to prune the tree initially - that gives big
 constant boost against MCTS bots with equal thinking time, but with
 additional time this does not contribute anything anymore and MFGo
 scales similarly to other MCTS bots.

The above stat was from KGS, not CGOS. So, my thought was: MCTS doesn't
scale with extra thinking time as well as humans do. Which conveniently
matches my hypothesis that MCTS doesn't use extra CPU cycles very
efficiently.

Though, as you say, those foolish humans foolishly managing their blitz
time does distort things a bit.

Darren




[computer-go] Rank on servers at 9x9

2010-02-11 Thread Darren Cook
Do any of the strongest MCTS programs have a rank at 9x9 on any major
server? I found the fuego9 account on KGS but it appears to be
unranked and only playing free games (*). The ManyFaces account
appears to play only 19x19.

I know the programs are stronger at 9x9 than 19x19, but I'm trying to
get a figure of just exactly how strong they are against humans of known
rank. Ideally in a statistically meaningful way (not just a short series
of games against one player), and on non-supercomputer hardware.

Thanks in advance,

Darren



Re: [computer-go] scalability analysis with pachi

2010-01-17 Thread Darren Cook
 Yes. And while worrying about what happens after a win rate of 97%
 sounds like splitting hairs, I think we're talking about an awkward
 way of measuring something that's of practical interest.

Yes. How can a program be strong enough to win 97% and yet not win 100%?
Over on the fuego list Martin Mueller discovered some interesting test
positions (e.g. the only winning move is playing inside your own Benson-safe
region, IIRC) by looking at games where fuego lost against weaker opponents.

An addition to CGOS that shows Biggest Upsets Of The Month, excluding
losses on time, would be an interesting way to build up a test suite. I
don't know if that is a trivial SQL query for Don, or something harder
though.

Darren


Re: [computer-go] GoChild Web2.0 is live on Google AppEngine

2009-12-29 Thread Darren Cook
 GoChild is designed for learning how to play Go efficiently.

Here is the English link for Michael :-)
  http://www.gochildgame.com/en/index.htm

I just tried it, and it works quite well. Some comments:

1. Firefox (on Linux) keeps complaining I have no plugin for
audio/x-wav. I think the MIME type could just be audio/wav. But mp3
files would be friendlier (and use less bandwidth).

2. It took me a while to realize I needed to choose a pack, by undoing
the tree, and *then* move over to the questions tab and choose a
question. (Clicking the board to move to the next puzzle was also not
intuitive; a big flashing NEXT button would be easier.)
Organizing them into graded puzzle sets would be preferable, with the
first set loaded by default.

3. The purpose of each puzzle is not clear, especially when the name of
the pack isn't on the screen any more. (It took me a while to realize
finding the tiger mouth means I have to put the black stones in atari.)

Darren


 
 Web App - http://gochild2009.appspot.com
 
 Official site - http://www.gochildgame.com
 
 Regards,
 gosharplite
 
 
 
 
 


-- 
Darren Cook, Software Researcher/Developer

Specializing in intelligent search (in multiple languages), discovery
of context, aiding communication, and basically helping people find
and make good use of their data.

http://dcook.org/gobet/  (Shodan Go Bet - who will win?)
http://dcook.org/mlsn/ (Multilingual open source semantic network)
http://dcook.org/work/ (About me and my work)
http://dcook.org/blogs.html (My blogs and articles)


Re: [computer-go] Gongo: Go in Go

2009-12-13 Thread Darren Cook
 Javabot is at about 6.5k but they're not really comparable anymore,
 because I added an array to keep track of the liberties for each
 point.

Do you mean you added the array to Gongo or to the java version? I.e. is
Gongo twice as quick as the java version because the java version is
doing more, or twice as quick even though it is also doing more?

Darren

 Thought I'd announce that I've ported the Java refbot to the Go
 language (with some modifications).

 I'm getting about 10,000 random playouts/second on 9x9 using a single
 thread on a 32-bit iMac, using the gc compiler, which doesn't do any
 optimization. I suspect that a board structure that tracked
 pseudo-liberties could do better.
 Source code:
 http://github.com/skybrian/Gongo

-- 
Darren Cook, Software Researcher/Developer
http://dcook.org/gobet/  (Shodan Go Bet - who will win?)
http://dcook.org/mlsn/ (Multilingual open source semantic network)
http://dcook.org/work/ (About me and my work)
http://dcook.org/blogs.html (My blogs and articles)


Re: [computer-go] Fuego parameter question

2009-12-06 Thread Darren Cook
 You also need to set max_nodes quite high or Fuego will keep stopping to
 clear out its tree. I'm setting it to max_games*50, so for 8000:
   uct_param_search max_nodes 400000

 According to my notes fuego uses 75M + (65M per million max_nodes). So
 15 million nodes will use about 1Gb.  (That is on 32-bit linux.)
 
 I miss something:
 max_games and max_nodes are correlated or not ?
 
 why do you chose max_nodes = max_games*50 ?
 is it boardsize dependent ?

My tests have all been at 9x9 so it probably is. When I used
max_games*32 it sometimes hit the limit and had to spend time clearing
out its tree.

 I guess that either max_games or max_nodes should be specified, but not
 both (the first reached is the limiting factor ?)

max_games is how many playouts it does. I.e. how strong it is.

max_nodes is how much memory you want to give it; if too low then it
will have to clear out some moves from its tree which wastes time (it
might reduce strength too).

BTW, looking at your config file, I think go_rules doesn't just set a
text string, but also does the same as the go_param_rules calls. Testing
from gogui, I tried each of go_rules japanese, chinese and kgs, and
gogui reports all the go_param_rules have changed (including the superko rule).

(Also I think someone reported on the fuego list that uct_param_search
number_playouts 2 stopped giving any advantage after some bug fixes??)

Darren



Re: [computer-go] Fuego parameter question

2009-12-05 Thread Darren Cook
 uct_param_player ignore_clock 1
 uct_param_player max_games 8000
 uct_param_search number_threads 1
 uct_command_player ponder 0

I learnt the other day that ignore_clock only ignores the game time
settings, but there is still a default 10 seconds per move. To get rid
of that add:
  go_param timelimit 99

You also need to set max_nodes quite high or Fuego will keep stopping to
clear out its tree. I'm setting it to max_games*50, so for 8000:
  uct_param_search max_nodes 400000

According to my notes fuego uses 75M + (65M per million max_nodes). So
15 million nodes will use about 1GB.  (That is on 32-bit linux.)

> I expected to win 50 to 60% of the games, but won 88% of 1300 games. There
> were several games that Fuego lost due to a superko violation.
>
> Am I missing a parameter to set the rules to Chinese with superko?

go_rules chinese

(I think that is Tromp & Taylor, but I'm surprised superko is not part of it.)

You can also use go_param_rules to set each rule aspect separately.
(BTW, I find using gogui is the best way to understand the fuego settings.)

> Am I missing a parameter to give the strength I've seen in KGS tournaments?
> Perhaps Many Faces is relatively stronger with few playouts due to its
> knowledge and Fuego will do better with more playouts?

Can you compare the total CPU time spent on a game by each of Fuego and
Many Faces and if Fuego is using less then increase max_games accordingly?
Or, given that you just want a strong opponent for regression testing,
forget cpu time and simply keep doubling max_games until you reach 50% :-)
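
That doubling loop might look like the following sketch, where measure_win_rate is a hypothetical stand-in for actually playing a batch of games at a given max_games setting:

```python
def find_balanced_max_games(measure_win_rate, start=1000, target=0.5,
                            limit=1_000_000):
    """Keep doubling max_games until the opponent's win rate against the
    program drops to the target (or the playout budget limit is hit).

    measure_win_rate(max_games) is a hypothetical callback that plays a
    batch of games and returns the opponent's win rate.
    """
    max_games = start
    while max_games < limit:
        if measure_win_rate(max_games) <= target:
            return max_games
        max_games *= 2
    return limit

# Toy stand-in: pretend the opponent's win rate halves with each doubling.
toy_rate = lambda g: 4000.0 / g
balanced = find_balanced_max_games(toy_rate)
```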

To answer your question I also have these in my config, which I got from
[1].
 uct_param_search lock_free 1
 uct_param_search virtual_loss 1

The first makes it stronger when using multiple threads. I'm not sure
what the second is doing...

Darren

[1]: http://www.cs.ualberta.ca/TechReports/2009/TR09-09/TR09-09.pdf



Re: [computer-go] KCC won the 3rd UEC Cup

2009-12-05 Thread Darren Cook
> Rank  Name       Author       Score*
> ...
> 11    Katsunari  Shinich Sei  2
>
> Katsunari used alpha-beta search and the others used MCTS.

Katsunari came second at the UEC cup, but last at GPW Cup. I know there
was some grumbling over the seeding at the UEC Cup but that is still
quite a difference. Did it have technical problems at Hakone?

Darren



Re: [computer-go] Re: Hahn system tournament and MC bots

2009-11-25 Thread Darren Cook
>> This is taken into account in the tree.
>> If playing one move leads 10% of the time to +10, and 90% to -20,
>> the resulting value is -17
>> (of course with the bot's evaluation/playouts)
>
> Reducing the value to -17 is losing a lot of information. Another move
> might have a 20% chance of +10 and an 80% chance of -24, also giving
> about -17; are they really just as good?
> ...
> To put this another way, I think it would be a step in the right
> direction to be able to handle the uncertainties of the values in the
> tree. Maybe some already do that?

When I read this it reminded me of experiments I tried before, passing
more than one piece of information up from the leaf nodes of a (min-max)
tree, e.g. a territory estimate and an influence estimate. I gave up as
it got too complex to handle incomparable nodes (e.g. move A gets more
territory, less influence). I remember having a really good reason to
want to delay reducing multiple features to a single number, but it is
all a bit fuzzy now.
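
A minimal sketch of what handling incomparable nodes might look like, keeping the set of non-dominated (territory, influence) pairs instead of one scalar (the numbers and the dominance rule here are just my illustration):

```python
def dominates(a, b):
    """a dominates b if a is >= b on every objective and > on at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(vectors):
    """Keep only the non-dominated (territory, influence) vectors."""
    return [v for v in vectors
            if not any(dominates(w, v) for w in vectors if w != v)]

# Move A: more territory, less influence; move B: the reverse.
# Neither dominates the other, so both survive instead of both being
# collapsed to a single number and compared.
front = pareto_front([(10, 2), (4, 8), (3, 1)])
```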

Does this type of search have a name, and any associated research?

Darren



Re: [computer-go] Re: A cluster version of Zen is running on cgos 19x19

2009-11-24 Thread Darren Cook
> Also, on 19x19 board, current 16-core cluster version performs almost
> the same as 8-core shared memory pc such as Mac Pro, which Yamato used
> for KGS.

Hi Hideki,
Is that difference due to a scaling limit of Zen, or is this due to the
cluster overhead? Would moving from gigabit to infiniband help, or is
the limit more to do with the lack of shared memory?

> T2K HPC cluster

This seems to be a cluster specification rather than an actual machine.
Can you tell us more about how many cores you are experimenting with,
and how the programs scale? (Are all your experiments with Zen, or are
you trying to run other programs on a cluster too?)

Darren



[computer-go] OT: gambling (was: Hahn system...)

2009-11-24 Thread Darren Cook
>> No professional gambler, if he had the numbers laid out for him, would
>> ever choose suboptimal play, ...
>
> A professional gambler has a 2-step task:
> 1. Find a weaker player (aka fish)
> 2. Capture the fish('s bankroll)

Big Deal, by Anthony Holden, is a fine read (a professional writer took
a year off to become a poker pro), and nicely shows the balance between
maths, bluffing and hustling by *professional* gamblers.

Darren



[computer-go] Shodan Go Bet: Nov '09 Update

2009-11-22 Thread Darren Cook
The Shodan Go bet page has been up six months now:
   http://dcook.org/gobet/

In the voting, 128 people have now responded. For a while the three
choices (Computer Wins vs. Too Close To Call vs. John Wins) were getting
equal numbers of votes. But recently John has pulled ahead, with 42%
thinking he will beat the computer. Uh-oh.

The deadline to play those games is the end of next year, so you
programmers have about 12 months to get your programs comfortably above
3 dan level. Please.

John and I would like to make this a face-to-face event, if we can. Is
anyone interested in sponsoring such an event, or knows a company that
might be? At this stage we're open to any and all offers and suggestions
(*). Please get in touch off-list.

It may not have the glamour of playing a professional or world champion,
but I personally feel this competition, to win against a strong human
player in an even match, is much more meaningful. There are no excuses:
no handicap, no unaffordable supercomputer and John is very familiar
with computer go.

By the way, in the other survey (when will a computer beat the world
champion?), the votes are spread quite widely, but over 60% of the
respondents think it will be some time in the next 20 years. On the
other end of the spectrum, 11 die-hards still think it will never happen.

If you disagree, and can explain why, the voting pages also allow you to
leave comments!

Darren

*: But it cannot be in Japan; a wager between two individuals is not
allowed here.



[computer-go] Shodan Go Bet: Japanese and other languages

2009-11-22 Thread Darren Cook
There is a Japanese version of the go bet page here:
  http://dcook.org/gobet/index.ja.html

I'm very grateful to Yasuhiro Ike-san for the translation.

I thought I had a Chinese translation arranged, but that seems to have
fallen through. I'm especially interested in Chinese and Korean
translations because there is so much interest in go in those countries,
but I'd appreciate volunteers for translations into any language. I don't
really have a budget, but I can offer publicity (e.g. in the Japanese page I
have links to Yasuhiro's company website and his algorithm books).

Darren



Re: [computer-go] Opening databases

2009-11-18 Thread Darren Cook
> I am running some publicly available bots on KGS now and then (MoGo,
> GnuGo, Fuego, ...) under Windows (precompiled builds).
>
> Where may I get information about the format of the opening databases
> of these bots? Is there a common format for opening databases so far?

I don't think there is a standard format. This project was recently
being discussed on the fuego-devel list (under the "Request for
discussion - big 19x19 opening book" thread), so it may be a good start:
  http://fusekilibrary.sourceforge.net/

Darren




Re: [computer-go] MPI vs Thread-safe

2009-10-31 Thread Darren Cook
> Parallelization *cannot* provide super-linear speed-up.
> ...
> The result follows from a simulation argument. Suppose that you had a
> parallel process that performed better than N times a serial program.
> Construct a new serial program that simulates the parallel process. There is
> a contradiction.
> ...

The paper suggests the cause of the super-linear speed-up is that a
thread gets caught in local optima (which it doesn't escape from within
the designated thinking time). It also says that if each thread were
given more time, the speed-up factor might become smaller.

It sounds like it is definitely worth exploring the parameters of root
parallelization some more.

> At the risk of belaboring the obvious, extra memory associated with
> each processor or node (cache, main memory, hard disk, whatever) is
> one reason adding nodes can in practice give a speedup greater than
> the increase in processing power.

That is an interesting idea. The paper says: experiments were performed
on the supercomputer Huygens, which has 120 nodes, each with 16 POWER5
cores running at 1.9GHz and 64GB of memory per node.

Experiments were done from 1 to 16 threads, but I cannot see mention in
the paper of whether 16 threads means one node using 16 cores, or 16
nodes using one core each. Perhaps Guillaume can tell us.

The experiment to decide whether the super-linear speed-up has a
hardware or an algorithmic cause would be to have the single-thread
version run 4 searches one after another and merge the results: if that
gives a performance boost over one long search, then the cause must be
algorithmic(?)
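
The merging step of that experiment might look something like this sketch, where each independent search returns per-move (visits, wins) counts and the root statistics are simply summed (the dict-of-counts representation is just for illustration, not how any of these programs actually store results):

```python
from collections import defaultdict

def merge_root_stats(searches):
    """Sum per-move (visits, wins) over several independent root searches,
    then pick the move with the most total visits (the usual MCTS choice)."""
    visits = defaultdict(int)
    wins = defaultdict(int)
    for stats in searches:
        for move, (v, w) in stats.items():
            visits[move] += v
            wins[move] += w
    best = max(visits, key=visits.get)
    return best, visits, wins

# Three short, independent searches instead of one long one:
s1 = {"C3": (100, 60), "D4": (50, 20)}
s2 = {"C3": (80, 35), "E5": (70, 40)}
s3 = {"D4": (90, 55)}
best, visits, wins = merge_root_stats([s1, s2, s3])
```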

Darren




Re: [computer-go] MC hard positions for MCTS

2009-10-27 Thread Darren Cook
> But the biggest problem is that the path to life/ko is very narrow.
> The defender has many useful moves and the invader has few.
> So MCTS will falsely judge invadable areas to be safe.

Interesting, I'd not thought about it in that respect: I know I can soon
find positions where the defender has only one way to defend but the
attacker has many choices.

But, has anyone gathered stats on positions, from real games, that
require precise play by the defender/attacker/both/neither? Is defending
really easier than attacking?

Darren




Re: [computer-go] First ever win of a computer against a pro 9P as black (game of Go, 9x9).

2009-10-27 Thread Darren Cook
> I will offer some anecdotal evidence concerning humans playing other
> humans, from club and tournament playing experience: you will find that
> shorter time limits amplify the winning probability of stronger players...

Another anecdote. At a Fost Cup (Computer Go tournament) from 10-15
years ago, a pro player had made his own program. I think it was based
on patterns and though it wasn't one of the stronger programs, it played
very quickly. This was at Nihon Kiin, and another pro friend popped in
to visit; I forget his name, but he was one of the top 9p players. He
played the fast program, and they played a 19x19 game at the pace of at
least 60 moves/minute.

I forget if it was an even game or 9-stone handicap as it didn't matter
- the pro killed every group. But what impressed me was he made shapes
and strength that even dan players would've had to work hard to get. A
wall of stones along one side of the board naturally ended up being in
just the right place to work with joseki played earlier on the other
side of the board, stones played long before ended up on just the
critical points to kill, yet he took not even a breath to plan any of this.

So, I wonder if the blitz strength of very strong go players is
something special and peculiar to the game of go. Patterns and shape
knowledge is so important in go, that humans (*) gain relatively little
extra strength from extra thinking.

Darren

*: Meaning very strong players who've spent years studying and
appreciating good shape.


