Re: [Computer-go] Golois5 is KGS 4d

2017-01-11 Thread George Dahl
For people interested in seeing the reviews for ICLR 2017 for the paper:
https://openreview.net/forum?id=Bk67W4Yxl

On Tue, Jan 10, 2017 at 6:46 AM, Detlef Schmicker  wrote:

> Very interesting,
>
> but let's wait some days to get an idea of its strength:
> it reached 4d due to games against AyaBotD3, and now it is 3d again...
>
>
> Detlef
>
> Am 10.01.2017 um 15:29 schrieb Gian-Carlo Pascutto:
> > On 10-01-17 15:05, Hiroshi Yamashita wrote:
> >> Hi,
> >>
> >> Golois5 is KGS 4d.
> >> I think it is the first bot that has reached 4d using a DCNN without search.
> >
> > I found this paper:
> >
> > https://openreview.net/pdf?id=Bk67W4Yxl
> >
> > They are using residual layers in the DCNN.
> >
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-02-01 Thread George Dahl
If anything, the other great DCNN applications predate the application of
these methods to Go. Deep neural nets (convnets and other types) have been
successfully applied in computer vision, robotics, speech recognition,
machine translation, natural language processing, and a host of other areas.
The first paragraph of the TensorFlow whitepaper (
http://download.tensorflow.org/paper/whitepaper2015.pdf) even mentions
dozens of applications at Alphabet specifically.

Of course the future will hold even more exciting applications, but these
techniques were proven on many important problems long before they had
success in Go, and they are used by many different companies and research
groups. Many example applications from the literature or at various
companies used models trained on a single machine with GPUs.

On Mon, Feb 1, 2016 at 12:00 PM, Hideki Kato  wrote:

> Ingo Althofer wrote:
> >Hi Hideki,
> >
> >first of all congrats to the nice performance of Zen over the weekend!
> >
> >> Ingo and all,
> >> Why do you care about AlphaGo and DCNN so much?
> >
> >I can speak only for myself. DCNNs may be applied not only to
> >achieve better playing strength. One may use them to create
> >playing styles, or bots for Go variants.
> >
> >One of my favorites is robot frisbee go.
> >http://www.althofer.de/robot-play/frisbee-robot-go.jpg
> >Perhaps one can teach robots with DCNN to throw the disks better.
> >
> >And my expectation is: During 2016 we will see many more fantastic
> >applications of DCNN, not only in Go. (Olivier had made a similar
> >remark already.)
>
> Agreed, but one criticism.  If such great DCNN applications all
> need huge machine power like AlphaGo (at execution time, not
> training time), then the technology is hard to apply to many areas,
> autos and robots for example.  Are DCNN chips the only way to
> reduce the computational cost?  I don't foresee other possibilities.
> Much more economical methods should be developed anyway.
> #Our brain consumes less than 100 watts.
>
> Hideki
>
> >Ingo.
> >
> >PS. Dietmar Wolz, my partner in space trajectory design, just told me
> >that in his company they started with deep learning...
> >___
> >Computer-go mailing list
> >Computer-go@computer-go.org
> >http://computer-go.org/mailman/listinfo/computer-go
> --
> Hideki Kato 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [computer-go] Neural networks

2009-10-14 Thread George Dahl
Neural networks are not considered obsolete by the machine learning
community; in fact there is much active research on neural networks,
and the term is understood to be quite general.  An SVM is essentially
a linear classifier over hand-engineered features (or a fixed kernel).
When a single layer of template matchers isn't enough, neural networks
can be quite effective at extracting features that could then serve as
a kernel for an SVM or whatnot.  If anything, neural network research
gets marketed as research on probabilistic graphical models more often
than it gets marketed as kernel-machine research.

- George

On Wed, Oct 14, 2009 at 9:06 AM, Rémi Coulom remi.cou...@univ-lille3.fr wrote:
 Petr Baudis wrote:

  Hi!

  Is there some hypothesised high-level reason why there are
 no successful programs using neural networks in Go?

  I'd also like to ask if someone has a research tip for some
 interesting Go sub-problem that could make for a nice beginner neural
 networks project.

  Thanks,


 At the time when it was fashionable, I would have sold my pattern-Elo stuff
 as a neural network, because, in neural-network jargon, it is in fact a
 one-layer network with a softmax output. Since the development of
 support-vector machines, neural networks have been considered completely
 obsolete in the machine-learning community. From a marketing point of view,
 it is not a good idea to do research on neural networks nowadays. You must
 give your system another name.

 Rémi
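
As a concrete reading of that remark, a one-layer network with a softmax
output over a position's candidate moves fits in a few lines of numpy.  This
is a toy sketch with made-up data shapes, not Rémi's pattern-Elo code:

import numpy as np

# One linear layer + softmax over the candidate moves of a single position.
# feats[m] is the 0/1 pattern-feature vector of candidate move m; `played` is
# the index of the move actually chosen.  Training is plain gradient ascent on
# the log-likelihood of the played move.
def move_probs(w, feats):
    scores = feats @ w
    scores -= scores.max()              # numerical stability
    e = np.exp(scores)
    return e / e.sum()

def sgd_step(w, feats, played, lr=0.1):
    p = move_probs(w, feats)
    grad = feats[played] - p @ feats    # x_played - E[x] under the model
    return w + lr * grad

rng = np.random.default_rng(0)
feats = rng.integers(0, 2, size=(30, 8)).astype(float)   # 30 moves, 8 patterns
w, played = np.zeros(8), 3
for _ in range(100):
    w = sgd_step(w, feats, played)
print(move_probs(w, feats)[played])     # probability of the "played" move rises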
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Random weighted patterns

2009-07-16 Thread George Dahl
Thanks! I had never seen the alias method before and it is quite ingenious!
- George

On Thu, Jul 16, 2009 at 3:04 AM, Martin Mueller mmuel...@cs.ualberta.ca wrote:
 If you want to take many samples from a fixed, or infrequently changing,
 distribution, you can do it in O(1) time per sample, with O(n) initial setup
 costs. This is quite clever and goes by the name of alias method.
 See http://cg.scs.carleton.ca/~luc/rnbookindex.html, page 107-111
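
A minimal Python sketch of Walker's alias method as described above (an
illustration only, not code from any Go program):

import random

class AliasSampler:
    """O(n) setup, O(1) per sample from a fixed discrete distribution."""
    def __init__(self, weights):
        n, total = len(weights), float(sum(weights))
        scaled = [w * n / total for w in weights]          # mean becomes 1.0
        self.prob, self.alias = [0.0] * n, [0] * n
        small = [i for i, s in enumerate(scaled) if s < 1.0]
        large = [i for i, s in enumerate(scaled) if s >= 1.0]
        while small and large:
            s, l = small.pop(), large.pop()
            self.prob[s], self.alias[s] = scaled[s], l     # keep s, else go to l
            scaled[l] -= 1.0 - scaled[s]
            (small if scaled[l] < 1.0 else large).append(l)
        for i in small + large:                            # leftovers are ~1.0
            self.prob[i] = 1.0

    def sample(self):
        i = random.randrange(len(self.prob))               # uniform column
        return i if random.random() < self.prob[i] else self.alias[i]

sampler = AliasSampler([1, 2, 3, 4])
counts = [0] * 4
for _ in range(100000):
    counts[sampler.sample()] += 1
print(counts)                                              # roughly 1:2:3:4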

 For weighted patterns, the row sum method by Rémi is probably hard to beat.
 It was discussed here a while ago.

        Martin
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Really basic question

2009-07-06 Thread George Dahl
I think he is missing the tree search part.  Just doing a one-ply
lookahead and then doing playouts will not make a strong bot.  I would
like to defer an explanation of UCT (or something else) to someone who
is more of an expert.

- George

On Mon, Jul 6, 2009 at 8:25 PM, Raymond Wold computergol...@w-wins.com wrote:
 Fred Hapgood wrote:

 I have a really basic question about how MC works in the context of Go.

 Suppose the problem is to make the first move in a game, and suppose we
 have accepted as a constraint that we will abstain from just copying
 some joseki out of a book -- we are going to use MC to figure out the
 first move de novo. We turn on the software and it begins to play out
 games. My question is: how does the software pick its first move?  Does
 it move entirely at random? Sometimes it sounds like the way MC works is by
 picking each move at random, from the first to the last, for a million
 games or so. The trouble is that the number of possible Go games is so
 large that a million games would not even begin to explore the
 possibilities.  It is hard to imagine anything useful emerging from
 examining such a small number. So I'm guessing that the moves are not
 chosen at random.  But even if you reduce the possibilities to two
 options per move, which would be pretty impressive, you'd still run out
 of your million games in only twenty moves, after which you would be
 back to picking at random again.

 We don't know why it works. It's just a matter of empirical fact that the
 win rate in random play-outs is a decent indicator of the strength of a
 move. The math involved is likely to be hideous and probably of little
 practical interest, and I don't know of anyone who has tried.
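
For concreteness, a minimal sketch of the flat (one-ply) Monte Carlo scheme
being discussed: for each legal move, run uniformly random playouts to the end
of the game and keep the move with the best win rate.  The Board interface
used here (legal_moves(), play(), copy(), is_over(), winner(), to_move) is
hypothetical, not taken from any real engine:

import random

def random_playout(board):
    # Play uniformly random legal moves until the game ends; return the winner.
    while not board.is_over():
        board.play(random.choice(board.legal_moves()))
    return board.winner()

def choose_move(board, playouts_per_move=100):
    # Pick the move whose playouts win most often for the side to move.
    side = board.to_move
    best_move, best_rate = None, -1.0
    for move in board.legal_moves():
        wins = 0
        for _ in range(playouts_per_move):
            b = board.copy()
            b.play(move)
            wins += (random_playout(b) == side)
        rate = wins / playouts_per_move
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move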

 What am I missing??

 I don't think you're missing anything at all.
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] static evaluators for tree search

2009-02-17 Thread George Dahl
At the moment I (and another member of my group) are doing research on
applying machine learning to constructing a static evaluator for Go
positions (generally by predicting the final ownership of each point
on the board and then using this to estimate a probability of
winning).  We are looking for someone who might be willing to help us
build a decent tree search bot that can have its static evaluator
easily swapped out so we can create systems that actually play over
GTP.  As much as we try to find quantitative measures for how well our
static evaluators work, the only real test is to build them into a
bot.
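
As a rough illustration of the kind of evaluator described above -- a toy
construction with made-up constants, not our actual model -- a predicted
ownership map can be turned into a win-probability estimate like this:

import numpy as np

def win_probability(ownership, komi=7.5, scale=5.0):
    # `ownership` holds P(point ends up black) for each intersection; the
    # expected score margin is the sum of per-point expectations minus komi,
    # squashed through a logistic (the scale is arbitrary here).
    expected_margin = np.sum(2.0 * ownership - 1.0) - komi
    return 1.0 / (1.0 + np.exp(-expected_margin / scale))

ownership = np.full((9, 9), 0.55)      # black slightly favored everywhere
print(win_probability(ownership))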

Also, if anyone knows of an open source simple tree search bot
(perhaps alpha-beta or something almost chess like) for Go, we might
be able to modify it ourselves.

My colleague and I have expertise in machine learning, not in
tree search (although if worst comes to worst I will write my own
simple alpha-beta searcher).  We would be eager to work together with
someone on this list to try and create a competitive bot.  We might at
some point create a randomized evaluator that returns win or loss
nondeterministically for a position instead of a deterministic score,
so an ideal collaborator would also have some experience with
implementing monte carlo tree search (we could replace playouts with
our evaluator to some extent perhaps).  But more important are
traditional, chess-like search algorithms.

If anyone is interested in working with us on this, please let me
know!  We have a prototype static evaluator complete that is producing
sane board ownership maps, but we will hopefully have many even better
ones soon.

- George
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Re: static evaluators for tree search

2009-02-17 Thread George Dahl
I am aware such a decoupled program might not exist, but I don't see
why one can't be created.  When you say the move generator has to be
"very disciplined", what do you mean?  Do you mean that the evaluator
might be used during move ordering somehow and that generating the
nodes to expand is tightly coupled with the static evaluator?
- George

On Tue, Feb 17, 2009 at 12:55 PM, Dave Dyer dd...@real-me.net wrote:

 While your goal is laudable, I'm afraid there is no such thing
 as a simple tree search with a plug-in evaluator for Go.  The
 problem is that the move generator has to be very disciplined,
 and the evaluator typically requires elaborate and expensive-to-maintain
 data structures.  It all tends to be very convoluted and
 full of feedback loops, in addition to the actual alpha-beta which
 is trivial by comparison.

 If you look at GnuGo or some other available program, I'm pretty sure
 you'll find a line of code where the evaluator is called, and you could
 replace it, but you'll find it's connected to a pile of spaghetti.

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] static evaluators for tree search

2009-02-17 Thread George Dahl
You're right of course.  We have a (relatively fast) move pruning
algorithm that can order moves such that about 95% of the time, when
looking at pro games, the pro move will be in the first 50 in the
ordering.  About 70% of the time the expert move will be in the top
10.  So a few simple tricks like this shouldn't be too hard to
incorporate.

However, the main purpose of making a really *simple* alpha-beta
searching bot is to compare the performance of static evaluators.  It
is very hard for me to figure out how good a given evaluator is (if
anyone has suggestions for this please let me know) without seeing it
incorporated into a bot and looking at the bot's performance.  There
is a complicated trade-off between the accuracy of the evaluator and
how fast it is.  We plan on looking at how well our evaluators
predict the winner or the territory outcome or something for pro games,
but in the end, what does that really tell us?  There is no way we are
ever going to be able to make a fast evaluator using our methods that
perfectly predicts these things.

So I have two competing motivations here.  First, I want to show that
the evaluators I make are good somehow.  Second, I want to build a
strong bot.

- George

On Tue, Feb 17, 2009 at 2:04 PM,  dave.de...@planet.nl wrote:
 A simple alpha-beta searcher will only get a few plies deep on 19x19, so it
 won't be very useful (unless your static evaluation function is so good that
 it doesn't really need an alpha-beta searcher).

 Dave
 
 From: computer-go-boun...@computer-go.org on behalf of George Dahl
 Sent: Tue 17-2-2009 18:27
 To: computer-go
 Subject: [computer-go] static evaluators for tree search

 At the moment I (and another member of my group) are doing research on
 applying machine learning to constructing a static evaluator for Go
 positions (generally by predicting the final ownership of each point
 on the board and then using this to estimate a probability of
 winning).  We are looking for someone who might be willing to help us
 build a decent tree search bot that can have its static evaluator
 easily swapped out so we can create systems that actually play over
 GTP.  As much as we try to find quantitative measures for how well our
 static evaluators work, the only real test is to build them into a
 bot.

 Also, if anyone knows of an open source simple tree search bot
 (perhaps alpha-beta or something almost chess like) for Go, we might
 be able to modify it ourselves.

 My colleague and I have expertise in machine learning, not in
 tree search (although if worst comes to worst I will write my own
 simple alpha-beta searcher).  We would be eager to work together with
 someone on this list to try and create a competitive bot.  We might at
 some point create a randomized evaluator that returns win or loss
 nondeterministically for a position instead of a deterministic score,
 so an ideal collaborator would also have some experience with
 implementing monte carlo tree search (we could replace playouts with
 our evaluator to some extent perhaps).  But more important are
 traditional, chess-like search algorithms.

 If anyone is interested in working with us on this, please let me
 know!  We have a prototype static evaluator complete that is producing
 sane board ownership maps, but we will hopefully have many even better
 ones soon.

 - George
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Re: static evaluators for tree search

2009-02-17 Thread George Dahl
I really don't like the idea of ranking moves and scoring based on the
distance to the top of a list for a pro move.  This is worthless if we
ever want to surpass humans (although this isn't a concern now, it is
in principle) and we have no reason to believe a move isn't strong
just because a pro didn't pick it.  Perhaps another pro would pick a
different move in the same situation.  If we had a pro that ranked all
the legal moves, or at least the top 10 or so, this would be one
thing, but those data are almost never available.  Or if we had two
pros watching a game and giving what they thought was the best move
at each point and we scored an evaluator based on how it did only when
the pros agreed (although this would bias scoring towards things like
forced moves).  Also, there are bad moves and then even worse moves
(like filling the eyes of a powerful living group, killing it).  If an
evaluator makes catastrophic evaluations sometimes and plays a perfect
pro move other times, it could still be much worse on balance (if we
can't tell when it is blundering versus being brilliant).

I think it would be much more informative to compare evaluator A and
evaluator B in the following way.
Make a bot that searched to a fixed depth d before then calling a
static evaluator (maybe this depth is 1 or 2 or something small).  Try
and determine the strength of a bot using A and a bot using B as
accurately as possible against a variety of opponents.  The better
evaluator is defined to be the one that results in the stronger bot.
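
A minimal sketch of such a fixed-depth searcher with a pluggable evaluator.
The game-interface callables (children, is_terminal, moves, play) are assumed
names, not from any existing bot, and `evaluate` must score a position from
the point of view of the side to move:

def alphabeta(pos, depth, evaluate, children, is_terminal,
              alpha=float("-inf"), beta=float("inf")):
    if depth == 0 or is_terminal(pos):
        return evaluate(pos)
    best = float("-inf")
    for child in children(pos):
        # The child's score is from the opponent's perspective, hence the minus.
        score = -alphabeta(child, depth - 1, evaluate, children, is_terminal,
                           -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                       # beta cut-off
    return best

def best_move(pos, depth, evaluate, children, is_terminal, moves, play):
    # Pick the root move whose subtree scores best under the given evaluator;
    # swap in evaluator A or B here to compare the resulting bots.
    scored = [(-alphabeta(play(pos, m), depth - 1, evaluate, children,
                          is_terminal), m) for m in moves(pos)]
    return max(scored, key=lambda t: t[0])[1]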

Obviously this method introduces a whole host of new problems (even
finding the strength of a running bot is non-trivial), but at least
it attempts to measure what we would eventually care about --- playing
strength.  So of course we care about how fast the static evaluators
are, because we might be able to search more nodes with a faster
evaluator, but for measuring the quality of the evaluations, I can't
at the moment think of a better way of doing this.

One of the problems with my suggestion is that maybe the evaluators
are better at evaluating positions beyond a certain number of moves
and that if we could just get to that depth before calling them, they
would be much more accurate.  Or maybe changes to how searching works
can compensate for weaknesses in evaluators or emphasize strengths.
Really one would want the strongest possible bot built around a single
evaluator versus the strongest possible bot built around another
evaluator, but this is clearly impossible to achieve.

I guess another question is: what would you need to see a static
evaluator do to be so convinced it was useful that you then built a
bot around it?  Would it need to win games all by itself with one-ply
lookahead?

- George

On Tue, Feb 17, 2009 at 2:41 PM, Dave Dyer dd...@real-me.net wrote:

 This is old and incomplete, but still is a starting point you might
 find useful:  http://www.andromeda.com/people/ddyer/go/global-eval.html

 General observations (from a weak player's point of view):

 Go is played on a knife edge between life and death.  The only evaluator
 that matters is "is this stone alive?", and there are no known proxies
 that will not fall short a significant amount of the time.  If you fall
 short once or twice in a game against a competent player, you will lose.

 General strategic considerations will play you false every time.

 -- Notwithstanding the above, improving general considerations
 will improve play, but not much.  It's all about the minutia of
 the situation.

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Re: static evaluators for tree search

2009-02-17 Thread George Dahl
Really?  You think that doing 20-50 uniform random playouts and
estimating the win probability, when used as a leaf node evaluator in
tree search, will outperform anything else that uses the same amount of
time?  I must not understand you.  What do you mean by static
evaluator?  When I use the term, I mean something quite general,
basically any function of the board state that doesn't do explicit
lookahead.

I guess 20-50 optimized playouts take so little time they would just
be so much faster than the sort of evaluators I might make that could
use hundreds of thousands of floating point multiplies.  But more
towards that costly end of the regime, I think doing thousands of
random playouts would be really bad (but then maybe you could just
expand more nodes in the tree and do fewer playouts instead).  I am
looking for higher quality, more costly evaluators than 50 MC
playouts.  But that is a good point that I should compare to those
evaluators since if my evaluator is going to take more time it had
better show some other advantage.

- George

On Tue, Feb 17, 2009 at 6:13 PM, Darren Cook dar...@dcook.org wrote:
 I think it would be much more informative to compare evaluator A and
 evaluator B in the following way.
 Make a bot that searched to a fixed depth d before then calling a
 static evaluator (maybe this depth is 1 or 2 or something small).  Try
 and determine the strength of a bot using A and a bot using B as
 accurately as possible against a variety of opponents.  The better
 evaluator is defined to be the one that results in the stronger bot.

 If you do this I'd suggest also including monte-carlo as one of your
 static evaluators. You want a score, but monte carlo usually returns
 information like 17 black wins, 3 white wins. However you can instead
 just sum ownership in the terminal positions, so if A1 is owned by black
 15 times, white 5 times, count that as a point for black. If exactly
 equal ownership count the point for neither side. (Alternatively just
 sum black and white score of each terminal position.)
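
A minimal sketch of the ownership-summing evaluator described here, written
against a hypothetical Board interface (copy(), is_over(), legal_moves(),
play(), points(), owner()) rather than any real program:

import random

def mc_ownership_eval(board, playouts=30, komi=7.5):
    # Accumulate per-point ownership over the terminal positions of a few
    # random playouts, then score each point for whichever colour owned it
    # more often; the result is an estimated black margin.
    ownership = {p: 0 for p in board.points()}
    for _ in range(playouts):
        b = board.copy()
        while not b.is_over():
            b.play(random.choice(b.legal_moves()))
        for p in b.points():
            owner = b.owner(p)          # 'B', 'W', or None for shared points
            if owner == 'B':
                ownership[p] += 1
            elif owner == 'W':
                ownership[p] -= 1
    black = sum(1 for v in ownership.values() if v > 0)
    white = sum(1 for v in ownership.values() if v < 0)
    return black - white - komi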

 You could have two or three versions using different numbers of playouts
 (with the resulting trade-off that more playouts means fewer nodes visited in
 the global search); I suspect 20-50 playouts will be optimum.

 My hunch is that the Monte Carlo version will always outperform any static
 evaluation, given the same overall time (*). But it would be interesting
 to know.

 Darren

 *: Or run the experiment giving the static evaluation four times the
 clock time, on the assumption there is more potential for optimization
 in complex code.

 --
 Darren Cook, Software Researcher/Developer
 http://dcook.org/mlsn/ (English-Japanese-German-Chinese-Arabic
open source dictionary/semantic network)
 http://dcook.org/work/ (About me and my work)
 http://dcook.org/blogs.html (My blogs and articles)
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] static evaluators for tree search

2009-02-17 Thread George Dahl
GPUs can speed up many types of neural networks by over a factor of 30.

- George

On Tue, Feb 17, 2009 at 8:35 PM, terry mcintyre terrymcint...@yahoo.com wrote:
 
 From: dhillism...@netscape.net

 Perhaps the biggest problem came from an unexpected quarter. MC playouts
 are very fast and neural nets are a bit slow. (I am talking about the
 forward pass, not the off-line training.) In the short time it took to feed
 a board position to my net and get the results, I could have run enough MC
 playouts to obtain a better estimate of the ownership map. :/

 Would GPUs be better suited to neural nets than to MC playouts? If so, would
 this tilt the playing field in favor of neural nets on GPUs, giving them an
 advantage over MC on relatively fewer general-purpose CPUs? A GPU with
 hundreds of shaders is relatively cheap compared to even a handful of x86
 processors.

 The same argument may apply to other forms of classification which map well
 to GPUs.


 

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Presentation of my personnal project : evolution of an artificial go player through random mutation and natural selection

2009-02-13 Thread George Dahl
How do you perform the neuro-evolution?  What sort of genetic
operators do you have?  Do you have any sort of crossover?  How do you
represent the board and moves to the networks?

- George

On Fri, Feb 13, 2009 at 2:42 PM, Ernest Galbrun
ernest.galb...@gmail.com wrote:
 Hello,
 I would like to share my project with you: I have developed a program
 trying to mimic evolution through the competition of artificial Go players.
 The players are made of totally mutable artificial neural networks, and they
 compete against each other in a never-ending tournament, randomly mutating
 and reproducing when they are successful. I have also implemented a way to
 share innovations among all the programs. I am currently looking for additional
 volunteers (we are 4 at the moment) to try this out.
 If you are interested, please feel free to answer here, or email me directly.
 I have just created a blog whose purpose will be to explain how my program
 works and to tell how it is going.
 (As of now, it has been running consistently for about a month; the players
 are still rather passive, trying to play patterns assuring them the greatest
 territory possible.)
 Here is the url of my blog : http://goia-hephaestos.blogspot.com/
 Ernest Galbrun


 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Re: GCP on ICGA Events 2009 in Pamplona

2009-01-14 Thread George Dahl
I have heard 100 million as an estimate of the total number of Go
players worldwide.
- George

On Wed, Jan 14, 2009 at 7:42 AM, Mark Boon tesujisoftw...@gmail.com wrote:
 It's difficult to get hard data about this. Go is only the most popular game
 in Korea. In other countries like Japan and China it comes second by far to
 a local chess variation.

 Possibly Chess is more ingrained in Western culture than Go is in Asia, I
 don't know really. But Chess has the population-numbers of West vs. East
 against it. If there are more chess-players than Go-players in the world
 then it won't be by much. But the Go market is probably a lot bigger. Look
 only at the money in professional Go tournaments. It's probably an order of
 magnitude more than the money in professional Chess. But I must admit this
 is just a guess of mine.

 Mark


 On Jan 12, 2009, at 9:22 AM, steve uurtamo wrote:

 i think you might be estimating this incorrectly.

 s.

 On Sat, Jan 10, 2009 at 9:00 AM, Gian-Carlo Pascutto g...@sjeng.org
 wrote:

 Ingo Althöfer wrote:

 What prevents you from freezing your chess
 activities for the next few months and hobbying
 full (free) time on computer go?

 The number of chess players compared to the number of go players.

 --
 GCP
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 19x19 results (so far)

2008-12-24 Thread George Dahl
So if I understand this correctly, you only allow moves on the 3rd,
4th, or 5th lines to be considered (in both the tree and the playouts)
unless there is another stone within Manhattan distance of two?
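
Here is one reconstruction of that rule as a move filter (0-based
coordinates; an illustration of the rule as I read it, not Don's code):

def move_allowed(move, stones, size=19, max_dist=2):
    r, c = move
    line = min(r, c, size - 1 - r, size - 1 - c) + 1   # 1-based line number
    if 3 <= line <= 5:
        return True
    # Otherwise require an existing stone within Manhattan distance max_dist.
    return any(abs(r - sr) + abs(c - sc) <= max_dist for sr, sc in stones)

print(move_allowed((2, 2), set()))      # True: a 3rd-line point
print(move_allowed((0, 0), set()))      # False: the 1-1 point, nothing nearby
print(move_allowed((0, 0), {(1, 1)}))   # True: a stone within distance 2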

What would be really interesting is if one of the stronger open source
engines were modified to have this rule on 19x19.  It seems like it
should surely help even a stronger engine if it were only applied in
the tree up to some fixed depth and with distance 3.  If the rule
never eliminates the moves that the engine would have picked without
the rule, it seems really unlikely to hurt performance.

So I think the results would be more and more interesting the stronger
the base version is.  Is there a rule like this that fits pro play?

- George

On Wed, Dec 24, 2008 at 8:46 AM, Don Dailey dailey@gmail.com wrote:
 19x19 results of 3,4,5 rank rule:


 Rank  Name    Elo    +    -  games  score  oppo.  draws
    1  d2p    2050   21   21    273    57%   2000     0%
    2  base   2000   21   21    273    43%   2050     0%


 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] FW: computer-go] Monte carlo play?

2008-11-16 Thread George Dahl
So you say: "...I'm observing that most of the increase in level
comes from the selection during exploration and only in small part
from the selection during simulation."  Could you elaborate at all?
This is very interesting.  That almost suggests it might be fruitful
to use the patterns in the tree, but keep lighter playouts.
- George

On Sun, Nov 16, 2008 at 10:39 PM, Mark Boon [EMAIL PROTECTED] wrote:
 Some months ago I did several experiments with using tactics and patterns in
 playouts. Generally I found a big boost in strength using tactics. I also
 found a boost in strength using patterns but with a severe diminishing
 return after a certain number, even becoming detrimental when using large
 numbers of patterns (1,000s to 10,000s). Since I was using a generalized
 pattern-matcher, using it slowed things down considerably. Although it
 played a lot better with the same number of playouts, if I compared MC
 playouts with patterns to MC playouts without patterns using the same
 amount of CPU time the gain was not so obvious. Since most of the gain in
 strength came from just a few patterns, I concluded, just as David did, that it
 was probably better to just use a handful of hard-coded patterns during
 playouts.

 I only recently started to do real experiments with hard-coded patterns and
 so far my results are rather inconclusive. I found when mixing different
 things it's not always clear what contributes to any increased strength
 observed. So I'm still in the process of trying to dissect what is actually
 contributing where. I found for example that a lot of the increased level of
 play using patterns does not come from using them during playouts but comes
 from the effect they have on move-exploration. I don't know if this is due
 to my particular way of implementing MC playouts in combination with UCT
 search, but moves matching a pattern (usually) automatically come first
 in the tree-expansion as well, and generally I can say that so far I'm observing
 that most of the increase in level comes from the selection during
 exploration and only in small part from the selection during simulation.

 For example, in one particular experiment using just 5 patterns I saw a
 win-rate of 65% against the same program not using patterns (with the same
 number of playouts). But when not using the patterns during exploration I saw
 the win-rate drop to just 55%.

 I still have a lot of testing to do and it's too early to draw any hard
 conclusions. But I think it's worthwhile trying to distinguish where the
 strength is actually gained. Better yet, finding out exactly 'why' it gained
 strength, because with MC playouts I often find results during testing
 highly counter-intuitive, occasionally to the point of being (seemingly)
 nonsensical.

 I also think what Don was proposing with his reference-bot could be
 interesting. Trying to make it play around ELO 1700 on CGOS just using 5,000
 (light) playouts. I don't know if it's possible, but I think it's a fruitful
 exercise. At a time when most people are looking at using more and more
 hardware to increase playing-strength, knowing what plays best at the other
 end of the spectrum is valuable as well. By that I mean finding what
 plays best using severely constrained resources.

 Mark

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] FW: computer-go] Monte carlo play?

2008-11-16 Thread George Dahl
I look forward to hearing more!  Happy testing.
- George

On Sun, Nov 16, 2008 at 11:53 PM, Mark Boon [EMAIL PROTECTED] wrote:

 On 17-nov-08, at 02:42, George Dahl wrote:

 So you say that: ...I'm observing that most of the increase in level
 comes from the selection during exploration and only in small part
 from the selection during simulation., could you elaborate at all?
 This is very interesting.  That almost suggests it might be fruitful
 to use the patterns in the tree, but keep lighter playouts.

 That's exactly what it's suggesting. But as I said, I need to do some more
 testing to make a hard case for that.

 Mark

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Git, any other ideas?

2008-10-24 Thread George Dahl
Mercurial or Bazaar.
I use Bazaar myself.  It took me 5 minutes to figure out how to do the
very basics, which so far has been enough for me.  I think both have
Eclipse plugins, but I haven't used them.

- George


On Fri, Oct 24, 2008 at 2:03 PM, Mark Boon [EMAIL PROTECTED] wrote:
 Due to several recommendations from this list I decided to take a look at
 git.

 After wasting a few hours trying to get the Eclipse plugin to work I decided
 to give up. I might give it a look again when it comes with a reliable
 installer / update-link.

 Any other ideas?

 I can keep using Subversion and mirror it. But then traffic can only go one
 way...

 Mark

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Git, any other ideas?

2008-10-24 Thread George Dahl
If you are interested, search for "git versus bazaar" or "mercurial
versus git" or whatever for any pair of Mercurial, Git, and Bazaar on
Google.  For my purposes, it really didn't matter too much which one I
used so I used the first thing that worked.  Git has a reputation for
being very fast and having loads of advanced features and being hard
to learn.

- George

On Fri, Oct 24, 2008 at 2:45 PM, Don Dailey [EMAIL PROTECTED] wrote:
 On Fri, 2008-10-24 at 16:03 -0200, Mark Boon wrote:
 Due to several recommendations from this list I decided to take a
 look at git.

 After wasting a few hours trying to get the Eclipse plugin to work I
 decided to give up. I might give it a look again when it comes with a
 reliable installer / update-link.

 Any other ideas?

 I would suggest that you stay with Git.  I think it is rapidly becoming
 the king and I think it's probably the best.

 You are probably going to have some pain with anything at first - it's
 worth going beyond the learning curve.

 Git is very simple to use from the command line and you should check out
 instaweb (git instaweb) which builds a very nice web page where you
 can trace your project's history and so on.

 - Don




 I can keep using Subversion and mirror it. But then traffic can only
 go one way...

 Mark

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 7.5-komi for 9x9 in Beijing

2008-10-08 Thread George Dahl
I agree that the komi should not be changed unless there is a very
compelling reason.  My engine would have to be entirely recreated to
support a different komi and I only want to maintain one engine for
each boardsize.
 - George

On Wed, Oct 8, 2008 at 3:46 PM, Don Dailey [EMAIL PROTECTED] wrote:
 On Wed, 2008-10-08 at 11:47 -0700, Christoph Birk wrote:
 On Wed, 8 Oct 2008, Don Dailey wrote:
  much more common.  There were just a few games that used 6.5 komi
  because when I first started CGOS I had set 6.5 by mistake but I think
  that was just for a few hours at most.   The vast majority of these are
  7.5 komi games:

 After all this discussion about komi for 9x9 games, wouldn't you
 think that using 7.5 was a mistake and go back to 6.5?

 Why?

 First of all, it is not known that the correct komi is odd; we only
 have the observation that without seki it would be odd, and seki is
 relatively rare.   That is far from a proof - it's a hunch based on a
 weak premise: we assume that the rarity of seki is statistical evidence
 that komi is odd.   And even if we accept it as such, we admit the
 possibility that it is really even.

 But let's say the correct komi is 7.   I think that is fairly likely.  I
 also believe that if komi is even, it's not going to be 6,  it's going
 to be 8.   I base this on weak statistical evidence from CGOS games that
 I mini-maxed.   Also, would you think 5.5 or 7.5 is better?   5.5 gives
 black a huge advantage.   I think komi is 7 or 8.

 But let's say it's 7.  I don't see any reason to favor 6.5 over 7.5
 unless, as Dave Fotland says, we want to favor the first player as is done
 in many other games.  The only reason I would favor one over the other
 is if it turned out that in practical play the games ended up closer.
 For instance, if black won 53% at 6.5 komi and white won 51% at 7.5
 komi, I would favor 7.5 because it keeps the scores closer.   I believe
 6.5 would give black a bigger advantage than 7.5 gives white in
 practical play.

 It would be great if we could prove that 7 is correct but I don't think
 we have a reasonable way to do this.

 Is there any way to prove that with best play the game cannot end in
 seki?   I would accept that as a strong indication that 7 is probably
 correct, because I doubt anyone believes 5 or 9 is correct.  I think the
 candidates are 6, 7 and 8.

 - Don








 Christoph
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] anyone applied ideas from Modelling Uncertainty in the Game of Go?

2008-09-06 Thread George Dahl
Has anyone applied the ideas in "Modelling Uncertainty in the Game of Go" by
Stern, Graepel, and MacKay?  The paper can be found at:
http://research.microsoft.com/~dstern/papers/sterngraepelmackay04.pdf

It was quite a fascinating paper!
- George
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] CGOS server boardsize

2008-08-01 Thread George Dahl
One thing to consider is that for some bots it may be very very hard to
change the board size.  My (as yet incomplete) bot will be like this.  It
will require thousands of CPU hours to adapt itself to a new board size so I
want to work with as few board sizes as possible since I need to collect
training data for each one.
- George

On Fri, Aug 1, 2008 at 4:51 PM, Don Dailey [EMAIL PROTECTED] wrote:

 How about rotating board sizes?   Each round changes the board size.
 Just an idea.

 One time long ago I considered making a server where there were no time
 controls.  You just played at whatever pace you chose.  The server
 would try to keep your bot busy playing many different games
 simultaneously.  Whenever your move is complete, the server hands you a
 new position to compute, which would likely be from some other game.

 Slower bots of course play fewer games.  Scheduling for this is an
 interesting problem, especially if avoiding mismatches is a priority.

 - Don



 On Fri, 2008-08-01 at 13:09 -0700, Christoph Birk wrote:
  On Fri, 1 Aug 2008, [EMAIL PROTECTED] wrote:
   Something that has worked well in other games would be to change the
   third CGOS every month. Each month, the parameters would be announced
   and the CGOS started empty except for the anchor(s). At the end of the
   month, the bot at the top would be the winner. That would allow us to
   experiment with novel settings like 11x11 boards or 20 seconds per game
   that might be interesting for a short while but maybe not for long. It
   can be a way of keeping things fresh and leveling the playing field a
   little.
 
  It also would need a lot more maintenance ...
  IMHO there would not be much to be learned from (e.g.) 11x11.
  I think of CGOS as a testing arena, not a monthly tournament
  to find the best program at some arbitrary setting.
 
  Christoph
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] linux and windows

2008-07-17 Thread George Dahl
I don't have access to Windows machines to test on, and I don't know anything
about Windows.  I can barely use it.  Although when my Go bot is complete, I
would welcome anyone who wants to port it for me. :)
-George

On Thu, Jul 17, 2008 at 12:29 PM, David Fotland [EMAIL PROTECTED]
wrote:

 It irks me a little that Linux people refuse to consider porting their
 programs to Windows :)  With Cygwin, it's pretty easy to port Linux
 programs.  Since these programs work on CGOS they have a GTP interface, so
 they don't even need Cygwin.  Just recompile using gcc and use a free GTP
 Windows GUI.  It's pretty trivial.

 Not trolling for flames, just expressing an opinion.  If someone is not
 willing to put in one day's effort to port from Linux to Windows, why should
 they expect anyone else to put in one day's effort to make Linux available as
 a platform?  It seems Linux people are just as chauvinistic as Windows
 people :)

 David

  -Original Message-
  From: [EMAIL PROTECTED] [mailto:computer-go-
  [EMAIL PROTECTED] On Behalf Of Don Dailey
  Sent: Thursday, July 17, 2008 9:18 AM
  To: computer-go
  Subject: Re: [computer-go] Computer Go tournament at EGC, Leksand,
  Sweden
 
 
 
  Erik van der Werf wrote:
   On Thu, Jul 17, 2008 at 3:53 PM, Nick Wedd [EMAIL PROTECTED]
  wrote:
  
   Steenvreter   no   yes
  
  
   Hi Nick,
  
   I never said yes. At this point it is rather unlikely that
  Steenvreter
   will participate. Steenvreter only runs on linux. Since the machines
   in Leksand run windows and remote computation is not allowed (which
  is
   funny considering the tournament is on KGS) I pretty much have to be
   present myself.
  That always irks me when I hear this kind of thing.   The world is
  basically Windows-chauvinistic and it's common to find little
  consideration given to any other platform.
 
  Did you know that you can create your own linux environment without
  having to touch the machine you will be using?   My wife has her own
  windows machine that she doesn't want me touching,  but I have a
  complete linux install via an external hard drive that leaves her
  machine untouched.  Although the install is specific to that
  machine, it is easy to build universal setups that will boot on any
  modern PC into Linux, without touching the hard drive of that
  machine.  This would require that you bring a memory stick of some
  kind or perhaps an external USB hard drive.You can get big ones
  really cheap now, and they are very compact. You plug it into the
  USB port and then boot into Linux.
 
  In my opinion, the tournament organizers should do this for you and the
  other potential Linux participants since Linux is becoming more and
  more
  popular and apparently it is already very popular with Go
  programmers. There are several possibilities for setting up
  machines
  that could use either Windows or Linux that would not require major
  effort on their part - just one good Linux guy helping them.
 
  I also feel for the Mac people and also people that have built programs
  that run on networks of workstations or other potential supercomputer
  programs that would not be able to participate.
 
  - Don
 
 
 
 
   I did not find cheap flights for a short visit and I
   probably won't have time to attend the EGC for a full week, also
   housing seems to be getting difficult.
  
   So for now better assume that Steenvreter will *not* participate in
  Leksand.
  
   Erik
   ___
   computer-go mailing list
   computer-go@computer-go.org
   http://www.computer-go.org/mailman/listinfo/computer-go/
  
  
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

[computer-go] komi for 9 by 9 will always be 7.5 right?

2008-05-26 Thread George Dahl
I just wanted to confirm that there are no plans for changing the komi
on CGOS to anything but 7.5, ever.  I just started a 7400 CPU-hour
computation to generate training data for my Go bot, and it is
inextricably linked to the komi; I will have to regenerate the training
data (and then retrain my bot) if I want it to play properly with any
other komi (and of course board size).
- George
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] scala

2007-12-27 Thread George Dahl
I like what I have seen of it, but haven't used it too seriously yet.
-George

On Dec 27, 2007 5:43 PM, Don Dailey [EMAIL PROTECTED] wrote:

 Has anyone here taken a serious look at Scala, the programming language?

 It seems (to me) to be a very high level functionally oriented Java.
 Part of the reason I don't like Java is because it's such a low level
 language (might as well program in C),  but this language has a very
 nice high level scripting language feel to it.

 It appears to be about as fast as Java - it produces Java bytecodes, and
 uses Java libraries - but it isn't Java.  It's also a functional
 language (functions are first class.)

 It is statically typed,  but much of this is hidden from view by type
 inference.

 Any opinions on it?  I probably won't do Go with it,  but I've been
 looking for a really fast high level language that has batteries
 included and is cross platform. This seems to fit the bill more than
 anything so far.

 - Don
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

[computer-go] unconditional life and death

2007-12-13 Thread George Dahl
Please excuse me if this question has been answered before; my brief
look through the archives I have did not find it.  How does one
compute unconditional life and death, ideally in an efficient
manner?  In other words, I want to know, for each group of stones on
the board that share a common fate, whether they are alive with certainty
or whether their fate is unknown.  I only want to find results that would
be totally obvious to a very weak human player and provably correct.
What algorithms are used?
- George
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] unconditional life and death

2007-12-13 Thread George Dahl
Thanks!
- George

On 12/13/07, Jason House [EMAIL PROTECTED] wrote:


 On Dec 13, 2007 4:40 PM, George Dahl [EMAIL PROTECTED] wrote:
  Please excuse me if this question has been answered before, my brief
  look through the archives I have did not find it.  How does one
  compute unconditional life and death?  Ideally, in an efficient
  manner.  In other words, I want to know, for each group of stones on
  the board that share a common fate, if they are alive with certainty
  or if there fate is unknown.  I only want to find results that would
  be totally obvious to a very weak human player and provably correct.
  What algorithms are used?

 The standard one is Benson's algorithm
 http://senseis.xmp.net/?BensonsAlgorithm
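
For reference, a minimal, unoptimized Python sketch of Benson's algorithm
following that description (the dict-based board representation and the demo
position are just for illustration):

def neighbors(p, n):
    r, c = p
    return [(r + dr, c + dc) for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
            if 0 <= r + dr < n and 0 <= c + dc < n]

def components(points, n):
    # Split a set of points into 4-connected components (as frozensets).
    points, comps = set(points), []
    while points:
        seed = points.pop()
        comp, frontier = {seed}, [seed]
        while frontier:
            for q in neighbors(frontier.pop(), n):
                if q in points:
                    points.discard(q)
                    comp.add(q)
                    frontier.append(q)
        comps.append(frozenset(comp))
    return comps

def pass_alive_chains(board, n, color='B'):
    # `board` maps every (row, col) on an n x n board to 'B', 'W' or '.'.
    chains = components({p for p, c in board.items() if c == color}, n)
    regions = components({p for p, c in board.items() if c != color}, n)

    def adjacent_chains(region):
        return {ch for ch in chains
                if any(q in ch for p in region for q in neighbors(p, n))}

    def vital(region, chain):
        # Vital: every EMPTY point of the region is a liberty of the chain.
        return all(any(q in chain for q in neighbors(p, n))
                   for p in region if board[p] == '.')

    alive, live_regions = set(chains), set(regions)
    changed = True
    while changed:
        changed = False
        for chain in list(alive):            # drop chains with < 2 vital regions
            n_vital = sum(1 for reg in live_regions
                          if chain in adjacent_chains(reg) and vital(reg, chain))
            if n_vital < 2:
                alive.discard(chain)
                changed = True
        for reg in list(live_regions):       # drop regions touching a dropped chain
            if any(ch not in alive for ch in adjacent_chains(reg)):
                live_regions.discard(reg)
                changed = True
    return alive

# Demo: a 5x5 position where the black group has two one-point eyes.
n = 5
board = {(r, c): '.' for r in range(n) for c in range(n)}
for r in range(3):
    for c in range(n):
        board[(r, c)] = 'B'
board[(1, 1)] = board[(1, 3)] = '.'
print(len(pass_alive_chains(board, n)))      # 1: the black group is pass-alive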

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] GoGui and python

2007-08-24 Thread George Dahl
He does have two consecutive newlines, since print adds one unless the
print statement ends with a comma.
- George

On 8/24/07, Hellwig Geisse [EMAIL PROTECTED] wrote:
 Thomas,

 On Fri, 2007-08-24 at 17:26 -0500, Thomas Nelson wrote:

  command = raw_input()
 print "= myName\n"

 the following is taken directly from the protocol specification:

 -

 2.6 Response Structure

 If successful, the engine returns a response of the form

 =[id] result

 Here '=' indicates success, id is the identity number given in the
 command, and result is a piece of text ending with two consecutive
 newlines.

 -

 Please note the two consecutive newlines.

 As others have already pointed out, you have to flush the
 output if it is buffered.
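
Putting the two points together (terminate every response with two
consecutive newlines, then flush), a minimal GTP loop might look like the
sketch below; the thread's snippet is Python 2, while this sketch is Python 3
and the engine name is just a placeholder:

import sys

def respond(result=""):
    sys.stdout.write("= " + result + "\n\n")   # '=' means success, then a blank line
    sys.stdout.flush()                         # don't let buffering hide the reply

for line in sys.stdin:
    cmd = line.strip().split()
    if not cmd:
        continue
    if cmd[0] == "name":
        respond("MyBot")
    elif cmd[0] == "protocol_version":
        respond("2")
    elif cmd[0] == "quit":
        respond()
        break
    else:
        sys.stdout.write("? unknown command\n\n")
        sys.stdout.flush()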

 Hellwig

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] U. of Alberta bots vs. the Poker pros

2007-07-26 Thread George Dahl
As I understand it, bots can try to estimate and play at the Nash
equilibrium.  In some sense, that is optimal.
Alternatively/additionally the bot can deviate from equilibrium play
based on opponent modelling.

Finding the NE is hard.  I think that is why the rules are restricted,
to make it easier to find the NE.

- George

On 7/26/07, Dave Dyer [EMAIL PROTECTED] wrote:

 The only thing a computer can do is to model the opponent's behavior, which may
 deviate from the best play. What did I miss?

 No, you didn't miss a thing.  I look forward to meeting you
 at a poker table, preferably with high stakes.

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Hint for good Bayes book wanted

2007-07-23 Thread George Dahl

Don't forget that David MacKay's book can be downloaded free from his
site so you can see exactly what you are getting before you buy it.
http://www.inference.phy.cam.ac.uk/mackay/itila/book.html
- George

On 7/23/07, chrilly [EMAIL PROTECTED] wrote:

Thanks, I also did a search on Amazon, and these two looked like the most
interesting ones. I can now order with greater confidence.

Chrilly

 You could try something like:

 Information Theory, Inference  Learning Algorithms
 by David MacKay

 or maybe

 Data Analysis: A Bayesian Tutorial
 by Devinderjit Sivia  John Skilling

 Erik
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Hint for good Bayes book wanted

2007-07-23 Thread George Dahl

I own that book and can also recommend it.
- George

On 7/23/07, Łukasz Lew [EMAIL PROTECTED] wrote:


Absolutely the best book I've seen is:

Christopher M. Bishop
Pattern Recognition and Machine Learning

It's totally awesome!

Strong points:
- It has both Bayesian and non-Bayesian approaches explained
- the explanation is clear
- figures are so helpful (and aesthetic)
- it concentrates on prediction and classification and has an
algorithmic perspective
   (contrary to MacKay's book)

There is a free chapter on graphical models:
http://research.microsoft.com/~cmbishop/PRML/Bishop-PRML-sample.pdf

Lukasz Lew

On 7/23/07, chrilly [EMAIL PROTECTED] wrote:
 I have a PhD in statistics, but Bayesian methods were a non-topic at
 that time. I know the general principles, but I want to learn a little
 bit more about the latest developments in the field. Bayes is now chic;
 there are many books about it, and I assume also a lot of bad ones.
 Can anyone recommend a good state-of-the-art book about Bayesian
 inference? It should be somewhat applied, but also with a sound
 mathematical background.

 Chrilly

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Neural Networks

2007-07-20 Thread George Dahl

FANN (http://leenissen.dk/fann/) is a great neural network library
written in C.  I don't know much about books on *programming* neural
networks specifically, but I know of many great books on neural
networks.  I am a big fan of Bishop's Neural Networks for Pattern
Recognition even if you aren't necessarily going to use them just for
pattern recognition.
- George

On 7/20/07, Joshua Shriver [EMAIL PROTECTED] wrote:

Anyone recommend a good book on programming Neural Networks in C or C++?

Been digging around the net for a while and haven't come up with
anything other than an encyclopedia-like definition/writeup. No
examples or tutorials.

Thanks!
-Josh
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] creating a random position

2007-07-09 Thread George Dahl

On 7/9/07, Erik van der Werf [EMAIL PROTECTED] wrote:


On 7/9/07, George Dahl [EMAIL PROTECTED] wrote:
 I think this is what I want.  Thanks!  So I might have to repeat this
 a few hundred times to actually get a legal position?

Are you aware that nearly all of these positions will be final positions?

So I'll repeat my question: why do you need any of this? If you only
need final positions it's probably much better to take them from real
games, and if you actually need middle game positions you will have to
use a different procedure...

E.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/




Won't the final positions be much more likely to be rejected since they are
much more likely to be illegal?  What is your claim about the distribution
of the number of stones on the board with this scheme?

I am hoping to use this method to help generate training data for a
learning system that learns certain graph properties of the board that can
also be computed deterministically from the board position.  I know that
might sound crazy, but it is working towards the eventual goal of creating
feature extractors for Go positions.  By learning to map Go positions as an
array of stones to Go positions as graphs of strings (instead of just
mapping them with a hand-coded algorithm) I can take intermediate results in
the learner's computation and use them as features for another learner.
- George
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

[computer-go] creating a random position

2007-07-08 Thread George Dahl

How would one go about creating a random board position with a uniform
distribution over all legal positions?  Is this even possible?  I am
not quite sure what I mean by uniform.  If one flipped a three-sided
coin to determine whether each vertex was white, black, or empty, then one
would have to deal with stones with no liberties somehow.  Could those
just be removed?

- George
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] creating a random position

2007-07-08 Thread George Dahl

On 7/8/07, Paul Pogonyshev [EMAIL PROTECTED] wrote:

George Dahl wrote:
 How would one go about creating a random board position with a uniform
 distribution over all legal positions?  Is this even possible?  I am
 not quite sure what I mean by uniform.  If one flipped a three-sided
 coin to determine whether each vertex was white, black, or empty, then one
 would have to deal with stones with no liberties somehow.  Could those
 just be removed?

As I remember from probability theory, you can create such a uniformly
random position this way [1]:

  1. create a truly random position, i.e. traverse all intersections and
     assign a black/white/empty state at random to each;

  2. if the result happens not to be legal, discard it and repeat step 1.

I believe it should be very fast, and this shouldn't be difficult to check:
the discard rate should be low enough that the running time is the time of
step 1 times C, where C is small.

However, this will tend to give you very artificial-looking positions.
Whether that is fine for your use case, you know better.

  [1] http://en.wikipedia.org/wiki/Rejection_sampling

Paul



I think this is what I want.  Thanks!  So I might have to repeat this
a few hundred times to actually get a legal position?
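
For reference, a minimal Python sketch of the rejection-sampling scheme
described above: fill every point uniformly at random, then discard the
position unless every string has at least one liberty.  The average number
of repeats needed is easy to measure empirically for a given board size.

import random

EMPTY, BLACK, WHITE = '.', 'B', 'W'

def random_fill(size):
    """Assign black/white/empty to every point independently and uniformly."""
    return [[random.choice((EMPTY, BLACK, WHITE)) for _ in range(size)]
            for _ in range(size)]

def string_has_liberty(board, r, c, seen):
    """Flood-fill the string containing (r, c); True if it has a liberty."""
    size, color = len(board), board[r][c]
    stack, found = [(r, c)], False
    seen.add((r, c))
    while stack:
        pr, pc = stack.pop()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = pr + dr, pc + dc
            if not (0 <= nr < size and 0 <= nc < size):
                continue
            if board[nr][nc] == EMPTY:
                found = True
            elif board[nr][nc] == color and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return found

def is_legal(board):
    """A position is legal (ignoring history) if no string has zero liberties."""
    seen = set()
    for r in range(len(board)):
        for c in range(len(board)):
            if board[r][c] != EMPTY and (r, c) not in seen:
                if not string_has_liberty(board, r, c, seen):
                    return False
    return True

def random_legal_position(size=9):
    """Rejection sampling: refill the whole board until the position is legal."""
    attempts = 0
    while True:
        attempts += 1
        board = random_fill(size)
        if is_legal(board):
            return board, attempts

# Example: board, tries = random_legal_position(9)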
- George
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Explanation to MoGo paper wanted.

2007-07-04 Thread George Dahl

Pro games are cheating unless the program is one of the players. :)

You are right though, sometimes compromises must be made when
seeding an algorithm.  My ideas on using domain knowledge from
humans come down to maximizing a ratio: the ratio of program
performance to the domain knowledge added directly by humans.
Obviously these things are hard to quantify, but if program
A is three times as good (whatever that means) as program B and uses only
twice the human-supplied Go knowledge, I would rather have program A.
- George

On 7/4/07, Benjamin Teuber [EMAIL PROTECTED] wrote:

And how much would generating patterns from pro games be cheating?  How
about a system that gives a reward to shapes it actually played in a
game, with the pro games then used as a seed to start the system?
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Java hounds salivate over this:

2007-06-17 Thread George Dahl

Posting that code would be really helpful!  I too was thinking about
modifying libego's move-choosing algorithms, but I haven't gotten
anywhere yet since I have been working on a proof-of-concept
experiment for what I plan to do later.

- George

On 6/17/07, Darren Cook [EMAIL PROTECTED] wrote:

 libego is a very optimised library, and indeed very hard
 to change.  If it fits your needs, go for it.  It's
 simply the best you can do.

 BUT, if you want to try different MC Go approaches with
 libego, I'm sure it will be far harder to change
 than using slowish Java.

I've been refactoring the libego playouts to allow me to easily plug in
different move choosing algorithms, and choose between them at run-time.
I was willing to accept a slight slowdown, but ironically got a 5%
speed-up (on random playouts).

I want to work on the interface a bit, but then I'll post my code.

Darren

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] open source Go AI's written in pure python

2007-05-24 Thread George Dahl

Does anyone know of any open source Go AI's written in pure python?

Thanks,
George
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] producing a good probability distribution over legal moves

2007-05-19 Thread George Dahl

Why does this pose a problem?  Presumably the Monte Carlo evaluator
will give the same position a similar score, given enough
time.  This would just cause a duplicate training pattern, or two
training patterns with identical input and slightly different output.
I guess I don't quite understand what the issue would be.

- George

On 5/17/07, Daniel Burgos [EMAIL PROTECTED] wrote:

But it is very unlikely that a board position will be repeated between games.  I
don't see how you will use the training pairs in new games.

2007/5/17, George Dahl  [EMAIL PROTECTED]:

 What I am actually proposing is collapsing the results of the playouts
 offline and then having a function that maps board positions to
 playout values without actually doing playouts.  So I would use an MC
 player to generate a lot of training pairs of the form (position,
 score) where position would be some sort of vector representation of
 the board position and score would be a single scalar value that
 corresponded to the value the Monte Carlo program decided after many
 simulations that the position had.  So I would create something that
 learned a value function for positions.  Then, once training was
 complete, this function could be evaluated very quickly and hopefully
 give the same results that, say, 100k playouts would give.  But since
 it was so fast, the program could use the extra time to actually do
 more playouts.  The difference would be that the new playouts would be
 biased by the value function.  So the probability of picking a move
 would be proportional to the value function evaluated on the resulting
 position.  This way I would be bootstrapping a general global position
 evaluator and using it to improve monte carlo simulations.

 Imagine if you had a monte carlo program that took almost no time to
 run.  You would use it to do heavy playouts for another monte carlo
 program to make it even stronger.

 The reason this might be easier than learning from a database of
 professional games is that it is easy to generate scalar scores of
 positions with a monte carlo program.  Presumably it is also easier to
 learn how to play like a mediocre monte carlo program than like a
 professional player.

 - George
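
For concreteness, a minimal Python sketch of the value-biased move selection
described above.  The legal_moves, play, and learned_value helpers are
placeholders for whatever the host program provides, not anything from an
existing engine:

import random

def biased_playout_move(position, color, legal_moves, play, learned_value):
    """Pick the next playout move with probability proportional to the
    learned value (from the mover's point of view) of the resulting position.
    legal_moves, play, and learned_value are caller-supplied placeholders."""
    moves = legal_moves(position, color)
    if not moves:
        return None  # pass
    # Evaluate the position after each candidate move; floor at a small
    # positive value so every legal move keeps nonzero probability.
    weights = [max(learned_value(play(position, m, color)), 1e-6)
               for m in moves]
    return random.choices(moves, weights=weights, k=1)[0]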



 On 5/17/07, Zach Keatts [EMAIL PROTECTED] wrote:
  What you would have after your training/evaluator phase is heuristic
  knowledge of possibly better Monte Carlo trees to consider.  This will
  definitely cut down on the search space, but could also eliminate a strong
  search path.  I have been thinking along these same lines for some time.
  The problem then lies in deciding which trees would be worth looking at
  initially.  What about a database of professional games?  Take the
  winning games as examples of strong searches that ended in a win.  The
  problem is even more complex because where in the winning tree do you tell
  Monte Carlo to start searching?  Will you assign a higher probability to
  each move in those games (defining a known, probabilistically stronger
  predicted result)?

  That is one approach.  The other is the purely simulated approach,
  where you run simulations and gradually allow your probability function
  to evolve based on the results.  Although this is a purer approach, I think
  the aforementioned strategy may yield some useful experimentation.  Its
  strong point is that it takes advantage of the human brain's innate pattern
  recognition and calculation skills.  Since we have recorded games, we have
  plenty of examples of this thought process.  For a 9-dan winning game,
  those trees surely are worth investigating...
 
 
  On 5/16/07, George Dahl [EMAIL PROTECTED] wrote:
  
   I find Monte-Carlo Go a fascinating avenue of research, but what pains
   me is that a huge number of simulations are performed each game and at
   the end of the game the results are thrown out.  So what I was
   thinking is that perhaps the knowledge generated by the simulations
   could be collapsed in some way.
  
   Suppose that epsilon greedy versions of a reasonably strong MC Go
   program were to play a very large number of games against themselves.
   By epsilon greedy versions I mean that with probability epsilon a
   random move is played and with probability 1- epsilon the move the MC
   Player would normally play is played.  Each position in these games
    would be stored along with the Monte Carlo/UCT evaluation for that
   position's desirability.  This would produce an arbitrarily large
   database of position/score pairs.  At this point a general function
   approximator / learning algorithm (such as a neural network) could be
   trained to map positions to scores.  If this was successful, it would
   produce something that could very quickly (even a large neural net
   evaluation or what have you would be much faster than doing a large
   number of MC playouts) map positions to scores.  Obviously the scores
   would not be perfect since the monte carlo program did

[computer-go] producing a good probability distribution over legal moves

2007-05-16 Thread George Dahl

I find Monte-Carlo Go a fascinating avenue of research, but what pains
me is that a huge number of simulations are performed each game and at
the end of the game the results are thrown out.  So what I was
thinking is that perhaps the knowledge generated by the simulations
could be collapsed in some way.

Suppose that epsilon greedy versions of a reasonably strong MC Go
program were to play a very large number of games against themselves.
By epsilon greedy versions I mean that with probability epsilon a
random move is played and with probability 1- epsilon the move the MC
Player would normally play is played.  Each position in these games
would be stored along with the Monte Carlo/UCT evaluation for that
position's desirability.  This would produce an arbitrarily large
database of position/score pairs.  At this point a general function
approximator / learning algorithm (such as a neural network) could be
trained to map positions to scores.  If this was successful, it would
produce something that could very quickly (even a large neural net
evaluation or what have you would be much faster than doing a large
number of MC playouts) map positions to scores.  Obviously the scores
would not be perfect since the monte carlo program did not play
anywhere near perfect Go.  But this static evaluator could then be
plugged back into the monte carlo player and used to bias the random
playouts.  Wouldn't it be useful to be able to quickly estimate the MC
score without doing any playouts?
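
To make the data-generation step concrete, here is a rough Python sketch of
the epsilon-greedy self-play loop.  mc_player, new_game, legal_moves, play,
encode, and game_over are placeholders standing in for an existing Monte
Carlo engine and board code, not real APIs:

import random

def generate_training_pairs(mc_player, new_game, legal_moves, play, encode,
                            game_over, epsilon=0.1, num_games=1000):
    """Play epsilon-greedy games with an existing MC player and record
    (encoded position, MC evaluation) pairs for supervised training.
    All helpers are caller-supplied placeholders for this sketch."""
    pairs = []
    for _ in range(num_games):
        position, color = new_game(), 'B'
        while not game_over(position):
            # Record the MC evaluation of the current position.
            pairs.append((encode(position), mc_player.evaluate(position)))
            moves = legal_moves(position, color)
            if not moves:
                break  # treat "no legal moves" as the end of the game
            # Epsilon-greedy move choice: mostly the MC player's move,
            # occasionally a uniformly random one.
            if random.random() < epsilon:
                move = random.choice(moves)
            else:
                move = mc_player.best_move(position, color)
            position = play(position, move, color)
            color = 'W' if color == 'B' else 'B'
    return pairs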

Clearly this idea could be extended recursively with a lot of offline
training.  What makes this formulation more valuable is that given
enough time and effort someone familiar with machine learning should
be able to produce a learning architecture that can actually learn the
MC scores.  It would be a straightforward, if potentially quite
difficult, supervised learning task with effectively unlimited data
since more could be generated at will.  Such a learning architecture
could be used in the manner I described above or thrown at the more
general reinforcement learning problem.


Does anyone have any thoughts on this idea?  Does anyone know of it
being tried before?

- George
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] strength of libego

2007-05-14 Thread George Dahl

Does anyone know offhand roughly how strong libego is out of the box
on 9 by 9?  Best guess at an approximate rank?

- George
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Fast Board implementation

2007-01-14 Thread George Dahl

What should the mercy threshold be for other board sizes than 9 by 9,
particularly 19 by 19?
- George Dahl



 Here are a few speedup tricks that have helped me.

 1. The mercy rule. Since I'm incrementally keeping track of a list of empty
 points, it's no real extra pain to keep track of the number of black and
 white stones on the board. If the difference between them exceeds a
 threshold, the game is over. Ending early has an added bonus that I know the
 outcome without needing to score the board. (You can shoot yourself in the
 foot here. Best to pick a more conservative threshold the closer you are to
 interior nodes of the tree.) For exterior nodes far from any interior nodes,
 I use a threshold of 25 stones.
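
A minimal Python sketch of that mercy-rule check.  The 25-stone figure is the
9x9 threshold quoted above; scaling it with board area for other sizes
(roughly 110 on 19x19) is only an assumption on my part, not something
established in this thread:

def mercy_winner(black_stones, white_stones, board_size=9,
                 base_threshold=25, base_size=9):
    """Return 'B' or 'W' if one side leads by more than the threshold,
    otherwise None (the playout continues).  Scaling the threshold with
    board area is an assumption, not a tested value."""
    threshold = base_threshold * (board_size ** 2) / (base_size ** 2)
    diff = black_stones - white_stones
    if diff > threshold:
        return 'B'
    if diff < -threshold:
        return 'W'
    return None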

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/