Re: [computer-go] Re: Former Deep Blue Research working on Go

2007-10-13 Thread Harri Salakoski



Considering how monte carlo actually works, I think it's plausible
to argue that it works best where the distance to endgame is small.

Is it then natural to use it only after the middle game?
Build a scripted fuseki/joseki/extension engine for the opening and switch to a
Monte Carlo engine in the middle game?
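
A minimal sketch of what that switch might look like (the Engine classes, the
Board stub, and the 50-move threshold are all illustrative assumptions, not a
tested design):

# Hedged sketch: dispatch between a scripted opening engine and a
# Monte Carlo engine based on the move number. Everything here is a
# placeholder showing the dispatch structure, not a real engine.

import random

class Board:
    """Minimal stand-in; a real board tracks stones, captures, and ko."""
    def legal_moves(self):
        return [(x, y) for x in range(19) for y in range(19)]

class ScriptedOpeningEngine:
    def genmove(self, board):
        # A real version would match fuseki/joseki patterns here.
        return random.choice(board.legal_moves())

class MonteCarloEngine:
    def genmove(self, board):
        # A real version would run playouts / tree search here.
        return random.choice(board.legal_moves())

class PhaseSwitchingEngine:
    """Scripted engine in the opening, Monte Carlo afterwards."""
    def __init__(self, switch_at=50):  # threshold is a guess to tune
        self.opening = ScriptedOpeningEngine()
        self.midgame = MonteCarloEngine()
        self.switch_at = switch_at

    def genmove(self, board, move_number):
        if move_number < self.switch_at:
            return self.opening.genmove(board)
        return self.midgame.genmove(board)

engine = PhaseSwitchingEngine()
print(engine.genmove(Board(), move_number=10))   # scripted phase
print(engine.genmove(Board(), move_number=120))  # Monte Carlo phase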


t. harri 




Re: [computer-go] Re: Former Deep Blue Research working on Go

2007-10-13 Thread Chris Fant
Not only is it interesting to know which engine is strongest overall, but
also which is the strongest opener, the strongest middle-gamer, and the
strongest finisher.  There seems to be a general consensus that UCT makes
for a strong finisher.


On 10/13/07, Harri Salakoski [EMAIL PROTECTED] wrote:

  Considering how monte carlo actually works, I think it's plausible
  to argue that it works best where the distance to endgame is small.
 Is it then natural to use it only after the middle game?
 Build a scripted fuseki/joseki/extension engine for the opening and switch to a
 Monte Carlo engine in the middle game?

 t. harri




Re: [computer-go] Re: Former Deep Blue Research working on Go

2007-10-12 Thread terry mcintyre
From: Dave Dyer [EMAIL PROTECTED]

Considering how monte carlo actually works, I think it's plausible
 to argue that it works best where the distance to endgame is small.

 For a 19x19 board, the playing speed may be only a factor of 4 worse,
 but the effective learning speed for an opening position might be
 exponentially worse.  In other words, doing 4x as many playouts won't
 get you to the same quality of play.  I'm not aware of any data about
 what the scaling exponent is, but I'll wager 1 is not the answer.



Humans tend to read out various local situations - this group dies, this one lives, this can be killed, this can be defended. For endgame moves, there is a method of analysis - if white plays first, what is the local effect on the score? If black, what then? Who gets sente? That information is cached, and periodically checked - did such-and-such a play alter the status? These strategies greatly winnow the search tree. (I'd be tempted to dynamically add and alter callback patterns which would trigger appropriately.)
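
A rough sketch of that bookkeeping (the region model, the example numbers, and
the sente test are placeholder simplifications of real endgame counting):

# Hedged sketch of the endgame accounting described above: for each
# local region, record the result if Black plays first and if White
# plays first, cache the swing, and invalidate the cache when a nearby
# play may have altered the status. Placeholder model, not real Go.

from dataclasses import dataclass

@dataclass(frozen=True)
class LocalRegion:
    name: str
    black_first: float  # local score (Black's view) if Black plays first
    white_first: float  # local score (Black's view) if White plays first

    def swing(self):
        """Size of the local exchange: gap between the two outcomes."""
        return self.black_first - self.white_first

status_cache = {}

def analyse(region):
    """Cache the local analysis instead of recomputing it every move."""
    if region.name not in status_cache:
        status_cache[region.name] = region.swing()
    return status_cache[region.name]

def on_nearby_play(region):
    """Callback-style invalidation: a nearby play may have changed the
    status, so drop the cached value and re-analyse on the next query."""
    status_cache.pop(region.name, None)

def is_sente(region, biggest_elsewhere):
    """Crude sente test: ignoring this region costs more than the
    biggest play available elsewhere, so the opponent must answer."""
    return analyse(region) > biggest_elsewhere

# Example: a 6-point corner exchange vs. a 4-point move elsewhere.
corner = LocalRegion("upper-right hane", black_first=3.0, white_first=-3.0)
print(analyse(corner), is_sente(corner, biggest_elsewhere=4.0))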

When it comes to opening moves, it might be that programs need to use opening books, joseki patterns, and rules of thumb to narrow the search process. The evaluator, as some have suggested, should differ in the opening; instead of playing out to the bitter end, a rough map of expectations should suffice. Designing such a mapping function would be an interesting machine learning exercise; self-play could tune the results.
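
One plausible shape for such a mapping function is a classical influence map,
loosely in the spirit of Zobrist/Bouzy-style dilation; a minimal sketch (the
iteration count and scoring are arbitrary starting points that self-play could
tune):

# Hedged sketch of a "rough map of expectations": spread influence
# outward from stones by repeated dilation instead of playing each
# game to the bitter end. Encoding and constants are assumptions.

SIZE = 19

def neighbors(x, y):
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < SIZE and 0 <= ny < SIZE:
            yield nx, ny

def influence(black_stones, white_stones, iterations=4):
    grid = [[0] * SIZE for _ in range(SIZE)]
    for x, y in black_stones:
        grid[y][x] = 64   # positive = Black influence
    for x, y in white_stones:
        grid[y][x] = -64  # negative = White influence
    for _ in range(iterations):
        new = [row[:] for row in grid]
        for y in range(SIZE):
            for x in range(SIZE):
                for nx, ny in neighbors(x, y):
                    # leak a quarter of each cell's value to neighbors;
                    # truncate toward zero so both colors decay alike
                    new[ny][nx] += int(grid[y][x] / 4)
        grid = new
    return grid

def rough_score(grid):
    """Coarse expectation: who claims more of the board, not an
    exact territory count."""
    return sum((v > 0) - (v < 0) for row in grid for v in row)

# Symmetric four-stone opening -> roughly balanced map.
print(rough_score(influence([(3, 3), (15, 3)], [(3, 15), (15, 15)])))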

A few days ago, I was playing a teaching game with a 5 dan player. At several points, he used a form of local null-move analysis, though he didn't call it that. If black plays X, and white ignores that play, black follows up with Y - with devastating results. Therefore, white must reply to X, unless white has an even bigger threat. Having sente, and a position slightly altered in his favor, black then plays Z, which kicks white in the head. But Z before X does not work so well ... move order often makes the difference between a very strong and a weak play.
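
That local null-move test can be stated as a procedure; a hedged sketch,
assuming a position object with play/pass methods and a local evaluator (all
placeholder interfaces, not a real implementation):

# Hedged sketch of the null-move idea above: if Black plays X and White
# tenukis (passes locally), does Black's follow-up Y swing the local
# result by more than the biggest move elsewhere? If so, White must
# answer X. Position methods, evaluate, and find_followup are assumed
# stubs supplied by the surrounding engine.

def must_answer(position, x_move, evaluate, find_followup, biggest_elsewhere):
    after_x = position.play(x_move)

    # Branch 1: White answers locally with the assumed best reply.
    answered = after_x.play(after_x.best_local_reply())

    # Branch 2: White ignores X (local null move); Black gets Y for free.
    ignored = after_x.pass_move().play(find_followup(after_x))

    # X is effectively sente if ignoring it costs more than the biggest
    # move available elsewhere on the board.
    loss_if_ignored = evaluate(ignored) - evaluate(answered)
    return loss_if_ignored > biggest_elsewhere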

A poster recently mentioned 19x19 games and handicap stones. This would help to quickly separate wheat from chaff. If program A could defeat all contending programs more than half the time with a two or three stone handicap, we'd take that as clear evidence of superiority. This could spur the development of much stronger programs. We know that top human players can give the strongest current 19x19 programs a 9 stone handicap and win better than half the time. Future programs, evolved to give current contenders a large handicap and win, will be a lot closer to beating top human players.
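
"More than half the time" deserves a significance check before we call it
clear evidence; a small sketch of an exact binomial test (the 62-of-100
numbers are arbitrary illustrations, not results from any real programs):

# Hedged sketch: one-sided exact binomial test for "A beats B more
# than half the time at a fixed handicap".

from math import comb

def p_value_at_least(wins, games, p=0.5):
    """P(X >= wins) when X ~ Binomial(games, p)."""
    return sum(comb(games, k) * p**k * (1 - p)**(games - k)
               for k in range(wins, games + 1))

# Example: 62 wins in 100 games at a two-stone handicap.
print(p_value_at_least(62, 100))   # ~0.01, comfortably below 0.05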





   


[computer-go] Re: Former Deep Blue Research working on Go

2007-10-11 Thread Dave Dyer

Considering how monte carlo actually works, I think it's plausible
to argue that it works best where the distance to endgame is small.

For a 19x19 board, the playing speed may be only a factor of 4 worse,
but the effective learning speed for an opening position might be
exponentially worse.  In other words, doing 4x as many playouts won't
get you to the same quality of play.  I'm not aware of any data about
what the scaling exponent is, but I'll wager 1 is not the answer.
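
One way to make that wager testable: play a fixed reference opponent at
doubling playout counts and fit the trend in log-odds; a sketch with
placeholder (not measured) numbers:

# Hedged sketch: estimate how strength scales with playouts by fitting
# win rate against log2(playouts) in log-odds space. The data points
# are placeholders for illustration, NOT measurements.

from math import log, log2

def fit_line(xs, ys):
    """Closed-form least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# (playouts, win rate vs. a fixed opponent) -- placeholder numbers.
data = [(1000, 0.50), (2000, 0.57), (4000, 0.63), (8000, 0.68)]
xs = [log2(p) for p, _ in data]
ys = [log(w / (1 - w)) for _, w in data]   # log-odds of winning

slope, _ = fit_line(xs, ys)
print(f"log-odds gained per playout doubling: {slope:.3f}")
# Running this on 9x9 and on 19x19 data and comparing the slopes would
# put a number on the claim that 4x playouts cannot close the gap.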
