I'm curious to find out what is meant by "lazy". If, as I am led to
believe by your report, Monte Carlo strategies applied to Double Step
Races are "lazy", yet they converge to perfect play, then I'm not sure
why we are meant to worry. I certainly understand that the strategies
can converge faster
>It is very interesting to me that you use the clump correction rule. I
>could never get that to work in Fuego, either.
It is my impression (with no analysis whatsoever, so draw your own
conclusions) that Fuego's use of an evaluation function helps it to
overcome problems in the playouts.
Pebbles
Following Mogo, Pebbles uses the "3-in-a-row" modification
that automatically plays in the center of 3-point eyeshapes.
Fuego's rules triple the chance of making a correct play when
a 3-point eyespace exists, but do not guarantee that any play
will be made. The rules do guarantee that the best
On Wed, Aug 19, 2009 at 2:11 PM, Brian Sheppard wrote:
> My conclusion is the same as Gian-Carlo Pascutto's: I am convinced
> that the phenomenon of laziness is real, and that it hurts
> practical strength.
>
Unfortunately this is not the point that is in question - I think we all
agree on this.
Following Mogo, Pebbles uses the "3-in-a-row" modification
that automatically plays in the center of 3-point eyeshapes.
Mogo's rule guarantees that an opponent will not be able to
convert a 3-point eyespace into two eyes. The downside of
Mogo's rule is that it wastes a *lot* of moves when it
gener
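For concreteness, here is a minimal sketch of the difference between the two rules as I read the thread -- the move representation and names are invented for illustration, this is neither Mogo's nor Fuego's actual code. The Mogo-style rule forces the center play; the Fuego-style rule only triples its selection weight.

```python
import random

def choose_playout_move(moves, eye_centers, style="mogo", rng=random):
    """Pick a playout move from `moves` (a hypothetical candidate list).

    `eye_centers` holds the moves that sit at the center of a 3-point
    eyespace.  style="mogo": always play the center, which guarantees
    the eyespace is resolved but can waste moves.  style="fuego":
    triple the center's selection weight, so the correct play is 3x
    as likely as any other single move but never guaranteed.
    """
    centers = [m for m in moves if m in eye_centers]
    if style == "mogo" and centers:
        return centers[0]                       # deterministic center play
    weights = [3 if m in eye_centers else 1 for m in moves]
    return rng.choices(moves, weights=weights, k=1)[0]
```

With three candidates and one eye center, the Fuego-style rule picks the center about 3/5 of the time, while the Mogo-style rule picks it every time.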
Fuego has no trouble with the mercy rule here - I guess our threshold
is high enough. However, it has no clue about how to play out the
nakade shape. So it starts out with 57% wins for White, and it needs
maybe 30K simulations until the search pushes it below 50%. Then the
score keeps dropp
Speaking of laziness, I have been intending to post a study
concerning capturing races, but I haven't gotten around to it.
So is it surprising that MC is lazy, given that MC programmers
are lazy? :-)
Ingo's Double Step Race is a simplified model of a capturing race.
My model was more complex, and I
Don wrote:
> But how do you create the required tension in a way that
> produces a program that plays the game better?
At least in high-handicap go on 19x19 (with the "dynamic bot" being
the stronger player), it seems to work when the bot is kept
in a 35-45% corridor, as long as it is clearly
>
> PS: Once again I would like to mention my report on "Laziness of Monte
> Carlo", at http://www.althofer.de/mc-laziness.pdf
> In the meantime, a student has found the same phenomenon in UCT search
> (instead of basic MC). Also in discrete online optimization (so outside
> of combinatorial games).
Jeff Nowakowski wrote:
>On Wed, Aug 19, 2009 at 07:27:00AM -0700, terry mcintyre wrote:
>>Consider the game when computer is black, with 7 stones against a very
>>strong human opponent.
>> ...
>
> Didn't this game actually happen? Didn't MoGo *beat* a pro
> with 7 stones?
It was long ago
zen wins many more of its "even" games with no handicap than it does
with, say, a 2-stone handicap as either black or white. i
haven't compiled numbers for it (i'm not zen's maintainer), but i
watched it happen over the course of about 50 games one day. it was
pretty consistently worse
Consider the game when computer is black, with 7 stones against a very strong
human opponent.
Computer thinks every move is a winning move; it plays randomly; a half-point
win is as good as a 70-point win.
Pro gains ground as computer makes "slack moves", taking slightly less than its
full due
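The step function Terry is describing fits in two lines; the komi value and the score convention here are only illustrative. A half-point win and a 70-point win score identically, which is why every move can look "winning" from seven stones up:

```python
def playout_result(score_black_minus_white, komi=7.5):
    """Win/loss step function used as the MC scoring function: a
    playout is worth 1 if Black wins after komi, else 0.  Margin of
    victory is deliberately ignored."""
    return 1 if score_black_minus_white - komi > 0 else 0
```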
Don, what you write is certainly true for even games, but I think the
problem is a real one in high handicap games with the computer as
white. I use a hack to make Valkyria continue playing the opening in
handicap games as white. It is forbidden to resign in the opening and
early middle game
One must decide if the goal is to improve the program or to improve its
playing behavior when it is in dead-won or dead-lost positions.
It's my belief that you probably cannot improve the playing strength
solely with komi manipulation, but at a slight decrease in playing strength
you can pro
Forthcoming human-vs-computer games in go:
http://www.althofer.de/ieee-go-0.jpg
http://www.althofer.de/ieee-go-1.jpg
http://www.althofer.de/ieee-go-2.jpg
http://www.althofer.de/ieee-go-3.jpg
http://oase.nutn.edu.tw/FUZZ_IEEE_2009/result.htm
Ingo.
One last rumination on dynamic komi:
The main objection against introducing dynamic komi is that it ignores the
true goal of winning by half a point. The power of the win/loss step function
as a scoring function underscores the validity of this critique. And yet, the
current behaviour of mc bots,