Hi, Don
I can find arguments to disagree. I think what makes humans homo
sapiens is reasoning, not the ability to compute numerical simulations.
As a human (I am one), I feel disappointed when the explanation I get
for the best move is that after X million simulated matches it proved
to be the
On Wed, 2007-07-11 at 11:47 +0100, Jacques Basaldúa wrote:
What you
call a dirty hack, patterns deeply implemented in their brains.
On 7/11/07, Don Dailey [EMAIL PROTECTED] wrote:
The dirty hack I'm referring to is the robotic way this is implemented
in programs, not how it's done in humans. With a pattern based program
you essentially specify everything and the program is not a participant
in the process. It comes down
Perhaps some day a mad Dr. Frankenstein will implement massively parallel
supercomputing using an array of brains in petri dishes. But it will
still be the meat that is intelligent. It's the only substance
capable of that.
I read an article several months back where a researcher used mice
On Wed, 2007-07-11 at 09:06 -0500, Richard Brown wrote:
I'm compelled to point out that neural nets, _trained_ on patterns,
which patterns themselves are then discarded, have the ability to
recognize novel patterns, ones which have never been previously seen,
let alone stored. The list of
Hi David,
(...) I cannot imagine that progress will be
made without a great deal of domain knowledge.
Depending on what exactly you mean, I disagree.
I mean progress by the standard usually applied to computer Go:
programs that can beat 1D humans on a full board, and then get
better.
For me
Nonetheless, a program that could not only play a decent game of go, but
somehow emulate the _style_ of a given professional would be of interest,
would it not?
Is this the case in chess? If so, I've never heard of it.
___
computer-go mailing list
On 7/10/07, Chris Fant [EMAIL PROTECTED] wrote:
I don't think that it
Don wrote:
Of course now we just had to go and spoil it all by imposing domain
specific rules. I have done the same and I admit it. It would be fun
to see how far we could go if domain specific knowledge was forbidden
as an experiment. Once patterns are introduced along with other direct
Benjamin wrote:
I have built just for fun a simple BackGammon engine. [...]
Interesting - did you also try it for chess, or do you think there's
no point in this?
This is a bit of speculation since I don't know enough about chess,
but I suspect that uniform random simulation in go is about as
Quoting Yamato [EMAIL PROTECTED]:
In other words UCT works well when evaluation/playouts is/are strong.
I believe there are still improvements possible to the UCT algorithm
as shown by the recent papers by Mogo and Crazystone authors, but what
really will make a difference is in the quality in
Hi, Magnus
Magnus Persson wrote:
Weak tactics is a problem of the playouts in my opinion.
UCT as a general search method has thus little to do with
ladders and other game specific details. If there are no
tactical mistakes in the playouts the problems disappear.
Also tactics has a large
Don Dailey wrote:
I have posted before about the evils of trying to extract
knowledge from human games. I don't think it is very effective
compared to generating that knowledge from computer games for
several reasons.
I would agree if we could have quality games played by computers.
In
But you can improve the prior probabilities of your search function by
remembering shapes (hopefully more abstract ones in the future,
including more knowledge about the neighbourhood) that seemed like good
moves before, so I don't share your opinion.
Whether or not this knowledge should also be
On Wed, 2007-07-04 at 11:34 +0200, Magnus Persson wrote:
but what really will make a
difference is in the quality in the playouts.
I would like to suggest a more abstract view of things. In the purest
form of the algorithm there isn't an artificial distinction between the
tree and the
On Wed, 2007-07-04 at 16:57 -0400, George Dahl wrote:
And how much would generating patterns from pro games be cheating? How
about a system that gives a reward to shapes it actually played in a
game, the pro games are then used as a seed to start the system...
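A toy sketch of the self-reinforcing scheme suggested above: pro games give the shape table a one-time seed, and afterwards only shapes the engine actually plays in its own games get rewarded. The table layout, shape names, and learning rate here are my own assumptions for illustration, not anything from the thread.

```python
from collections import defaultdict

# Shape weights start uniform; pro games seed them once; self-play
# results reinforce (or penalize) the shapes actually used.
weights = defaultdict(lambda: 1.0)

def seed_from_pro_games(shapes):
    """One-time seed: bump every shape observed in professional games."""
    for shape in shapes:
        weights[shape] += 1.0

def reward_played(shapes_played, won, lr=0.1):
    """After a self-play game, reinforce shapes used in a win,
    penalize shapes used in a loss."""
    for shape in shapes_played:
        weights[shape] *= (1.0 + lr) if won else (1.0 - lr)

seed_from_pro_games(["hane", "keima"])     # hane and keima now 2.0
reward_played(["hane", "tobi"], won=True)  # hane -> 2.2, tobi -> 1.1
```

After the seed stops mattering, the weights are driven entirely by the program's own games, which is the sense in which the pro games are only a starting point.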
Pro games are cheating unless the program is one of the players. :)
You are right though, sometimes compromises must be made when
seeding an algorithm. My ideas on using domain knowledge from
humans are sort of about maximizing a ratio. The ratio of program
performance to domain knowledge
On Thu, 2007-07-05 at 01:09 +0200, Magnus Persson wrote:
Just to disturb the vision of a strong go program without hardwired
go knowledge: I currently think that there are some really important
things in Go that are really hard or even impossible to learn with,
for example, patterns. The ideal
Sylvain said that good
I believe this claim is true in two senses:
1) If the computation necessary to find better moves is too
expensive, performing many dumb playouts may be a better investment.
2) If the playouts are too deterministic, and the moves are merely
pretty good, the program may avoid an important move and thus
misjudge the value of a position.
IMO, this is the most interesting part of Computer Go today. How can
one possibly design an optimal playout agent when making a
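The determinism concern above can be sketched in a few lines: an epsilon-greedy playout policy keeps the "pretty good" move preferences but still gives every legal move a nonzero chance, so no important move is ever excluded from the playouts entirely. The move names, ratings, and epsilon value are made up for illustration.

```python
import random

def choose_playout_move(legal_moves, score, epsilon=0.1, rng=random):
    """score: move -> heuristic rating (the 'pretty good' knowledge).
    With probability epsilon pick uniformly at random, else greedily."""
    if rng.random() < epsilon:
        return rng.choice(legal_moves)   # exploratory, fully random
    return max(legal_moves, key=score)   # greedy on the heuristic

moves = ["a1", "b2", "c3"]
rating = {"a1": 0.2, "b2": 0.9, "c3": 0.5}.get
rng = random.Random(42)
picks = [choose_playout_move(moves, rating, epsilon=0.2, rng=rng)
         for _ in range(1000)]
# b2 dominates the playouts, but a1 and c3 still appear in a
# fraction of them, so their consequences are not invisible.
```

A fully deterministic policy corresponds to epsilon = 0, which is exactly the failure mode described: the heuristic's blind spots become the program's blind spots.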
Hello all,
We just presented our paper describing MoGo's improvements at ICML,
and we thought we would pass on some of the feedback and corrections
we have received.
(http://www.machinelearning.org/proceedings/icml2007/papers/387.pdf)
I have the feeling that the paper is important, but it is completely
obfuscated by the strange reinforcement learning notation and jargon.
Can anyone explain it in Go-programming words?
The most important thing in the paper is how to combine RAVE(AMAF)
information with normal UCT. Like this:
I have built just for fun a simple BackGammon engine. [...]
Interesting - did you also try it for chess, or do you think there's no
point in this?
The Hydra team has thought about this, especially the Hydra chess
expert GM Lutz. Some endgames are difficult to understand, but the moves are
The most important thing in the paper is how to combine RAVE(AMAF)
information with normal UCT. Like this:
uct_value = child->GetUctValue();
rave_value = child->GetRaveValue();
beta = sqrt(K / (3 * node->visits + K));
uct_rave = beta * rave_value + (1 - beta) * uct_value;
Thanks for the
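Spelled out in runnable form, this is a minimal sketch of the blend quoted above; K (often called the equivalence parameter) is a tuning constant, and the value 1000 here is just an assumption, as are the example win rates.

```python
import math

def uct_rave(uct_value, rave_value, visits, k=1000.0):
    """Blend per the formula above: beta -> 1 with few visits (trust
    RAVE/AMAF), beta -> 0 as visits grow (trust the direct estimate)."""
    beta = math.sqrt(k / (3.0 * visits + k))
    return beta * rave_value + (1.0 - beta) * uct_value

# Unvisited node: beta == 1, so the value is pure RAVE.
print(uct_rave(0.4, 0.6, visits=0))            # -> 0.6
# Heavily visited node: the value approaches the UCT estimate 0.4.
print(uct_rave(0.4, 0.6, visits=1_000_000))
```

The point of the schedule is that RAVE statistics accumulate much faster than per-node visit counts, so they dominate early, while the slower but unbiased UCT estimate takes over once the node has been sampled enough.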
We felt also, that even if it works, the improvement
measured in Elos would not be very spectacular. The Elo/Effort ratio is low.
I was simply too lazy (or too professional) to give it a try.
it might be fun (even from a non-FPGA point of view) to try it just
to see where it lies versus a
I actually have a working chess program at a fairly primitive stage
which would be appropriate for testing UCT on chess.
My intuition (which is of course subject to great error) tells me that
it won't pay off. However, I'm still quite curious about this and will
probably give it a try at some
A long time ago I spent a few hours on writing a simple chess
program doing
UCT-search. I got to the point where it actually played better than random but
not very much.
It sort of reminded me of the strength of plain MC in 19x19 Go. The problem is
that many games become very long in chess
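For what it's worth, one standard way to keep such playouts bounded (my sketch, not something from the thread) is to cap their length and fall back to a cheap static evaluation when the cap is hit; the toy game below stands in for a real move generator and evaluator.

```python
import random

def capped_playout(position, legal_moves, make_move, evaluate,
                   max_moves=200, rng=random):
    """Random playout that stops after max_moves and scores the
    position statically instead of playing to the bitter end."""
    for _ in range(max_moves):
        moves = legal_moves(position)
        if not moves:                    # terminal position reached
            return evaluate(position)
        position = make_move(position, rng.choice(moves))
    return evaluate(position)            # cap hit: static fallback

# Toy stand-in game: the position is a counter, moves subtract 1 or 2,
# the game ends at or below zero; terminal positions score 1.0.
legal = lambda p: [1, 2] if p > 0 else []
step = lambda p, m: p - m
score = lambda p: 1.0 if p <= 0 else 0.5

print(capped_playout(10, legal, step, score, max_moves=3))    # cap -> 0.5
print(capped_playout(10, legal, step, score, max_moves=100))  # end -> 1.0
```

The cost is that the playout result now depends on the quality of the static evaluation at the cutoff, which is exactly the trade-off between many dumb playouts and fewer informed ones discussed earlier in the thread.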
On 7/3/07, chrilly [EMAIL PROTECTED] wrote:
They are