I'm about to implement this. Since I have multiple features (patterns,
is in atari, is adjacent to last play, etc.), the weight is the
product of the weight of all matching features.
I'm thinking about having a table of weights, storing the sum of each
row and the total (like Rémi
Isaac Deutsch kirjoitti:
My question/problem: How do I deal with features that have a weight of
zero? I'm sure there are certain 3x3 patterns that have the weight zero,
such as
###
#*.
#..
with * to be played. It's clear that when this pattern matches, the
total weight is set to zero.
In my example, #=border/edge of the board, not black. I was just
trying to come up with an example feature that might have weight zero
to illustrate my problem.
___
computer-go mailing list
computer-go@computer-go.org
Isaac Deutsch kirjoitti:
In my example, #=border/edge of the board, not black. I was just trying
to come up with an example feature that might have weight zero to
illustrate my problem.
I see. I was already thinking there may be no 3x3 patterns that should
never be played. But I cannot think
Isaac Deutsch wrote:
I'm about to implement this. Since I have multiple features
(patterns, is in atari, is adjacent to last play, etc.), the weight
is the product of the weight of all matching features.
I'm thinking about having a table of weights, storing the sum of
each row and the total
Thanks to both for the elaborate feedback! I was wondering about
pattern symmetry, too, and I also had a lookup table in mind since
there are only about 65000 combinations (and only a fraction of them
legal).
I had another idea for the zero weight problem: Keep a counter of how
many
This code should work with r -= weights[i++] in the loop body, and comes down
to a linear search through the
cumulative sum of the weights. If the weights will be static for a number of
selections, you can store the
cumulative weights in an array and use binary search for selecting the move.
So
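The two-step scheme described here can be sketched as follows; the function names are mine and the weights are integers for simplicity:

```c
/* Build cum[i] = w[0] + ... + w[i]; returns the total weight. */
static int build_cumulative(const int *w, int *cum, int n) {
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += w[i];
        cum[i] = total;
    }
    return total;
}

/* Given r uniform in [0, total), binary-search for the smallest i
   with r < cum[i]; items with weight zero can never be returned. */
static int pick_weighted(const int *cum, int n, int r) {
    int lo = 0, hi = n - 1;
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (r < cum[mid]) hi = mid;
        else lo = mid + 1;
    }
    return lo;
}
```

With the prefix sums precomputed once, each selection costs one random number plus O(log n) comparisons, until a weight changes and the sums must be rebuilt.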
If you want to take many samples from a fixed, or infrequently
changing, distribution, you can do it in O(1) time per sample, with
O(n) initial setup costs. This is quite clever and goes by the name of
alias method.
See http://cg.scs.carleton.ca/~luc/rnbookindex.html, pages 107-111
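A sketch of the alias method being described (the Walker/Vose construction; all names here are mine): setup splits the scaled weights into "small" and "large" columns, and each sample then needs one column index and one comparison.

```c
#include <stdlib.h>

typedef struct {
    int n;
    double *prob;  /* acceptance threshold for each column, in [0,1] */
    int *alias;    /* fallback entry for each column */
} alias_table;

static void alias_init(alias_table *t, const double *w, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++) sum += w[i];
    t->n = n;
    t->prob  = malloc(n * sizeof *t->prob);
    t->alias = malloc(n * sizeof *t->alias);
    double *p = malloc(n * sizeof *p);
    int *small = malloc(n * sizeof *small), ns = 0;
    int *large = malloc(n * sizeof *large), nl = 0;
    for (int i = 0; i < n; i++) {
        p[i] = w[i] * n / sum;                 /* scale so the mean is 1 */
        if (p[i] < 1.0) small[ns++] = i; else large[nl++] = i;
    }
    while (ns > 0 && nl > 0) {
        int s = small[--ns], l = large[--nl];
        t->prob[s] = p[s];
        t->alias[s] = l;
        p[l] = p[l] + p[s] - 1.0;              /* l donates the excess */
        if (p[l] < 1.0) small[ns++] = l; else large[nl++] = l;
    }
    while (nl > 0) { int l = large[--nl]; t->prob[l] = 1.0; t->alias[l] = l; }
    while (ns > 0) { int s = small[--ns]; t->prob[s] = 1.0; t->alias[s] = s; }
    free(p); free(small); free(large);
}

/* One O(1) sample; u1, u2 are independent uniforms in [0,1). */
static int alias_sample(const alias_table *t, double u1, double u2) {
    int col = (int)(u1 * t->n);
    return (u2 < t->prob[col]) ? col : t->alias[col];
}
```

The O(n) setup is repaid quickly when many moves are drawn from the same distribution, which is the "fixed, or infrequently changing" case mentioned above.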
For
Thanks! I had never seen the alias method before and it is quite ingenious!
- George
On Thu, Jul 16, 2009 at 3:04 AM, Martin Mueller mmuel...@cs.ualberta.ca wrote:
If you want to take many samples from a fixed, or infrequently changing,
distribution, you can do it in O(1) time per sample, with
When using patterns during the playout I had improvised some code to
select patterns randomly, but favour those with higher weights more or
less proportionally to the weight..
How many patterns, and are the weights constant for the whole game?
If relatively few, and constant, you can make a
You might look in the genetic algorithm literature, where they have to
do this for fitness-proportional reproduction. A useful buzzword is
roulette wheel.
Peter Drake
http://www.lclark.edu/~drake/
On Jul 15, 2009, at 4:06 PM, Mark Boon wrote:
When using patterns during the playout I had
If your weights are all between 0 and 1:
do
double r = rand(0 to 1)
int i = rand(0 to weightCount - 1)
until weight[i] >= r
I think that's right.
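The loop above is rejection sampling; a concrete C version might look like this (rand01 and pick_rejection are my names, and note that it never terminates if every weight is zero):

```c
#include <stdlib.h>

/* Uniform double in [0,1), built from the standard library rand(). */
static double rand01(void) { return rand() / (RAND_MAX + 1.0); }

/* Pick a candidate uniformly, accept it with probability weight[i];
   assumes all weights lie in [0,1] and at least one is positive. */
static int pick_rejection(const double *weight, int n) {
    for (;;) {
        int i = rand() % n;          /* uniform candidate index */
        if (rand01() < weight[i])    /* accept proportionally to weight */
            return i;
    }
}
```

The expected number of iterations is n divided by the sum of the weights, so it is cheap when most weights are large and slow when they are mostly small.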
Mark Boon wrote:
When using patterns during the playout I had improvised some code to
select patterns randomly, but favour those with higher
I think you could do this with a binary tree - at each node keep a total of
the weight values of the subtree below the node.
If the pattern was hashed, then each bit could define a branch of the tree,
0 = left branch 1 = right branch.
Then you have a very simple divide and conquer algorithm.
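One way to sketch this subtree-total idea is an implicit binary tree stored in an array (a segment tree; the names and the power-of-two leaf count are my choices):

```c
#define N 8                 /* number of leaves; a power of two here */
static int tree[2 * N];     /* tree[N + i] holds the weight of item i */

/* Fill the leaves, then set each internal node to the sum below it. */
static void tree_build(const int *w, int n) {
    for (int i = 0; i < N; i++) tree[N + i] = (i < n) ? w[i] : 0;
    for (int k = N - 1; k >= 1; k--) tree[k] = tree[2*k] + tree[2*k + 1];
}

/* r uniform in [0, tree[1]); walk down toward the leaf containing r. */
static int tree_pick(int r) {
    int k = 1;
    while (k < N) {
        if (r < tree[2*k]) k = 2*k;            /* descend left */
        else { r -= tree[2*k]; k = 2*k + 1; }  /* descend right */
    }
    return k - N;
}
```

Both selection and a single-weight update are O(log n), which matters when pattern weights change during the game.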
, 2009 4:07 PM
To: computer-go
Subject: [computer-go] Random weighted patterns
When using patterns during the playout I had improvised some code to
select patterns randomly, but favour those with higher weights more or
less proportionally to the weight..
I was wondering though if there's
So many complex ideas :) Why not just multiply the weight of each pattern
by a random number and pick the biggest result?
Good for 5 patterns, not so good for 5000 patterns.
Darren
___
computer-go mailing list
computer-go@computer-go.org
...@computer-go.org [mailto:computer-go-boun...@computer-go.org] On Behalf Of Darren Cook
Sent: Wednesday, July 15, 2009 8:47 PM
To: computer-go
Subject: Re: [computer-go] Random weighted patterns
So many complex ideas :) Why not just multiply the weight of each
pattern
by a random number
On Wed, Jul 15, 2009 at 10:37 PM, David Fotland fotl...@smart-games.com wrote:
So many complex ideas :) Why not just multiply the weight of each pattern
by a random number and pick the biggest result?
David
That involves generating N random numbers and then doing N-1
comparisons. The n-ary
I must be missing something. Isn't the obvious trick:
int r = random(sum of weights);
int i = 0;
while (r >= weights[i]) {
r -= weights[i];
}
return i;
This way, you only have to generate one random number.
Peter Drake
http://www.lclark.edu/~drake/
On Jul 15, 2009, at 8:55 PM, Zach
In the loop i is always zero. I think your code is wrong.
You probably meant to loop over all the weights (or I should say on average
half the weights), and this code is slow if there are a lot of weights.
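With the missing increment folded in as suggested earlier in the thread (r -= weights[i++]), the corrected linear scan looks like this, assuming integer weights and r drawn uniformly from [0, sum of weights):

```c
/* Linear scan over the weights: subtract each weight from r until r
   falls inside item i's slice. O(n) worst case, n/2 on average. */
static int pick_linear(const int *weights, int r) {
    int i = 0;
    while (r >= weights[i])
        r -= weights[i++];
    return i;
}
```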
2009/7/16 Peter Drake dr...@lclark.edu
I must be missing something. Isn't the obvious
, 2009 4:07 PM
To: computer-go
Subject: [computer-go] Random weighted patterns
When using patterns during the playout I had improvised some code to
select patterns randomly, but favour those with higher weights more or
less proportionally to the weight..
I was wondering though
A reasonable xor+shift random (similar to Marsaglia's, but only 64 bits
instead of 128 bits), using the Pentium's rdtsc instruction to add
additional entropy (I used GNU's inline assembler):
#define rdtscll(val) \
__asm__ __volatile__("rdtsc" : "=A" (val))
typedef unsigned long long BIGTHING;
The question I have is how much does RDTSC slow this down? I may do
some tests to see.
- Don
A van Kessel wrote:
A reasonable xor+shift random (similar to Marsaglia's, but only 64 bits
instead of 128 bits), using the Pentium's rdtsc instruction to add
additional entropy (I used GNU's
The question I have is how much does RDTSC slow this down? I may do
I don't know.
If I understand correctly, rdtsc can be executed in parallel with *some*
other instructions (and places no barrier in the code), in which case
it could be executed at no additional cost (except the instruction
I did a simple test. I compiled this version and the version below to
get a sense of how fast it is.
Your version with RDTSC is quite slow compared to the version below.
I determined that RDTSC is taking almost all the time however, because
just for fun I took out that call and it was
It doesn't seem too fast based on my test, but perhaps there are ways to
optimize it.
I would like to mention that your generator is 64 bit and the other
generator was 32 bits but I'm running on a 64 bit core 2 duo OS and do
not know how much of a factor this is or isn't.
If RDTSC actually
I would like to mention that your generator is 64 bit and the other
That is correct.
My timings (Haven't profiled yet):
WITH TSC:
real 0m0.902s
user 0m0.870s
sys  0m0.002s
WITHOUT:
real 0m0.613s
user 0m0.582s
sys  0m0.000s
That's for 100 M random numbers.
And inlining would
That's way different than what I'm getting. Something is wrong here.
I'm getting 4.08 with RDTSC and when I just set new to 0ull I get 0.696
One of us must have done something wrong.
- Don
A van Kessel wrote:
I would like to mention that your generator is 64 bit and the other
#include <stdio.h>
#include <stdlib.h>
I'm including my code below. Also, here are my reported times:
Without RDTSC
real 0m1.008s
user 0m0.668s
sys  0m0.004s
With RDTSC
real 0m5.790s
user 0m3.956s
sys  0m0.016s
To turn on or off RDTSC see the rdtsc_rand line that says, if
I just omitted the new variable:
/**/
BIGTHING rdtsc_rand(void)
{
static BIGTHING val=0xULL;
#if WANT_RDTSC
BIGTHING new;
rdtscll(new);
val ^= (val << 15) ^ (val >> 14) ^ new;
#else
val ^= (val << 15) ^ (val >> 14) ^ 9 ;
#endif
return val;
}
Has anyone looked at the resultant distributions, to
Well, mine was pretty random.
Maybe a bit too random.
Changing the 14 and 15 shifts effectively cripples it.
I have not looked at sequential correlation, only at the
distribution, by binning the values into a histogram.
As far as I could see
terry mcintyre wrote:
Has anyone looked at the resultant distributions, to
determine how random they are?
They should be run against the dieharder tests before any confidence is
placed in them. But I did look at them visually to see if there was
anything wrong that was immediately obvious
Ok, please check your numbers again. I set mine up like yours just in
case and I still get at least a 5x slowdown.
I think yours are probably wrong because I have seen numbers that
indicate RDTSC is really slow.
- Don
A van Kessel wrote:
I just omitted the new variable:
Are you using the same hardware? I was looking over
the instruction timings, and there's a lot of
variation depending on the architecture.
--- Don Dailey [EMAIL PROTECTED] wrote:
That's way different that what I'm getting.
Something is wrong here.
I'm getting 4.08 with RDTSC and when I
Measurements are pretty consistent.
Hardware:
Linux localhost.localdomain 2.6.23.1-42.fc8 #1 SMP Tue Oct 30 13:55:12 EDT 2007
i686 athlon i386 GNU/Linux
(I did not even know it was an athlon!)
AvK
Mine:
cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 6
model : 10
model name : AMD Athlon(tm) XP 2800+
stepping: 0
cpu MHz : 2079.595
cache size : 512 KB
fdiv_bug: no
hlt_bug : no
f00f_bug: no
Michael Williams wrote:
I don't think Don was saying to use many calls to RDTSC to generate a
single random number.
I think he meant do something like (FastBadRand() XOR RDTSC).
The first part has low quality low-order bits and the second part has
low quality high-order bits.
That was my
Michael Williams wrote:
I don't think Don was saying to use many calls to RDTSC to generate a
single random number.
I think he meant do something like (FastBadRand() XOR RDTSC).
The first part has low quality low-order bits and the second part has
low quality high-order bits.
I also
Regarding the time used by RDTSC:
From
http://smallcode.weblogs.us/2007/12/07/performance-measurements-with-rdtsc/
Intel and Agner Fog recommend measuring the overhead
of RDTSC function and subtracting it from your result.
The overhead is relatively low (150-200 clock cycles)
and it occurs in
terry mcintyre wrote:
Regarding the time used by RDTSC:
From
http://smallcode.weblogs.us/2007/12/07/performance-measurements-with-rdtsc/
Intel and Agner Fog recommend measuring the overhead
of RDTSC function and subtracting it from your result.
The overhead is relatively low (150-200 clock
Zach Wegner wrote:
What you could do is XOR the RDTSC into the seed (or array or whatever
your RNG uses) at the beginning of each playout. That adds to the
chaos but doesn't slow it down much at all.
I like your idea. Yes, you could probably use a really fast simple RNG
and do this.
Hi,
As mentioned before, Monte Carlo simulations in physics was my thesis
topic, and there we need REALLY good PRNGs or we see the effect in
the results.
There is always a tradeoff between fast and good. If the newer Mersenne
Twister algorithms (which are very good) are too slow and you want to
Personally, I think that much of the really high quality issues are not
that important for MC Go right now. I think that other things like not
having a reasonable distribution function (which UCT does a remarkable
job of smoothing over) completely overwhelm the effects of a poor PRNG.
I agree
That's very interesting and I believe it explains everything perfectly.
If I understand this correctly, my 4 billion possible seeds (I used a 32
bit seed) cannot produce 32 billion different games (against the same
deterministic opponent.) In my implementation there were also issues
with the
These shuffles are different than the one I used and attempted to
describe.
Cheers,
David
On 16, May 2008, at 12:55 PM, terry mcintyre wrote:
An interesting note from
http://en.wikipedia.org/wiki/Knuth_shuffle which
appears to be pertinent to Don's remarks about a
limited number of games:
If you are looking for a cheap, fast and simple random number generator,
a post by George Marsaglia, an expert/guru on random number generation,
has several and a C implementation.
These are one-line generators, coded as macros. He also discusses
the period and quality of each of them.
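For reference, one of Marsaglia's one-line xorshift generators written in that macro style (the 13/17/5 shift triple is from his xorshift paper; the particular seed is my choice and only needs to be nonzero):

```c
#include <stdint.h>

/* 32-bit xorshift state; period 2^32 - 1, so never seed with zero. */
static uint32_t xs_state = 2463534242u;

/* One-line generator as a macro: three xor-shifts, then yield the state. */
#define XORSHIFT32() (xs_state ^= xs_state << 13, \
                      xs_state ^= xs_state >> 17, \
                      xs_state ^= xs_state << 5,  \
                      xs_state)
```

It is a handful of cycles per number, which is why generators of this family keep coming up for playouts.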
For a long time I have pondered whether you could build a very high
quality random number generator that was extremely fast using the
pentium RDTSC instruction as a low order bit entropy source. The
idea would be to use an extremely fast (but low quality) pseudo random
number generator,
Heikki Levanto wrote:
In addition, xor_shift is better than builtin rand() and faster and
much smaller than MT.
I don't know that much about random numbers, so excuse my ignorance. But a
bit of googling got me to the Park - Miller Minimal Standard random number
generator
the only thing to watch is that you'll likely need
30+ bits from these guys to seed a prng, and
getting those bits in any organized way is likely
going to happen on a regular schedule (i.e. if
you get them in a loop, you're likely going to
space them out in an organized way).
s.
On 5/15/08, Don
IIRC the RDTSC instruction is very slow, on the order of 50 cycles. In
addition, I like having deterministic generators so that
hypothetically I could reproduce any result.
In my chess program I use a modified version of Agner Fog's Ranrot.
It's very fast and very random. There's a paper on his
I don't think Don was saying to use many calls to RDTSC to generate a single
random number.
I think he meant do something like (FastBadRand() XOR RDTSC).
The first part has low quality low-order bits and the second part has low
quality high-order bits.
Also, I can't imagine why executing a
Following url may help.
http://softwarecommunity.intel.com/isn/Community/en-US/forums/thread/30235396.aspx
-Hideki
Michael Williams: [EMAIL PROTECTED]:
I don't think Don was saying to use many calls to RDTSC to generate a single
random number.
I think he meant do something like (FastBadRand()
Hi,
I'm currently writing a UCT based go program in Clean to see if it is
feasible to write one in a functional language. This is my first attempt
at functional languages, and I'm having trouble with random numbers.
A mersenne twister module is available for Clean. Once initialized it is
Probably the reason that it is so slow is that it's aiming for a
cryptographically random number sequence. These are usually derived
ultimately from kernel timings (often via /dev/random on linux
systems) and it can take a while to establish a degree of confidence
in the randomness of these bits.
A couple of things I did:
I edited the Mersenne Twister code I'm using to get its seed from the
system time in nanoseconds rather than in milliseconds, to fix the problem
of wanting to generate more than one random number in a millisecond.
Instead of generating a new random number every time I
That was not the problem, as the source code is available and it doesn't
call system libraries at all. I was initializing with known seed in my
tests anyway.
Thanks though, in the process of composing a reply demonstrating the
problem, I found a partial solution. Now random number generation
Ah, sorry about that. Glad to hear you fixed the problem. What was your
solution?
It seems you mentioned the global list approach in your email which I
missed too. Why did you think this was an ugly approach? I just put my
random array in a Singleton object (call it MyRandomNumberGenerator) which
Your solution would still be way faster, as I still have to generate a
new random number every time I need one.
The problem is lack of side effects in Clean. If I stick to functional
programming paradigm -and that is the whole point of my effort- I can't
write a function which returns a
Berk,
Why do you need to initialize the seed more than one time? You should
use zobrist hashing.
Initialize the random number generator once when you start the programs.
Fill an array with random numbers from the generator at program startup
too. The array looks something like this for a
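A sketch of such a Zobrist table (the sizes, names, and the particular generator are my choices, not necessarily Don's): one random 64-bit key per (point, color), filled once at startup; the position hash is the XOR of the keys of the stones on the board, so it updates incrementally move by move.

```c
#include <stdint.h>

#define POINTS (19 * 19)
static uint64_t zobrist[POINTS][2];   /* [point][black = 0 / white = 1] */

/* Marsaglia xorshift64 (shifts 13/7/17); any decent generator works. */
static uint64_t rng_state = 88172645463325252ull;
static uint64_t rng64(void) {
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 7;
    rng_state ^= rng_state << 17;
    return rng_state;
}

/* Fill the whole table once, at program startup. */
static void zobrist_init(void) {
    for (int p = 0; p < POINTS; p++)
        for (int c = 0; c < 2; c++)
            zobrist[p][c] = rng64();
}
```

Because XOR is its own inverse, placing and capturing a stone use the same one-XOR update, so the seed never needs to be touched after startup.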
On Dec 19, 2007 10:24 AM, Don Dailey [EMAIL PROTECTED] wrote:
Why do you need to initialize the seed more than one time? You should
use zobrist hashing.
By design, my program reinitializes the random seeds. This is done at the
start of each genmove and is intended for repeatability.
Instead of generating a new random number every time I want to pick a
move in a random playout, I fill a large circular array with random
numbers when my go program launches (before it is actually playing a
game) and I have a method that just gets the next number out of the
array instead of
Hold on, I just re-read this. Do you think you must initialize the
generator after each number? You only need to initialize MT once, when
your program starts and then it will provide very good numbers for the
next several billion years.
- Don
Imran Hendley wrote:
A couple of things I
Imran Hendley wrote:
Ah, sorry about that. Glad to hear you fixed the problem. What was your
solution?
It seems you mentioned the global list approach in your email which
I missed too. Why did you think this was an ugly approach? I just put
my random array in a Singleton object (call it
On Dec 19, 2007 11:25 AM, Don Dailey [EMAIL PROTECTED] wrote:
Imran Hendley wrote:
Ah, sorry about that. Glad to hear you fixed the problem. What was your
solution?
It seems you mentioned the global list approach in your email which
I missed too. Why did you think this was an ugly
Yes, the large amount of state in a playout will probably enable you
to safely get away with what you are doing, but I would certainly want
to test the overall strength of the engine using this method this
versus using the straight-forward method of seeding the RNG once and
generating a new random
I probably go through my entire array 5 times per second during a UCT
search. So far I have been relying on the hope that I do not come back
to the beginning of my array at exactly the same position in a random
playout which would cause a bad cycle. If I come back at a different
position
Good information. Thanks.
- Dave Hillis
-Original Message-
From: Jacques Basaldúa [EMAIL PROTECTED]
To: computer-go@computer-go.org
Sent: Fri, 21 Sep 2007 7:44 am
Subject: [computer-go] Random move selection (was) Crazystone patterns
Dave Hillis, wrote:
Suppose I can generate