Re: [computer-go] UCT tree pruning

2009-06-02 Thread Michael Williams

Jason House wrote:
On Jun 2, 2009, at 6:56 PM, Michael Williams wrote:


Two things:  Firstly, I'm storing (only in RAM) the precalculated WinRate and InvSqrtVisits and keeping them updated.

So my UCT formula went from

   Wins / Visits + sqrt(lnParentVisits / Visits)

to

   WinRate + sqrtLnParentVisits * InvSqrtVisits;



Which equations do you use for the incremental updates? Or do you just recompute the values?




It's not incremental.

  WinRate = Wins / Visits;
  InvSqrtVisits = 1 / sqrt(Visits);




This has a memory cost, but I don't care so much about RAM since I can 
send the nodes to disk.
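A minimal sketch of a node with those cached fields (illustrative Python, not Michael's actual code; as he says below, the values are recomputed on each update rather than maintained incrementally):

```python
import math

class Node:
    """UCT node caching WinRate and 1/sqrt(Visits) so the selection
    loop needs no division or square root per child."""
    __slots__ = ("wins", "visits", "win_rate", "inv_sqrt_visits")

    def __init__(self):
        self.wins = 0
        self.visits = 0
        self.win_rate = 0.0
        self.inv_sqrt_visits = 0.0

    def update(self, won):
        # Not incremental: both cached values are recomputed
        # whenever a playout result is recorded.
        self.visits += 1
        self.wins += won
        self.win_rate = self.wins / self.visits
        self.inv_sqrt_visits = 1.0 / math.sqrt(self.visits)

def uct_value(child, sqrt_ln_parent_visits):
    # WinRate + sqrtLnParentVisits * InvSqrtVisits
    return child.win_rate + sqrt_ln_parent_visits * child.inv_sqrt_visits
```

Since sqrt(lnParentVisits) is computed once per parent, the per-child work in the selection loop reduces to one multiply and one add.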


And the second thing is to store in the parent node a reference to 
what is likely the UCT-best child node.  If the parent has been 
visited 100*boardspaces times, I will go directly to the likely-best 
child with probability 2047/2048.  Anytime a proper UCT loop occurs, 
the likely-best reference is updated (about 90% of the time there is 
no change, so I think it's safe).



What is a proper UCT loop?



By that I meant finding the child that maximizes the UCT formula.
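A sketch of how that selection step might look (names and structure are illustrative, not Michael's actual code; the 2047/2048 probability and the 100*boardspaces threshold come from his description):

```python
import math
import random

def select_child(parent, threshold, epsilon_denom=2048):
    """Pick a child during tree descent, usually reusing a cached
    'likely best' reference.

    Once the parent is heavily visited (past `threshold`, e.g.
    100 * board spaces), take the cached child with probability
    2047/2048; the remaining 1/2048 of the time run the full UCT
    argmax ("a proper UCT loop") and refresh the cache.
    """
    if (parent.likely_best is not None
            and parent.visits >= threshold
            and random.randrange(epsilon_denom) != 0):
        return parent.likely_best

    # Full argmax over the children; refresh the cached reference.
    sqrt_ln = math.sqrt(math.log(parent.visits))
    best = max(parent.children,
               key=lambda c: c.win_rate + sqrt_ln * c.inv_sqrt_visits)
    parent.likely_best = best
    return best
```

The cache is safe in the sense Michael describes: roughly 90% of full loops confirm the cached child, and the occasional full loop corrects the reference when it goes stale.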





Jason House wrote:

That sounds like a good optimization. What did you do?
Sent from my iPhone
On Jun 2, 2009, at 3:16 PM, Michael Williams wrote:
Update:  After concentrating on tightening the UCT loop, I've 
optimized myself back into needing the SDD  :/


But now I should be able to get to 20B nodes in just one day.

(still only doing 7x7 Go)


Michael Williams wrote:
Yes, when memory is full, I save and free all leaf nodes (which is 
the vast majority).  Nodes are loaded as needed.

Don Dailey wrote:



On Mon, Jun 1, 2009 at 4:57 PM, Michael Williams wrote:


  I've optimized my disk access to the point where I'm mostly CPU
  limited now, even when using a standard hard disk instead of an SSD.
  I can now create trees of up to about 30 billion nodes, which would
  take about a week.  The simulation rate is continuously going down
  because so much time is spent in UCT loops in the huge tree.


That's impressive.   Are you doing things which move parts of the 
tree onto the disk and back when needed? I'm curious about the 
details!


- Don






  Don Dailey wrote:



  On Mon, Jun 1, 2009 at 11:22 AM, Isaac Deutsch wrote:

     > Well, I'll take that over crashing with an out-of-memory error. :)

     Still, pruning seems better to me and has the same effect. ;p



  But is it better?   I think it's not so obvious without thorough
  testing.

  Pruning throws away information that is lost forever and may need
  to be recalculated.   Requiring more simulations does not throw
  out results, but results in some inefficiencies.   So it's not
  clear to me which is better - it may even be that it depends on
  how much you push it.   I am just guessing, but I would guess
  that pruning is better in the short term, worse in the longer
  term.   Imagine a search at a correspondence level, where the
  computer thinks for 24 hours.   Which method is best there?
  Could you use a hard disk or SSD?   Using some kind of caching
  system, where you relegate the oldest unvisited nodes to the
  hard drive.   It may be that nodes you might normally prune are
  unlikely to get used again, but if they do you still have the
  data.   This is no good unless you can guarantee that the disk
  is used very infrequently - but with SSD it may be more
  practical.



  - Don




___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] UCT tree pruning

2009-06-02 Thread Jason House
On Jun 2, 2009, at 6:56 PM, Michael Williams wrote:


Two things:  Firstly, I'm storing (only in RAM) the precalculated WinRate and InvSqrtVisits and keeping them updated.

So my UCT formula went from

   Wins / Visits + sqrt(lnParentVisits / Visits)

to

   WinRate + sqrtLnParentVisits * InvSqrtVisits;



Which equations do you use for the incremental updates? Or do you just recompute the values?




This has a memory cost, but I don't care so much about RAM since I can send the nodes to disk.


And the second thing is to store in the parent node a reference to what is likely the UCT-best child node.  If the parent has been visited 100*boardspaces times, I will go directly to the likely-best child with probability 2047/2048.  Anytime a proper UCT loop occurs, the likely-best reference is updated (about 90% of the time there is no change, so I think it's safe).



What is a proper UCT loop?




___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] UCT tree pruning

2009-06-02 Thread Łukasz Lew
On Wed, Jun 3, 2009 at 00:56, Michael Williams wrote:
> Two things:  Firstly, I'm storing (only in RAM) the precalculated Winrate
> and InvSqrtVisits and keeping them updated.
> So my UCT formula went from
>
>        Wins / Visits + sqrt(lnParentVisits / Visits)
>
> to
>
>        WinRate + sqrtLnParentVisits * InvSqrtVisits;
>
> This has a memory cost, but I don't care so much about RAM since I can send
> the nodes to disk.
>
> And the second thing is to store in the parent node a reference to what is
> likely the UCT-best child node.  If the parent has been visited
> 100*boardspaces times, I will go directly to the likely-best child with
> probability 2047/2048.  Anytime a proper UCT loop occurs, the likely-best
> reference is updated (about 90% of the time there is no change, so I think
> it's safe).

This is quite similar to the epsilon trick described here:
http://www.mimuw.edu.pl/~lew/files/epsilon_trick.pdf

In short, when you calculate the best UCT child, you visit it
max(best.visit_count * epsilon, 1) times, with epsilon = 0.05 for
instance. It works well both for new and old nodes, but you have to
keep a counter of visits.
The soft way would be to recalculate the best child with probability
min(1, 1/(best.visit_count * epsilon)).

Both variants of ET can give you some guarantees about the way the
tree is explored.

Łukasz
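A sketch of the soft variant described above (illustrative names and layout; the paper gives the exact guarantees, and the hard variant instead repeats the cached choice a fixed number of times):

```python
import math
import random

def uct_argmax(node):
    """The expensive full scan: the child maximizing the UCT formula."""
    sqrt_ln = math.sqrt(math.log(node.visits))
    return max(node.children,
               key=lambda c: c.wins / c.visits + sqrt_ln / math.sqrt(c.visits))

def select_soft_et(node, epsilon=0.05, rng=random.random):
    # Soft epsilon trick: rerun the full argmax only with probability
    # min(1, 1 / (best.visit_count * epsilon)); otherwise reuse the cache.
    best = getattr(node, "cached_best", None)
    if best is None or rng() < min(1.0, 1.0 / (best.visits * epsilon)):
        best = uct_argmax(node)
        node.cached_best = best
    return best
```

The heavier the cached child's visit count, the less often the full loop runs, which is what bounds the amortized cost of selection.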


Re: [computer-go] UCT tree pruning

2009-06-02 Thread Michael Williams

Two things:  Firstly, I'm storing (only in RAM) the precalculated WinRate and InvSqrtVisits and keeping them updated.
So my UCT formula went from

Wins / Visits + sqrt(lnParentVisits / Visits)

to

WinRate + sqrtLnParentVisits * InvSqrtVisits;

This has a memory cost, but I don't care so much about RAM since I can send the nodes to disk.

And the second thing is to store in the parent node a reference to what is likely the UCT-best child node.  If the parent has been visited 100*boardspaces times, I will go directly to the likely-best child with probability 2047/2048.  Anytime a proper UCT loop occurs, the likely-best reference is updated (about 90% of the time there is no change, so I think it's safe).



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] UCT tree pruning

2009-06-02 Thread Jason House

That sounds like a good optimization. What did you do?

Sent from my iPhone

On Jun 2, 2009, at 3:16 PM, Michael Williams wrote:


Update:  After concentrating on tightening the UCT loop, I've optimized myself back into needing the SDD  :/


But now I should be able to get to 20B nodes in just one day.

(still only doing 7x7 Go)


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] UCT tree pruning

2009-06-02 Thread Michael Williams

I mean "SSD".

Michael Williams wrote:
Update:  After concentrating on tightening the UCT loop, I've optimized myself back into needing the SDD  :/


But now I should be able to get to 20B nodes in just one day.

(still only doing 7x7 Go)


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] UCT tree pruning

2009-06-02 Thread Michael Williams

Update:  After concentrating on tightening the UCT loop, I've optimized myself back into needing the SDD  :/

But now I should be able to get to 20B nodes in just one day.

(still only doing 7x7 Go)


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Re: Problems with CGOS

2009-06-02 Thread Don Dailey
I'll review what I have when I get home.   I don't believe this is an
issue,  but it is a good thing to double-check.

- Don



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

[computer-go] Re: Problems with CGOS

2009-06-02 Thread Dave Dyer

>
>So I believe this is a design flaw in CGOS itself.   I wrote CGOS without 
>having had any experience writing servers.   

If there's a problem with larger databases, perhaps it can be
fixed by adding the right indexes to the sql database.  If
you add a little time-monitoring code around your queries
to determine which are slowest, you may be able to devise a
one-line fix by adding an index to the schema.
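As a sketch of that suggestion (table and column names are invented here; CGOS's real schema is not shown), one could time a query before and after adding an index with sqlite3:

```python
import sqlite3
import time

def slowest_query_demo(rows=50000):
    """Time one query before and after adding an index, using an
    in-memory database populated with synthetic game records."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE games (id INTEGER, white TEXT, black TEXT, result TEXT)")
    db.executemany(
        "INSERT INTO games VALUES (?, ?, ?, ?)",
        ((i, "bot%d" % (i % 500), "bot%d" % ((i + 1) % 500), "W+")
         for i in range(rows)))

    def timed(sql, *args):
        t0 = time.perf_counter()
        result = db.execute(sql, args).fetchall()
        return time.perf_counter() - t0, result

    before, r1 = timed("SELECT id FROM games WHERE white = ?", "bot42")
    db.execute("CREATE INDEX idx_games_white ON games(white)")
    after, r2 = timed("SELECT id FROM games WHERE white = ?", "bot42")
    assert sorted(r1) == sorted(r2)   # same answer, different query plan
    return before, after
```

Without the index the query scans the whole table; with it, SQLite can seek directly, which matters more and more as the table grows.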



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Problems with CGOS

2009-06-02 Thread Don Dailey
Are we talking about the 9x9 server in this case?

- Don


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Problems with CGOS

2009-06-02 Thread Don Dailey
Hi Remi,

I noticed that when CGOS first came up,  it was very fast but as the sqlite3
database gets bigger and bigger, it gets slower.

So I believe this is a design flaw in CGOS itself.   I wrote CGOS without
having had any experience writing servers.

I think I know now how to build a super fast server and I plan to do that
very soon.

I would like to do a test where I start a brand new server instance from
scratch to see what happens.   I cannot do that at the moment because I am
at work and do not get paid to do this on company time, but perhaps
tonight or tomorrow I can test this.

I can build a new server in very short time - I plan to do this in C and I
have all the support routines ready to go - so it's mostly piecing things
together, not redesigning code from scratch.   I think the end result will
be an impressively efficient server with low memory usage, which is probably
a big part of the problem.

It would come with a new ajax-style web page.

- Don




___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

[computer-go] Problems with CGOS

2009-06-02 Thread Brian Sheppard
I was about to report the same issues as Remi. In the last round of games,
about 50% were losses on time.

I presume that these issues are related to server load. About 30 programs
are currently running on the 9x9 server. This is great for testing, but
perhaps more than the expected load.

I will close down my cgosview windows, in case that helps to reduce load.

Brian

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] Problems with CGOS

2009-06-02 Thread Rémi Coulom

Hi,

I have just connected Crazy Stone to CGOS and noticed a few problems:
- pairings seem to take forever (10 minutes or so)
- there is a lot of lag during games (up to one minute for a move, 
which causes losses on time)


I tried the cgosview-linux-x86_32 client, and got a "could not execute"
error. If I run it with no parameters, it runs fine (except that the default
parameters don't connect to any server). If I run it with parameters, it
fails with "could not execute". This is not a big problem, because I
kept a copy of an old version that works very well.


Rémi
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Liberties in Many Faces

2009-06-02 Thread Peter Drake
Next question: what about captures? Do you have to re-walk the neighboring chains when a capture occurs?


Peter Drake
http://www.lclark.edu/~drake/



On May 31, 2009, at 9:27 PM, David Fotland wrote:


1) Yes.  I maintain liberty counts during MC playouts.

2) Something else.  I remove one liberty from the adjacent chain, then look
at the empty points adjacent to the new stone and check if they are also
adjacent to the adjacent chain, and adjust the liberty counts accordingly.
At most 3 checks are required.  I only have to walk a full chain when a
move merges two or more chains.  I hope this is clear :)

David
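For concreteness, here is a small sketch (in Python, with invented names and board layout; not David's actual code) of the incremental update he describes, for the easy case where the new stone extends exactly one friendly chain with no merge and no capture: the chain loses the liberty at the played point, and each empty neighbour of the new stone is counted only if no other neighbour of that point already belongs to the chain.

```python
# Illustrative sketch of incremental liberty counting for the easy case
# (the new stone touches exactly one friendly chain, no merge or capture).
# Board representation and all names here are invented for this example.

EMPTY = 0

class Board:
    def __init__(self, size):
        self.size = size
        self.color = [[EMPTY] * size for _ in range(size)]
        self.chain_id = [[None] * size for _ in range(size)]
        self.libs = {}  # chain id -> current liberty count

    def neighbors(self, x, y):
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < self.size and 0 <= ny < self.size:
                yield nx, ny

    def play_onto_chain(self, x, y, color, cid):
        """Place a stone at (x, y) extending friendly chain `cid`."""
        self.color[x][y] = color
        self.chain_id[x][y] = cid
        self.libs[cid] -= 1  # the played point was one of the chain's liberties
        for nx, ny in self.neighbors(x, y):
            if self.color[nx][ny] != EMPTY:
                continue
            # This empty point is a new liberty only if no other neighbour
            # of it already belongs to the chain -- at most 3 such checks.
            if not any(self.chain_id[ax][ay] == cid
                       for ax, ay in self.neighbors(nx, ny)
                       if (ax, ay) != (x, y)):
                self.libs[cid] += 1
```

Merging two chains or removing a captured chain still requires walking stones, which matches David's remark that only a merge forces a full chain walk.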




Re: [computer-go] Future KGS bot tournaments

2009-06-02 Thread Jason House

On Jun 2, 2009, at 6:07 AM, Nick Wedd  wrote:


> You have made this offer in the past.  I never took it up, because I had
> the same offer from Aloril.  Such offers are useful, as they let me
> ensure that numbers are even, and avoid byes.  Given the choice, I
> preferred to remove his artificially stupid IdiotBot and retain your
> HouseBot.  Now that Aloril is less active in computer Go, I will be
> grateful to have one of your bots enter on the same basis.

I don't mind entering hb04 along with one of my modern implementations,
because it uses almost no load (it's a pattern player).

In the same category of pattern players, Rémi's pattern player might make
a good low-end bot. It'd probably beat hb04.

Maybe Don Dailey's anchor bot from CGOS would be good too? It has tunable
strength.

PS: sorry about getting your name wrong :(




Re: [computer-go] Congratulations to Steenvreter!

2009-06-02 Thread Isaac Deutsch
Congrats to stv.


> But I would prefer more, and would like to know what I 
> might do to attract more entrants.
> 
> Nick

What about a Rengo tournament? :) I don't know how feasible that would be,
but it could be fun to have programs cope with someone else on their team.


Re: [computer-go] Future KGS bot tournaments

2009-06-02 Thread Magnus Persson
Actually, MCTS programmers should be happy with any time constraints that
do not make the program run out of memory, since a proper MCTS program
should scale nicely whatever the time constraint.  Maybe an ultrafast
tournament with a tenth of a second per move would favour Valkyria on
small boards, but we do not want to play that fast...

I think all programmers should participate, whatever strength their
programs have.

-Magnus






[computer-go] Future KGS bot tournaments

2009-06-02 Thread Nick Wedd
Erik van der Werf  writes

>I like events with many (fast) rounds such as the one yesterday.

So do I - they are certainly more interesting for me.  I had been
tending to avoid them, in the belief that most programmers, particularly
of UCT-based programs, preferred slow games.  But in view of what you, a
successful UCT programmer, say, I shall hold more fast events in future.

I know that SlugGo and Go++ prefer slow time limits - if they start to
show interest in these events, I will hold slow tournaments again.  I
may hold another week-long tournament if there is interest - five
rounds, twelve hours each sudden death.

dhillism...@netscape.net writes
>One factor is that there seems to be a narrow range between too few
>entrants and too many. For any given contest, the potential pool
>includes an elite few who have a chance at first place and maybe a
>couple who have a new or newly improved bot. There is a larger group,
>back in the pack, whose last breakthrough was a while ago. For many of
>us in that last group, it would be easy enough to enter, but hard to
>know if that would help or hinder.

In my view, more is always better, for many reasons.  We get to see more
bots perform, we see how bots perform against unfamiliar strategies, we
don't get repeat games between the same opponents, if there's an odd
number of players the byes are not too significant.  I can't think of
any convincing reason for preferring small numbers.

I assure you, if antbot wants to play in these events, it will be very
welcome.

Steve Uurtamo and Jason House agree.

Jason House  writes
>In the past, I've entered bots and indicated that I would not be
>offended if my bot was removed. Don has made use of such offers from
>Aloril in the past. Maybe you could make a similar offer?

You have made this offer in the past.  I never took it up, because I had
the same offer from Aloril.  Such offers are useful, as they let me
ensure that numbers are even, and avoid byes.  Given the choice, I
preferred to remove his artificially stupid IdiotBot and retain your
HouseBot.  Now that Aloril is less active in computer Go, I will be
grateful to have one of your bots enter on the same basis.

David Fotland  writes
>I prefer full size boards, since that's a more difficult problem, and games
>at 19x19 give me more to work with.  Short time limits are fine.  Perhaps
>19x19 with 15 or 20 minutes each?  After all, that's a good time limit for
>games against people.

Sounds good to me.  The next event will probably be 19x19, 18 minutes
each.

Christian Nentwich  writes
>I am hoping that I can join this at some point, at the lower end of the
>field to start with :)
>
>Is it possible to set a bar at these tournaments? In human McMahon
>tournaments, that very successfully allows a top tier of competition
>while guaranteeing at least some fun for everybody else.

A bar makes sense in a McMahon tournament, where the number of players
exceeds 2^(number of rounds).  But these events aren't McMahon, they are
Swiss.  Also they never have that many players; and now that we have
decided on faster and more rounds, they aren't going to.

The tournament formats supported by KGS are:
  Single elimination
  Double elimination
I don't like elimination tournaments.  Someone who has set up his bot to
play wants to see it play, not to see it eliminated.
  Swiss
as used for all these events
  McMahon
McMahon involves the server using the entrants' ratings.  But many bots
don't have ratings.  KGS admins are reluctant to allow bots to play as
rated bots.
  Round Robin
I haven't been using Round Robin because it means the length of the
event depends on the number of players.  I am not willing to make an
open-ended commitment of my time.

So these events will continue to be Swiss, unless someone makes a strong
case for a change.

After the first few rounds, the Swiss system achieves the same effect as
McMahon: the strong players are paired against each other, as are the
weaker players.  (In fact, when there are fewer players than rounds, all
the players end up playing all possible opponents anyway.  This happens
with both Swiss and McMahon).
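As a hedged illustration of the convergence Nick describes, a minimal Swiss pairer (invented names; no tie-breaks, byes, or colour balancing, and not KGS's actual algorithm) just ranks entrants by score and pairs each unpaired player with the highest-ranked opponent they have not already met:

```python
# Minimal illustrative Swiss pairing: rank by score, then pair each
# unpaired player with the highest-ranked opponent not yet played.
# This is a sketch, not the pairing algorithm KGS actually uses.
def swiss_pairings(scores, played):
    """scores: {player: points}; played: set of frozenset({p, q}) pairs."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    pairings, used = [], set()
    for p in ranked:
        if p in used:
            continue
        for q in ranked:
            if q == p or q in used or frozenset((p, q)) in played:
                continue
            pairings.append((p, q))
            used.update((p, q))
            break
    return pairings
```

After a few rounds the leaders have all met each other and keep being paired near their own score group, which is the McMahon-like effect.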



To summarise - time limits will generally be faster than formerly.  Lots
of entrants, lots of weak entrants, are strongly encouraged.  There is
nothing wrong with entering a bot that loses all its games.  I was very
pleased to see Rango play on Sunday, and hope it will compete again.

Nick
-- 
Nick Wedd  n...@maproom.co.uk


Re: [computer-go] Congratulations to Steenvreter!

2009-06-02 Thread Christian Nentwich

Nick,

I am hoping that I can join this at some point, at the lower end of the 
field to start with :)


Is it possible to set a bar at these tournaments? In human McMahon 
tournaments, that very successfully allows a top tier of competition 
while guaranteeing at least some fun for everybody else.


Christian


Nick Wedd wrote:
> In message <8cbb1200f1dffd9-cbc-...@mblk-m02.sysops.aol.com>,
> dhillism...@netscape.net writes
>
>> One factor is that there seems to be a narrow range between too few
>> entrants and too many. For any given contest, the potential pool
>> includes an elite few who have a chance at first place and maybe a
>> couple who have a new or newly improved bot. There is a larger group,
>> back in the pack, whose last breakthrough was a while ago. For many of
>> us in that last group, it would be easy enough to enter, but hard to
>> know if that would help or hinder.
>
> My view is that more entrants, including weaker entrants, help.  I
> used to encourage Aloril to enter his deliberately weak bots, not only
> to fill out the numbers, but to provide suitable opponents for first
> time entrants.
>
> I see a purpose of these events as providing a training ground for
> more significant events.  Some programmers concentrate too much on
> trying to get the bot to play well, rather than on doing basic things
> right.  A bot that plays badly but beats IdiotBot shouldn't be too
> hard to achieve - so if a bot plays well but loses to IdiotBot, it is
> doing something wrong which really ought to be fixed.
>
> Nick



--

Christian Nentwich

Director, Model Two Zero Ltd.
+44-(0)7747-061302
http://www.modeltwozero.com
