>>>> Now I have made some autoplay tests, starting from the end position
>>>> given in the appendix of this mail.
>>>> * one game with Leela 3.16; Black won.
>>>> * four games with MFoG 12.016; two wins each for Black and White.
>>>> So there is some indication that the Great Wall works even
>>>> for bots, which are not affected by psychology.
>>>> ...
>>> Have you tried some random setup for the first 5 stones from Black
>>> and compared the results?
>>
>> Yes, with MFoG: first 5 moves by Black on random points - vs -
>> first 4 moves by White on the 4,4-points.
>>
>> Result was clear advantage for White.
>
> So you tested just one game!?

No, 6 autoplay games with MFoG, each with a different random starting
position for Black. Time control: 20 min per side per game. The score
was +5 -1 for White.


I forgot to make the following clear:
* I own some go programs, but only with their commercial interfaces,
  so I have to start each new test game by hand.
* I know that such small samples do not allow better/worse statements
  with 95% confidence.
* The intention of my posting was/is to "activate" programmers who have
  test environments for their bots that allow simple test series
  (for instance, about 100 games in 2-3 days).
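For reference, the point about small samples can be checked with an exact
binomial test (a sketch of my own, not part of the tests above): a +5 -1
score against a fair-coin null is far from significant at the 95% level,
while a 100-game series can resolve moderate strength differences.

```python
from math import comb

def binom_two_sided_p(wins, games, p=0.5):
    """Exact two-sided binomial p-value against win probability p."""
    # probability of each possible number of wins under the null
    probs = [comb(games, k) * p**k * (1 - p)**(games - k)
             for k in range(games + 1)]
    observed = probs[wins]
    # two-sided: sum the probabilities of all outcomes no more likely
    # than the observed one (small tolerance for float comparison)
    return sum(q for q in probs if q <= observed + 1e-12)

print(binom_two_sided_p(5, 6))     # 14/64 ≈ 0.219, not significant
print(binom_two_sided_p(60, 100))  # still borderline, ≈ 0.057
```

So even a clear-looking 5-1 result says little, and 60-40 over 100 games
is only just at the edge of significance; this is why an automated test
harness matters.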

I know that some people are hesitant to accept proposals from others
who have not written a go program themselves. But I can tell you from
my experience in computer chess that these "others" can also contribute
something useful.

Ingo.

-- 
_______________________________________________
computer-go mailing list
[email protected]
http://www.computer-go.org/mailman/listinfo/computer-go/
