uurtamo .: <cadg0incam-_ih31cbka4mvg2fbdmj3adqwcuyfxtb1vhk7t...@mail.gmail.com>:
>Slow down there, hombre.
>There's no secret sauce to 9x9 other than that it isn't the current focus
>Just like 7x7 isn't immune.
>A computer program for 9x9, funded, backed by halfway serious people, and
>focused on the task, will *destroy* human opponents at any time it needs to.
Why do you think (or believe) so? I'd say there
is no evidence for that so far.
>If you believe that there is a special reason that 9x9 is harder than
>19x19, then I'm super interested to hear that. But it's not harder for
>computers. It's just not what people have been focusing on.
9x9 is not harder than 19x19 as a game. However: (1) Value
networks, the key component in beating humans on 19x19, work
well only on static positions, and 9x9 has almost no such
positions. (2) Humans can play much better on 9x9
than on 19x19. Top-level professionals can, for example,
read out a game near the end of the middle game in under
30 minutes, with one-point accuracy in the score.
Humans are not good at global evaluation on larger boards, so
bots can beat top professionals on 19x19, but this does not
apply to 9x9. The size of the board matters because
value networks are not universal, i.e., they approximate the
value function imprecisely, mainly because the amount
of training data is limited in practice (up to about
10^8 positions, while the number of possible input positions
is at least 10^20). One more reason: there is no
algorithm to solve double ko. This is not a big problem on
19x19, but it is on 9x9.
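The coverage gap above is worth making concrete. A minimal back-of-envelope sketch, using only the orders of magnitude stated in the post (10^8 training positions, >10^20 possible 9x9 inputs):

```python
# Back-of-envelope for the coverage claim in the post: even a huge
# training set touches only a vanishing fraction of position space,
# so the value network must generalize almost everywhere.
training_positions = 10**8   # upper bound on training data, per the post
possible_positions = 10**20  # lower bound on 9x9 input positions, per the post

coverage = training_positions / possible_positions
print(coverage)  # 1e-12, i.e. one position in a trillion is seen in training
```

The exact counts are rough bounds from the post, not measured figures; the point is only the twelve-orders-of-magnitude gap.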
>On Feb 23, 2018 4:49 PM, "Hideki Kato" <hideki_ka...@ybb.ne.jp> wrote:
>> That's not the point, Petri. 9x9 has almost no "silent"
>> or "static" positions, where value networks surpass humans.
>> On 9x9 boards, kos, especially double kos and two-step kos,
>> are important, but MCTS still handles them poorly, for
>> example. Human professionals are much better at life & death
>> and the complex local fights that dominate small-board games,
>> because they can read deterministically and deeper than
>> current MCTS bots in standard time settings (not blitz).
>> Also, it is well known that MCTS is not good at finding narrow
>> and deep paths to a win, due to "averaging". Ohashi 6p said
>> that he could not lose against statistical algorithms after the
>> event in 2012.
>> Petri Pitkanen: <CAMp4Doefkp+n16CxDWY9at9OFwdh3V7+
>> >The Elo range in 9x9 is smaller than in 19x19. One just cannot be hugely
>> >better than the other in such a limited game.
>> >2018-02-23 21:15 GMT+02:00 Hiroshi Yamashita <y...@bd.mbn.or.jp>:
>> >> Hi,
>> >> Top 19x19 program reaches 4200 BayesElo on CGOS. But 3100 in 9x9.
>> >> Maybe it is because people don't have much interest in 9x9.
>> >> But it seems value network does not work well in 9x9.
>> >> Weights_33_400 is maybe made by selfplay network. But it is 2946 in 9x9.
>> >> Weights_31_3200 is 4069 in 19x19, though.
>> >> In year 2012, Zen played 6 games against 3 Japanese Pros, and lost by
>> >> And it seems Zen's 9x9 strength does not change big even now.
>> >> http://computer-go.org/pipermail/computer-go/2012-November/005556.html
>> >> I feel there is still enough chance that human can beat best program in
>> >> 9x9.
>> >> Thanks,
>> >> Hiroshi Yamashita
>> >> _______________________________________________
>> >> Computer-go mailing list
>> >> Computeremail@example.com
>> >> http://computer-go.org/mailman/listinfo/computer-go
>> Hideki Kato <mailto:hideki_ka...@ybb.ne.jp>
Hideki Kato <mailto:hideki_ka...@ybb.ne.jp>