I think that's OK: the prediction systems already deal with a huge number
of positions during training; it's just a matter of changing the quality
of those positions. Say, instead of training on 100% good answers to good
moves from games, we could take half as many and train on 50%
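A minimal sketch of such a mixed training stream, assuming a simple 50/50 draw between ordinary game positions and bad-shape correction examples (the helper name and data layout are illustrative, not any engine's actual pipeline):

```python
import random

def mixed_training_stream(game_examples, bad_shape_examples, mix=0.5):
    """Yield training examples, drawing a fraction `mix` from
    bad-shape correction positions instead of ordinary game moves.
    Hypothetical helper, sketching the 50/50 idea above."""
    while True:
        if random.random() < mix:
            yield random.choice(bad_shape_examples)
        else:
            yield random.choice(game_examples)
```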
...@gmail.com]
Sent: Tuesday, April 18, 2017 9:06 AM
To: computer-go@computer-go.org; Brian Sheppard <sheppar...@aol.com>
Subject: Re: [Computer-go] Patterns and bad shape
Now, I love this idea. A super-fast, cheap pattern matcher could feed a
neural network's input layer in sort of
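One concrete way a cheap matcher could feed a net's input layer (a toy sketch; the hash scheme and plane count are my own assumptions, not any engine's actual encoding) is to hash each point's 3x3 neighbourhood and expose the low bits of the hash as binary input planes:

```python
import numpy as np

BOARD = 19

def pattern_planes(board, n_planes=8):
    """Hash the 3x3 neighbourhood of every point and spread the low
    bits of the hash across binary input planes for a net.
    `board` is a 19x19 array: 0 empty, 1 black, 2 white."""
    planes = np.zeros((n_planes, BOARD, BOARD), dtype=np.float32)
    for y in range(BOARD):
        for x in range(BOARD):
            h = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    v = 3  # off-board sentinel
                    if 0 <= ny < BOARD and 0 <= nx < BOARD:
                        v = int(board[ny, nx])
                    h = h * 4 + v  # base-4 encoding of the 3x3 window
            for p in range(n_planes):
                planes[p, y, x] = (h >> p) & 1
    return planes
```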
On 17-04-17 15:04, David Wu wrote:
> If you want an example of this actually mattering, here's example where
> Leela makes a big mistake in a game that I think is due to this kind of
> issue.
Ladders have specific treatment in the engine (which also has both known
limitations and actual bugs in
-boun...@computer-go.org] On Behalf Of Jim O'Flaherty
Sent: Monday, April 17, 2017 7:05 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Patterns and bad shape
It seems chasing down good moves for bad shapes would be an explosion of
"exception cases", like combinatorially huge
David Wu wrote:
> Black (X) to move. The screenshot showed that Leela's policy net put about
> 96% probability on a and only 1.03% on b, and that even after nearly 1
> million simulations it had basically not searched b at all.
Leela does read out ladders to the end; it doesn't rely on policy
Hmm. Do you know that Leela does something special here? When I look at
Leela's analysis output, it seems the search does not consider the
ladder escape because the policy net assigns a low probability to it (and
such a high probability to the move in the upper right). Which is the same as
in
On Mon, Apr 17, 2017 at 3:04 PM, David Wu wrote:
> To some degree this maybe means Leela is insufficiently explorative in
> cases like this, but still, why does the policy net not put H5 at more than
> 1.03%? After all, it's vastly more likely than 1% that a good player
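The starvation effect being described can be reproduced with a toy PUCT selector (a generic sketch with made-up win rates and exploration constant, not Leela's actual code): a move carrying a 1% prior goes essentially unvisited for on the order of a thousand simulations, yet dominates once the search finally tries it and finds it better.

```python
import math

def puct_select(priors, visits, values, c_puct=1.5):
    """One step of PUCT-style selection (generic sketch, not Leela's
    exact formula): argmax of Q + c_puct * P * sqrt(N) / (1 + n)."""
    total = sum(visits.values())
    best, best_score = None, float("-inf")
    for move in priors:
        q = values[move] / visits[move] if visits[move] else 0.0
        u = c_puct * priors[move] * math.sqrt(total + 1) / (1 + visits[move])
        if q + u > best_score:
            best, best_score = move, q + u
    return best

# 'a': 96% prior but mediocre; 'b' (think: the ladder escape):
# 1% prior but actually stronger whenever it is searched.
priors = {"a": 0.96, "b": 0.01}
visits = {"a": 0, "b": 0}
values = {"a": 0.0, "b": 0.0}
early_b_visits = None
for i in range(10000):
    if i == 500:
        early_b_visits = visits["b"]  # how often 'b' was tried early on
    move = puct_select(priors, visits, values)
    visits[move] += 1
    values[move] += 0.45 if move == "a" else 0.60  # fake evaluation results
```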
Hmmm, the screenshot doesn't seem to be getting through to the list, so here's
a textual graphic instead.
A B C D E F G H J K L M N O P Q R S T
   +---------------------------------------+
19 | . . . . . . . . . . . . . . . . . . . |
18 | . . . . . . . . . . . . . . . . . . . |
17 | .
It seems chasing down good moves for bad shapes would be an explosion of
"exception cases", like combinatorially huge. So, while you would be saving
some branching in the search space, you would be ballooning the number
of patterns to scan for by orders of magnitude.
Wouldn't it be
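A back-of-the-envelope count makes that blow-up concrete (3 states per surrounding point; symmetries, edge effects and legality all ignored, so this is an upper-bound sketch):

```python
def pattern_count(radius):
    """Raw count of distinct square patterns of a given radius around a
    point: 3 states (empty/black/white) per surrounding cell, ignoring
    symmetries, edges and legality."""
    cells = (2 * radius + 1) ** 2 - 1  # surrounding points
    return 3 ** cells

for r in (1, 2, 3):
    # radius 1 (3x3) -> 6561, radius 2 (5x5) -> ~2.8e11, radius 3 (7x7) -> ~8e22
    print(r, pattern_count(r))
```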
Hi,
I'm sure the topic must have come up before, but I can't seem to find it
right now; I'd appreciate it if someone could point me in the right direction.
I'm looking into MM, LFR and similar CPU-based pattern approaches for
generating priors, and was wondering about basic bad shape:
Let's say we
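For background, MM fits per-feature strengths ("gammas") in a Bradley-Terry model; a move's prior is then the product of the gammas of its matched features, normalised over all candidate moves. A minimal sketch, with a data layout of my own invention:

```python
def priors_from_gammas(move_gammas):
    """Combine the gammas of each move's matched features
    multiplicatively, then normalise into a prior distribution
    (Bradley-Terry style, as fitted by MM; sketch only)."""
    strengths = {}
    for move, gammas in move_gammas.items():
        s = 1.0
        for g in gammas:
            s *= g
        strengths[move] = s
    total = sum(strengths.values())
    return {move: s / total for move, s in strengths.items()}

# e.g. one move matching features with gammas 2.0 and 3.0 versus
# another matching a single feature with gamma 6.0: equal priors.
priors = priors_from_gammas({"a": [2.0, 3.0], "b": [6.0]})
```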