> ladders, not just liberties. In that case, yes! If you outright tell the
> neural net as an input whether each ladder works or not (doing a short
> tactical search to determine this), or something equivalent to it, then the
> net will definitely make use of that information, ...

Each convolutional layer should spread information a little further across
the board. I think AlphaGo Zero used 20 layers? So even 3x3 filters would
tell you about the whole board, though the signal from the opposite corner
of the board might end up a bit weak.
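
A quick back-of-the-envelope check in Python (assuming the published
AlphaGo Zero layout of one convolutional block plus 19 residual blocks,
each residual block holding two 3x3 convolutions; the function name is
just for illustration):

    def receptive_field(n_layers, kernel=3):
        # Each stride-1 KxK convolution grows the receptive field
        # by (kernel - 1) points in each direction.
        return 1 + n_layers * (kernel - 1)

    # ~39 3x3 convolutions in a 20-block residual tower:
    print(receptive_field(39))  # 79: spans a 19x19 board several times
    # Only 9 layers before one output unit already sees the whole board:
    print(receptive_field(9))   # 19

So coverage isn't the issue, only how much of the far-corner signal
survives that many layers.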

I think we can assume it is doing that successfully, because otherwise
we'd hear about it losing lots of games in ladders.

> something the first version of AlphaGo did (before they tried to make it
> "zero") and something that many other bots do as well. But Leela Zero and
> ELF do not do this, because of attempting to remain "zero", ...

I know that zero-ness was very important to DeepMind, but I thought the
open-source dedicated go bots that copied it did so because AlphaGo Zero
was stronger than AlphaGo Master after 21-40 days of training.
I.e., in the rarefied atmosphere of super-human play, that starter package
of human expert knowledge was considered a weight around its neck.

BTW, I agree that feeding the results of tactical search in would make for
stronger programs, all else being equal. But tactical search is branching
code, so it is much harder to parallelize than the convolutions are.
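
To make that concrete, here is a minimal Python sketch of the idea: a
recursive ladder reader whose verdict is written into one extra binary
input plane. All the names are mine, the reading is deliberately
simplified (the defender may only extend, and counter-captures of the
attacker's stones are ignored), and real bots such as the original
AlphaGo used richer ladder features. But it shows why the search is
branching, data-dependent code:

    import numpy as np

    SIZE = 19
    EMPTY, BLACK, WHITE = 0, 1, 2

    def neighbours(p):
        r, c = p
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE:
                yield r + dr, c + dc

    def chain_and_liberties(board, p):
        # Flood-fill the chain containing p; return (chain, liberties).
        colour, chain, libs, stack = board[p], {p}, set(), [p]
        while stack:
            for n in neighbours(stack.pop()):
                if board[n] == EMPTY:
                    libs.add(n)
                elif board[n] == colour and n not in chain:
                    chain.add(n)
                    stack.append(n)
        return chain, libs

    def ladder_captures(board, prey, depth=0):
        # Does chasing the chain at `prey` in a ladder capture it?
        if depth > 100:
            return False                  # safety cut-off
        colour = board[prey]
        attacker = BLACK if colour == WHITE else WHITE
        _, libs = chain_and_liberties(board, prey)
        if len(libs) != 1:
            return len(libs) == 0         # not in atari: no ladder to read
        escape = next(iter(libs))         # defender extends
        b2 = board.copy()
        b2[escape] = colour
        _, libs2 = chain_and_liberties(b2, escape)
        if len(libs2) >= 3:
            return False                  # prey has run out of the ladder
        if len(libs2) <= 1:
            return True                   # still in atari: captured
        for cand in libs2:                # branch: try both atari moves
            b3 = b2.copy()
            b3[cand] = attacker
            if not chain_and_liberties(b3, cand)[1]:
                continue                  # suicide for the attacker
            if ladder_captures(b3, escape, depth + 1):
                return True
        return False

    def ladder_plane(board, to_move):
        # Extra binary input plane: 1 on opponent stones that would
        # be ladder-captured if chased now.
        plane = np.zeros((SIZE, SIZE), dtype=np.float32)
        opp = BLACK if to_move == WHITE else WHITE
        for p in zip(*np.nonzero(board == opp)):
            if ladder_captures(board, p):
                plane[p] = 1.0
        return plane

Each convolution is one big tensor op per layer, whereas every stone in
the loop above takes its own data-dependent path through the recursion,
which is exactly the parallelization problem.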

Darren
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
