Maybe I'm wrong, but both curves for AlphaGo Zero look pretty similar, except that Figure 3 is a zoom-in of Figure 6.
On 27 Oct 2017 04:31, "Gian-Carlo Pascutto" <[email protected]> wrote:
> Figure 6 has the same graph as Figure 3 but for 40 blocks. You can compare
> the Elo.
>
> On Thu, Oct 26, 2017, 23:35 Xavier Combelle <[email protected]>
> wrote:
>
>> Unless I'm mistaken, Figure 3 shows the plot of supervised learning versus
>> reinforcement learning, not 20 block/40 block.
>>
>> To find mentions of the 20 blocks, I searched for "20" in the whole
>> paper and did not find any mention other than the kifu thing.
>>
>> On 26/10/2017 at 15:10, Gian-Carlo Pascutto wrote:
>> > On 26-10-17 10:55, Xavier Combelle wrote:
>> >> It is just wild guesses based on reasonable arguments but without
>> >> evidence.
>> > David Silver said they used 40 layers for AlphaGo Master. That's more
>> > evidence than there is for the opposite argument that you are trying to
>> > make. The paper certainly doesn't talk about a "small" and a "big" Master.
>> >
>> > You seem to be arguing from a bunch of misreadings and
>> > misunderstandings. For example, Figure 3 in the paper shows the Elo plot
>> > for the 20 block/40 layer version, and it compares to AlphaGo Lee, not
>> > AlphaGo Master. The AlphaGo Master line would be above the flattening
>> > part of the 20 block/40 layer AlphaGo Zero. I guess you missed this when
>> > you say that they "only mention it to compare on kifu prediction"?
>
> --
>
> GCP
_______________________________________________ Computer-go mailing list [email protected] http://computer-go.org/mailman/listinfo/computer-go
