[Computer-go] What was the final score after the counting of AlphaGo-vs-Ke Jie Game #1?

2017-05-23 Thread Jim O'Flaherty
The announcer didn't have her mic on, so I couldn't hear the final score announced... So, what was the final score after the counting of AlphaGo-vs-Ke Jie Game #1?

Re: [Computer-go] What was the final score after the counting of AlphaGo-vs-Ke Jie Game #1?

2017-05-23 Thread Jim O'Flaherty
I have now heard that AlphaGo won by 0.5 points. On Tue, May 23, 2017 at 2:00 AM, Jim O'Flaherty wrote: > The announcer didn't have her mic on, so I couldn't hear the final score > announced... > > So, what was the final score after the counting of AlphaGo-vs-Ke Jie

Re: [Computer-go] mini-max with Policy and Value network

2017-05-23 Thread Gian-Carlo Pascutto
On 22-05-17 21:01, Marc Landgraf wrote: > But what you should really look at here is Leelas evaluation of the game. Note that this is completely irrelevant for the discussion about tactical holes and the position I posted. You could literally plug any evaluation into it (save for a static oracle,

Re: [Computer-go] mini-max with Policy and Value network

2017-05-23 Thread Gian-Carlo Pascutto
On 23-05-17 03:39, David Wu wrote: > Leela playouts are definitely extremely bad compared to competitors like > Crazystone. The deep-learning version of Crazystone has no value net as > far as I know, only a policy net, which means it's going on MC playouts > alone to produce its evaluations.

Re: [Computer-go] mini-max with Policy and Value network

2017-05-23 Thread Erik van der Werf
On Mon, May 22, 2017 at 4:54 PM, Gian-Carlo Pascutto wrote: > On 22-05-17 15:46, Erik van der Werf wrote: > > Anyway, LMR seems like a good idea, but last time I tried it (in Migos) > > it did not help. In Magog I had some good results with fractional depth > > reductions (like
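The LMR (late move reductions) and fractional-depth-reduction ideas discussed above can be sketched in a few lines. This is a minimal illustration, not anything from Leela, Migos, or Magog: it uses a toy game tree of nested dicts, a whole-ply reduction for the third and later moves, and a full-depth re-search when a reduced search unexpectedly raises alpha. Real engines typically use fractional reductions and tuned conditions.

```python
# Minimal sketch of late move reductions (LMR) in a fail-soft negamax.
# All names and the tree below are illustrative assumptions, not code
# from any engine mentioned in the thread.

INF = 10**9

def negamax(node, depth, alpha, beta, use_lmr=True):
    """Return the negamax value of `node` from the side to move."""
    children = node.get("children")
    if not children or depth <= 0:
        return node["eval"]  # stand pat on the static evaluation
    best = -INF
    for i, child in enumerate(children):
        # Reduce late moves (3rd and later) by one extra ply when deep enough.
        red = 1 if use_lmr and i >= 2 and depth >= 3 else 0
        score = -negamax(child, depth - 1 - red, -beta, -alpha, use_lmr)
        if red and score > alpha:
            # Reduced search looked good after all: re-search at full depth.
            score = -negamax(child, depth - 1, -beta, -alpha, use_lmr)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cutoff
    return best

# Tiny hand-built tree; each node's "eval" is chosen to agree with the
# exact negamax value of its subtree, so the reductions here are safe.
leaf = lambda v: {"eval": v}
tree = {"eval": -4, "children": [
    {"eval": 5, "children": [leaf(-5), leaf(-2)]},
    {"eval": 4, "children": [leaf(-1), leaf(-4)]},
    {"eval": 9, "children": [leaf(-9), leaf(-6)]},
]}
```

With good move ordering (best move first), the reduced searches of late moves fail low and are trusted, so the LMR result matches the full-width search on this tree while visiting fewer nodes.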

Re: [Computer-go] mini-max with Policy and Value network

2017-05-23 Thread Gian-Carlo Pascutto
On 23-05-17 10:51, Hideki Kato wrote: > (2) The number of possible positions (input of the value net) in > real games is at least 10^30 (10^170 in theory). If the value > net can recognize all? L depend on very small difference of > the placement of stones or liberties. Can we provide

Re: [Computer-go] What was the final score after the counting of AlphaGo-vs-Ke Jie Game #1?

2017-05-23 Thread Erik van der Werf
The Chinese counting looked so confusing :-) On Tue, May 23, 2017 at 9:02 AM, Jim O'Flaherty wrote: > I have now heard that AlphaGo won by 0.5 points. > > > On Tue, May 23, 2017 at 2:00 AM, Jim O'Flaherty < > jim.oflaherty...@gmail.com> wrote: > >> The announcer

Re: [Computer-go] mini-max with Policy and Value network

2017-05-23 Thread Hideki Kato
Erik van der Werf: >On Tue, May 23, 2017 at 10:51 AM, Hideki Kato >wrote: > >> Agree. >> >> (1) To solve L, some search is necessary in practice. So, the >> value net cannot solve some of them. >> (2)

Re: [Computer-go] What was the final score after the counting of AlphaGo-vs-Ke Jie Game #1?

2017-05-23 Thread Michael Alford
A half point would be the result using the Japanese komi of 6.5; the Chinese komi is 7.5, so the end result is W+1.5. On 5/23/17 12:02 AM, Jim O'Flaherty wrote: I have now heard that AlphaGo won by 0.5 points. On Tue, May 23, 2017 at 2:00 AM, Jim O'Flaherty
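The komi arithmetic in the message above can be made concrete with a small sketch. Only the komi values (6.5 Japanese, 7.5 Chinese) come from the thread; the raw point counts below are hypothetical, and note that the two counting methods also count different things on the board, so the same game need not produce identical raw margins under both.

```python
# Hedged sketch of how the same hypothetical raw margin yields different
# results under Japanese komi 6.5 and Chinese komi 7.5.

def result(white_points, black_points, komi):
    """Signed margin with komi added to White; positive means White wins."""
    margin = white_points + komi - black_points
    side = "W" if margin > 0 else "B"
    return f"{side}+{abs(margin)}"
```

For example, with a hypothetical raw lead of 6 points for Black, `result(0, 6, 6.5)` gives `W+0.5` while `result(0, 6, 7.5)` gives `W+1.5`, matching the two results quoted in the message.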

Re: [Computer-go] mini-max with Policy and Value network

2017-05-23 Thread Hideki Kato
Gian-Carlo Pascutto: <0357614a-98b8-6949-723e-e1a849c75...@sjeng.org>: >Now, even the original AlphaGo played moves that surprised human pros >and were contrary to established sequences. So where did those come >from? Enough computation power to overcome the low probability? >Synthesized by

Re: [Computer-go] mini-max with Policy and Value network

2017-05-23 Thread Gian-Carlo Pascutto
On 23-05-17 17:19, Hideki Kato wrote: > Gian-Carlo Pascutto: <0357614a-98b8-6949-723e-e1a849c75...@sjeng.org>: > >> Now, even the original AlphaGo played moves that surprised human pros >> and were contrary to established sequences. So where did those come >> from? Enough computation power to

Re: [Computer-go] mini-max with Policy and Value network

2017-05-23 Thread Hideki Kato
Gian-Carlo Pascutto: >On 23-05-17 10:51, Hideki Kato wrote: >> (2) The number of possible positions (input of the value net) in >> real games is at least 10^30 (10^170 in theory). If the value >> net can recognize all? L depend on very small

Re: [Computer-go] mini-max with Policy and Value network

2017-05-23 Thread Álvaro Begué
On Tue, May 23, 2017 at 4:51 AM, Hideki Kato wrote: > (3) CNN cannot learn exclusive-or function due to the ReLU > activation function, instead of traditional sigmoid (tangent > hyperbolic). CNN is good at approximating continuous (analog) > functions but Boolean

Re: [Computer-go] mini-max with Policy and Value network

2017-05-23 Thread valkyria
(3) CNN cannot learn exclusive-or function due to the ReLU activation function, instead of traditional sigmoid (tangent hyperbolic). CNN is good at approximating continuous (analog) functions but not Boolean (digital) ones. Are you sure about that? I can imagine using two ReLU units to construct
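The two-ReLU construction hinted at above can be written out explicitly. For inputs a, b in {0, 1}, xor(a, b) = relu(a + b) - 2·relu(a + b - 1): a single hidden layer of two ReLU units followed by a fixed linear combination, which contradicts the claim that ReLU networks cannot represent exclusive-or. (Whether gradient descent reliably *learns* these weights is a separate question; this only shows representability.)

```python
# XOR from two ReLU units: relu(a+b) - 2*relu(a+b-1) for a, b in {0, 1}.
#   a+b = 0 -> 0 - 0 = 0
#   a+b = 1 -> 1 - 0 = 1
#   a+b = 2 -> 2 - 2 = 0

def relu(x):
    return max(0.0, x)

def xor_relu(a, b):
    s = a + b            # both hidden units share the same pre-activation sum
    return relu(s) - 2.0 * relu(s - 1.0)
```

So `xor_relu(0, 1)` and `xor_relu(1, 0)` return 1, while `xor_relu(0, 0)` and `xor_relu(1, 1)` return 0.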

Re: [Computer-go] mini-max with Policy and Value network

2017-05-23 Thread Hideki Kato
Agree. (1) To solve L, some search is necessary in practice. So, the value net cannot solve some of them. (2) The number of possible positions (input of the value net) in real games is at least 10^30 (10^170 in theory). If the value net can recognize all? L depend on very small difference

Re: [Computer-go] What was the final score after the counting of AlphaGo-vs-Ke Jie Game #1?

2017-05-23 Thread Álvaro Begué
AlphaGo as white won by 0.5 points. On Tue, May 23, 2017 at 3:00 AM, Jim O'Flaherty wrote: > The announcer didn't have her mic on, so I couldn't hear the final score > announced... > > So, what was the final score after the counting of AlphaGo-vs-Ke Jie Game > #1?