On 23-05-17 17:19, Hideki Kato wrote:
> Gian-Carlo Pascutto: <0357614a-98b8-6949-723e-e1a849c75...@sjeng.org>:
>
>> Now, even the original AlphaGo played moves that surprised human pros
>> and were contrary to established sequences. So where did those come
>> from? Enough computation power to overcome the low probability?
(3) CNN cannot learn the exclusive-or function due to the ReLU
activation function used instead of the traditional sigmoid
(hyperbolic tangent). CNN is good at approximating continuous
(analog) functions but not Boolean (digital) ones.
Are you sure about that? I can imagine using two ReLU units to
construct it.
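For what it's worth, Erik's point can be made concrete. Here is a minimal sketch (not anyone's actual network): a tiny net with two ReLU hidden units computes XOR exactly on {0,1} inputs, so ReLU by itself is not the obstacle.

```python
def relu(x):
    return max(x, 0)

def xor(x1, x2):
    # two ReLU hidden units over the sum of the inputs
    h1 = relu(x1 + x2)       # counts active inputs: 0, 1, or 2
    h2 = relu(x1 + x2 - 1)   # fires only when both inputs are 1
    return h1 - 2 * h2       # yields 0, 1, 1, 0 on the four input pairs

print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

The weights here are hand-picked rather than learned, but they show the function is representable with ReLU activations.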
On Tue, May 23, 2017 at 4:51 AM, Hideki Kato wrote:
> (3) CNN cannot learn the exclusive-or function due to the ReLU
> activation function used instead of the traditional sigmoid
> (hyperbolic tangent). CNN is good at approximating continuous
> (analog) functions but not Boolean (digital) ones.
>
Oh, not this
Gian-Carlo Pascutto:
>On 23-05-17 10:51, Hideki Kato wrote:
>> (2) The number of possible positions (input of the value net) in
>> real games is at least 10^30 (10^170 in theory). If the value
>> net can recognize them all? L&Ds depend on very small differences in
>> the placement of stones or liberties.
Gian-Carlo Pascutto: <0357614a-98b8-6949-723e-e1a849c75...@sjeng.org>:
>Now, even the original AlphaGo played moves that surprised human pros
>and were contrary to established sequences. So where did those come
>from? Enough computation power to overcome the low probability?
>Synthesized by inference?
Erik van der Werf:
>On Tue, May 23, 2017 at 10:51 AM, Hideki Kato
>wrote:
>
>> Agree.
>>
>> (1) To solve L&D, some search is necessary in practice. So, the
>> value net cannot solve some of them.
>> (2) The number of possible positions (input of the value net) in
>> real games is at least 10^30 (10^170 in theory).
The half point is the result using Japanese komi 6.5; Chinese komi is
7.5, and the end result is W+1.5.
On 5/23/17 12:02 AM, Jim O'Flaherty wrote:
I have now heard that AlphaGo won by 0.5 points.
On Tue, May 23, 2017 at 2:00 AM, Jim O'Flaherty
<jim.oflaherty...@gmail.com> wrote:
The announcer didn't have her mic on, so I couldn't hear the final
score announced...
On Tue, May 23, 2017 at 10:51 AM, Hideki Kato
wrote:
> Agree.
>
> (1) To solve L&D, some search is necessary in practice. So, the
> value net cannot solve some of them.
> (2) The number of possible positions (input of the value net) in
> real games is at least 10^30 (10^170 in theory). If the value
> net can recognize them all?
On 23-05-17 10:51, Hideki Kato wrote:
> (2) The number of possible positions (input of the value net) in
> real games is at least 10^30 (10^170 in theory). If the value
>> net can recognize them all? L&Ds depend on very small differences in
> the placement of stones or liberties. Can we provide nece
On 22-05-17 21:01, Marc Landgraf wrote:
> But what you should really look at here is Leela's evaluation of the game.
Note that this is completely irrelevant for the discussion about
tactical holes and the position I posted. You could literally plug any
evaluation into it (save for a static oracle,
On Mon, May 22, 2017 at 4:54 PM, Gian-Carlo Pascutto wrote:
> On 22-05-17 15:46, Erik van der Werf wrote:
> > Anyway, LMR seems like a good idea, but last time I tried it (in Migos)
> > it did not help. In Magog I had some good results with fractional depth
> > reductions (like in Realization Probability Search)
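Since LMR keeps coming up in the thread, here is a minimal sketch of the idea on a made-up toy tree (the tree, leaf scores, and reduction threshold are all illustrative, not Migos/Magog code): moves late in the ordering are searched one ply shallower, and re-searched at full depth only if the reduced search unexpectedly beats alpha.

```python
INF = 10**9

# Toy game tree: internal node -> list of children; leaves map to scores.
TREE = {
    'root': ['a', 'b', 'c', 'd'],
    'a': ['a1', 'a2'], 'b': ['b1', 'b2'],
    'c': ['c1', 'c2'], 'd': ['d1', 'd2'],
}
LEAF = {'a1': 3, 'a2': -1, 'b1': 5, 'b2': 2,
        'c1': -4, 'c2': 0, 'd1': 7, 'd2': 1}

def negamax(node, depth, alpha, beta):
    """Negamax with alpha-beta and a crude late move reduction."""
    if depth == 0 or node not in TREE:
        return LEAF.get(node, 0)   # leaf score, or 0 as a toy evaluation
    best = -INF
    for i, child in enumerate(TREE[node]):
        # reduce late moves (here: from the 3rd move on) by one ply
        red = 1 if i >= 2 and depth >= 2 else 0
        score = -negamax(child, depth - 1 - red, -beta, -alpha)
        if red and score > alpha:
            # the reduced search looked promising: verify at full depth
            score = -negamax(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                   # beta cutoff
    return best

print(negamax('root', 2, -INF, INF))
```

The point of the re-search step is that the reduction is only a heuristic: a move that refutes expectations gets its full depth back, which is also where fractional (sub-ply) reductions slot in naturally.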
On 23-05-17 03:39, David Wu wrote:
> Leela playouts are definitely extremely bad compared to competitors like
> Crazystone. The deep-learning version of Crazystone has no value net as
> far as I know, only a policy net, which means it's going on MC playouts
> alone to produce its evaluations. Nonet
The Chinese counting looked so confusing :-)
On Tue, May 23, 2017 at 9:02 AM, Jim O'Flaherty
wrote:
> I have now heard that AlphaGo won by 0.5 points.
>
>
> On Tue, May 23, 2017 at 2:00 AM, Jim O'Flaherty
> <jim.oflaherty...@gmail.com> wrote:
>
>> The announcer didn't have her mic on, so I couldn't hear the final
>> score announced...
Agree.
(1) To solve L&D, some search is necessary in practice. So, the
value net cannot solve some of them.
(2) The number of possible positions (input of the value net) in
real games is at least 10^30 (10^170 in theory). If the value
net can recognize them all? L&Ds depend on very small differences in
the placement of stones or liberties.
AlphaGo as white won by 0.5 points.
On Tue, May 23, 2017 at 3:00 AM, Jim O'Flaherty
wrote:
> The announcer didn't have her mic on, so I couldn't hear the final score
> announced...
>
> So, what was the final score after the counting of AlphaGo-vs-Ke Jie Game
> #1?
>
>
I have now heard that AlphaGo won by 0.5 points.
On Tue, May 23, 2017 at 2:00 AM, Jim O'Flaherty
wrote:
> The announcer didn't have her mic on, so I couldn't hear the final score
> announced...
>
> So, what was the final score after the counting of AlphaGo-vs-Ke Jie Game
> #1?
>
>
The announcer didn't have her mic on, so I couldn't hear the final score
announced...
So, what was the final score after the counting of AlphaGo-vs-Ke Jie Game
#1?
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/li
AlphaGo won first game against Ke Jie.
(;GM[1]SZ[19]
PB[Ke Jie]
PW[AlphaGo]
DT[2017-05-23]RE[W+0.5]KM[7.5]RU[Chinese]
;B[qd];W[pp];B[cc];W[cp];B[nc];W[fp];B[qq];W[pq];B[qp];W[qn]
;B[qo];W[po];B[rn];W[qr];B[rr];W[rm];B[pr];W[or];B[pn];W[qm]
;B[qs];W[on];B[dj];W[nk];B[ph];W[ch];B[cf];W[eh];B[ci];W[