Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Robert Jasiek

On 14.03.2016 03:17, Horace Ho wrote:

According to this analysis, move 78 is not a "miracle" move ...

http://card.weibo.com/article/h5/s#cid=23041853a2e03d0102w6rl


I have not had time to verify the tactics by reading yet, but supposing
this webpage's sequences are right, move 78 and the preceding sequence are
a well-timed, cute trick play, and the AlphaGo team needs to understand why
the trick worked. I'd guess that it would have found a correct reply if the
moyo defense had been a local, short-term problem.


--
robert jasiek

Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Horace Ho
According to this analysis, move 78 is not a "miracle" move ...

http://card.weibo.com/article/h5/s#cid=23041853a2e03d0102w6rl

On Mon, Mar 14, 2016 at 4:08 AM, Martin Mueller wrote:

> On Mar 13, 2016, at 6:00 AM, computer-go-requ...@computer-go.org wrote:
>
>
> So, what would be Lee's best effort to exploit this? Complicating
> and playing hopefully-unexpected-tesuji moves?
>
>
> Judging from this game, setting up multiple interrelated tactical fights,
> such that no subset of them works, but all together they work to capture or
> kill something.
>
> For tactical fights, I would expect the value network to be relatively
> weaker than for quiet territorial positions.
> So it comes down to solving the problem by search.
>
> Aja and I wrote a paper a few years back that showed that even on a 9x9
> board, having two safe but not entirely safe-in-playouts groups on the
> board confuses most Go programs and can push the “bad news” over the search
> horizon. Now imagine having 3, 4, 5 or more simultaneous tactics. The
> combinatorics of searching through all of those by brute force are
> enormous. But humans know exactly what they are looking for.
> Martin
>
> Reference:
> http://webdocs.cs.ualberta.ca/~mmueller/publications.html#2013
>
> S.-C. Huang and M. Müller. Investigating the Limits of Monte Carlo Tree
> Search Methods in Computer Go. Computers and Games 2013, pp. 39-48.
> Erratum for this paper: in test case 2, Black wins.
>
>

Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Martin Mueller
On Mar 13, 2016, at 6:00 AM, computer-go-requ...@computer-go.org wrote:
> 
>> So, what would be Lee's best effort to exploit this? Complicating
>> and playing hopefully-unexpected-tesuji moves?

Judging from this game, setting up multiple interrelated tactical fights, such 
that no subset of them works, but all together they work to capture or kill 
something.

For tactical fights, I would expect the value network to be relatively weaker 
than for quiet territorial positions.
So it comes down to solving the problem by search.

Aja and I wrote a paper a few years back that showed that even on a 9x9 board, 
having two safe but not entirely safe-in-playouts groups on the board confuses 
most Go programs and can push the “bad news” over the search horizon. Now 
imagine having 3, 4, 5 or more simultaneous tactics. The combinatorics of 
searching through all of those by brute force are enormous. But humans know 
exactly what they are looking for.
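
To make that blow-up concrete, here is a back-of-the-envelope sketch in
Python (illustrative numbers only, not from the paper): if each of k
tactics needs a local line of d moves with b candidates per position,
reading them one at a time costs about k*b^d positions, while a brute-force
global search that interleaves them faces roughly (k*b)^(k*d).

    # Back-of-the-envelope illustration; b, d and k are made-up parameters.
    def local_reading(b, d, k):
        # Reading each tactic on its own: k separate trees of size ~b^d.
        return k * b ** d

    def global_search(b, d, k):
        # A global search interleaving all k tactics faces a branching
        # factor of roughly k*b over a combined depth of roughly k*d.
        return (k * b) ** (k * d)

    for k in (1, 2, 3):
        print(k, local_reading(3, 6, k), global_search(3, 6, k))
    # k=1:  729 vs 729
    # k=3: 2187 vs about 1.5e17 -- far beyond any brute-force budget
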
Martin

Reference:
http://webdocs.cs.ualberta.ca/~mmueller/publications.html#2013
S.-C. Huang and M. Müller. Investigating the Limits of Monte Carlo Tree
Search Methods in Computer Go. Computers and Games 2013, pp. 39-48.
Erratum for this paper: in test case 2, Black wins.



Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Sorin Gherman
There is no way it did not know that O10 was dead after White plays O9, since
AlphaGo handled much more complicated fights even in the October games.

My only guess, from looking at the sequence around O10 where Black makes its
own big group bigger, is that it was preparing for a ko fight and wanted to
have ONE huge ko threat in that area - something like that. I don't see any
other reasonable explanation.

On Sun, Mar 13, 2016 at 7:55 AM, Olivier Teytaud wrote:

> Should we understand that AlphaGo had not understood that O10 was dead ?
> (sorry for Go beginner question :-) )
>

Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Brian Sheppard
I have the impression that the value network is used to initialize the score of
a node to, say, 70% out of N trials. Then the MCTS adds trial N+1, N+2, etc.
That is still asymptotically optimal, but if the value network is accurate you
get a big acceleration in accuracy, because the scores start from a higher
point instead of wobbling unstably for a while.

But then I didn't follow the back-up policy. That is, if you do a search and
the color to move loses, but the evaluation at the leaf node was a 70% win,
what update is made to this node?

In plain MCTS you only use the W/L value. But if you are using a value network,
it seems inconsistent not to use the 70% in some way.

So I also have to go back and read the paper again...
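
If I remember the paper correctly, it does mix the two signals: each leaf
evaluation is V(s_L) = (1 - lambda) * v_theta(s_L) + lambda * z_L with
lambda = 0.5, and the tree keeps the value-net and rollout statistics
separately, combining them when selecting moves. A rough sketch of the
general idea in Python (not their code; the n_prior seeding and the single
running mean are my own simplifications):

    # Sketch: seed a node's statistics from the value net, then back up a
    # mix of the rollout result and the leaf's value-net estimate.
    class Node:
        def __init__(self, value_net_winrate, n_prior=10, lam=0.5):
            # Pretend the node has already seen n_prior trials at the value
            # net's winrate, so search starts near 70% instead of near 50%.
            self.visits = float(n_prior)
            self.wins = n_prior * value_net_winrate
            self.lam = lam

        def backup(self, rollout_result, leaf_value):
            # rollout_result: 1.0 for a win, 0.0 for a loss (z).
            # leaf_value: value-net estimate at the expanded leaf (v_theta).
            mixed = (1.0 - self.lam) * leaf_value + self.lam * rollout_result
            self.visits += 1.0
            self.wins += mixed

        def winrate(self):
            return self.wins / self.visits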

-Original Message-
From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Darren Cook
Sent: Sunday, March 13, 2016 2:20 PM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Game 4: a rare insight

> You are right, but from Fig. 2 of the paper one can see that the MC and
> value network should give similar results:
>
> A 70% value-network estimate should be comparable to a 60-65% MC winrate
> in this paper, usually expected around move 140 in a "human expert game"
> (whatever that means in this figure :)

Thanks, that makes sense.

>>> Assuming that is an MCTS estimate of winning probability, that 70% 
>>> sounds high (i.e. very confident);
> 
>> That tweet says 70% is from value net, not from MCTS estimate.

I guess I need to go back and read the AlphaGo papers again; I thought it was 
still an MCTS program at the top-level, and the value network was being used to 
influence the moves the tree explores. But from this, and some other comments 
I've seen, I have the feeling I've misunderstood.

Darren





Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Darren Cook
> You are right, but from Fig. 2 of the paper one can see that the MC and
> value network should give similar results:
>
> A 70% value-network estimate should be comparable to a 60-65% MC winrate
> in this paper, usually expected around move 140 in a "human expert game"
> (whatever that means in this figure :)

Thanks, that makes sense.

>>> Assuming that is an MCTS estimate of winning probability, that
>>> 70% sounds high (i.e. very confident);
> 
>> That tweet says 70% is from value net, not from MCTS estimate.

I guess I need to go back and read the AlphaGo papers again; I thought
it was still an MCTS program at the top-level, and the value network was
being used to influence the moves the tree explores. But from this, and
some other comments I've seen, I have the feeling I've misunderstood.

Darren






Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Richard Lorentz
And a related question from a fellow "beginner": At what point was that 
group actually dead?


On 03/13/2016 07:55 AM, Olivier Teytaud wrote:

Should we understand that AlphaGo had not understood that O10 was dead ?
(sorry for Go beginner question :-) )

On Sun, Mar 13, 2016 at 1:42 PM, Detlef Schmicker wrote:



You are right, but from Fig. 2 of the paper one can see that the MC and
value network should give similar results:

A 70% value-network estimate should be comparable to a 60-65% MC winrate
in this paper, usually expected around move 140 in a "human expert game"
(whatever that means in this figure :)

On 13.03.2016 at 12:48, Seo Sanghyeon wrote:
> 2016-03-13 17:54 GMT+09:00 Darren Cook:
>> From Demis Hassabis: When I say 'thought' and 'realisation' I
>> just mean the output of #AlphaGo value net. It was around 70% at
>> move 79 and then dived on move 87
>>
>> https://twitter.com/demishassabis/status/708934687926804482
>>
>> Assuming that is an MCTS estimate of winning probability, that
>> 70% sounds high (i.e. very confident);
>
> That tweet says 70% is from value net, not from MCTS estimate.
>






--
=
Olivier Teytaud, olivier.teyt...@inria.fr, TAO, LRI, UMR 8623 (CNRS - Univ. Paris-Sud),
bat 490 Univ. Paris-Sud F-91405 Orsay Cedex France
http://www.slideshare.net/teytaud






Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Olivier Teytaud
Should we understand that AlphaGo had not understood that O10 was dead?
(sorry for the Go beginner question :-) )

On Sun, Mar 13, 2016 at 1:42 PM, Detlef Schmicker  wrote:

>
> You are right, but from Fig. 2 of the paper one can see that the MC and
> value network should give similar results:
>
> A 70% value-network estimate should be comparable to a 60-65% MC winrate
> in this paper, usually expected around move 140 in a "human expert game"
> (whatever that means in this figure :)
>
> On 13.03.2016 at 12:48, Seo Sanghyeon wrote:
> > 2016-03-13 17:54 GMT+09:00 Darren Cook :
> >> From Demis Hassabis: When I say 'thought' and 'realisation' I
> >> just mean the output of #AlphaGo value net. It was around 70% at
> >> move 79 and then dived on move 87
> >>
> >> https://twitter.com/demishassabis/status/708934687926804482
> >>
> >> Assuming that is an MCTS estimate of winning probability, that
> >> 70% sounds high (i.e. very confident);
> >
> > That tweet says 70% is from value net, not from MCTS estimate.
> >



-- 
=
Olivier Teytaud, olivier.teyt...@inria.fr, TAO, LRI, UMR 8623(CNRS - Univ.
Paris-Sud),
bat 490 Univ. Paris-Sud F-91405 Orsay Cedex France
http://www.slideshare.net/teytaud

Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Detlef Schmicker

You are right, but from Fig. 2 of the paper one can see that the MC and
value network should give similar results:

A 70% value-network estimate should be comparable to a 60-65% MC winrate
in this paper, usually expected around move 140 in a "human expert game"
(whatever that means in this figure :)

On 13.03.2016 at 12:48, Seo Sanghyeon wrote:
> 2016-03-13 17:54 GMT+09:00 Darren Cook :
>> From Demis Hassabis: When I say 'thought' and 'realisation' I
>> just mean the output of #AlphaGo value net. It was around 70% at
>> move 79 and then dived on move 87
>> 
>> https://twitter.com/demishassabis/status/708934687926804482
>> 
>> Assuming that is an MCTS estimate of winning probability, that
>> 70% sounds high (i.e. very confident);
> 
> That tweet says 70% is from value net, not from MCTS estimate.
> 

Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Seo Sanghyeon
2016-03-13 17:54 GMT+09:00 Darren Cook :
> From Demis Hassabis:
>   When I say 'thought' and 'realisation' I just mean the output of
>   #AlphaGo value net. It was around 70% at move 79 and then dived
>   on move 87
>
>   https://twitter.com/demishassabis/status/708934687926804482
>
> Assuming that is an MCTS estimate of winning probability, that 70%
> sounds high (i.e. very confident);

That tweet says the 70% is from the value net, not from an MCTS estimate.

-- 
Seo Sanghyeon

Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Brian Sheppard
I would not place too much confidence in the observers. Even though they are
pro players, they don't have the same degree of concentration as the game's
participants, and their obligation to comment on the game at regular intervals
further degrades their analysis. Figuring out where a player went wrong will
take much longer than it took to play the game.

If you really want good insight, it is better to use several programs to play
out games based on moves that they collectively propose, with the additional
ability to take moves from a human. Over a period of days you can get quite
strong analysis, even if the individual players are not that strong. There was
a great paper describing the method a few years ago.
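
A sketch of the kind of setup I mean (the engine and position interfaces
here are placeholders, not any particular program's API): let several
programs vote on each move, allow a human override, and play the consensus
move forward.

    # Illustrative only: best_move(), is_terminal() and play() are assumed
    # interfaces, not a real engine API.
    from collections import Counter

    def consensus_move(engines, position, human_move=None):
        if human_move is not None:
            return human_move                      # take the human's move
        proposals = [engine.best_move(position) for engine in engines]
        move, _votes = Counter(proposals).most_common(1)[0]
        return move

    def play_out(engines, position, max_moves=400):
        record = []
        for _ in range(max_moves):
            if position.is_terminal():
                break
            move = consensus_move(engines, position)
            record.append(move)
            position = position.play(move)
        return record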

-Original Message-
From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Marc Landgraf
Sent: Sunday, March 13, 2016 5:26 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Game 4: a rare insight

The most interesting part is that at this point many pro commentators found a
lot of aji, but did not find a "solution" for Lee Sedol that broke AlphaGo's
position. So the question remains: did AlphaGo find a hole in its own position
and try to dodge it? Was it too strong for its own good? Or was it a
misevaluation due to the immense amount of aji, which would not have resulted
in harm if played properly?


2016-03-13 9:54 GMT+01:00 Darren Cook <dar...@dcook.org>:
> From Demis Hassabis:
>   When I say 'thought' and 'realisation' I just mean the output of
>   #AlphaGo value net. It was around 70% at move 79 and then dived
>   on move 87
>
>   https://twitter.com/demishassabis/status/708934687926804482
>
> Assuming that is an MCTS estimate of winning probability, that 70% 
> sounds high (i.e. very confident); when I was doing the computer-human 
> team experiments, on 9x9, with three MCTS programs, I generally knew 
> I'd found a winning move when the percentages moved from the 48-52% 
> range to, say, 55%.
>
> I really hope they reveal the win estimates for each move of the 5 
> games. It will especially be interesting to then compare that to the 
> other leading MCTS programs.
>
> Darren
>

Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Detlef Schmicker



On 13.03.2016 at 11:28, Josef Moudrik wrote:
> How well do you think the MCTS weakness we witnessed today is
> hidden in AG? Or, how can one go about exploiting it
> systematically?
> 
> I think it might be well hidden by the value network being very
> strong and truthful most of the time - it is much harder to get AG
> into this state than traditional MCTS bots with much less truthful
> evaluations.
> 
> So, what would be Lee's best effort to exploit this? Complicating
> and playing hopefully-unexpected tesuji moves?
> 
> Detlef: Demis tweeted that W78 caused the B79 mistake, which only
> surfaced some ten moves later. Can you share the development of
> the value evals during these moves? Did your net fall down right
> after move 78?

My net is very unstable in that sequence, jumping around a lot. But I
am at a very early stage with my value network, so just to be clear:

this is probably more a problem of the value network than of the MC
playouts, which are quite stable in the sequence. But don't put too much
into this; I am not really convinced of my net at the moment :(


> 
> What interesting times we live in :-)
> 
> Regards, Josef
> 
> On Sun, Mar 13, 2016 at 10:33, Marc Landgraf wrote:
> 
>> Oh, is it possible to provide those variants? Or is there a
>> recording of the broadcast, reading the board is probably enough
>> to roughly understand it.
>> 
>> 2016-03-13 10:32 GMT+01:00 Chun Sun :
>>> Hi Marc,
>>> 
>>> "but did not find a "solution" for Lee Sedol that broke
>>> AlphaGos
>> position"
>>> -- this is not true. Ke Jie and Gu Li both found more than one
>>> way to
>> break
>>> the position :)
>>> 
>>> On Sun, Mar 13, 2016 at 5:26 AM, Marc Landgraf wrote:
>>>
>>>> The most interesting part is that at this point many pro
>>>> commentators found a lot of aji, but did not find a "solution"
>>>> for Lee Sedol that broke AlphaGo's position. So the question
>>>> remains: did AlphaGo find a hole in its own position and try to
>>>> dodge it? Was it too strong for its own good? Or was it a
>>>> misevaluation due to the immense amount of aji, which would not
>>>> have resulted in harm if played properly?
 
 
 2016-03-13 9:54 GMT+01:00 Darren Cook :
> From Demis Hassabis: When I say 'thought' and 'realisation'
> I just mean the output of #AlphaGo value net. It was around
> 70% at move 79 and then dived on move 87
> 
> https://twitter.com/demishassabis/status/708934687926804482
>
>
> 
Assuming that is an MCTS estimate of winning probability, that 70%
> sounds high (i.e. very confident); when I was doing the
> computer-human team experiments, on 9x9, with three MCTS
> programs, I generally knew
>> I'd
> found a winning move when the percentages moved from the
> 48-52% range to, say, 55%.
> 
> I really hope they reveal the win estimates for each move
> of the 5 games. It will especially be interesting to then
> compare that to the other leading MCTS programs.
> 
> Darren
> 

Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Josef Moudrik
How well do you think the MCTS weakness we witnessed today is hidden
in AG? Or, how can one go about exploiting it systematically?

I think it might be well hidden by the value network being very strong and
truthful most of the time - it is much harder to get AG into this state than
traditional MCTS bots with much less truthful evaluations.

So, what would be Lee's best effort to exploit this? Complicating and
playing hopefully-unexpected tesuji moves?

Detlef: Demis tweeted that W78 caused the B79 mistake, which only surfaced
some ten moves later. Can you share the development of the value evals
during these moves? Did your net fall down right after move 78?

What interesting times we live in :-)

Regards,
Josef

On Sun, Mar 13, 2016 at 10:33, Marc Landgraf wrote:

> Oh, is it possible to provide those variants? Or is there a recording
> of the broadcast, reading the board is probably enough to roughly
> understand it.
>
> 2016-03-13 10:32 GMT+01:00 Chun Sun :
> > Hi Marc,
> >
> > "but did not find a "solution" for Lee Sedol that broke AlphaGos
> position"
> > -- this is not true. Ke Jie and Gu Li both found more than one way to
> break
> > the position :)
> >
> > On Sun, Mar 13, 2016 at 5:26 AM, Marc Landgraf wrote:
> >>
> >> The most interesting part is that at this point many pro
> >> commentators found a lot of aji, but did not find a "solution" for Lee
> >> Sedol that broke AlphaGo's position. So the question remains: did
> >> AlphaGo find a hole in its own position and try to dodge it? Was
> >> it too strong for its own good? Or was it a misevaluation due to the
> >> immense amount of aji, which would not have resulted in harm if played
> >> properly?
> >>
> >>
> >> 2016-03-13 9:54 GMT+01:00 Darren Cook :
> >> > From Demis Hassabis:
> >> >   When I say 'thought' and 'realisation' I just mean the output of
> >> >   #AlphaGo value net. It was around 70% at move 79 and then dived
> >> >   on move 87
> >> >
> >> >   https://twitter.com/demishassabis/status/708934687926804482
> >> >
> >> > Assuming that is an MCTS estimate of winning probability, that 70%
> >> > sounds high (i.e. very confident); when I was doing the computer-human
> >> > team experiments, on 9x9, with three MCTS programs, I generally knew
> I'd
> >> > found a winning move when the percentages moved from the 48-52% range
> >> > to, say, 55%.
> >> >
> >> > I really hope they reveal the win estimates for each move of the 5
> >> > games. It will especially be interesting to then compare that to the
> >> > other leading MCTS programs.
> >> >
> >> > Darren
> >> >

Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Chun Sun
Hi Marc,

"but did not find a "solution" for Lee Sedol that broke AlphaGos position"
-- this is not true. Ke Jie and Gu Li both found more than one way to break
the position :)

On Sun, Mar 13, 2016 at 5:26 AM, Marc Landgraf  wrote:

> The most interesting part is that at this point many pro
> commentators found a lot of aji, but did not find a "solution" for Lee
> Sedol that broke AlphaGo's position. So the question remains: did
> AlphaGo find a hole in its own position and try to dodge it? Was
> it too strong for its own good? Or was it a misevaluation due to the
> immense amount of aji, which would not have resulted in harm if played
> properly?
>
>
> 2016-03-13 9:54 GMT+01:00 Darren Cook :
> > From Demis Hassabis:
> >   When I say 'thought' and 'realisation' I just mean the output of
> >   #AlphaGo value net. It was around 70% at move 79 and then dived
> >   on move 87
> >
> >   https://twitter.com/demishassabis/status/708934687926804482
> >
> > Assuming that is an MCTS estimate of winning probability, that 70%
> > sounds high (i.e. very confident); when I was doing the computer-human
> > team experiments, on 9x9, with three MCTS programs, I generally knew I'd
> > found a winning move when the percentages moved from the 48-52% range
> > to, say, 55%.
> >
> > I really hope they reveal the win estimates for each move of the 5
> > games. It will especially be interesting to then compare that to the
> > other leading MCTS programs.
> >
> > Darren
> >

Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Marc Landgraf
The most interesting part is that at this point many pro commentators
found a lot of aji, but did not find a "solution" for Lee Sedol that
broke AlphaGo's position. So the question remains: did AlphaGo find a
hole in its own position and try to dodge it? Was it too strong for
its own good? Or was it a misevaluation due to the immense amount of
aji, which would not have resulted in harm if played properly?


2016-03-13 9:54 GMT+01:00 Darren Cook :
> From Demis Hassabis:
>   When I say 'thought' and 'realisation' I just mean the output of
>   #AlphaGo value net. It was around 70% at move 79 and then dived
>   on move 87
>
>   https://twitter.com/demishassabis/status/708934687926804482
>
> Assuming that is an MCTS estimate of winning probability, that 70%
> sounds high (i.e. very confident); when I was doing the computer-human
> team experiments, on 9x9, with three MCTS programs, I generally knew I'd
> found a winning move when the percentages moved from the 48-52% range
> to, say, 55%.
>
> I really hope they reveal the win estimates for each move of the 5
> games. It will especially be interesting to then compare that to the
> other leading MCTS programs.
>
> Darren
>

Re: [Computer-go] Game 4: a rare insight

2016-03-13 Thread Detlef Schmicker

Interesting, my value net does the same, even though it was trained quite
differently, on 7d+ games :)

On 13.03.2016 at 09:54, Darren Cook wrote:
> From Demis Hassabis: When I say 'thought' and 'realisation' I just
> mean the output of #AlphaGo value net. It was around 70% at move 79
> and then dived on move 87
> 
> https://twitter.com/demishassabis/status/708934687926804482
> 
> Assuming that is an MCTS estimate of winning probability, that 70% 
> sounds high (i.e. very confident); when I was doing the
> computer-human team experiments, on 9x9, with three MCTS programs,
> I generally knew I'd found a winning move when the percentages
> moved from the 48-52% range to, say, 55%.
> 
> I really hope they reveal the win estimates for each move of the 5 
> games. It will especially be interesting to then compare that to
> the other leading MCTS programs.
> 
> Darren
> 

[Computer-go] Game 4: a rare insight

2016-03-13 Thread Darren Cook
From Demis Hassabis:
  When I say 'thought' and 'realisation' I just mean the output of
  #AlphaGo value net. It was around 70% at move 79 and then dived
  on move 87

  https://twitter.com/demishassabis/status/708934687926804482

Assuming that is an MCTS estimate of winning probability, that 70%
sounds high (i.e. very confident); when I was doing the computer-human
team experiments, on 9x9, with three MCTS programs, I generally knew I'd
found a winning move when the percentages moved from the 48-52% range
to, say, 55%.
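
As a rough illustration of what I mean (made-up numbers, not the actual
match data), the signal I look for is the first move where the estimate
leaves the coin-flip band and stays out:

    # Sketch: find the first move whose winrate estimate escapes the
    # 48-52% band and stays out for a few consecutive moves.
    def first_decisive_move(winrates, band=(0.48, 0.52), hold=3):
        lo, hi = band
        for i in range(len(winrates) - hold + 1):
            window = winrates[i:i + hold]
            if all(w > hi for w in window) or all(w < lo for w in window):
                return i
        return None

    estimates = [0.50, 0.49, 0.51, 0.55, 0.56, 0.58]   # invented example
    print(first_decisive_move(estimates))              # -> 3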

I really hope they reveal the win estimates for each move of the 5
games. It will be especially interesting to then compare them with the
estimates of the other leading MCTS programs.

Darren
