Re: [Computer-go] Nice graph

2016-03-25 Thread Igor Polyakov
AGA ratings (not ranks, which only go up to 7d, but ratings) go up 
indefinitely, so some players have had ratings above 10.0. Mark Lee could 
possibly have given a 7.0-rated player three stones and still won, despite 
both being 7d amateurs. Mark Lee is also favored against weaker 
professionals. So 10d > 1p, usually.


On 2016-03-25 22:15, Robert Jasiek wrote:

On 26.03.2016 01:23, Rémi Coulom wrote:

http://i.imgur.com/ylQTErVl.jpg


9d does not exist.
8d is rare and may as well be translated to the very strongest 7d.
EGF 7d means up to ca. 5p. Korean 7d might be stronger.
EGF 6d means up to ca. 1p. Korean 6d might be stronger.
Korean 5d means ca. EGF 5d - 6d.

9p has a great range in itself.

Rating systems can suffer from runaway ratings at the top. 
Self-play ratings might not be meaningful.


5 games are not enough to assign a secure rank to AlphaGo v. 18.

So, no, the graph is not nice.



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Nice graph

2016-03-25 Thread Robert Jasiek

On 26.03.2016 06:15, Robert Jasiek wrote:

9d does not exist.


I mean as a real-world rank. Of course, there are servers on which ranks 
are derived from ratings. E.g., KGS 9d can mean anything from real-world 
3d [sic!] to 9p.


--
robert jasiek

Re: [Computer-go] Nice graph

2016-03-25 Thread Robert Jasiek

On 26.03.2016 01:23, Rémi Coulom wrote:

http://i.imgur.com/ylQTErVl.jpg


9d does not exist.
8d is rare and may as well be translated to the very strongest 7d.
EGF 7d means up to ca. 5p. Korean 7d might be stronger.
EGF 6d means up to ca. 1p. Korean 6d might be stronger.
Korean 5d means ca. EGF 5d - 6d.

9p has a great range in itself.

Rating systems can suffer from runaway ratings at the top. 
Self-play ratings might not be meaningful.


5 games are not enough to assign a secure rank to AlphaGo v. 18.

So, no, the graph is not nice.

--
robert jasiek

Re: [Computer-go] Nice graph

2016-03-25 Thread Aja Huang
2016-03-26 2:48 GMT+00:00 Petr Baudis :
>
> The word covered by the speaker's head is "self".  Bot results in
> self-play are always(?) massively exaggerated.  It's not uncommon to see
> a 75% winrate in self-play translate to a 52% winrate against
> a third-party reference opponent; cf. figs. 7 and 8 in
> http://pasky.or.cz/go/pachi-tr.pdf . Intuitively, I'd expect the effect
> to be less pronounced with very strong programs, but we don't know
> anything precise about the mechanics here, and experiments are difficult.
>

Note that recently, for Crazy Stone and Zen, improvements in self-play have 
also transferred to playing strength against human players. According to 
Rémi and Hideki, Crazy Stone and Zen are both >=80% stronger with a policy 
net, and they both reach 7d on KGS (about 1 stone stronger).

But generally I agree that we should be cautious about self-play results.

Aja

> There's no doubt today's AlphaGo is much stronger than the Nature version.
> But how much?  We'll have a better idea when they pit it in more matches
> with humans, and ideally when other programs catch up further.  Without
> knowing more (like the rest of the slides or a statement by someone from
> Deepmind), I wouldn't personally read much into this graph.
>
> --
> Petr Baudis
> If you have good ideas, good data and fast computers,
> you can do almost anything. -- Geoffrey Hinton

Re: [Computer-go] Nice graph

2016-03-25 Thread terry mcintyre
It's never wise to generalize too much from one data point.
AlphaGo 2.0 is very, very good at defeating AlphaGo 1.0.
This does not make 2.0 a 10 dan pro. 
If it beat a succession of 9 dan pros, the claim would be credible.

I think David Silver knows this. I think he and his team are studying Game 4 
very closely, and are working on the 3.0 version. And I notice that all the top 
programs are taking a close look at DCNN now. 



On Friday, March 25, 2016, 7:48 PM, Petr Baudis  wrote:

The thread is

    http://www.lifein19x19.com/forum/viewtopic.php?f=18=12922=201695

On Fri, Mar 25, 2016 at 09:16:07PM -0400, Brian Sheppard wrote:
> Hmm, this seems to imply a 1000-Elo edge over a human 9p. But such a player
> would essentially never lose a game to a human.
> 
> I take this as an example of the difficulty of extrapolating based on games
> against computers (and the slide seems to have a disclaimer to this effect,
> if I am reading the text on the left-hand side correctly). Computers have
> structural similarities that exaggerate strength differences in head-to-head
> comparisons. But against opponents with different playing characteristics,
> such as a human 9p, the strength distribution is different.

I agree.  Well, computer vs. computer may be still (somewhat) fine as long
as it's different programs.  What I wrote in that thread:

The word covered by the speaker's head is "self".  Bot results in
self-play are always(?) massively exaggerated.  It's not uncommon to see
a 75% winrate in self-play translate to a 52% winrate against
a third-party reference opponent; cf. figs. 7 and 8 in
http://pasky.or.cz/go/pachi-tr.pdf . Intuitively, I'd expect the effect
to be less pronounced with very strong programs, but we don't know
anything precise about the mechanics here, and experiments are difficult.

There's no doubt today's AlphaGo is much stronger than the Nature version.
But how much?  We'll have a better idea when they pit it in more matches
with humans, and ideally when other programs catch up further.  Without
knowing more (like the rest of the slides or a statement by someone from
Deepmind), I wouldn't personally read much into this graph.

-- 
                Petr Baudis
    If you have good ideas, good data and fast computers,
    you can do almost anything. -- Geoffrey Hinton

Re: [Computer-go] Nice graph

2016-03-25 Thread Petr Baudis
The thread is

http://www.lifein19x19.com/forum/viewtopic.php?f=18=12922=201695

On Fri, Mar 25, 2016 at 09:16:07PM -0400, Brian Sheppard wrote:
> Hmm, this seems to imply a 1000-Elo edge over a human 9p. But such a player
> would essentially never lose a game to a human.
> 
> I take this as an example of the difficulty of extrapolating based on games
> against computers (and the slide seems to have a disclaimer to this effect,
> if I am reading the text on the left-hand side correctly). Computers have
> structural similarities that exaggerate strength differences in head-to-head
> comparisons. But against opponents with different playing characteristics,
> such as a human 9p, the strength distribution is different.

I agree.  Well, computer vs. computer may be still (somewhat) fine as long
as it's different programs.  What I wrote in that thread:

The word covered by the speaker's head is "self".  Bot results in
self-play are always(?) massively exaggerated.  It's not uncommon to see
a 75% winrate in self-play translate to a 52% winrate against
a third-party reference opponent; cf. figs. 7 and 8 in
http://pasky.or.cz/go/pachi-tr.pdf . Intuitively, I'd expect the effect
to be less pronounced with very strong programs, but we don't know
anything precise about the mechanics here, and experiments are difficult.

There's no doubt today's AlphaGo is much stronger than the Nature version.
But how much?  We'll have a better idea when they pit it in more matches
with humans, and ideally when other programs catch up further.  Without
knowing more (like the rest of the slides or a statement by someone from
Deepmind), I wouldn't personally read much into this graph.
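To put that exaggeration in rating terms, one can invert the Elo curve (a sketch; the usual 400-point logistic scale is assumed):

```python
import math

def winrate_to_elo(p):
    """Rating gap implied by a winrate p under the logistic Elo model."""
    return 400.0 * math.log10(p / (1.0 - p))

# The 75% -> 52% example above: a ~191 Elo gap in self-play collapses
# to ~14 Elo against a third-party reference opponent.
print(round(winrate_to_elo(0.75)))  # -> 191
print(round(winrate_to_elo(0.52)))  # -> 14
```

So a self-play margin that looks like two ranks can be worth almost nothing against a differently-built opponent.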

-- 
Petr Baudis
If you have good ideas, good data and fast computers,
you can do almost anything. -- Geoffrey Hinton

Re: [Computer-go] Nice graph

2016-03-25 Thread Dan
How did they do it ? Is there a video of the presentation somewhere ?
Thanks

On Fri, Mar 25, 2016 at 5:59 PM, David Ongaro 
wrote:

> That would mean 3 stones, if the "4 stone handicap" has the same definition
> as in the paper (7.5 komi for white and 3 extra moves for black after the
> first move; yes, that implies that a traditional 4-stone handicap, without
> komi for white, is in fact worth 3.5 stones).
>
>
> > On 25 Mar 2016, at 17:23, Rémi Coulom  wrote:
> >
> > AlphaGo improved 3-4 stones:
> >
> > http://i.imgur.com/ylQTErVl.jpg
> >
> > (Found in the Life in 19x19 forum)
> >
> > Rémi

Re: [Computer-go] Nice graph

2016-03-25 Thread Brian Sheppard
Hmm, this seems to imply a 1000-Elo edge over a human 9p. But such a player 
would essentially never lose a game to a human.

I take this as an example of the difficulty of extrapolating based on games 
against computers (and the slide seems to have a disclaimer to this effect, if 
I am reading the text on the left-hand side correctly). Computers have 
structural similarities that exaggerate strength differences in head-to-head 
comparisons. But against opponents with different playing characteristics, 
such as a human 9p, the strength distribution is different.
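For concreteness, the standard logistic Elo model puts a number on "never lose" (a quick sketch; the conventional 400-point scale is the assumption):

```python
def elo_expected_score(elo_diff):
    """Win probability of the higher-rated player under the logistic Elo model."""
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

# A 1000-Elo edge implies winning about 99.7% of games, i.e. losing
# roughly one game in three hundred.
print(round(elo_expected_score(1000), 4))  # -> 0.9968
```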

-Original Message-
From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Rémi Coulom
Sent: Friday, March 25, 2016 8:23 PM
To: computer-go 
Subject: [Computer-go] Nice graph

AlphaGo improved 3-4 stones:

http://i.imgur.com/ylQTErVl.jpg

(Found in the Life in 19x19 forum)

Rémi

Re: [Computer-go] Nice graph

2016-03-25 Thread David Ongaro
That would mean 3 stones, if the "4 stone handicap" has the same definition as 
in the paper (7.5 komi for white and 3 extra moves for black after the first 
move; yes, that implies that a traditional 4-stone handicap, without komi for 
white, is in fact worth 3.5 stones).
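The move-equivalent accounting behind that can be written out (a sketch; treating a full ~7.5 komi as worth about half a move is the assumption here):

```python
KOMI_IN_MOVES = 0.5  # assumption: ~7.5 komi compensates roughly half a move

def net_handicap_in_moves(extra_black_moves, white_gets_komi):
    """Black's net advantage, in move equivalents."""
    komi_bonus = 0.0 if white_gets_komi else KOMI_IN_MOVES
    return extra_black_moves + komi_bonus

# The paper's "4 stone handicap": 3 extra black moves, white keeps 7.5 komi.
print(net_handicap_in_moves(3, white_gets_komi=True))   # -> 3.0
# A traditional 4-stone handicap: 3 extra moves and no komi for white.
print(net_handicap_in_moves(3, white_gets_komi=False))  # -> 3.5
```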


> On 25 Mar 2016, at 17:23, Rémi Coulom  wrote:
> 
> AlphaGo improved 3-4 stones:
> 
> http://i.imgur.com/ylQTErVl.jpg
> 
> (Found in the Life in 19x19 forum)
> 
> Rémi