[Computer-go] Densei-sen

2016-03-22 Thread Hiroshi Yamashita

Hi,

Darkforest lost to Koichi Kobayashi with a three-stone handicap.
The next game, Zen vs. Kobayashi, will also be played with a three-stone handicap.

Hiroshi Yamashita

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] Programs?

2016-03-22 Thread unic (Ola Mikael Hansson)
What are good programs for playing Go at different board sizes?

Many Faces has 7x7 - 19x19, including the even-numbered sizes, so that covers
me for a lot.  When buying CrazyStone HD on the iPad, though, I was
disappointed that only 9x9, 13x13, and 19x19 were in there - would it have
been that difficult to include at least 7x7, 11x11, 15x15, and 17x17 too?

Are there reasonably strong programs supporting bigger boards, or
rectangular boards?  (Or other shapes of board, for that matter? - I know
Othello, due to the importance of corners in that game, did some
experimentation with octagon-shaped boards - cut off a triangular region
from each corner on a square board and you have it.)

Toroidal Go, TriGo, or other Go variants - are there any programs that play them at all?
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Would a larger board (25x25) dramatically reduce AlphaGos skill?

2016-03-22 Thread Yuancheng Luo
A conv net should be robust. Coming from the image-processing domain, these
are feature detectors (shapes, in the case of Go) that are invariant to
translations (moving a shape left, right, up, or down along the board).
Enlarging the board wouldn't put the bot at a disadvantage in evaluating
local positions.
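
(A small illustrative sketch of that translation invariance, in Python with
NumPy/SciPy and a made-up 3x3 pattern - not any actual engine's code: the same
filter slides over a 19x19 or a 25x25 plane, and moving a local shape only
moves the response.)

import numpy as np
from scipy.signal import correlate2d

# A toy 3x3 "shape detector", standing in for one learned conv filter.
pattern = np.array([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])

def response(board):
    # Slide the same filter over the whole board, whatever its size.
    return correlate2d(board, pattern, mode="same")

board19 = np.zeros((19, 19))
board19[2:5, 2:5] = pattern        # the shape near a corner of a 19x19 board

board25 = np.zeros((25, 25))
board25[11:14, 16:19] = pattern    # the same shape elsewhere on a 25x25 board

# The peak activation is identical on both boards; only its location differs.
print(response(board19).max(), response(board25).max())   # 4.0 4.0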


On Tuesday, March 22, 2016, Ray Tayek  wrote:

> On 3/22/2016 5:21 PM, Lukas van de Wiel wrote:
>
> It would reduce Alphago, because there is less training material in the
> form of high-dan-games, to train the policy network.
>
> It would also reduce the skill of a human opponent, because (s)he would
> have less experience on a larger board, just as AlphaGo.
>
> It would be fun to see which can adapt better.
>
>
> human would adapt quickly after a few games (say 10 or so).
>
> thanks
>
>
> On Wed, Mar 23, 2016 at 1:18 PM, Ray Tayek wrote:
>
>> On 3/22/2016 11:25 AM, Tom M wrote:
>>
>>> I suspect that even with a similarly large training sample for
>>> initialization that AlphaGo would suffer a major reduction in apparent
>>> skill level.
>>>
>> i think a human would also.
>>
>>>The CNN would require many more layers of convolution;
>>> the valuation of positions would be much more uncertain; play in the
>>> corner, edges, and center would all be more complicated patterns, and
>>> there would be far more good candidates to consider at each ply and
>>> rollouts would be much less stable and less accurate.
>>>
>> yes.
>>
>> the normal board size is 19x19 because the amount of territory in the
>> sides and corners is slightly larger than the amount of territory in the
>> middle.
>>
>> thanks
>>
>> --
>> Honesty is a very expensive gift. So, don't expect it from cheap people -
>> Warren Buffett
>> http://tayek.com/
>>
>>
>>
>
>
>
>
>
>
> --
> Honesty is a very expensive gift. So, don't expect it from cheap people -
> Warren Buffett
> http://tayek.com/
>
>

-- 
Yuancheng [Mike] Luo
www.umiacs.umd.edu/~yluo1 
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Would a larger board (25x25) dramatically reduce AlphaGos skill?

2016-03-22 Thread Ray Tayek

On 3/22/2016 5:21 PM, Lukas van de Wiel wrote:

> It would reduce Alphago, because there is less training material in
> the form of high-dan-games, to train the policy network.
>
> It would also reduce the skill of a human opponent, because (s)he
> would have less experience on a larger board, just as AlphaGo.
>
> It would be fun to see which can adapt better.

human would adapt quickly after a few games (say 10 or so).

thanks

> On Wed, Mar 23, 2016 at 1:18 PM, Ray Tayek wrote:
>
>> On 3/22/2016 11:25 AM, Tom M wrote:
>>
>>> I suspect that even with a similarly large training sample for
>>> initialization that AlphaGo would suffer a major reduction in
>>> apparent skill level.
>>
>> i think a human would also.
>>
>>> The CNN would require many more layers of convolution;
>>> the valuation of positions would be much more uncertain; play
>>> in the corner, edges, and center would all be more complicated
>>> patterns, and there would be far more good candidates to consider
>>> at each ply and rollouts would be much less stable and less accurate.
>>
>> yes.
>>
>> the normal board size is 19x19 because the amount of territory in
>> the sides and corners is slightly larger than the amount of
>> territory in the middle.
>>
>> thanks
>>
>> --
>> Honesty is a very expensive gift. So, don't expect it from cheap
>> people - Warren Buffett
>> http://tayek.com/

--
Honesty is a very expensive gift. So, don't expect it from cheap people -
Warren Buffett
http://tayek.com/

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Would a larger board (25x25) dramatically reduce AlphaGos skill?

2016-03-22 Thread Lukas van de Wiel
It would reduce AlphaGo's skill, because there is less training material in
the form of high-dan games to train the policy network.

It would also reduce the skill of a human opponent, because (s)he would
have less experience on a larger board, just as AlphaGo would.

It would be fun to see which can adapt better.

Cheers
Lukas

On Wed, Mar 23, 2016 at 1:18 PM, Ray Tayek  wrote:

> On 3/22/2016 11:25 AM, Tom M wrote:
>
>> I suspect that even with a similarly large training sample for
>> initialization that AlphaGo would suffer a major reduction in apparent
>> skill level.
>>
> i think a human would also.
>
>>The CNN would require many more layers of convolution;
>> the valuation of positions would be much more uncertain; play in the
>> corner, edges, and center would all be more complicated patterns, and
>> there would be far more good candidates to consider at each ply and
>> rollouts would be much less stable and less accurate.
>>
> yes.
>
> the normal board size is 19x19 because the amount of territory in the
> sides and corners is slightly larger than the amount of territory in the
> middle.
>
> thanks
>
> --
> Honesty is a very expensive gift. So, don't expect it from cheap people -
> Warren Buffett
> http://tayek.com/
>
>
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Would a larger board (25x25) dramatically reduce AlphaGos skill?

2016-03-22 Thread Ray Tayek

On 3/22/2016 11:25 AM, Tom M wrote:

> I suspect that even with a similarly large training sample for
> initialization that AlphaGo would suffer a major reduction in apparent
> skill level.

i think a human would also.

> The CNN would require many more layers of convolution;
> the valuation of positions would be much more uncertain; play in the
> corner, edges, and center would all be more complicated patterns, and
> there would be far more good candidates to consider at each ply and
> rollouts would be much less stable and less accurate.

yes.

the normal board size is 19x19 because the amount of territory in the
sides and corners is slightly larger than the amount of territory in the
middle.

thanks

--
Honesty is a very expensive gift. So, don't expect it from cheap people - 
Warren Buffett
http://tayek.com/

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Chun Sun
FYI. We have translated 3 posts by Li Zhe 6p into English.

https://massgoblog.wordpress.com/2016/03/11/lee-sedols-strategy-and-alphagos-weakness/
https://massgoblog.wordpress.com/2016/03/11/game-2-a-nobody-could-have-done-a-better-job-than-lee-sedol/
https://massgoblog.wordpress.com/2016/03/15/before-game-5/

These may provide a slightly different perspective from that of many other pros.


On Tue, Mar 22, 2016 at 6:21 PM, Darren Cook  wrote:

> > ...
> > Pro players who are not familiar with MCTS bot behavior will not see
> this.
>
> I stand by this:
>
> >> If you want to argue that "their opinion" was wrong because they don't
> >> understand the game at the level AlphaGo was playing at, then you can't
> >> use their opinion in a positive way either.
>
> Darren
>
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Darren Cook
> ...
> Pro players who are not familiar with MCTS bot behavior will not see this.

I stand by this:

>> If you want to argue that "their opinion" was wrong because they don't
>> understand the game at the level AlphaGo was playing at, then you can't
>> use their opinion in a positive way either.

Darren

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Ingo Althöfer
Hi Darren,

"Darren Cook" 
> ... But, there were also numerous moves where
> the 9-dan pros said, that in *their* opinion, the moves were weak/wrong.
> E.g. wasting ko threats for no reason. Moves even a 1p would never make.
>
> If you want to argue that "their opinion" was wrong because they don't
> understand the game at the level AlphaGo was playing at, then you can't
> use their opinion in a positive way either.

For these situations there is a very natural explanation, at least for
computer go insiders: the seemingly weak moves happened because AlphaGo simply
tried to maximize the winning probability and not the expected score.

Pro players who are not familiar with MCTS bot behavior will not see this.
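
(A toy illustration of the difference, with hypothetical numbers - nothing
from AlphaGo's actual search:)

# Two candidate moves with assumed win probabilities and expected margins.
candidates = {
    "safe move, small win": {"win_prob": 0.95, "expected_margin": 1.5},
    "sharp move, big win":  {"win_prob": 0.80, "expected_margin": 20.0},
}

best_by_win_prob = max(candidates, key=lambda m: candidates[m]["win_prob"])
best_by_margin   = max(candidates, key=lambda m: candidates[m]["expected_margin"])

print("maximize win probability ->", best_by_win_prob)  # safe move, small win
print("maximize expected score  ->", best_by_margin)    # sharp move, big win

To a score-oriented observer the first choice can look like pointlessly giving
up points or wasting ko threats; to a win-probability maximizer it is simply
the safer of two wins.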


Ingo.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Ingo Althöfer
"Lucas, Simon M" 
> my point is that I *think* we can say more (for example
> by not treating the outcome as a black-box event,
> but by appreciating the skill of the individual moves)


* Human professional players were full of praise for some of
AlphaGo's moves, for instance move 37 in game 2.


* Although the bots Zen and Crazy are not independent witnesses:
they both saw AlphaGo on the winning path early on in all four won games.


* The score order 1-0, 2-0, 3-0, 3-1, 4-1, with the Sedol win in the
second half of the match, is an indicator that he may have learned something
about the opponent during the early games.

Ingo.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Darren Cook
> ... we witnessed hundreds of moves vetted by 9dan players, especially
> Michael Redmond's, where each move was vetted. 

This is a promising approach. But there were also numerous moves where
the 9-dan pros said that, in *their* opinion, the moves were weak/wrong -
e.g. wasting ko threats for no reason; moves even a 1p would never make.

If you want to argue that "their opinion" was wrong because they don't
understand the game at the level AlphaGo was playing at, then you can't
use their opinion in a positive way either.

> nearly all sporting events, given the small sample size involved) of
> statistical significance - suggesting that on another week the result
> might have been 4-1 to Lee Sedol.

If his 2nd game had been the one where he created vaguely alive/dead
groups and forced a mistake, then, given that we were told the computer
was not being changed during the match, he might have scored two wins
just by playing exactly the same way.

And if he had known this in advance he might then have realized that
creating multiple weak groups and some large complicated kos are the way
to beat it, and so it could well have gone 4-1 to Lee Sedol in "another
week".

C'mon DeepMind, put that same version on KGS, set to only play 9p
players, with the same time controls, and let's get 40 games to give it
a proper ranking. (If 5 games against Lee Sedol are useful, 40 games
against a range of players with little to lose, who are systematically
trying to find its weaknesses, are going to be amazing.)

Darren
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Would a larger board (25x25) dramatically reduce AlphaGos skill?

2016-03-22 Thread uurtamo .
Ko is what makes this game difficult, from a theoretical point of view.

I suspect ko+unresolved groups is where it's at.

s.
On Mar 22, 2016 11:25 AM, "Tom M"  wrote:

> I suspect that even with a similarly large training sample for
> initialization that AlphaGo would suffer a major reduction in apparent
> skill level.  The CNN would require many more layers of convolution;
> the valuation of positions would be much more uncertain; play in the
> corner, edges, and center would all be more complicated patterns, and
> there would be far more good candidates to consider at each ply and
> rollouts would be much less stable and less accurate.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread uurtamo .
This is somewhat moot - if any moves had been significantly and obviously
weak to any observers, the results wouldn't have been 4-1.

I.e. One bad move out of 5 games would give roughly the same strength
information as one loss out of 5 games; consider that the kibitzing was
being done in real time.

s.
On Mar 22, 2016 11:08 AM, "Jim O'Flaherty" 
wrote:

> I think you are reinforcing Simon's original point; i.e. using a more fine
> grained approach to statistically approximate AlphaGo's ELO where fine grained
> is degree of vetting per move and/or a series of moves. That is a
> substantially larger sample size and each sample will have a pretty high
> degree of quality (given the vetting is being done by top level
> professionals).
> On Mar 22, 2016 1:04 PM, "Jeffrey Greenberg" 
> wrote:
>
>> Given the minimal sample size, bothering over this question won't amount
>> to much. I think the proper response is that no one thought we'd see this
>> level of play at this point in our AI efforts and point to the fact that we
>> witnessed hundreds of moves vetted by 9dan players, especially Michael
>> Redmond's, where each move was vetted. In other words "was the level of
>> play very high?" versus the question "have we beat all humans". The answer
>> is more or less, yes.
>>
>> On Tuesday, March 22, 2016, Lucas, Simon M  wrote:
>>
>>> Hi all,
>>>
>>> I was discussing the results with a colleague outside
>>> of the Game AI area the other day when he raised
>>> the question (which applies to nearly all sporting events,
>>> given the small sample size involved)
>>> of statistical significance - suggesting that on another week
>>> the result might have been 4-1 to Lee Sedol.
>>>
>>> I pointed out that in games of skill there's much more to judge than
>>> just the final
>>> outcome of each game, but wondered if anyone had any better (or worse :)
>>> arguments - or had even engaged in the same type of
>>> conversation.
>>>
>>> With AlphaGo winning 4 games to 1, from a simplistic
>>> stats point of view (with the prior assumption of a fair
>>> coin toss) you'd not be able to claim much statistical
>>> significance, yet most (me included) believe that
>>> AlphaGo is a genuinely better Go player than Lee Sedol.
>>>
>>> From a stats viewpoint you can use this approach:
>>> http://www.inference.phy.cam.ac.uk/itprnn/book.pdf
>>> (see section 3.2 on page 51)
>>>
>>> but given even priors it won't tell you much.
>>>
>>> Anyone know any good references for refuting this
>>> type of argument - the fact is of course that a game of Go
>>> is nothing like a coin toss.  Games of skill tend to base their
>>> outcomes on the result of many (in the case of Go many hundreds of)
>>> individual actions.
>>>
>>> Best wishes,
>>>
>>>   Simon
>>>
>>>
>>
>>
>>
>
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] Would a larger board (25x25) dramatically reduce AlphaGos skill?

2016-03-22 Thread Tom M
I suspect that even with a similarly large training sample for
initialization that AlphaGo would suffer a major reduction in apparent
skill level.  The CNN would require many more layers of convolution;
the valuation of positions would be much more uncertain; play in the
corner, edges, and center would all be more complicated patterns, and
there would be far more good candidates to consider at each ply and
rollouts would be much less stable and less accurate.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Jim O'Flaherty
I think you are reinforcing Simon's original point, i.e. using a more
fine-grained approach to statistically approximate AlphaGo's ELO, where
fine-grained means the degree of vetting per move and/or per series of moves.
That is a substantially larger sample size, and each sample will have a pretty
high degree of quality (given the vetting is being done by top-level
professionals).
On Mar 22, 2016 1:04 PM, "Jeffrey Greenberg"  wrote:

> Given the minimal sample size, bothering over this question won't amount
> to much. I think the proper response is that no one thought we'd see this
> level of play at this point in our AI efforts and point to the fact that we
> witnessed hundreds of moves vetted by 9dan players, especially Michael
> Redmond's, where each move was vetted. In other words "was the level of
> play very high?" versus the question "have we beat all humans". The answer
> is more or less, yes.
>
> On Tuesday, March 22, 2016, Lucas, Simon M  wrote:
>
>> Hi all,
>>
>> I was discussing the results with a colleague outside
>> of the Game AI area the other day when he raised
>> the question (which applies to nearly all sporting events,
>> given the small sample size involved)
>> of statistical significance - suggesting that on another week
>> the result might have been 4-1 to Lee Sedol.
>>
>> I pointed out that in games of skill there's much more to judge than just
>> the final
>> outcome of each game, but wondered if anyone had any better (or worse :)
>> arguments - or had even engaged in the same type of
>> conversation.
>>
>> With AlphaGo winning 4 games to 1, from a simplistic
>> stats point of view (with the prior assumption of a fair
>> coin toss) you'd not be able to claim much statistical
>> significance, yet most (me included) believe that
>> AlphaGo is a genuinely better Go player than Lee Sedol.
>>
>> From a stats viewpoint you can use this approach:
>> http://www.inference.phy.cam.ac.uk/itprnn/book.pdf
>> (see section 3.2 on page 51)
>>
>> but given even priors it won't tell you much.
>>
>> Anyone know any good references for refuting this
>> type of argument - the fact is of course that a game of Go
>> is nothing like a coin toss.  Games of skill tend to base their
>> outcomes on the result of many (in the case of Go many hundreds of)
>> individual actions.
>>
>> Best wishes,
>>
>>   Simon
>>
>>
>
>
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Thomas Wolf

I am sorry, but I think this discussion is a bit pointless.
While I write these 3 lines and you read them, AlphaGo got 20 ELO
points stronger. :-)


Thomas

On Tue, 22 Mar 2016, Lucas, Simon M wrote:



Still an interesting question is how one could make
more powerful inferences by observing the skill of
the players in each action they take rather than just
the final outcome of each game.

If you saw me play a single game of tennis against Federer
you'd have no doubt as to which way the next 100 games would go.

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Álvaro Begué
Sent: 22 March 2016 17:21
To: computer-go 
Subject: Re: [Computer-go] Congratulations to AlphaGo (Statistical significance 
of results)

 

A very simple-minded analysis is that, if the null hypothesis is that AlphaGo 
and Lee Sedol are
equally strong, AlphaGo would do as well as we observed or better 15.625% of 
the time. That's a
p-value that even social scientists don't get excited about. :)

Álvaro.

 

On Tue, Mar 22, 2016 at 12:48 PM, Jason House  
wrote:

  Statistical significance requires a null hypothesis... I think it's 
probably easiest to
  ask the question of if I assume an ELO difference of x, how likely it's a 
4-1 result?
  Turns out that 220 to 270 ELO has a 41% chance of that result.
  >= 10% is -50 to 670 ELO
  >= 1% is -250 to 1190 ELO
  My numbers may be slightly off from eyeballing things in a simple excel 
sheet. The idea
  and ranges should be clear though

  On Mar 22, 2016 12:00 PM, "Lucas, Simon M"  wrote:

Hi all,

I was discussing the results with a colleague outside
of the Game AI area the other day when he raised
the question (which applies to nearly all sporting events,
given the small sample size involved)
of statistical significance - suggesting that on another week
the result might have been 4-1 to Lee Sedol.

I pointed out that in games of skill there's much more to judge 
than just the
final
outcome of each game, but wondered if anyone had any better (or 
worse :)
arguments - or had even engaged in the same type of
conversation.

With AlphaGo winning 4 games to 1, from a simplistic
stats point of view (with the prior assumption of a fair
coin toss) you'd not be able to claim much statistical
significance, yet most (me included) believe that
AlphaGo is a genuinely better Go player than Lee Sedol.

From a stats viewpoint you can use this approach:
http://www.inference.phy.cam.ac.uk/itprnn/book.pdf
(see section 3.2 on page 51)

but given even priors it won't tell you much.

Anyone know any good references for refuting this
type of argument - the fact is of course that a game of Go
is nothing like a coin toss.  Games of skill tend to base their
outcomes on the result of many (in the case of Go many hundreds of)
individual actions.

Best wishes,

  Simon





 


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Jeffrey Greenberg
Given the minimal sample size, bothering over this question won't amount to
much. I think the proper response is that no one thought we'd see this level
of play at this point in our AI efforts, and to point to the fact that we
witnessed hundreds of moves vetted by 9-dan players, especially Michael
Redmond, with each move examined. In other words, the question is "was the
level of play very high?" rather than "have we beaten all humans?" The answer
is, more or less, yes.

On Tuesday, March 22, 2016, Lucas, Simon M  wrote:

> Hi all,
>
> I was discussing the results with a colleague outside
> of the Game AI area the other day when he raised
> the question (which applies to nearly all sporting events,
> given the small sample size involved)
> of statistical significance - suggesting that on another week
> the result might have been 4-1 to Lee Sedol.
>
> I pointed out that in games of skill there's much more to judge than just
> the final
> outcome of each game, but wondered if anyone had any better (or worse :)
> arguments - or had even engaged in the same type of
> conversation.
>
> With AlphaGo winning 4 games to 1, from a simplistic
> stats point of view (with the prior assumption of a fair
> coin toss) you'd not be able to claim much statistical
> significance, yet most (me included) believe that
> AlphaGo is a genuinely better Go player than Lee Sedol.
>
> From a stats viewpoint you can use this approach:
> http://www.inference.phy.cam.ac.uk/itprnn/book.pdf
> (see section 3.2 on page 51)
>
> but given even priors it won't tell you much.
>
> Anyone know any good references for refuting this
> type of argument - the fact is of course that a game of Go
> is nothing like a coin toss.  Games of skill tend to base their
> outcomes on the result of many (in the case of Go many hundreds of)
> individual actions.
>
> Best wishes,
>
>   Simon
>
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Ryan Hayward
another interesting question is to judge the bot's strength
by watching the facial gestures and body language of Lee Sedol
with each move...

On Tue, Mar 22, 2016 at 11:46 AM, Álvaro Begué 
wrote:

>
>
> On Tue, Mar 22, 2016 at 1:40 PM, Nick Wedd  wrote:
>
>> On 22 March 2016 at 17:20, Álvaro Begué  wrote:
>>
>>> A very simple-minded analysis is that, if the null hypothesis is that
>>> AlphaGo and Lee Sedol are equally strong, AlphaGo would do as well as we
>>> observed or better 15.625% of the time. That's a p-value that even social
>>> scientists don't get excited about. :)
>>>
>>>
>> "For "as well ... or better", I make it 18.75%.
>>
>
> I obviously can't count. :)
>
> Thanks for the correction.
>
> Álvaro.
>
>
>
>
>>
>> Nick
>>
>>
>>
>>> Álvaro.
>>>
>>>
>>> On Tue, Mar 22, 2016 at 12:48 PM, Jason House <
>>> jason.james.ho...@gmail.com> wrote:
>>>
 Statistical significance requires a null hypothesis... I think it's
 probably easiest to ask the question of if I assume an ELO difference of x,
 how likely it's a 4-1 result?
 Turns out that 220 to 270 ELO has a 41% chance of that result.
 >= 10% is -50 to 670 ELO
 >= 1% is -250 to 1190 ELO
 My numbers may be slightly off from eyeballing things in a simple excel
 sheet. The idea and ranges should be clear though
 On Mar 22, 2016 12:00 PM, "Lucas, Simon M"  wrote:

> Hi all,
>
> I was discussing the results with a colleague outside
> of the Game AI area the other day when he raised
> the question (which applies to nearly all sporting events,
> given the small sample size involved)
> of statistical significance - suggesting that on another week
> the result might have been 4-1 to Lee Sedol.
>
> I pointed out that in games of skill there's much more to judge than
> just the final
> outcome of each game, but wondered if anyone had any better (or worse
> :)
> arguments - or had even engaged in the same type of
> conversation.
>
> With AlphaGo winning 4 games to 1, from a simplistic
> stats point of view (with the prior assumption of a fair
> coin toss) you'd not be able to claim much statistical
> significance, yet most (me included) believe that
> AlphaGo is a genuinely better Go player than Lee Sedol.
>
> From a stats viewpoint you can use this approach:
> http://www.inference.phy.cam.ac.uk/itprnn/book.pdf
> (see section 3.2 on page 51)
>
> but given even priors it won't tell you much.
>
> Anyone know any good references for refuting this
> type of argument - the fact is of course that a game of Go
> is nothing like a coin toss.  Games of skill tend to base their
> outcomes on the result of many (in the case of Go many hundreds of)
> individual actions.
>
> Best wishes,
>
>   Simon
>
>



>>>
>>>
>>>
>>
>>
>>
>> --
>> Nick Wedd  mapr...@gmail.com
>>
>>
>
>
>



-- 
Ryan B Hayward
Professor and Director (Outreach+Diversity)
Computing Science,  UAlberta
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Álvaro Begué
On Tue, Mar 22, 2016 at 1:40 PM, Nick Wedd  wrote:

> On 22 March 2016 at 17:20, Álvaro Begué  wrote:
>
>> A very simple-minded analysis is that, if the null hypothesis is that
>> AlphaGo and Lee Sedol are equally strong, AlphaGo would do as well as we
>> observed or better 15.625% of the time. That's a p-value that even social
>> scientists don't get excited about. :)
>>
>>
> "For "as well ... or better", I make it 18.75%.
>

I obviously can't count. :)

Thanks for the correction.

Álvaro.




>
> Nick
>
>
>
>> Álvaro.
>>
>>
>> On Tue, Mar 22, 2016 at 12:48 PM, Jason House <
>> jason.james.ho...@gmail.com> wrote:
>>
>>> Statistical significance requires a null hypothesis... I think it's
>>> probably easiest to ask the question of if I assume an ELO difference of x,
>>> how likely it's a 4-1 result?
>>> Turns out that 220 to 270 ELO has a 41% chance of that result.
>>> >= 10% is -50 to 670 ELO
>>> >= 1% is -250 to 1190 ELO
>>> My numbers may be slightly off from eyeballing things in a simple excel
>>> sheet. The idea and ranges should be clear though
>>> On Mar 22, 2016 12:00 PM, "Lucas, Simon M"  wrote:
>>>
 Hi all,

 I was discussing the results with a colleague outside
 of the Game AI area the other day when he raised
 the question (which applies to nearly all sporting events,
 given the small sample size involved)
 of statistical significance - suggesting that on another week
 the result might have been 4-1 to Lee Sedol.

 I pointed out that in games of skill there's much more to judge than
 just the final
 outcome of each game, but wondered if anyone had any better (or worse :)
 arguments - or had even engaged in the same type of
 conversation.

 With AlphaGo winning 4 games to 1, from a simplistic
 stats point of view (with the prior assumption of a fair
 coin toss) you'd not be able to claim much statistical
 significance, yet most (me included) believe that
 AlphaGo is a genuinely better Go player than Lee Sedol.

 From a stats viewpoint you can use this approach:
 http://www.inference.phy.cam.ac.uk/itprnn/book.pdf
 (see section 3.2 on page 51)

 but given even priors it won't tell you much.

 Anyone know any good references for refuting this
 type of argument - the fact is of course that a game of Go
 is nothing like a coin toss.  Games of skill tend to base their
 outcomes on the result of many (in the case of Go many hundreds of)
 individual actions.

 Best wishes,

   Simon


>>>
>>>
>>>
>>
>>
>>
>
>
>
> --
> Nick Wedd  mapr...@gmail.com
>
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Lucas, Simon M
Still an interesting question is how one could make
more powerful inferences by observing the skill of
the players in each action they take rather than just
the final outcome of each game.

If you saw me play a single game of tennis against Federer
you’d have no doubt as to which way the next 100 games would go.

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Álvaro Begué
Sent: 22 March 2016 17:21
To: computer-go 
Subject: Re: [Computer-go] Congratulations to AlphaGo (Statistical significance 
of results)

A very simple-minded analysis is that, if the null hypothesis is that AlphaGo 
and Lee Sedol are equally strong, AlphaGo would do as well as we observed or 
better 15.625% of the time. That's a p-value that even social scientists don't 
get excited about. :)

Álvaro.

On Tue, Mar 22, 2016 at 12:48 PM, Jason House <jason.james.ho...@gmail.com> wrote:

Statistical significance requires a null hypothesis... I think it's probably 
easiest to ask the question of if I assume an ELO difference of x, how likely 
it's a 4-1 result?
Turns out that 220 to 270 ELO has a 41% chance of that result.
>= 10% is -50 to 670 ELO
>= 1% is -250 to 1190 ELO
My numbers may be slightly off from eyeballing things in a simple excel sheet. 
The idea and ranges should be clear though
On Mar 22, 2016 12:00 PM, "Lucas, Simon M" <s...@essex.ac.uk> wrote:
Hi all,

I was discussing the results with a colleague outside
of the Game AI area the other day when he raised
the question (which applies to nearly all sporting events,
given the small sample size involved)
of statistical significance - suggesting that on another week
the result might have been 4-1 to Lee Sedol.

I pointed out that in games of skill there's much more to judge than just the 
final
outcome of each game, but wondered if anyone had any better (or worse :)
arguments - or had even engaged in the same type of
conversation.

With AlphaGo winning 4 games to 1, from a simplistic
stats point of view (with the prior assumption of a fair
coin toss) you'd not be able to claim much statistical
significance, yet most (me included) believe that
AlphaGo is a genuinely better Go player than Lee Sedol.

From a stats viewpoint you can use this approach:
http://www.inference.phy.cam.ac.uk/itprnn/book.pdf
(see section 3.2 on page 51)

but given even priors it won't tell you much.

Anyone know any good references for refuting this
type of argument - the fact is of course that a game of Go
is nothing like a coin toss.  Games of skill tend to base their
outcomes on the result of many (in the case of Go many hundreds of)
individual actions.

Best wishes,

  Simon




___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Nick Wedd
On 22 March 2016 at 17:20, Álvaro Begué  wrote:

> A very simple-minded analysis is that, if the null hypothesis is that
> AlphaGo and Lee Sedol are equally strong, AlphaGo would do as well as we
> observed or better 15.625% of the time. That's a p-value that even social
> scientists don't get excited about. :)
>
>
"For "as well ... or better", I make it 18.75%.

Nick



> Álvaro.
>
>
> On Tue, Mar 22, 2016 at 12:48 PM, Jason House  > wrote:
>
>> Statistical significance requires a null hypothesis... I think it's
>> probably easiest to ask the question of if I assume an ELO difference of x,
>> how likely it's a 4-1 result?
>> Turns out that 220 to 270 ELO has a 41% chance of that result.
>> >= 10% is -50 to 670 ELO
>> >= 1% is -250 to 1190 ELO
>> My numbers may be slightly off from eyeballing things in a simple excel
>> sheet. The idea and ranges should be clear though
>> On Mar 22, 2016 12:00 PM, "Lucas, Simon M"  wrote:
>>
>>> Hi all,
>>>
>>> I was discussing the results with a colleague outside
>>> of the Game AI area the other day when he raised
>>> the question (which applies to nearly all sporting events,
>>> given the small sample size involved)
>>> of statistical significance - suggesting that on another week
>>> the result might have been 4-1 to Lee Sedol.
>>>
>>> I pointed out that in games of skill there's much more to judge than
>>> just the final
>>> outcome of each game, but wondered if anyone had any better (or worse :)
>>> arguments - or had even engaged in the same type of
>>> conversation.
>>>
>>> With AlphaGo winning 4 games to 1, from a simplistic
>>> stats point of view (with the prior assumption of a fair
>>> coin toss) you'd not be able to claim much statistical
>>> significance, yet most (me included) believe that
>>> AlphaGo is a genuinely better Go player than Lee Sedol.
>>>
>>> From a stats viewpoint you can use this approach:
>>> http://www.inference.phy.cam.ac.uk/itprnn/book.pdf
>>> (see section 3.2 on page 51)
>>>
>>> but given even priors it won't tell you much.
>>>
>>> Anyone know any good references for refuting this
>>> type of argument - the fact is of course that a game of Go
>>> is nothing like a coin toss.  Games of skill tend to base their
>>> outcomes on the result of many (in the case of Go many hundreds of)
>>> individual actions.
>>>
>>> Best wishes,
>>>
>>>   Simon
>>>
>>>
>>
>>
>>
>
>
>



-- 
Nick Wedd  mapr...@gmail.com
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Álvaro Begué
A very simple-minded analysis is that, if the null hypothesis is that
AlphaGo and Lee Sedol are equally strong, AlphaGo would do as well as we
observed or better 15.625% of the time. That's a p-value that even social
scientists don't get excited about. :)

Álvaro.
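
(A quick sanity check of these numbers under the fair-coin null hypothesis -
a sketch: the chance of exactly 4 wins in 5 is 5/32 = 15.625%, while "as well
as observed or better", i.e. 4 or more wins, is 6/32 = 18.75%, the figure Nick
Wedd arrives at elsewhere in the thread.)

from math import comb

n, p = 5, 0.5                                  # five games, fair-coin null
p_exactly_4 = comb(n, 4) * p**4 * (1 - p)
p_at_least_4 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (4, 5))
print(p_exactly_4, p_at_least_4)               # 0.15625 0.1875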


On Tue, Mar 22, 2016 at 12:48 PM, Jason House 
wrote:

> Statistical significance requires a null hypothesis... I think it's
> probably easiest to ask the question of if I assume an ELO difference of x,
> how likely it's a 4-1 result?
> Turns out that 220 to 270 ELO has a 41% chance of that result.
> >= 10% is -50 to 670 ELO
> >= 1% is -250 to 1190 ELO
> My numbers may be slightly off from eyeballing things in a simple excel
> sheet. The idea and ranges should be clear though
> On Mar 22, 2016 12:00 PM, "Lucas, Simon M"  wrote:
>
>> Hi all,
>>
>> I was discussing the results with a colleague outside
>> of the Game AI area the other day when he raised
>> the question (which applies to nearly all sporting events,
>> given the small sample size involved)
>> of statistical significance - suggesting that on another week
>> the result might have been 4-1 to Lee Sedol.
>>
>> I pointed out that in games of skill there's much more to judge than just
>> the final
>> outcome of each game, but wondered if anyone had any better (or worse :)
>> arguments - or had even engaged in the same type of
>> conversation.
>>
>> With AlphaGo winning 4 games to 1, from a simplistic
>> stats point of view (with the prior assumption of a fair
>> coin toss) you'd not be able to claim much statistical
>> significance, yet most (me included) believe that
>> AlphaGo is a genuinely better Go player than Lee Sedol.
>>
>> From a stats viewpoint you can use this approach:
>> http://www.inference.phy.cam.ac.uk/itprnn/book.pdf
>> (see section 3.2 on page 51)
>>
>> but given even priors it won't tell you much.
>>
>> Anyone know any good references for refuting this
>> type of argument - the fact is of course that a game of Go
>> is nothing like a coin toss.  Games of skill tend to base their
>> outcomes on the result of many (in the case of Go many hundreds of)
>> individual actions.
>>
>> Best wishes,
>>
>>   Simon
>>
>>
>
>
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Jason House
Statistical significance requires a null hypothesis... I think it's
probably easiest to ask the question of if I assume an ELO difference of x,
how likely it's a 4-1 result?
Turns out that 220 to 270 ELO has a 41% chance of that result.
>= 10% is -50 to 670 ELO
>= 1% is -250 to 1190 ELO
My numbers may be slightly off from eyeballing things in a simple excel
sheet. The idea and ranges should be clear though
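
(A rough reconstruction of that calculation - a Python sketch assuming the
standard logistic Elo model, not the original spreadsheet:)

from math import comb

def p_win(elo_diff):
    # Expected score per game under the logistic Elo model.
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

def prob_4_1(elo_diff):
    # Probability of exactly four wins and one loss in five games.
    p = p_win(elo_diff)
    return comb(5, 4) * p**4 * (1.0 - p)

for d in (-50, 0, 220, 245, 270, 670):
    print(f"Elo diff {d:+5d}: P(4-1) = {prob_4_1(d):.2f}")

# Around +220..+270 Elo the 4-1 result has roughly a 41% chance, and it stays
# near or above 10% from about -50 to about +670 Elo, matching the ranges above.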
On Mar 22, 2016 12:00 PM, "Lucas, Simon M"  wrote:

> Hi all,
>
> I was discussing the results with a colleague outside
> of the Game AI area the other day when he raised
> the question (which applies to nearly all sporting events,
> given the small sample size involved)
> of statistical significance - suggesting that on another week
> the result might have been 4-1 to Lee Sedol.
>
> I pointed out that in games of skill there's much more to judge than just
> the final
> outcome of each game, but wondered if anyone had any better (or worse :)
> arguments - or had even engaged in the same type of
> conversation.
>
> With AlphaGo winning 4 games to 1, from a simplistic
> stats point of view (with the prior assumption of a fair
> coin toss) you'd not be able to claim much statistical
> significance, yet most (me included) believe that
> AlphaGo is a genuinely better Go player than Lee Sedol.
>
> From a stats viewpoint you can use this approach:
> http://www.inference.phy.cam.ac.uk/itprnn/book.pdf
> (see section 3.2 on page 51)
>
> but given even priors it won't tell you much.
>
> Anyone know any good references for refuting this
> type of argument - the fact is of course that a game of Go
> is nothing like a coin toss.  Games of skill tend to base their
> outcomes on the result of many (in the case of Go many hundreds of)
> individual actions.
>
> Best wishes,
>
>   Simon
>
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread uurtamo .
> I'm not sure if we can say with certainty that AlphaGo is significantly
> better Go player than Lee Sedol at this point.  What we can say with
> certainty is that AlphaGo is in the same ballpark and at least roughly
> as strong as Lee Sedol.  To me, that's enough to be really huge on its
> own accord!

Agreed, and exactly what I'm telling my friends who have asked the same
question.

s.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Lucas, Simon M
my point is that I *think* we can say more (for example
by not treating the outcome as a black-box event,
but by appreciating the skill of the individual moves)

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
uurtamo .
Sent: 22 March 2016 16:25
To: computer-go 
Subject: Re: [Computer-go] Congratulations to AlphaGo (Statistical significance 
of results)


> I'm not sure if we can say with certainty that AlphaGo is significantly
> better Go player than Lee Sedol at this point.  What we can say with
> certainty is that AlphaGo is in the same ballpark and at least roughly
> as strong as Lee Sedol.  To me, that's enough to be really huge on its
> own accord!

Agreed, and exactly what I'm telling my friends who have asked the same 
question.

s.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Petr Baudis
On Tue, Mar 22, 2016 at 04:00:41PM +, Lucas, Simon M wrote:
> With AlphaGo winning 4 games to 1, from a simplistic
> stats point of view (with the prior assumption of a fair
> coin toss) you'd not be able to claim much statistical 
> significance, yet most (me included) believe that
> AlphaGo is a genuinely better Go player than Lee Sedol.
> 
> From a stats viewpoint you can use this approach:
> http://www.inference.phy.cam.ac.uk/itprnn/book.pdf
> (see section 3.2 on page 51)
> 
> but given even priors it won't tell you much.

What complicates things further is that the coin distribution is
non-stationary; certainly from the point of view of human performance
against a fixed program (how much AlphaGo with its novel RL component
is fixed is of course another matter).  In fact, as anyone watching bots
playing on KGS knows, initially the non-stationarity is actually very
extreme: the human gets "used to" the computer's style and is soon
able to beat even a program that is formally quite a bit stronger than the
human player.  At least that's the case for the weaker programs.

I'm not sure if we can say with certainty that AlphaGo is significantly
better Go player than Lee Sedol at this point.  What we can say with
certainty is that AlphaGo is in the same ballpark and at least roughly
as strong as Lee Sedol.  To me, that's enough to be really huge on its
own accord!

-- 
Petr Baudis
If you have good ideas, good data and fast computers,
you can do almost anything. -- Geoffrey Hinton
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread uurtamo .
Simon,

There's no argument better than evidence, and no evidence available to us
other than *all* of the games that alphago has played publicly.

Among two humans, a 4-1 result wouldn't indicate any more or less than this
4-1 result, but we'd already have very strong elo-type information about
both humans because they both would have publicly played hundreds of games
to get to such a match.

I believe alphago played another match earlier in public, correct?  Then we
now have double the evidence, or a slight (50% or so) improvement in our
confidence bounds.

s.
On Mar 22, 2016 9:00 AM, "Lucas, Simon M"  wrote:

> Hi all,
>
> I was discussing the results with a colleague outside
> of the Game AI area the other day when he raised
> the question (which applies to nearly all sporting events,
> given the small sample size involved)
> of statistical significance - suggesting that on another week
> the result might have been 4-1 to Lee Sedol.
>
> I pointed out that in games of skill there's much more to judge than just
> the final
> outcome of each game, but wondered if anyone had any better (or worse :)
> arguments - or had even engaged in the same type of
> conversation.
>
> With AlphaGo winning 4 games to 1, from a simplistic
> stats point of view (with the prior assumption of a fair
> coin toss) you'd not be able to claim much statistical
> significance, yet most (me included) believe that
> AlphaGo is a genuinely better Go player than Lee Sedol.
>
> From a stats viewpoint you can use this approach:
> http://www.inference.phy.cam.ac.uk/itprnn/book.pdf
> (see section 3.2 on page 51)
>
> but given even priors it won't tell you much.
>
> Anyone know any good references for refuting this
> type of argument - the fact is of course that a game of Go
> is nothing like a coin toss.  Games of skill tend to base their
> outcomes on the result of many (in the case of Go many hundreds of)
> individual actions.
>
> Best wishes,
>
>   Simon
>
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo (Statistical significance of results)

2016-03-22 Thread Lucas, Simon M
Hi all,

I was discussing the results with a colleague outside
of the Game AI area the other day when he raised
the question (which applies to nearly all sporting events,
given the small sample size involved)
of statistical significance - suggesting that on another week
the result might have been 4-1 to Lee Sedol.

I pointed out that in games of skill there's much more to judge than just the 
final
outcome of each game, but wondered if anyone had any better (or worse :) 
arguments - or had even engaged in the same type of
conversation.

With AlphaGo winning 4 games to 1, from a simplistic
stats point of view (with the prior assumption of a fair
coin toss) you'd not be able to claim much statistical 
significance, yet most (me included) believe that
AlphaGo is a genuinely better Go player than Lee Sedol.

From a stats viewpoint you can use this approach:
http://www.inference.phy.cam.ac.uk/itprnn/book.pdf
(see section 3.2 on page 51)

but given even priors it won't tell you much.
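
(A toy version of the calculation that section describes, under the simplest
possible assumptions - a uniform prior on AlphaGo's per-game win probability
and independent games; a sketch, not MacKay's worked example:)

from scipy.stats import beta

# Uniform prior Beta(1,1) on AlphaGo's per-game win probability p,
# updated with 4 wins and 1 loss, gives the posterior Beta(5, 2).
posterior = beta(4 + 1, 1 + 1)

# Posterior probability that AlphaGo is the stronger player (p > 0.5).
print(posterior.sf(0.5))   # ~0.89

Around 89% - suggestive, but consistent with the remark above that even priors
and a five-game sample won't tell you much on their own.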

Anyone know any good references for refuting this
type of argument - the fact is of course that a game of Go
is nothing like a coin toss.  Games of skill tend to base their
outcomes on the result of many (in the case of Go many hundreds of)
individual actions.

Best wishes,

  Simon


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] UEC cup 2nd day

2016-03-22 Thread 甲斐徳本
SGF files of the 2nd-day Finals games have been made available:
http://jsb.cs.uec.ac.jp/~igo/results_2ndday/final.zip

Tokumoto


On Sun, Mar 20, 2016 at 11:27 PM, Hideki Kato 
wrote:

> Dear Ingo,
>
> >Hi Hiroshi,
> >
> >thanks for the many updates.
> >
>On another site I read that the bots on rank 1 and 2 will
> >play exhibition matches against a pro player on Wednesday.
>
> Yes, Koichi Kobayashi 9p.
> #Nick's h-c page: http://www.computer-go.info/h-c/index.html
>
> >Will those games be transmitted on KGS?
>
> I guess it's very difficult because all broadcast rights must
> exclusively be owned by the sponsor, "Igo Shogi Channel."
>
>Has it been decided already which handicap?
>
> Basically 3 stones.  If Kobayashi 9p loses the first game (vs
> Darkforest), the second game (vs Zen) will be played with 2
> stones.  #Not announced yet.
>
> Hideki
>
> >Thanks in advance, Ingo.
> >
> >
> >> Sent: Sunday, 20 March 2016 at 07:41
> >> From: "Hiroshi Yamashita"
> >> To: computer-go@computer-go.org
> >> Subject: Re: [Computer-go] UEC cup 2nd day
> >>
> >> Zen won against darkforest
> >>
> >> 1st Zen
> >> 2nd darkforest
> >> 3rd CrazyStone
> >> 4th Aya
> >>
> >> Hiroshi Yamashita
> >>
> --
> Hideki Kato 
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go