Re: [Computer-go] EGC2015 Events

2015-08-03 Thread Xavier Combelle
Just curious: why do the statistics mention 1475 players while the list
shows only 602? Does the list include only players who have played
recently?


2015-07-29 21:32 GMT+02:00 Rémi Coulom remi.cou...@free.fr:

 Lee Hajin is also quite a bit weaker than Yoda Norimoto or Cho Chikun.

 BTW, this gives me the opportunity to advertise my new web site that rates
 go professionals with the WHR rating algorithm and go4go.net data:

 http://www.goratings.org/

 Rank/Name/Elo

 108 Yoda Norimoto 2274
 183 Cho Chikun 2188
 448 Lee Hajin 1957

 Rémi


 On 07/29/2015 09:22 PM, Petr Baudis wrote:

Indeed.  We (well, mainly I) thought that since Aya is running on
 a weaker computer, 5 stones might be about right, but now I'm a bit
 worried that I made the game too tough for white after all.

Still, there's a big audience (which surprised us a bit), maybe 150 people,
 and they seem to be enjoying it!

 On Wed, Jul 29, 2015 at 09:14:14PM +0200, Rémi Coulom wrote:

 Great! Thanks. 5 stones against Aya is brave.

 On 07/29/2015 08:21 PM, Petr Baudis wrote:

Hi!

There are several Computer Go events at EGC2015.  There was a small
 tournament of programs, with each program running on identical hardware,

 https://www.gokgs.com/tournEntrants.jsp?sort=s&id=981

Then, one of the games, Aya vs. Many Faces, was reviewed by Lukas
 Podpera 6d:

 https://www.youtube.com/watch?v=_3Lk1qVoiYM

Right now, Hajin Lee 3p (known for her live commentaries on YouTube
 as Haylee) is playing Aya (giving 5 stones) and commenting live:

 https://www.youtube.com/watch?v=Ka2ilmu7Eo4


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mental Imagery in Go - playlist

2015-08-03 Thread Steven Clark
RE: CNNs: they can be, and have been, successfully applied to movies as
well. See http://www.cs.cmu.edu/~rahuls/pub/cvpr2014-deepvideo-rahuls.pdf
Also, in the first PDF I linked, the input layer has a notion of the age of
the stones: for example, this stone was played 5 moves ago, this one 3
moves ago, etc. So it is not a strictly static snapshot of the board.
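
For illustration, a minimal sketch of what such input planes might look
like (Python/NumPy; the plane layout, the eight-plane age cutoff and the
helper name age_planes are my own choices, not the paper's exact scheme):

    import numpy as np

    BOARD = 19

    def age_planes(moves, num_age_planes=8):
        # moves: list of (colour, row, col) in play order, colour in {+1, -1}.
        # Returns planes of shape (2 + num_age_planes, 19, 19):
        #   plane 0: current black stones, plane 1: current white stones,
        #   planes 2..: "this stone was played k moves ago", with the last
        #   plane catching everything older.  Captures are ignored here.
        planes = np.zeros((2 + num_age_planes, BOARD, BOARD), dtype=np.float32)
        total = len(moves)
        for i, (colour, r, c) in enumerate(moves):
            planes[0 if colour > 0 else 1, r, c] = 1.0
            age = total - 1 - i                      # 0 = most recent move
            planes[2 + min(age, num_age_planes - 1), r, c] = 1.0
        return planes

    # Example: three opening moves; the 3-3 point, played last, has age 0.
    feats = age_planes([(+1, 3, 15), (-1, 15, 3), (+1, 2, 2)])
    print(feats.shape)   # (10, 19, 19)
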
In any event, the best performance will probably not come from CNNs alone
(although their prediction accuracy is surprisingly high), but from the
marriage of CNNs to Monte Carlo tree search, etc.
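
One hedged sketch of that marriage - not any particular program's actual
method - is to use the CNN's move probabilities as priors that bias the
tree-search selection rule; the Game interface and policy_net below are
hypothetical stand-ins:

    import math
    import random

    class Node:
        def __init__(self, prior=1.0):
            self.prior = prior            # probability the CNN gave this move
            self.visits = 0
            self.value_sum = 0.0
            self.children = {}            # move -> Node

        def value(self):
            return self.value_sum / self.visits if self.visits else 0.0

    def best_child(node, c=1.5):
        # Exploration bonus scaled by the CNN prior, so the search spends
        # its playouts mostly on moves the network likes.
        return max(node.children.items(),
                   key=lambda kv: kv[1].value()
                   + c * kv[1].prior * math.sqrt(node.visits + 1) / (1 + kv[1].visits))

    def search(root_game, policy_net, n_playouts=1000):
        root = Node()
        for _ in range(n_playouts):
            game, node, path = root_game.copy(), root, [root]
            while node.children:                    # selection
                move, node = best_child(node)
                game.play(move)
                path.append(node)
            if not game.over():                     # expansion with CNN priors
                for move, p in policy_net(game).items():
                    node.children[move] = Node(p)
            while not game.over():                  # plain random rollout
                game.play(random.choice(game.legal_moves()))
            result = game.result()                  # +1/-1 from root's view
            for n in path:                          # backup (sign handling for
                n.visits += 1                       # alternating colours omitted)
                n.value_sum += result
        return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
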

My sense is that we will continue clinging to romantic notions of human
intelligence (shapes, proverbs, etc.) until we eventually get ground to
dust in a Deep-Blue style competition. Not too long now :)

On Sun, Aug 2, 2015 at 9:33 PM, djhbrown . djhbr...@gmail.com wrote:

 Thanks for the replies to my first message; I looked at the links you
 supplied and comment on them below.

 I noticed that Google does not show you the playlist when you look at
 episode 1 of the series (currently three videos), so you may have missed
 the latter two episodes, which are more significant than the first.  Here
 is a link to the playlist:

 https://www.youtube.com/playlist?list=PL4y5WtsvtduqNW0AKlSsOdea3Hl1X_v-S

 Episode 2 introduces mental images, and episode 3 is a conversation
 between Hajin Lee and me about her thoughts on a couple of moves early in
 one of her games.  It includes my first attempt at picturing her thoughts,
 both as symbolic information structures and as paint overlays on the game
 board.

 My hope is that the former might one day become the basis of symbolic
 generic heuristic rules that could be used to generate and evaluate move
 candidates, and that the latter could evolve into useful instructional
 materials for people learning the game - so that they can, so to speak,
 look through the eyes of an expert like Hajin.

 To these ends, I need the assistance of people with better skills than
 mine at (a) drawing pictures, (b) software and (c) Go.  I think that
 programming is like gymnastics - best done by the young, with their
 abundance of enthusiasm and energy.  I enjoyed programming 50 years ago,
 but I'm too long in the tooth now to burn the midnight oil.

 Now to your replies:

 Folkert: Stop is a good start, but as you already know, there's a long
 way to go yet :)

 Steven:  I expect there is a future for CNNs in recognising static
 images, but my gut feeling is that a position in a Go game is more like
 one frame of a movie; as such, it requires a technology that can interpret
 dynamic images - maybe work being done on autonomous car driving can
 contribute something useful to Go playing?  Nevertheless, I was surprised
 by the many humanlike moves of DCNNigo on KGS (until it revealed its
 brittleness).  To be sure, drawing upon the moves of experts is one way of
 gaining expertise, but my feeling is that one should try to abstract the
 position - to generalise from the examples - so that general knowledge can
 be formed and applied to novel situations.  It may be that a CNN arguably
 does do some kind of generalisation - but can it, for example, characterise
 something as basic as the waist of a keima?

 Ingo:  Tanja may be the kind of artist who could produce nice drawings of
 Hajin's mental images, perhaps based on my own crude sketches?  It would be
 unpaid work though...  I liked Fuego's and Jonathan's territory pictures,
 which reminded me of Zobrist's early work on computing influence.  [Albert
 Zobrist (1969). A Model of Visual Organisation for the Game of Go.
 Proceedings of the Spring Joint Computer Conference, Vol. 34, pp. 103-112.]
 However, whereas being able to picture influence and territory is one of my
 objectives, I want to try to picture the richness of what Hajin (aka
 Haylee) sees rather than the result of a primitive computation.  For
 example, at 10:24 in episode 3, she points out that when black is on J4
 instead of K4, there is an opening in black's lower side for white to
 invade.  This tiny gap makes all the difference to the dynamic meaning of
 the position a few moves earlier (i.e. whether it is sensible for white to
 approach Q3 at Q5).
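
 For readers who have not met Zobrist's model, a rough sketch in Python of
 that kind of primitive influence computation (my reconstruction of the
 general idea, not his exact numbers or procedure):

     import numpy as np

     def influence(black, white, iterations=4):
         # black, white: 19x19 boolean arrays of stone positions.
         # Stones are seeded with +/-64; each pass then nudges every point
         # up by 1 per positively valued neighbour and down by 1 per
         # negatively valued neighbour.  Positive = black influence.
         inf = 64.0 * black.astype(float) - 64.0 * white.astype(float)
         for _ in range(iterations):
             padded = np.pad(inf, 1)
             neighbours = (padded[:-2, 1:-1], padded[2:, 1:-1],
                           padded[1:-1, :-2], padded[1:-1, 2:])
             inf = inf + sum(np.sign(n) for n in neighbours)
             inf[black] = 64.0        # keep the stones themselves fixed
             inf[white] = -64.0
         return inf
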

 One of the major influences on my own thinking about Go programming is
 the seminal work Thought and Choice in Chess by Adriaan de Groot, which I
 reckon is well worth a read by anyone interested in programming Go:
 https://books.google.com.au/books?id=b2G1CRfNqFYC&pg=PA99

 ---
 personal website http://sites.google.com/site/djhbrown2/home

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mental Imagery in Go - playlist

2015-08-03 Thread djhbrown .
Thanks for the link to the CMU CNN paper, Steven, which was very
interesting.  I noted with some pleasure that they included a fovea stream
- although maybe that is a bit of a misnomer: whereas animal foveas roam
around the image, building (I think) a symbolic structural description of
the picture, theirs was fixed in the middle.

I wonder whether a roaming-fovea CNN could be a successful
group-connectedness classifier.  I can envisage the fovea being moved
around by a higher-level routine that uses a symbolic description of the
game situation to decide which areas/groups to investigate.
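
To make that a little more concrete, here is a hedged sketch in
Python/NumPy; only the window extraction is real code, and
connectedness_net stands in for a hypothetical trained classifier:

    import numpy as np

    def fovea_window(board, centre, size=7):
        # Cut a size x size patch centred on a point of interest, padding
        # with an 'off-board' value (2) at the edges.
        r, c = centre
        pad = size // 2
        padded = np.pad(board, pad, constant_values=2)
        return padded[r:r + size, c:c + size]

    def inspect_groups(board, points_of_interest, connectedness_net):
        # A higher-level routine supplies the points it cares about; the
        # foveal classifier is asked about each local patch in turn
        # (e.g. "can the stones around this point be cut apart?").
        return {p: connectedness_net(fovea_window(board, p))
                for p in points_of_interest}
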

Incidentally, I'm unconvinced that including an age-of-stone feature is
valuable, because although the future is dynamic, the past is set in stone
(sic); Go teachers sometimes talk about tewari analysis to demonstrate how
an old stone can become inefficiently placed by a certain line of play.

As to romantic notions of human superiority, I personally feel that such
opinions are not so much romantic as hubristic - or perhaps paranoid!
However, I have to admit that in 1979 I was a false prophet when I claimed
that the brute-force approach is a no-hoper for Go, even if computers
become a hundred times more powerful than they are now [D. Brown and S.
Dowsey. The Challenge of Go. New Scientist 81, 303-305, 1979.].  Back in
those days, I never imagined that something as blind as Monte Carlo would
become more perceptive than even my weak eye, let alone able to defeat a
pro (albeit with a 5-stone handicap), as Zen just did on KGS.

By the way, I've long since lost my own paper copy of that article; you
have access to an academic library - would you be able to retrieve and
scan a copy of it, just for nostalgia's sake?



-- 
personal website http://sites.google.com/site/djhbrown2/home
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] EGC2015 Events

2015-08-03 Thread Rémi Coulom
Yes. The list contains only players who have at least one win, one loss,
and one game in the past year.
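
Roughly, the criterion is the following (a sketch of the rule only, not
the site's actual code; the player record layout is made up):

    from datetime import datetime, timedelta

    def listed(player, today=None):
        # player["games"]: list of dicts with a "date" (datetime) and a
        # boolean "won" flag.  Keep the player on the public list only if
        # they have at least one win, one loss, and one game in the past year.
        today = today or datetime.utcnow()
        recent = [g for g in player["games"]
                  if g["date"] > today - timedelta(days=365)]
        wins = sum(1 for g in player["games"] if g["won"])
        losses = sum(1 for g in player["games"] if not g["won"])
        return wins >= 1 and losses >= 1 and len(recent) >= 1
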


I will also produce historical rating lists for each month of the past. 
That will be online soon.


Rémi

On 08/03/2015 04:44 PM, Xavier Combelle wrote:
Just curious: why do the statistics mention 1475 players while the list
shows only 602? Does the list include only players who have played
recently?



2015-07-29 21:32 GMT+02:00 Rémi Coulom remi.cou...@free.fr:


Lee Hajin is also quite a bit weaker than Yoda Norimoto or Cho Chikun.

BTW, this gives me the opportunity to advertise my new web site
that rates go professionals with the WHR rating algorithm and
go4go.net data:

http://www.goratings.org/

Rank/Name/Elo

108 Yoda Norimoto 2274
183 Cho Chikun 2188
448 Lee Hajin 1957

Rémi


On 07/29/2015 09:22 PM, Petr Baudis wrote:

Indeed.  We (well, mainly I) thought that since Aya is running on
a weaker computer, 5 stones might be about right, but now I'm a bit
worried that I made the game too tough for white after all.

Still, there's a big audience (which surprised us a bit), maybe 150 people,
and they seem to be enjoying it!

On Wed, Jul 29, 2015 at 09:14:14PM +0200, Rémi Coulom wrote:

Great! Thanks. 5 stones against Aya is brave.

On 07/29/2015 08:21 PM, Petr Baudis wrote:

Hi!

There are several Computer Go events at EGC2015.  There was a small
tournament of programs, with each program running on identical hardware,
won by Aya:

https://www.gokgs.com/tournEntrants.jsp?sort=s&id=981

Then, one of the games, Aya vs. Many Faces, was reviewed by Lukas
Podpera 6d:

https://www.youtube.com/watch?v=_3Lk1qVoiYM

Right now, Hajin Lee 3p (known for her live commentaries on YouTube
as Haylee) is playing Aya (giving 5 stones) and commenting live:

https://www.youtube.com/watch?v=Ka2ilmu7Eo4


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Rating systems in thin/sparsely connected populations of players

2015-08-03 Thread Rémi Coulom

Hi,

The problem is not whether to use Elo ratings, but rather how to compute
them. The introduction of my WHR paper gives an overview of different
possible approaches:


http://www.remi-coulom.fr/WHR/WHR.pdf

The algorithms I compare in my paper all have a major flaw: they assume
that the variability of ratings is the same for the whole population. In
practice, the ratings of beginners tend to vary much faster than the
ratings of experts. A good rating system must take this into account if it
is to be applied to a population that contains both beginners and experts.
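
As a toy illustration (one crude remedy, not what the algorithms in the
paper do), an incremental Elo update in which the K factor shrinks as a
player accumulates games lets beginners' ratings move faster than experts':

    def expected_score(r_a, r_b):
        # Elo / Bradley-Terry expected score of A against B.
        return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

    def k_factor(games_played, k_new=60.0, k_established=15.0, half_life=30.0):
        # Large K for newcomers, decaying towards a smaller K for veterans.
        return k_established + (k_new - k_established) * 0.5 ** (games_played / half_life)

    def update(r_a, r_b, score_a, games_a, games_b):
        # One game: score_a is 1.0 if A won, 0.0 if A lost.
        e_a = expected_score(r_a, r_b)
        r_a2 = r_a + k_factor(games_a) * (score_a - e_a)
        r_b2 = r_b + k_factor(games_b) * ((1.0 - score_a) - (1.0 - e_a))
        return r_a2, r_b2

    # A newcomer who beats an established player gains far more than the
    # established player loses.
    print(update(1500.0, 2200.0, 1.0, games_a=3, games_b=500))
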


If your population has strongly connected groups that are only sparsely
connected to each other, then you should avoid incremental rating systems.
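
A quick way to check whether a population has that problem is to look at
the connected components of the play graph (players as vertices, games as
edges); a minimal sketch:

    from collections import defaultdict

    def components(games):
        # games: iterable of (player_a, player_b) pairs.
        adj = defaultdict(set)
        for a, b in games:
            adj[a].add(b)
            adj[b].add(a)
        seen, comps = set(), []
        for start in adj:
            if start in seen:
                continue
            stack, comp = [start], set()
            while stack:
                p = stack.pop()
                if p in comp:
                    continue
                comp.add(p)
                stack.extend(adj[p] - comp)
            seen |= comp
            comps.append(comp)
        return comps

    # Several sizeable components, or components joined by only a handful
    # of games, are a sign that an incremental scheme will not keep everyone
    # on a common scale; a whole-history / batch fit copes better.
    print([len(c) for c in components([("A", "B"), ("B", "C"), ("X", "Y")])])
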


If you want academic papers, Mark Glickman's web page has many more:
http://www.glicko.net/index.html

Rémi


On 08/03/2015 02:22 AM, Aguido Davis wrote:

Good morning.

We're looking at replacing the Australian national ranking system, and
the question has come up: how many players and how many recent games per
player are needed for Elo to generate good strength ratings?


(Questions begged: what does a good set of ratings even mean? Does it
matter if the play graph (edges = games, vertices = players) is well
connected or quite cliquey? Is Elo the last word in rating algorithms? Do
humans behave differently from bots when they know they're being rated?)


Does anybody know of a good academic paper, or ideally, someone's thesis?

My apologies if this is off-topic, but it's an interesting computation 
related to go...


Cheers,

Horatio




___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go