Re: [Computer-go] Darkforest (Facebook) AI estimating

2016-03-14 Thread James Tauber
I hope DeepMind publish similar per-move value network assessments for each of the games (and even better would be policy network "heat maps") James On Mon, Mar 14, 2016 at 6:05 PM, Paweł Morawiecki < pawel.morawie...@gmail.com> wrote: > Hi, > > It looks like this

Re: [Computer-go] Darkforest (Facebook) AI estimating

2016-03-14 Thread daniel rich
Nice find! On Mon, Mar 14, 2016 at 4:05 PM, Paweł Morawiecki < pawel.morawie...@gmail.com> wrote: > Hi, > > It looks like this http://zhuanlan.zhihu.com/yuandong/20639694 > is an analysis of Facebook AI of Game 1 and 2. > > I just looked at it very briefly but, interestingly, in Game 2

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread uurtamo .
It's pretty incredible for sure. s. On Mar 14, 2016 2:20 PM, "Jim O'Flaherty" wrote: > Whatever the case, a huge turn has been made and the next 5 years in Go > are going to be surprising and absolutely fascinating. For a game that's > 2,500+ years old, I'm beyond

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Jim O'Flaherty
Whatever the case, a huge turn has been made and the next 5 years in Go are going to be surprising and absolutely fascinating. For a game that's 2,500+ years old, I'm beyond euphoric to be alive to get to witness this. On Mon, Mar 14, 2016 at 4:15 PM, Darren Cook wrote: > > You

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Darren Cook
> You can also look at the score differentials. If the game is perfect, > then the game ends up on 7 points every time. If players made one > small error (2 points), then the distribution would be much narrower > than it is. I was with you up to this point, but players (computer and strong
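
A rough Monte Carlo sketch of this score-differential argument (all parameters here are made-up assumptions, not measurements from real games): if perfect play ends at Black +7 and each player gives away a few small errors, the spread of final margins reflects how many and how large those errors are.

    import random
    import statistics

    PERFECT_MARGIN = 7           # hypothetical perfect-play result: Black by 7

    def final_margin(max_errors=5, error_size=2):
        black_errors = random.randint(0, max_errors)   # points Black gives away
        white_errors = random.randint(0, max_errors)   # points White gives away
        return PERFECT_MARGIN - black_errors * error_size + white_errors * error_size

    margins = [final_margin() for _ in range(10000)]
    print("mean margin:", statistics.mean(margins))        # stays near +7
    print("spread (stdev):", statistics.pstdev(margins))   # grows with error count and size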

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Brian Sheppard
Maybe 10 years ago I extrapolated from the results of title matches to arrive at an estimate of perfect play. My memory is hazy about the conclusions (sorry), so I would love it if someone is curious enough to build models, draw conclusions and share them. The basis of the estimate is that

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Petri Pitkanen
I would second this. Computers in chess do not teach anything. A computer can show you the great move but cannot explain it. Probably as hard a problem to crack as making good computer Go was. 2016-03-14 16:22 GMT+02:00 Robert Jasiek : > On 14.03.2016 08:59, Jim O'Flaherty wrote:

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Petri Pitkanen
For Go there is no way to estimate how much stronger one can get. But in chess it can be estimated (not proven) based on what is evident currently: top programs never lose as White, even against other top programs. There is no way anyone is going to make a chess program that beats the current top program

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread uurtamo .
Watching games 1-2 stones over you is helpful. There's some limit (9 stones?) where it's hard to learn much, but computers aren't (apparently) there yet. s. On Mar 14, 2016 9:36 AM, "Jim O'Flaherty" wrote: > I'm using the term "teacher" loosely. Any player who is

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Jim O'Flaherty
I'm using the term "teacher" loosely. Any player who is better than me is an opportunity to learn. Being able to interact with the superior AI player strictly through actual play in a repeatable and undo-able form allows me to experiment and explore independently, in a way not achievable with a

[Computer-go] what would $gtp-program do

2016-03-14 Thread folkert
I was curious what gnugo would do when presented with the moves from AlphaGo. To be honest, I did not google for software that can compare this. I wrote a small Python script (only tested on Linux) which loads an .sgf file and then "asks" a GTP-compatible Go program what it would do in
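
A minimal sketch of such a driver (this is not the poster's script): it assumes gnugo is on PATH and speaks the standard GTP commands boardsize, clear_board, play, genmove and undo; SGF parsing is omitted and the moves below are placeholders, not an actual AlphaGo game.

    import subprocess

    def gtp(proc, command):
        """Send one GTP command and return the first response line."""
        proc.stdin.write(command + "\n")
        proc.stdin.flush()
        lines = []
        while True:
            line = proc.stdout.readline()
            if line.strip() == "":          # a GTP response ends with a blank line
                break
            lines.append(line.strip())
        reply = lines[0] if lines else ""
        return reply[2:] if reply.startswith("= ") else reply

    engine = subprocess.Popen(["gnugo", "--mode", "gtp"],
                              stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                              universal_newlines=True)

    game_moves = [("B", "Q16"), ("W", "D4"), ("B", "Q4")]   # placeholder moves

    gtp(engine, "boardsize 19")
    gtp(engine, "clear_board")
    for colour, vertex in game_moves:
        suggestion = gtp(engine, "genmove " + colour)   # what would the engine play here?
        gtp(engine, "undo")                             # take the engine's move back
        gtp(engine, "play %s %s" % (colour, vertex))    # play the actual game move instead
        print(colour, "game move:", vertex, "engine move:", suggestion)

    gtp(engine, "quit")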

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread uurtamo .
There's a _whole_ lot of philosophizing going on on the basis of four games. Just saying. steve On Mar 14, 2016 7:41 AM, "Josef Moudrik" wrote: > Moreover, it might not be possible to explain the strong play in human > understandable terms anyway; human rationalization

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Josef Moudrik
Moreover, it might not be possible to explain the strong play in human understandable terms anyway; human rationalization might simply be a heuristic not strong enough to describe/capture it succinctly. On Mon, Mar 14, 2016 at 3:21 PM Robert Jasiek wrote: > On 14.03.2016 08:59,

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Robert Jasiek
On 14.03.2016 08:59, Jim O'Flaherty wrote: > an AI player who becomes a better and better teacher. But you are aware that becoming a stronger AI player does not equal becoming a stronger teacher? Teachers also need to (translate to and) convey human knowledge and reasoning, and adapt to the

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Robert Jasiek
On 14.03.2016 09:33, Petri Pitkanen wrote: And being 600 elo points above best human you are pretty close to best possible play. You do not have any evidence for such a limit. -- robert jasiek

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Petri Pitkanen
And being 600 Elo points above the best human, you are pretty close to best possible play. I think it is already possible in chess to estimate the maximum Elo, and the top programs are all pretty close to it. In chess there is no real incentive to get better: if the draw rate is around 97%, what would be the point?
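
For reference, the standard Elo expected-score formula (textbook math, not something from this thread) at a 600-point gap:

    def expected_score(elo_diff):
        # probability-weighted score of the stronger player under the Elo model
        return 1.0 / (1.0 + 10 ** (-elo_diff / 400.0))

    print(expected_score(600))   # ~0.97, i.e. roughly a 97% expected score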

[Computer-go] New application for dynamic komi

2016-03-14 Thread Ingo Althöfer
First of all congratulations to Lee Sedol for his wonderful win in round 4 against AlphaGo! The last part of the game was a bit disappointing for spectators: AlphaGo, in the expectation of losing, started making 15-kyu threats to avoid the unavoidable. One of the leading German players (FJ

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Petri Pitkanen
Even though the chess SW analyzes fewer positions/second than earlier, it does not mean it is less dependent on good HW. Complex move selection and smart evaluation need more CPU as well. There are advances like null-move pruning that reduce the amount of CPU required, but still it is very much HW
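
For readers unfamiliar with the technique, a generic sketch of null-move pruning inside negamax alpha-beta (the textbook idea, not any particular engine's code; the Position interface used here is hypothetical):

    INF = 10 ** 9
    R = 2   # null-move depth reduction

    def negamax(pos, depth, alpha, beta, allow_null=True):
        if depth == 0 or pos.is_terminal():
            return pos.evaluate()          # static eval from the side to move's view

        # Null-move pruning: let the opponent move twice. If a reduced-depth
        # search still fails high against beta, prune without searching real moves.
        if allow_null and depth > R and not pos.in_check():
            pos.make_null_move()
            score = -negamax(pos, depth - 1 - R, -beta, -beta + 1, allow_null=False)
            pos.unmake_null_move()
            if score >= beta:
                return beta

        best = -INF
        for move in pos.moves():
            pos.make(move)
            score = -negamax(pos, depth - 1, -beta, -alpha)
            pos.unmake(move)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break                      # normal beta cutoff
        return best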