Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-15 Thread Brian Sheppard
for championship caliber. >So

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-15 Thread Brian Sheppard
> You can also look at the score differentials. If the game is perfect,

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread uurtamo .
It's pretty incredible for sure. s. On Mar 14, 2016 2:20 PM, "Jim O'Flaherty" wrote: > Whatever the case, a huge turn has been made and the next 5 years in Go > are going to be surprising and absolutely fascinating. For a game that > is 2,500+ years old, I'm beyond

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Jim O'Flaherty
Whatever the case, a huge turn has been made and the next 5 years in Go are going to be surprising and absolutely fascinating. For a game that is 2,500+ years old, I'm beyond euphoric to be alive to get to witness this. On Mon, Mar 14, 2016 at 4:15 PM, Darren Cook wrote: > > You

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Darren Cook
> You can also look at the score differentials. If the game is perfect, > then the game ends up at 7 points every time. If players made one > small error (2 points), then the distribution would be much narrower > than it is. I was with you up to this point, but players (computer and strong
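A toy simulation makes the quoted reasoning concrete: if each player gives away only one small error per game, the final margins cluster tightly around the perfect result, far more tightly than real game margins do. A minimal Python sketch, with the error counts and sizes as made-up parameters rather than anything measured from real games:

    import random
    import statistics

    def margin_spread(n_games=10_000, perfect_result=7,
                      errors_per_player=1, max_error_pts=2):
        """Toy model: each player gives away a few points at random;
        the final margin is the perfect result shifted by the net giveaway."""
        margins = []
        for _ in range(n_games):
            black_loss = sum(random.randint(0, max_error_pts)
                             for _ in range(errors_per_player))
            white_loss = sum(random.randint(0, max_error_pts)
                             for _ in range(errors_per_player))
            # Black's errors shrink the margin, White's errors widen it.
            margins.append(perfect_result - black_loss + white_loss)
        return statistics.mean(margins), statistics.pstdev(margins)

    # One ~2-point error per player keeps margins within a point or two of 7;
    # it takes many larger errors to reproduce a wide spread of results.
    print(margin_spread(errors_per_player=1, max_error_pts=2))
    print(margin_spread(errors_per_player=10, max_error_pts=6))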

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Brian Sheppard
>= 3000 Elo, and it turns out that is a significant underestimate. Best, Brian

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Petri Pitkanen
I would second this. Computers in chess do not teach anything. A computer can show you a great move but cannot explain it. Probably as hard a problem to crack as making good computer go was. 2016-03-14 16:22 GMT+02:00 Robert Jasiek : > On 14.03.2016 08:59, Jim O'Flaherty wrote:

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Petri Pitkanen
For Go there is no way to estimate how much stronger one can get. But in chess it can be estimated (not proven) based on what is evident currently, like top programs never losing with White even against other top programs. There is no way anyone is going to make a chess program that beats the current top program

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread uurtamo .
Watching games 1-2 stones over you is helpful. There's some limit (9 stones?) where it's hard to learn much, but computers aren't (apparently) there yet. s. On Mar 14, 2016 9:36 AM, "Jim O'Flaherty" wrote: > I'm using the term "teacher" loosely. Any player who is

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Jim O'Flaherty
I'm using the term "teacher" loosely. Any player who is better than me is an opportunity to learn. Being able to interact with the superior AI player strictly through actual play in a repeatable and undo-able form allows me to experiment and explore independently, in a way not achievable with a

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread uurtamo .
There's a _whole_ lot of philosophizing going on on the basis of four games. Just saying. steve On Mar 14, 2016 7:41 AM, "Josef Moudrik" wrote: > Moreover, it might not be possible to explain the strong play in human > understandable terms anyway; human rationalization

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Josef Moudrik
Moreover, it might not be possible to explain the strong play in human-understandable terms anyway; human rationalization might simply be a heuristic not strong enough to describe/capture it succinctly. On Mon, Mar 14, 2016 at 3:21 PM Robert Jasiek wrote: > On 14.03.2016 08:59,

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Robert Jasiek
On 14.03.2016 08:59, Jim O'Flaherty wrote: > an AI player who becomes a better and better teacher. But you are aware that becoming a stronger AI player does not equal becoming a stronger teacher? Teachers also need to (translate to and) convey human knowledge and reasoning, and adapt to the

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Robert Jasiek
On 14.03.2016 09:33, Petri Pitkanen wrote: > And being 600 Elo points above the best human you are pretty close to best possible play. You do not have any evidence for such a limit. -- robert jasiek

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Petri Pitkanen
And being 600 Elo points above the best human you are pretty close to best possible play. I think it is already possible in chess to estimate the maximum Elo, and the top programs are pretty close to it. All of them. In chess there is no real incentive to get better: if the draw rate is around 97%, what would be the point?

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-14 Thread Petri Pitkanen
Even though the chess SW analyzes fewer positions per second than it used to, that does not mean it is less dependent on good HW. Complex move selection and smart evaluation need more CPU as well. There are advances like null-move pruning that reduce the amount of CPU required, but still it is very much HW
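For readers who have not met it, null-move pruning works roughly like this: let the side to move "pass", search the resulting position to a reduced depth, and if even that refutes the position (score >= beta), cut off without searching any real moves. A rough negamax sketch below; the position object and its methods (evaluate, in_check, legal_moves, play/undo, play_null/undo_null) are hypothetical placeholders, not any particular engine's API:

    INF = 10**9
    R = 2  # depth reduction for the null-move search

    def search(pos, depth, alpha, beta):
        """Negamax alpha-beta with a simple null-move cutoff (sketch)."""
        if depth <= 0:
            return pos.evaluate()

        # Null move: let the opponent move twice in a row, searched shallower.
        # If even that cannot bring the score below beta, real moves are very
        # unlikely to change the outcome, so prune the whole subtree.
        if depth > R and not pos.in_check():
            pos.play_null()
            score = -search(pos, depth - 1 - R, -beta, -beta + 1)
            pos.undo_null()
            if score >= beta:
                return beta

        best = -INF
        for move in pos.legal_moves():
            pos.play(move)
            best = max(best, -search(pos, depth - 1, -beta, -alpha))
            pos.undo()
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # ordinary alpha-beta cutoff
        return best

Real engines add safeguards (no null move when in check or in likely zugzwang positions); the point here is only what such a pruning rule looks like.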

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-13 Thread Brian Cloutier
> not because of a new better algorithm but because Deep Blue's 11.38 GFLOP power is available on a desktop from about 2006 This isn't true: modern chess engines look at far fewer positions than Deep Blue did. From Wikipedia: "Chess engines

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-11 Thread Brian Sheppard
I think that a desktop computer's calculating power appear to dev

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-11 Thread Рождественский Дмитрий
I think that a desktop computer's calculating power appears to develop to the necessary level sooner than the algorithm can be optimized to use the power nowadays available. For example, I believe that chess programs run well on a desktop not because of a new better algorithm but because the Deep

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-11 Thread Darren Cook
>> global, more long-term planning. A rumour so far suggests they used the >> time for more learning, but I'd be surprised if this should have sufficed. > > My personal hypothesis so far is that it might - the REINFORCE might > scale amazingly well and just continuous application of it...

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-11 Thread Petr Baudis
On Fri, Mar 11, 2016 at 09:33:52AM +0100, Robert Jasiek wrote: > On 11.03.2016 08:24, Huazuo Gao wrote: > >Points at the center of the board indeed depend on the full board, but > >points near the edge do not. > > I have been wondering why AlphaGo could improve a lot between the Fan Hui > and

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-11 Thread Robert Jasiek
On 11.03.2016 08:24, Huazuo Gao wrote: > Points at the center of the board indeed depend on the full board, but points near the edge do not. I have been wondering why AlphaGo could improve a lot between the Fan Hui and Lee Sedol matches, incl. learning sente and showing greater signs of more

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-10 Thread Huazuo Gao
Points at the center of the board indeed depend on the full board, but points near the edge do not. On Fri, Mar 11, 2016 at 3:03 PM Vincent Zhuang wrote: > A stack of 11 3x3 convolutional layers and a single 5x5 layer with no > pooling actually corresponds to effectively

Re: [Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-10 Thread Vincent Zhuang
A stack of 11 3x3 convolutional layers and a single 5x5 layer with no pooling actually corresponds to effectively a 27x27 kernel, which is obviously large enough to cover the entire board. (Your value of 13 is only the distance from the center of the filter to the edge.) On Thu, Mar 10, 2016 at

[Computer-go] AlphaGo & DCNN: Handling long-range dependency

2016-03-10 Thread Huazuo Gao
According to the paper *Mastering the Game of Go with Deep Neural Networks and Tree Search*, the main part of both the policy and value networks is a 5x5 conv layer followed by eleven 3x3 conv layers. Therefore, after the last conv layer, the maximum "information propagation length" is (5-1)/2 + 11*(3-1)/2 = 13
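The 13 computed here and the 27x27 kernel mentioned in the reply above are consistent: 13 is the radius of the receptive field and 2*13 + 1 = 27 is its diameter, so units in the last conv layer do see the whole 19x19 board. A small sketch of the arithmetic, assuming the stack described above (one 5x5 layer followed by eleven 3x3 layers, stride 1, no pooling):

    def receptive_field(kernel_sizes):
        """Receptive-field diameter of a stack of stride-1 conv layers."""
        diameter = 1
        for k in kernel_sizes:
            diameter += k - 1  # each layer widens the field by (k - 1)
        return diameter

    layers = [5] + [3] * 11            # one 5x5 layer, then eleven 3x3 layers
    d = receptive_field(layers)
    print(d, (d - 1) // 2)             # 27 13 -> diameter 27, radius 13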