Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-27 Thread David Wu
I suspect the reason they were able to train a value net with multiple komi
at the same time was that the training games they used in that paper were
generated by a pure policy net trained on human games, rather than by an
MCTS player.

Although humans give up points for safety when ahead, in practice they seem
to do so less than MCTS players of the same strength, so a policy net trained
on human games would not be expected to show that tendency as strongly as one
trained on MCTS games, leading to less of a bias when adjusting the komi. It
might also be somewhat hard for a pure policy net to learn to evaluate the
board to, say, within +/- 3 points during the macro and micro endgame, so as
to determine when it should predict more conservative moves, given that it
was never directly trained to simultaneously predict the value, particularly
if the data set also included many 0.5 komi games and the policy net was not
told the komi. So one might guess that the pure policy net gives up points
for safety even less than in the human games it was trained on.

All of this might help explain why the data set they used for training the
value net could reasonably be reused, without introducing too much bias, when
rescoring the same games with different komi.
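
As a rough illustration of that rescoring (only a sketch: the field names are
invented, and it assumes each game record stores Black's raw board score
before komi), value-net targets for several komi values could be built like
this:

KOMIS = [5.5, 6.5, 7.5]

def value_targets(games, komis=KOMIS):
    """Yield (position, komi, black_win) training examples for the value net."""
    for game in games:
        for komi in komis:
            # Black wins when its raw board score exceeds the komi.
            black_win = 1.0 if game["board_score"] > komi else 0.0
            for position in game["positions"]:
                yield position, komi, black_win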



On Thu, Oct 26, 2017 at 6:33 PM, Shawn Ligocki  wrote:

> On Thu, Oct 26, 2017 at 2:02 PM, Gian-Carlo Pascutto 
> wrote:
>
>> On 26-10-17 15:55, Roel van Engelen wrote:
>> > @Gian-Carlo Pascutto
>> >
>> > Since training uses a ridiculous amount of computing power i wonder
>> > if it would be useful to make certain changes for future research,
>> > like training the value head with multiple komi values
>> > 
>>
>> Given that the game data will be available, it will be trivial for
>> anyone to train a different network architecture on the result and see
>> if they get better results, or a program that handles multiple komi
>> values, etc.
>>
>> The problem is getting the *data*, not the training.
>>
>
> But the data should be different for different komi values, right?
> Iteratively producing self-play games and training with the goal of
> optimizing for komi 7 should converge to a different optimal player than
> optimizing for komi 5. But maybe having high quality data for komi 7 will
> still save a lot of the work for training a komi 5 (or komi agnostic)
> network?
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-27 Thread terry mcintyre via Computer-go
I'm sorry, did I miss something here? 
On Friday, October 27, 2017, 5:46:19 AM EDT, Gian-Carlo Pascutto 
 wrote:  
 
 On 27-10-17 00:33, Shawn Ligocki wrote:
> But the data should be different for different komi values, right? 
> Iteratively producing self-play games and training with the goal of 
> optimizing for komi 7 should converge to a different optimal player 
> than optimizing for komi 5.

"For the policy (head) network, yes, definitely. It makes no difference
to the value (head) network."


The value network indicates whether a board position leads to a win or not. Surely 
that depends on the komi, especially in the half-point games which seem to be 
the natural end result of how it selects moves? 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-27 Thread Gian-Carlo Pascutto
On 27-10-17 00:33, Shawn Ligocki wrote:
> But the data should be different for different komi values, right? 
> Iteratively producing self-play games and training with the goal of 
> optimizing for komi 7 should converge to a different optimal player 
> than optimizing for komi 5.

For the policy (head) network, yes, definitely. It makes no difference
to the value (head) network.

> But maybe having high quality data for komi 7 will still save a lot
> of the work for training a komi 5 (or komi agnostic) network?

I'd suspect so.

-- 
GCP
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-26 Thread Shawn Ligocki
On Thu, Oct 26, 2017 at 2:02 PM, Gian-Carlo Pascutto  wrote:

> On 26-10-17 15:55, Roel van Engelen wrote:
> > @Gian-Carlo Pascutto
> >
> > Since training uses a ridiculous amount of computing power i wonder
> > if it would be useful to make certain changes for future research,
> > like training the value head with multiple komi values
> > 
>
> Given that the game data will be available, it will be trivial for
> anyone to train a different network architecture on the result and see
> if they get better results, or a program that handles multiple komi
> values, etc.
>
> The problem is getting the *data*, not the training.
>

But the data should be different for different komi values, right?
Iteratively producing self-play games and training with the goal of
optimizing for komi 7 should converge to a different optimal player than
optimizing for komi 5. But maybe having high quality data for komi 7 will
still save a lot of the work for training a komi 5 (or komi agnostic)
network?
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-26 Thread Detlef Schmicker
This is quite a natural approach; I think every go program that needs
to play with different komi handles it in one way or another.

At least oakfoam does :)


Detlef

On 26.10.2017 at 15:55, Roel van Engelen wrote:
> @Gian-Carlo Pascutto
> 
> Since training uses a ridiculous amount of computing power i wonder if it
> would
> be useful to make certain changes for future research, like training the
> value head
> with multiple komi values <https://arxiv.org/pdf/1705.10701.pdf>
> 
> On 26 October 2017 at 03:02, Brian Sheppard via Computer-go <
> computer-go@computer-go.org> wrote:
> 
>> I think it uses the champion network. That is, the training periodically
>> generates a candidate, and there is a playoff against the current champion.
>> If the candidate wins by more than 55% then a new champion is declared.
>>
>>
>>
>> Keeping a champion is an important mechanism, I believe. That creates the
>> competitive coevolution dynamic, where the network is evolving to learn how
>> to beat the best, and not just most recent. Without that dynamic, the
>> training process can go up and down.
>>
>>
>>
>> *From:* Computer-go [mailto:computer-go-boun...@computer-go.org] *On
>> Behalf Of *uurtamo .
>> *Sent:* Wednesday, October 25, 2017 6:07 PM
>> *To:* computer-go <computer-go@computer-go.org>
>> *Subject:* Re: [Computer-go] Source code (Was: Reducing network size?
>> (Was: AlphaGo Zero))
>>
>>
>>
>> Does the self-play step use the most recent network for each move?
>>
>>
>>
>> On Oct 25, 2017 2:23 PM, "Gian-Carlo Pascutto" <g...@sjeng.org> wrote:
>>
>> On 25-10-17 17:57, Xavier Combelle wrote:
>>> Is there some way to distribute learning of a neural network ?
>>
>> Learning as in training the DCNN, not really unless there are high
>> bandwidth links between the machines (AFAIK - unless the state of the
>> art changed?).
>>
>> Learning as in generating self-play games: yes. Especially if you update
>> the network only every 25 000 games.
>>
>> My understanding is that this task is much more bottlenecked on game
>> generation than on DCNN training, until you get quite a bit of machines
>> that generate games.
>>
>> --
>> GCP
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
> 
> 
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
> 
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-26 Thread Erik van der Werf
Good point, Roel. Perhaps in the final layers one could make it predict a
model of the expected score distribution (before combining with the komi
and other rules-specific adjustments for handicap stones, pass stones,
last-play parity, etc.). It should be easy enough to back-propagate win/loss
information (and perhaps even more) through such a model.
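
A rough sketch of what such a score-distribution head could look like,
assuming a PyTorch-style value head; the names are invented, and ties and the
rules-specific adjustments are ignored for brevity:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ScoreDistributionHead(nn.Module):
    """Predicts a distribution over Black's final board score; the win
    probability for a given komi is the probability mass above that komi."""
    def __init__(self, in_features, min_score=-361, max_score=361):
        super().__init__()
        scores = torch.arange(min_score, max_score + 1, dtype=torch.float32)
        self.register_buffer("scores", scores)
        self.fc = nn.Linear(in_features, len(scores))

    def forward(self, features, komi):
        p_score = F.softmax(self.fc(features), dim=-1)           # (batch, n_scores)
        win_mask = (self.scores.unsqueeze(0) > komi.unsqueeze(1)).float()
        p_win = (p_score * win_mask).sum(dim=-1)                  # (batch,)
        return p_score, p_win

# Win/loss labels back-propagate through the score distribution, e.g.:
#   _, p_win = head(features, komi)
#   loss = F.binary_cross_entropy(p_win, black_won)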


On Thu, Oct 26, 2017 at 3:55 PM, Roel van Engelen <gosuba...@gmail.com>
wrote:

> @Gian-Carlo Pascutto
>
> Since training uses a ridiculous amount of computing power i wonder if it
> would
> be useful to make certain changes for future research, like training the
> value head
> with multiple komi values <https://arxiv.org/pdf/1705.10701.pdf>
>
> On 26 October 2017 at 03:02, Brian Sheppard via Computer-go <
> computer-go@computer-go.org> wrote:
>
>> I think it uses the champion network. That is, the training periodically
>> generates a candidate, and there is a playoff against the current champion.
>> If the candidate wins by more than 55% then a new champion is declared.
>>
>>
>>
>> Keeping a champion is an important mechanism, I believe. That creates the
>> competitive coevolution dynamic, where the network is evolving to learn how
>> to beat the best, and not just most recent. Without that dynamic, the
>> training process can go up and down.
>>
>>
>>
>> *From:* Computer-go [mailto:computer-go-boun...@computer-go.org] *On
>> Behalf Of *uurtamo .
>> *Sent:* Wednesday, October 25, 2017 6:07 PM
>> *To:* computer-go <computer-go@computer-go.org>
>> *Subject:* Re: [Computer-go] Source code (Was: Reducing network size?
>> (Was: AlphaGo Zero))
>>
>>
>>
>> Does the self-play step use the most recent network for each move?
>>
>>
>>
>> On Oct 25, 2017 2:23 PM, "Gian-Carlo Pascutto" <g...@sjeng.org> wrote:
>>
>> On 25-10-17 17:57, Xavier Combelle wrote:
>> > Is there some way to distribute learning of a neural network ?
>>
>> Learning as in training the DCNN, not really unless there are high
>> bandwidth links between the machines (AFAIK - unless the state of the
>> art changed?).
>>
>> Learning as in generating self-play games: yes. Especially if you update
>> the network only every 25 000 games.
>>
>> My understanding is that this task is much more bottlenecked on game
>> generation than on DCNN training, until you get quite a bit of machines
>> that generate games.
>>
>> --
>> GCP
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-26 Thread Roel van Engelen
@Gian-Carlo Pascutto

Since training uses a ridiculous amount of computing power, I wonder if it
would be useful to make certain changes for future research, like training
the value head with multiple komi values <https://arxiv.org/pdf/1705.10701.pdf>

On 26 October 2017 at 03:02, Brian Sheppard via Computer-go <
computer-go@computer-go.org> wrote:

> I think it uses the champion network. That is, the training periodically
> generates a candidate, and there is a playoff against the current champion.
> If the candidate wins by more than 55% then a new champion is declared.
>
>
>
> Keeping a champion is an important mechanism, I believe. That creates the
> competitive coevolution dynamic, where the network is evolving to learn how
> to beat the best, and not just most recent. Without that dynamic, the
> training process can go up and down.
>
>
>
> *From:* Computer-go [mailto:computer-go-boun...@computer-go.org] *On
> Behalf Of *uurtamo .
> *Sent:* Wednesday, October 25, 2017 6:07 PM
> *To:* computer-go <computer-go@computer-go.org>
> *Subject:* Re: [Computer-go] Source code (Was: Reducing network size?
> (Was: AlphaGo Zero))
>
>
>
> Does the self-play step use the most recent network for each move?
>
>
>
> On Oct 25, 2017 2:23 PM, "Gian-Carlo Pascutto" <g...@sjeng.org> wrote:
>
> On 25-10-17 17:57, Xavier Combelle wrote:
> > Is there some way to distribute learning of a neural network ?
>
> Learning as in training the DCNN, not really unless there are high
> bandwidth links between the machines (AFAIK - unless the state of the
> art changed?).
>
> Learning as in generating self-play games: yes. Especially if you update
> the network only every 25 000 games.
>
> My understanding is that this task is much more bottlenecked on game
> generation than on DCNN training, until you get quite a bit of machines
> that generate games.
>
> --
> GCP
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-25 Thread Brian Sheppard via Computer-go
I think it uses the champion network. That is, the training periodically 
generates a candidate, and there is a playoff against the current champion. If 
the candidate wins more than 55% of the games, a new champion is declared.

 

Keeping a champion is an important mechanism, I believe. It creates the 
competitive coevolution dynamic, where the network is evolving to learn how to 
beat the best version, not just the most recent one. Without that dynamic, the 
training process can go up and down.
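
As a rough sketch of that gating step (the names and the match-playing
function are invented for illustration; nothing here is taken from the paper):

GATING_THRESHOLD = 0.55
EVAL_GAMES = 400

def maybe_promote(champion, candidate, play_game):
    """Keep the champion unless the candidate beats it convincingly.

    play_game(a, b) is assumed to return 1 if network a wins, else 0."""
    wins = sum(play_game(candidate, champion) for _ in range(EVAL_GAMES))
    if wins / EVAL_GAMES > GATING_THRESHOLD:
        return candidate   # new champion: future self-play uses this network
    return champion        # otherwise keep generating data with the old one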

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
uurtamo .
Sent: Wednesday, October 25, 2017 6:07 PM
To: computer-go <computer-go@computer-go.org>
Subject: Re: [Computer-go] Source code (Was: Reducing network size? (Was: 
AlphaGo Zero))

 

Does the self-play step use the most recent network for each move?

 

On Oct 25, 2017 2:23 PM, "Gian-Carlo Pascutto" <g...@sjeng.org> wrote:

On 25-10-17 17:57, Xavier Combelle wrote:
> Is there some way to distribute learning of a neural network ?

Learning as in training the DCNN, not really unless there are high
bandwidth links between the machines (AFAIK - unless the state of the
art changed?).

Learning as in generating self-play games: yes. Especially if you update
the network only every 25 000 games.

My understanding is that this task is much more bottlenecked on game
generation than on DCNN training, until you get quite a bit of machines
that generate games.

--
GCP
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-25 Thread uurtamo .
I ask because there are (nearly) bus-speed networks that could make
multiple evaluation quick, especially if the various versions didn't differ
by more than a fixed fraction of nodes.

s.

On Oct 25, 2017 3:03 PM, uurt...@gmail.com wrote:

Does the self-play step use the most recent network for each move?

On Oct 25, 2017 2:23 PM, "Gian-Carlo Pascutto"  wrote:

On 25-10-17 17:57, Xavier Combelle wrote:
> Is there some way to distribute learning of a neural network ?

Learning as in training the DCNN, not really unless there are high
bandwidth links between the machines (AFAIK - unless the state of the
art changed?).

Learning as in generating self-play games: yes. Especially if you update
the network only every 25 000 games.

My understanding is that this task is much more bottlenecked on game
generation than on DCNN training, until you get quite a bit of machines
that generate games.

--
GCP
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-25 Thread uurtamo .
Does the self-play step use the most recent network for each move?

On Oct 25, 2017 2:23 PM, "Gian-Carlo Pascutto"  wrote:

> On 25-10-17 17:57, Xavier Combelle wrote:
> > Is there some way to distribute learning of a neural network ?
>
> Learning as in training the DCNN, not really unless there are high
> bandwidth links between the machines (AFAIK - unless the state of the
> art changed?).
>
> Learning as in generating self-play games: yes. Especially if you update
> the network only every 25 000 games.
>
> My understanding is that this task is much more bottlenecked on game
> generation than on DCNN training, until you get quite a bit of machines
> that generate games.
>
> --
> GCP
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-25 Thread Xavier Combelle
Nice to know. I wrongly believed that training such a big neural network
would need considerable hardware.

On 25/10/2017 at 19:54, Álvaro Begué wrote:
> There are ways to do it, but it might be messy. However, the vast
> majority of the computational effort will be in playing games to
> generate a training database, and that part is trivial to distribute.
> Testing if the new version is better than the old version is also very
> easy to distribute.
>
> Álvaro.
>
>
> On Wed, Oct 25, 2017 at 11:57 AM, Xavier Combelle
> > wrote:
>
> Is there some way to distribute learning of a neural network ?
>
>
>> On 25/10/2017 at 05:43, Andy wrote:
>> Gian-Carlo, I didn't realize at first that you were planning to
>> create a crowd-sourced project. I hope this project can get off
>> the ground and running!
>>
>> I'll look into installing this but I always find it hard to get
>> all the tool chain stuff going.
>>
>>
>>
>> 2017-10-24 15:02 GMT-05:00 Gian-Carlo Pascutto > >:
>>
>> On 23-10-17 10:39, Darren Cook wrote:
>> >> The source of AlphaGo Zero is really of zero interest (pun
>> intended).
>> >
>> > The source code is the first-hand account of how it works,
>> whereas an
>> > academic paper is a second-hand account. So, definitely not
>> zero use.
>>
>> This should be fairly accurate:
>>
>> https://github.com/gcp/leela-zero
>> 
>>
>> --
>> GCP
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org 
>> http://computer-go.org/mailman/listinfo/computer-go
>> 
>>
>>
>>
>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org 
>> http://computer-go.org/mailman/listinfo/computer-go
>> 
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
>
>
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-25 Thread Shawn Ligocki
My guess is that they want to distribute playing millions of self-play
games. Then the learning would be comparatively much faster. Is that right?

On Wed, Oct 25, 2017 at 11:57 AM, Xavier Combelle  wrote:

> Is there some way to distribute learning of a neural network ?
>
> On 25/10/2017 at 05:43, Andy wrote:
>
> Gian-Carlo, I didn't realize at first that you were planning to create a
> crowd-sourced project. I hope this project can get off the ground and
> running!
>
> I'll look into installing this but I always find it hard to get all the
> tool chain stuff going.
>
>
>
> 2017-10-24 15:02 GMT-05:00 Gian-Carlo Pascutto :
>
>> On 23-10-17 10:39, Darren Cook wrote:
>> >> The source of AlphaGo Zero is really of zero interest (pun intended).
>> >
>> > The source code is the first-hand account of how it works, whereas an
>> > academic paper is a second-hand account. So, definitely not zero use.
>>
>> This should be fairly accurate:
>>
>> https://github.com/gcp/leela-zero
>>
>> --
>> GCP
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>
>
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-25 Thread Álvaro Begué
There are ways to do it, but it might be messy. However, the vast majority
of the computational effort will be in playing games to generate a training
database, and that part is trivial to distribute. Testing if the new
version is better than the old version is also very easy to distribute.
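
A rough sketch of that split (everything below is invented for illustration):
workers only need the current weights to generate games, while the trainer is
the only part that needs fast hardware and the aggregated data:

def worker_loop(get_current_weights, play_selfplay_game, upload_game):
    """Runs on each volunteer machine; easy to distribute."""
    while True:
        net = get_current_weights()      # small download: latest network weights
        game = play_selfplay_game(net)   # the expensive part we distribute
        upload_game(game)                # small upload: moves plus outcome

def trainer_loop(download_games, train_step, publish_weights, games_per_update=25000):
    """Runs centrally; training itself is hard to distribute."""
    buffer = []
    while True:
        buffer.extend(download_games())
        if len(buffer) >= games_per_update:   # e.g. update only every 25 000 games
            train_step(buffer)
            publish_weights()
            buffer.clear()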

Álvaro.


On Wed, Oct 25, 2017 at 11:57 AM, Xavier Combelle  wrote:

> Is there some way to distribute learning of a neural network ?
>
> On 25/10/2017 at 05:43, Andy wrote:
>
> Gian-Carlo, I didn't realize at first that you were planning to create a
> crowd-sourced project. I hope this project can get off the ground and
> running!
>
> I'll look into installing this but I always find it hard to get all the
> tool chain stuff going.
>
>
>
> 2017-10-24 15:02 GMT-05:00 Gian-Carlo Pascutto :
>
>> On 23-10-17 10:39, Darren Cook wrote:
>> >> The source of AlphaGo Zero is really of zero interest (pun intended).
>> >
>> > The source code is the first-hand account of how it works, whereas an
>> > academic paper is a second-hand account. So, definitely not zero use.
>>
>> This should be fairly accurate:
>>
>> https://github.com/gcp/leela-zero
>>
>> --
>> GCP
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>
>
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-25 Thread Xavier Combelle
Is there some way to distribute learning of a neural network ?


On 25/10/2017 at 05:43, Andy wrote:
> Gian-Carlo, I didn't realize at first that you were planning to create
> a crowd-sourced project. I hope this project can get off the ground
> and running!
>
> I'll look into installing this but I always find it hard to get all
> the tool chain stuff going.
>
>
>
> 2017-10-24 15:02 GMT-05:00 Gian-Carlo Pascutto  >:
>
> On 23-10-17 10:39, Darren Cook wrote:
> >> The source of AlphaGo Zero is really of zero interest (pun
> intended).
> >
> > The source code is the first-hand account of how it works,
> whereas an
> > academic paper is a second-hand account. So, definitely not zero
> use.
>
> This should be fairly accurate:
>
> https://github.com/gcp/leela-zero 
>
> --
> GCP
> ___
> Computer-go mailing list
> Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
>
>
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-25 Thread Gian-Carlo Pascutto
On 25-10-17 05:43, Andy wrote:
> Gian-Carlo, I didn't realize at first that you were planning to create a
> crowd-sourced project. I hope this project can get off the ground and
> running!
> 
> I'll look into installing this but I always find it hard to get all the
> tool chain stuff going.

I will provide pre-made packages for common operating systems. Right now
we (Jonathan Roy is helping with the server) are exploring what's
possible for such a crowd-sourced effort, and testing the server. I'll
provide an update here when there's something to play with.

-- 
GCP
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-25 Thread fotland
Sadly, this is GPL v3, so it's not safe for me to look at it.

David

PS even though Robert's posts are slightly off topic for the AlphaGo 
discussion, I respect that he has thought far more deeply than I have about go, 
and I support his inclusion in the list.

-Original Message-
From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Gian-Carlo Pascutto
Sent: Tuesday, October 24, 2017 1:02 PM
To: computer-go@computer-go.org
Subject: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo 
Zero))

On 23-10-17 10:39, Darren Cook wrote:
>> The source of AlphaGo Zero is really of zero interest (pun intended).
> 
> The source code is the first-hand account of how it works, whereas an 
> academic paper is a second-hand account. So, definitely not zero use.

This should be fairly accurate:

https://github.com/gcp/leela-zero

--
GCP
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-24 Thread Andy
Gian-Carlo, I didn't realize at first that you were planning to create a
crowd-sourced project. I hope this project can get off the ground and
running!

I'll look into installing this but I always find it hard to get all the
tool chain stuff going.



2017-10-24 15:02 GMT-05:00 Gian-Carlo Pascutto :

> On 23-10-17 10:39, Darren Cook wrote:
> >> The source of AlphaGo Zero is really of zero interest (pun intended).
> >
> > The source code is the first-hand account of how it works, whereas an
> > academic paper is a second-hand account. So, definitely not zero use.
>
> This should be fairly accurate:
>
> https://github.com/gcp/leela-zero
>
> --
> GCP
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-24 Thread Gian-Carlo Pascutto
On 23-10-17 10:39, Darren Cook wrote:
>> The source of AlphaGo Zero is really of zero interest (pun intended).
> 
> The source code is the first-hand account of how it works, whereas an
> academic paper is a second-hand account. So, definitely not zero use.

This should be fairly accurate:

https://github.com/gcp/leela-zero

-- 
GCP
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go