Re: [Computer-go] AlphaGo Zero

2017-10-19 Thread dave.de...@planet.nl
I would like to know how much handicap the Master version needs against the 
Zero version. It could be anywhere from less than black without komi to more 
than 3 stones. Handicap differences cannot be deduced from regular Elo rating 
differences, because the relationship varies with skill (a handicap stone is 
worth more than 100 regular Elo points for higher dan players and less than 
100 regular Elo points for kyu players). 
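A quick sketch of this point, using the standard Elo expected-score formula; the Elo-per-stone values below (120 for dan, 80 for kyu) are purely illustrative assumptions, not measured numbers:

```python
def elo_expected_score(rating_diff):
    """Standard Elo expected score for the player who is rating_diff ahead."""
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

def handicap_to_elo(stones, elo_per_stone):
    """Convert a handicap to an Elo gap, assuming a fixed Elo value per stone.

    elo_per_stone is the crux of the point above: it is not a constant --
    roughly more than 100 at dan level, less than 100 at kyu level.
    """
    return stones * elo_per_stone

# Three stones at an assumed dan-level rate (120 Elo/stone) vs kyu-level (80):
print(elo_expected_score(handicap_to_elo(3, 120)))  # ~0.89
print(elo_expected_score(handicap_to_elo(3, 80)))   # ~0.80
```

So the same three stones would translate into quite different expected scores depending on skill level, which is why a stone count cannot be read off a raw Elo gap.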
Dave de Vos
 
>Original Message
>From: 3-hirn-ver...@gmx.de
>Date: 19/10/2017 20:53
>To: computer-go@computer-go.org
>Subject: Re: [Computer-go] AlphaGo Zero
>
>What shall I say?
>Really impressive.
>My congratulations to the DeepMind team!
>
>> https://deepmind.com/blog/
>> http://www.nature.com/nature/index.html
>
>* Would the same approach also work for integral komi values
>(with the possibility of draws)? If so, what would the likely
>correct komi for 19x19 Go be?
>
>* Or, put another way: looking at Go on an NxN board,
>for which values of N would the DeepMind team be confident
>of finding the correct komi value?
>
>
>* How often are there ko-fights in autoplay games of
>AlphaGo Zero?
>
>Ingo.
>
>PS (a fitting song): the opening theme of
>Djan-Go Unchained (with a march through a desert of stones):
>https://www.youtube.com/watch?v=R1hqn8kKZ_M
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] AI Driving Cars

2017-01-08 Thread dave.de...@planet.nl
Talking about Darwin, I suppose evolution would eventually make humans safer 
drivers, but the process would take many millennia (assuming that humans 
keep driving themselves for many millennia to come).
Original Message
From: terrymcint...@yahoo.com
Date: 08/01/2017 15:16
To: computer-go@computer-go.org
Subject: Re: [Computer-go] AI Driving Cars
The thing about self-driving AI: it can be improved en masse. Tesla can fix a 
problem and send the fix to a million cars. Can't say the same about a million 
human drivers. Each must independently pass Darwin's test. 
Terry McIntyre | Unix/Linux Systems Administration
Taking time to do it right saves having to do it twice.

On Saturday, January 7, 2017 10:28 PM, Mark Goldfain wrote:
Perhaps you did not hear about the fatal Tesla crash in Florida on 05/07/16?
   
http://www.nytimes.com/interactive/2016/07/01/business/inside-tesla-accident.html
Or the fatal Tesla crash in China on 01/16/16, which was only reported in the 
news around September?
   
http://www.nytimes.com/2016/09/15/business/fatal-tesla-crash-in-china-involved-autopilot-government-tv-says.html
Frankly, there has not been a lot of coverage of these two events.
 -- Mark
| Message: 6
| Date: Sat, 7 Jan 2017 21:34:27 +
| From: Nick Wedd 
| To: computer-go@computer-go.org
| Subject: Re: [Computer-go] Our Silicon Overlord
| Message-ID:
|   
| Content-Type: text/plain; charset="utf-8"
| 
| The first time someone's killed by an AI-controlled vehicle, you can be
| sure it'll be world news. That's how journalism works.
| 
| Nick
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Move evaluation by expected value, as product of expected winrate and expected points?

2016-02-23 Thread dave.de...@planet.nl
If you accumulate the end scores of playout results, you can make a histogram by 
plotting the frequency f(s) of each score s. The winrate is then sum(f(s)) over 
s > 0, divided by sum(f(s)) over all s. The average score is sum(s * f(s)) / 
sum(f(s)), summed over all s. 
When the distribution can be approximated by a normal distribution, it may not 
matter much whether you choose to maximize winrate or average score. 
But in general, the distribution could be multimodal (in fact I think it always 
is, unless you have solved the game à la Conway). In that case, the average 
score may not be a very reliable representation of the situation. For example, 
a 99% chance of losing by 0.5 points combined with a 1% chance of winning by 
100 points might give you the impression that you are winning by 0.5 points 
(which would be the average score), while in reality you have only a 1% chance 
of winning (which would be the winrate).
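The two statistics above can be computed directly from a score histogram; here is a minimal sketch, with the helper name winrate_and_average being my own choice, run on exactly the bimodal example from the paragraph above:

```python
from collections import Counter

def winrate_and_average(scores):
    """Compute winrate and average score from a list of playout end scores.

    Scores are from the player's perspective; s > 0 means a win.
    """
    hist = Counter(scores)                      # f(s): frequency of each score
    n = sum(hist.values())                      # total number of playouts
    winrate = sum(f for s, f in hist.items() if s > 0) / n
    average = sum(s * f for s, f in hist.items()) / n
    return winrate, average

# The bimodal example: 99% lose by 0.5 points, 1% win by 100 points.
wr, avg = winrate_and_average([-0.5] * 99 + [100] * 1)
print(wr)   # 0.01  -- only a 1% chance of winning
print(avg)  # 0.505 -- yet the average says "winning by half a point"
```

The two numbers point in opposite directions, which is exactly why the average score alone can mislead a move-selection policy.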
Dave de Vos
Original Message
From: alvaro.be...@gmail.com
Date: 23/02/2016 12:44
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Move evaluation by expected value, as product of 
expected winrate and expected points?
I have experimented with a CNN that predicts ownership, but I found it to be 
too weak to be useful. The main difference between what Google did and what I 
did is in the dataset used for training: I had tens of thousands of games (I 
did several different experiments) and I used all the positions from each game 
(which is known to be problematic); they used 30M positions from independent 
games. I expect you can learn a lot about ownership and expected number of 
points from a dataset like that. Unfortunately, generating such a dataset is 
infeasible with the resources most of us have.
Here's an idea: Google could make the dataset publicly available for download, 
ideally with the final configurations of the board as well. There is a 
tradition of making interesting datasets for machine learning available, so I 
have some hope this may happen.
The one experiment I would like to make along the lines of your post is to 
train a CNN to compute both the expected number of points and its standard 
deviation. If you assume the distribution of scores is well approximated by a 
normal distribution, maximizing winning probability can be achieved by 
maximizing (expected score) / (standard deviation of the score). I wonder if 
that results in stronger or more natural play than making a direct model for 
winning probability, because you get to learn more about each position.
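Under the normal-distribution assumption described above, the win probability is the standard normal CDF evaluated at (expected score) / (standard deviation), so maximizing that ratio maximizes the win probability. A minimal sketch (win_probability is my own helper name):

```python
import math

def win_probability(mu, sigma):
    """P(score > 0) under a Normal(mu, sigma) model of the final score.

    This equals Phi(mu / sigma), the standard normal CDF, so maximizing
    win probability is equivalent to maximizing the ratio mu / sigma.
    """
    return 0.5 * (1.0 + math.erf(mu / (sigma * math.sqrt(2.0))))

# Two hypothetical moves: a safe small lead vs. a risky big lead.
print(win_probability(0.5, 1.0))    # safe:  ~0.69
print(win_probability(15.0, 40.0))  # risky: ~0.65
```

Note that the safe move wins more often despite the much smaller expected margin, because its score variance is so much lower.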
Álvaro.
On Tue, Feb 23, 2016 at 5:36 AM, Michael Markefka wrote:
Hello everyone,
in the wake of AlphaGo using a DCNN to predict expected winrate of a
move, I've been wondering whether one could train a DCNN for expected
territory or points successfully enough to be of some use (leaving the
issue of win by resignation for a more in-depth discussion). And,
whether winrate and expected territory (or points) always run in
parallel or whether there are diverging moments.
Computer Go programs play what are considered slack or slow moves when
ahead, sometimes being too conservative and giving away too much of
their potential advantage. If expected points and expected winrate
diverge, this could be a way to make the programs play in a more
natural way, even if there were no strength increase to be gained.
Then again, there might be a parameter configuration that yields some
advantage, and perhaps this configuration would need to be dynamic,
favoring winrate more strongly as the game progresses.
As a general example for the idea, let's assume we have the following
potential moves generated by our program:
#1: Winrate 55%, +5 expected final points
#2: Winrate 53%, +15 expected final points
Is the move with higher winrate always better? Or would there be some
benefit to choosing #2? Would this differ depending on how far along
the game is?
If we knew the winrate prediction to be perfect, then going by that
alone would probably result in the best overall performance. But given
some uncertainty there, expected value could be interesting.
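One way to sketch the dynamic blending idea: score each candidate move by a weighted mix of winrate and (squashed) expected points, shifting the weight toward winrate as the game progresses. Everything here is an illustrative assumption, not a known-good formula, including the helper name, the sigmoid squashing, and the weight schedule:

```python
import math

def move_value(winrate, expected_points, progress, scale=20.0):
    """Blend winrate with expected final points.

    progress in [0, 1]: 0 = opening, 1 = endgame. The weight on winrate
    grows with progress, as suggested in the post; at progress = 1 the
    value reduces to pure winrate.
    """
    # Squash points onto (0, 1) so the two terms are comparable in magnitude.
    points_term = 1.0 / (1.0 + math.exp(-expected_points / scale))
    w = 0.5 + 0.5 * progress   # winrate weight: 0.5 early, 1.0 at the end
    return w * winrate + (1.0 - w) * points_term

# The two candidate moves from the example, early in the game (progress=0.2):
m1 = move_value(0.55, 5.0, 0.2)   # Winrate 55%, +5 expected final points
m2 = move_value(0.53, 15.0, 0.2)  # Winrate 53%, +15 expected final points
```

With this particular (assumed) weighting, m2 comes out ahead early in the game, while late in the game (progress near 1) the 55%-winrate move would be preferred, which matches the intuition of favoring winrate as the game progresses.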
Any takers for some experiments?
-Michael
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Building A Computer Go AI

2015-08-25 Thread dave.de...@planet.nl
Yes, it was in Andy Olsen's original response: https://github.com/pasky/michi
Dave
Original Message
From: gengyang...@gmail.com
Date: 24/08/2015 10:24
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Computer-go Digest, Vol 67, Issue 14
Hi,
Is there a download link for the Michi --- Minimalistic Go MCTS Engine? I would 
like to use it to learn how to build a Go engine ...
Gengyang
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go