Re: [Computer-go] Our Silicon Overlord

2017-01-07 Thread Álvaro Begué
If you are killed by an AI-driven car, the manufacturer will use the case
to improve the algorithm and make sure that this type of death never
happens again. Unfortunately a death by a drunk driver doesn't seem to
teach anyone anything and will keep happening as long as people need to
drive and alcoholism exists.



On Sat, Jan 7, 2017 at 10:35 PM, Gonçalo Mendes Ferreira wrote:

> Well, I don't know the likelihood of being hit by drunk drivers or by
> AI-driven cars, but if it were the same I'd prefer the drunk drivers.
> Drunk drivers you can understand: you can improve your chances by making
> yourself more visible, by not jumping out from behind obstacles, and by
> being more careful about crossing before they actually stop. A failure
> in an AI car seems much more unpredictable.
>
> Gonçalo
>
> On 07/01/2017 21:24, Xavier Combelle wrote:
> >
> >> ...this is a major objective. E.g., we do not want AI driven cars
> >> working right most of the time but sometimes killing people because
> >> the AI faces situations (such as a local sand storm or a painting on
> >> the street with a fake landscape or fake human being) outside its
> >> current training and reading.
> > Currently I don't want to be killed by a drunk driver, and in my
> > opinion that is far more likely to happen than an AI killing me because
> > of a programming mistake. (I know this is not the view of most people,
> > who want a perfect AI with zero deaths rather than an AI that would
> > reduce road deaths by a factor of 100.)
> > ___
> > Computer-go mailing list
> > Computer-go@computer-go.org
> > http://computer-go.org/mailman/listinfo/computer-go
> >

Re: [Computer-go] Our Silicon Overlord

2017-01-07 Thread Xavier Combelle
It already happened
https://www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk


On 07/01/2017 at 22:34, Nick Wedd wrote:
> The first time someone's killed by an AI-controlled vehicle, you can
> be sure it'll be world news. That's how journalism works.
>
> Nick
>
> On 7 January 2017 at 21:24, Xavier Combelle wrote:
>
>
> > ...this is a major objective. E.g., we do not want AI driven cars
> > working right most of the time but sometimes killing people because
> > the AI faces situations (such as a local sand storm or a painting on
> > the street with a fake landscape or fake human being) outside its
> > current training and reading.
> Currently I don't want to be killed by a drunk driver, and in my
> opinion that is far more likely to happen than an AI killing me because
> of a programming mistake. (I know this is not the view of most people,
> who want a perfect AI with zero deaths rather than an AI that would
> reduce road deaths by a factor of 100.)
> -- 
> Nick Wedd  mapr...@gmail.com 
>
>

Re: [Computer-go] Our Silicon Overlord

2017-01-07 Thread Xavier Combelle
The whole point is that there is very little chance you would be more
likely to be killed by an AI-driven car than by a human-driven one, since
the safety expectations set for AI drivers are at least one order of
magnitude higher than for humans before there is any hope that AI will be
authorized. (Actually, the real expectation is that AI should be
responsible for zero deaths.)

On 07/01/2017 at 22:35, Gonçalo Mendes Ferreira wrote:
> Well, I don't know the likelihood of being hit by drunk drivers or by
> AI-driven cars, but if it were the same I'd prefer the drunk drivers.
> Drunk drivers you can understand: you can improve your chances by making
> yourself more visible, by not jumping out from behind obstacles, and by
> being more careful about crossing before they actually stop. A failure
> in an AI car seems much more unpredictable.
>
> Gonçalo
>
> On 07/01/2017 21:24, Xavier Combelle wrote:
>>> ...this is a major objective. E.g., we do not want AI driven cars
>>> working right most of the time but sometimes killing people because
>>> the AI faces situations (such as a local sand storm or a painting on
>>> the street with a fake landscape or fake human being) outside its
>>> current training and reading. 
>> Currently I don't want to be killed by a drunk driver, and in my
>> opinion that is far more likely to happen than an AI killing me because
>> of a programming mistake. (I know this is not the view of most people,
>> who want a perfect AI with zero deaths rather than an AI that would
>> reduce road deaths by a factor of 100.)

Re: [Computer-go] Our Silicon Overlord

2017-01-07 Thread Gonçalo Mendes Ferreira
Well, I don't know the likelihood of being hit by drunk drivers or by
AI-driven cars, but if it were the same I'd prefer the drunk drivers.
Drunk drivers you can understand: you can improve your chances by making
yourself more visible, by not jumping out from behind obstacles, and by
being more careful about crossing before they actually stop. A failure
in an AI car seems much more unpredictable.

Gonçalo

On 07/01/2017 21:24, Xavier Combelle wrote:
> 
>> ...this is a major objective. E.g., we do not want AI driven cars
>> working right most of the time but sometimes killing people because
>> the AI faces situations (such as a local sand storm or a painting on
>> the street with a fake landscape or fake human being) outside its
>> current training and reading. 
> Currently I don't want to be killed by a drunk driver, and in my
> opinion that is far more likely to happen than an AI killing me because
> of a programming mistake. (I know this is not the view of most people,
> who want a perfect AI with zero deaths rather than an AI that would
> reduce road deaths by a factor of 100.)

Re: [Computer-go] Our Silicon Overlord

2017-01-07 Thread Nick Wedd
The first time someone's killed by an AI-controlled vehicle, you can be
sure it'll be world news. That's how journalism works.

Nick

On 7 January 2017 at 21:24, Xavier Combelle wrote:

>
> > ...this is a major objective. E.g., we do not want AI driven cars
> > working right most of the time but sometimes killing people because
> > the AI faces situations (such as a local sand storm or a painting on
> > the street with a fake landscape or fake human being) outside its
> > current training and reading.
> Currently I don't want to be killed by a drunk driver, and in my
> opinion that is far more likely to happen than an AI killing me because
> of a programming mistake. (I know this is not the view of most people,
> who want a perfect AI with zero deaths rather than an AI that would
> reduce road deaths by a factor of 100.)



-- 
Nick Wedd  mapr...@gmail.com

Re: [Computer-go] Our Silicon Overlord

2017-01-07 Thread David Doshay
Yes, standards are high for AI systems … but we digress

Cheers,
David G Doshay

ddos...@mac.com





> On 7 Jan 2017, at 1:24 PM, Xavier Combelle wrote:
> 
> 
>> ...this is a major objective. E.g., we do not want AI driven cars
>> working right most of the time but sometimes killing people because
>> the AI faces situations (such as a local sand storm or a painting on
>> the street with a fake landscape or fake human being) outside its
>> current training and reading. 
> Currently I don't want to be killed by a drunk driver, and in my
> opinion that is far more likely to happen than an AI killing me because
> of a programming mistake. (I know this is not the view of most people,
> who want a perfect AI with zero deaths rather than an AI that would
> reduce road deaths by a factor of 100.)

Re: [Computer-go] Our Silicon Overlord

2017-01-07 Thread Xavier Combelle

> ...this is a major objective. E.g., we do not want AI driven cars
> working right most of the time but sometimes killing people because
> the AI faces situations (such as a local sand storm or a painting on
> the street with a fake landscape or fake human being) outside its
> current training and reading. 
Currently I don't want to be killed by a drunk driver, and in my opinion
that is far more likely to happen than an AI killing me because of a
programming mistake. (I know this is not the view of most people, who
want a perfect AI with zero deaths rather than an AI that would reduce
road deaths by a factor of 100.)

Re: [Computer-go] Our Silicon Overlord

2017-01-07 Thread Robert Jasiek

On 07.01.2017 16:33, Jim O'Flaherty wrote:

I hope you get access to AlphaGo ASAP.


More realistically, I (we) would need to translate the maths into an
algorithmic strategy, then have it executed by a program module
representing the human opponent. This is necessary because no human can
remember everything needed to create a legal superko sequence of over
13,500,000 moves, or has the mere stamina to perform it. (Just counting
to 1 million is said to take 3 weeks without sleep...)
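(For what it's worth, that counting figure holds up as back-of-the-envelope
arithmetic. A quick sketch; the 1.8-second average per spoken number is my
assumption, not a measured figure:

```python
# Back-of-the-envelope check of "counting to 1 million takes ~3 weeks".
# AVG_SECONDS_PER_NUMBER is an assumed average: small numbers take well
# under a second to say, six-digit numbers take several.
AVG_SECONDS_PER_NUMBER = 1.8

total_seconds = 1_000_000 * AVG_SECONDS_PER_NUMBER
weeks = total_seconds / (7 * 24 * 3600)  # seconds in a week
print(f"about {weeks:.1f} weeks without sleep")
```

Under that assumption the total comes out just under three weeks,
consistent with the figure quoted above.)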


Anyway,...

> exploring AI weaknesses

...this is a major objective. E.g., we do not want AI-driven cars 
working right most of the time but sometimes killing people because the 
AI faces situations (such as a local sandstorm, or a painting on the 
street with a fake landscape or fake human being) outside its current 
training and reading.


--
robert jasiek

Re: [Computer-go] Our Silicon Overlord

2017-01-07 Thread Jim O'Flaherty
I love your dedication to the principles of logic. I'm looking forward to
hearing and seeing how your explorations in this area pan out. They will be
valuable to everyone interested in exploring AI weaknesses. I hope you get
access to AlphaGo ASAP.

On Jan 6, 2017 11:28 PM, "Robert Jasiek"  wrote:

> On 06.01.2017 23:37, Jim O'Flaherty wrote:
>
>> into a position with superko [...] how do you even get AlphaGo into the
>> arcane state in the first place,
>>
>
> I can't in practice.
>
> I have not provided a way to beat AlphaGo from the game start at the empty
> board.
>
> All I have shown is that there are positions beyond AlphaGo's capabilities,
> which refutes your claim that AlphaGo would handle all positions well.
>
> Hui and Lee constructed positions with such aspects: Hui with long-term
> aji, Lee with complex reduction aji. Some versions of AlphaGo mishandled
> the situations locally or locally + globally.
>
> The professional players will be
>> open to all sorts of creative ideas on how to find weaknesses with
>> AlphaGo.
>>
>
> Or the amateur players or theoreticians.
>
> Perhaps you can persuade one of the 9p-s to explore your idea
>> of pushing the AlphaGo AI in this direction.
>>
>
> Rather I'd need playing time against AlphaGo.
>
> IOW, we are now well outside of provable spaces
>>
>
> For certain given positions, proofs of difficulty exist. Since Go is a
> complete-information game, there can never be a proof that AlphaGo could
> never do it. There can only ever be proofs of difficulty.
>
> mathematical proof around a full game
>>
>
> From the empty board? Of course not (today).
>
> We cannot formally prove much simpler models,
>>
>
> Formal proofs for certain types of positions (such as with round_up(n/2)
> n-tuple kos) exist.
>
> --
> robert jasiek

Re: [Computer-go] Our Silicon Overlord

2017-01-06 Thread Robert Jasiek

On 06.01.2017 23:37, Jim O'Flaherty wrote:

into a position with superko [...] how do you even get AlphaGo into the arcane
state in the first place,


I can't in practice.

I have not provided a way to beat AlphaGo from the game start at the 
empty board.


All I have shown is that there are positions beyond AlphaGo's 
capabilities, which refutes your claim that AlphaGo would handle all 
positions well.


Hui and Lee constructed positions with such aspects: Hui with long-term 
aji, Lee with complex reduction aji. Some versions of AlphaGo mishandled 
the situations locally or locally + globally.



The professional players will be
open to all sorts of creative ideas on how to find weaknesses with AlphaGo.


Or the amateur players or theoreticians.


Perhaps you can persuade one of the 9p-s to explore your idea
of pushing the AlphaGo AI in this direction.


Rather I'd need playing time against AlphaGo.


IOW, we are now well outside of provable spaces


For certain given positions, proofs of difficulty exist. Since Go is a 
complete-information game, there can never be a proof that AlphaGo could 
never do it. There can only ever be proofs of difficulty.



mathematical proof around a full game


From the empty board? Of course not (today).


We cannot formally prove much simpler models,


Formal proofs for certain types of positions (such as with round_up(n/2) 
n-tuple kos) exist.


--
robert jasiek

Re: [Computer-go] Our Silicon Overlord

2017-01-06 Thread Jim O'Flaherty
Okay, so I will play along. How do you think you would coax AlphaGo into a
position with superko without AlphaGo having already simulated that pathway
as a less probable win space for itself, compared to other playing trees
which avoid it? IOW, how do you even get AlphaGo into the arcane state in
the first place, especially since uncertainty of outcome is weighted
against wins for itself?

And since I know you cannot definitively answer that, it looks like we'll
just have to wait and see what happens. The professional players will be
open to all sorts of creative ideas on how to find weaknesses in AlphaGo.
But first they need free rein to play as many games as they like against
it, so they can begin to get a feel for strategies that expose probable
weaknesses (we won't know with certainty, as it appears AlphaGo is now
generating its own theories, and a situation rated a weakness by a human
can turn out to be incorrect, with AlphaGo leveraging it to its
advantage). Perhaps you can persuade one of the 9p's to explore your idea
of pushing the AlphaGo AI in this direction.

IOW, we are now well outside of provable spaces and into probabilistic
spaces. At the scales we are discussing, it is improbable we will ever see
anything approaching a mathematical proof around a full game of Go between
two experts, even if those experts are two competing AIs. We cannot
formally prove much simpler models, much less ones with the complexity of
a game of Go.


On Fri, Jan 6, 2017 at 12:55 AM, Robert Jasiek  wrote:

> On 05.01.2017 17:32, Jim O'Flaherty wrote:
>
>> I don't follow.
>>
>
> 1) "For each arcane position reached, there would now be ample data for
> AlphaGo to train on that particular pathway." is false. See below.
>
> 2) "two strategies. The first would be to avoid the state in the first
> place." Does AlphaGo have any strategy ever? If it does, does it have
> strategies of avoiding certain types of positions?
>
> 3) "the second would be to optimize play in that particular state." Presumably
> by "optimise play" you mean "maximise winning probability".
>
> But... optimising this is hard when (under positional superko) optimal
> play can be ca. 13,500,000 moves long and the tree to that is huge. Even
> TPU sampling can be lost then.
>
> Afterwards, there is still only one position from which to train. For NN
> learning, one position is not enough and cannot replace analysis by
> mathematical proofs, as long as the NN does not emulate mathematical proving.
>
>
> --
> robert jasiek

Re: [Computer-go] Our Silicon Overlord

2017-01-05 Thread Robert Jasiek

On 05.01.2017 17:32, Jim O'Flaherty wrote:

I don't follow.


1) "For each arcane position reached, there would now be ample data for 
AlphaGo to train on that particular pathway." is false. See below.


2) "two strategies. The first would be to avoid the state in the first 
place." Does AlphaGo have any strategy ever? If it does, does it have 
strategies of avoiding certain types of positions?


3) "the second would be to optimize play in that particular state." 
Presumably by "optimise play" you mean "maximise winning probability".


But... optimising this is hard when (under positional superko) optimal 
play can be ca. 13,500,000 moves long and the tree to that is huge. Even 
TPU sampling can be lost then.


Afterwards, there is still only one position from which to train. For NN 
learning, one position is not enough and cannot replace analysis by 
mathematical proofs, as long as the NN does not emulate mathematical proving.


--
robert jasiek

Re: [Computer-go] Our Silicon Overlord

2017-01-05 Thread Jim O'Flaherty
For each arcane position reached, there would now be ample data for AlphaGo
to train on that particular pathway, and two strategies would emerge. The
first would be to avoid the state in the first place; the second would be
to optimize play in that particular state. So the human advantage would be
very short-lived.

On Thu, Jan 5, 2017 at 12:37 AM, Robert Jasiek  wrote:

> On 04.01.2017 22:08, "Ingo Althöfer" wrote:
>
>> humanity's last hope
>>
>
> The "last hope" are theoreticians creating arcane positions far outside
> the NN of AlphaGo so that its deep reading would be insufficient
> compensation! Another chance is long-term, subtle creation and use of aji.
>
> --
> robert jasiek
>

Re: [Computer-go] Our Silicon Overlord

2017-01-04 Thread Xavier Combelle


On 05/01/2017 at 07:37, Robert Jasiek wrote:
> On 04.01.2017 22:08, "Ingo Althöfer" wrote:
>> humanity's last hope
>
> The "last hope" are theoreticians creating arcane positions far
> outside the NN of AlphaGo so that its deep reading would be
> insufficient compensation! Another chance is long-term, subtle
> creation and use of aji.
>
The problem is that you have to find a way to constrain AlphaGo into
reaching the position you have prepared, and that will be very hard,
because it chooses half of the moves leading to that position.

From a computer science point of view, theoretically, since finding the
best move in an arbitrary position is a PSPACE-hard problem, any problem
no harder than PSPACE can be translated into a Go problem. So there is a
huge number of difficult problems (read: impossible to solve except at
toy sizes) that could be set up as target Go positions, but the real
problem is that you have to reach such a position, which is very unlikely
to happen in a real game.

An easy way to win against a bot of AlphaGo's strength would be to make
two deterministic versions of it, have them play against each other, and
replay the moves of the winning side.
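That last trick can be sketched concretely. The toy below illustrates the
principle only: ToyEngine, game_over, and winner_parity are invented
stand-ins (the "game" is not Go, and no real AlphaGo interface looks like
this); the point is just that full determinism lets you precompute the
winner's moves and replay them.

```python
class ToyEngine:
    """A deterministic engine: its move depends only on the game history."""
    def genmove(self, history):
        # Arbitrary but reproducible rule, standing in for a real search.
        return (len(history) * 7 + 3) % 10

def game_over(history):
    # Toy game: exactly six moves are played.
    return len(history) >= 6

def winner_parity(history):
    # Toy scoring: the side whose moves sum higher wins
    # (0 = first player, 1 = second player).
    return 0 if sum(history[0::2]) >= sum(history[1::2]) else 1

def winning_line(engine_factory):
    # Step 1: let two identical deterministic copies play each other.
    a, b = engine_factory(), engine_factory()
    history = []
    while not game_over(history):
        mover = a if len(history) % 2 == 0 else b
        history.append(mover.genmove(history))
    # Step 2: keep only the winning side's moves; replaying them against
    # the same deterministic engine reproduces this exact won game.
    p = winner_parity(history)
    return [m for i, m in enumerate(history) if i % 2 == p]
```

Because the engine is deterministic, winning_line returns the same move
list every time, so replaying it against the live engine reproduces the
won game. Any randomness in the real engine breaks the exploit, which is
presumably one reason it stays theoretical.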

--
Xavier


Re: [Computer-go] Our Silicon Overlord

2017-01-04 Thread Robert Jasiek

On 04.01.2017 22:08, "Ingo Althöfer" wrote:

humanity's last hope


The "last hope" are theoreticians creating arcane positions far outside 
the NN of AlphaGo so that its deep reading would be insufficient 
compensation! Another chance is long-term, subtle creation and use of aji.


--
robert jasiek

Re: [Computer-go] Our Silicon Overlord

2017-01-04 Thread Adrian Petrescu
I think it was mentioned that Master was playing all of its moves in almost
exactly 5-second increments, even for trivial forcing-move responses, which
was one of the things that led people to believe it was an AI. If that's
true, AlphaGo was basically playing under a time handicap.

On Wed, Jan 4, 2017 at 4:08 PM, "Ingo Althöfer" <3-hirn-ver...@gmx.de>
wrote:

> Hi Richard,
>
> > ... can somebody please update me (us?) a
> > little on these 60 games. How strong were
> > the opponents?
>
> they were all strong pro players. Ke Jie (humanity's last hope
> in the eyes of several go players) also played three of the
> games.
>
> Thinking times were small base times plus 3 byoyomi periods of
> 30 seconds each.
>
> 
> I remember the nice A Capella Song on Alpha Go from
> March 2016 (only the first 30 seconds are relevant):
> https://www.youtube.com/watch?v=dh_mfGo183Y
>
> From the text:
> >> AlphaGo! AlphaGo! ...
> >> Ruler of the board ...
> >> We welcome our silicon overlord.
>
> Originally it was "overlords", but at the moment we have only one.
>
> Ingo.
>

[Computer-go] Our Silicon Overlord

2017-01-04 Thread Ingo Althöfer
Hi Richard,
 
> ... can somebody please update me (us?) a 
> little on these 60 games. How strong were 
> the opponents? 

they were all strong pro players. Ke Jie (humanity's last hope
in the eyes of several go players) also played three of the 
games. 

Thinking times were small base times plus 3 byoyomi periods of 
30 seconds each.


I remember the nice A Capella Song on Alpha Go from
March 2016 (only the first 30 seconds are relevant):
https://www.youtube.com/watch?v=dh_mfGo183Y

From the text:
>> AlphaGo! AlphaGo! ...
>> Ruler of the board ...
>> We welcome our silicon overlord.

Originally it was "overlords", but at the moment we have only one.

Ingo.
