If you are killed by an AI-driven car, the manufacturer will use the case
to improve the algorithm and make sure that this type of death never
happens again. Unfortunately, a death caused by a drunk driver doesn't
seem to teach anyone anything and will keep happening as long as people
need to drive and
It already happened
https://www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk
On 07/01/2017 at 22:34, Nick Wedd wrote:
> The first time someone's killed by an AI-controlled vehicle, you can
> be sure it'll be world news. That's how journalism works.
>
The whole point is that there is very little chance you are more likely
to be killed by an AI-driven car than by a human-driven one, since the
safety expectation set for AI driving is at least one order of magnitude
higher than for human driving before there is any hope that AI would be
authorized. (Actually the real expectation
Well, I don't know the likelihood of being hit by a drunk driver versus
an AI-driven car, but if it were the same I'd prefer the drunk drivers.
Drunk drivers you can understand: you can improve your chances by making
yourself more visible, not jumping out from behind obstacles, being
more careful
The first time someone's killed by an AI-controlled vehicle, you can be
sure it'll be world news. That's how journalism works.
Nick
On 7 January 2017 at 21:24, Xavier Combelle wrote:
>
> > ...this is a major objective. E.g., we do not want AI driven cars
> > working
Yes, standards are high for AI systems … but we digress
Cheers,
David G Doshay
ddos...@mac.com
> On 7 Jan 2017, at 1:24 PM, Xavier Combelle wrote:
>
>
>> ...this is a major objective. E.g., we do not want AI driven cars
>> working right most of the time but
> ...this is a major objective. E.g., we do not want AI driven cars
> working right most of the time but sometimes killing people because
> the AI faces situations (such as a local sand storm or a painting on
> the street with a fake landscape or fake human being) outside its
> current training
On 07.01.2017 16:33, Jim O'Flaherty wrote:
I hope you get access to AlphaGo ASAP.
More realistically, I (we) would need to translate the maths into an
algorithmic strategy, then have it executed by a program module
representing the human opponent. This is necessary because no human can remember
I love your dedication to the principles of logic. I'm looking forward to
hearing and seeing how your explorations in this area pan out. They will be
valuable to everyone interested in exploring AI weaknesses. I hope you get
access to AlphaGo ASAP.
On Jan 6, 2017 11:28 PM, "Robert Jasiek" wrote:
On 06.01.2017 23:37, Jim O'Flaherty wrote:
into a position with superko [...] how do you even get AlphaGo into the arcane
state in the first place,
I can't in practice.
I have not provided a way to beat AlphaGo from the game start at the
empty board.
All I have shown is that there are
Okay. So I will play along. How do you think you would coax AlphaGo into a
position with superko without AlphaGo having already simulated that pathway
as a less probable win space for itself when compared to other playing
trees which avoid it? IOW, how do you even get AlphaGo into the arcane
On 05.01.2017 17:32, Jim O'Flaherty wrote:
I don't follow.
1) "For each arcane position reached, there would now be ample data for
AlphaGo to train on that particular pathway." is false. See below.
2) "two strategies. The first would be to avoid the state in the first
place." Does AlphaGo
For each arcane position reached, there would now be ample data for AlphaGo
to train on that particular pathway. And two strategies would emerge.
The first would be to avoid the state in the first place. And the second
would be to optimize play in that particular state. So, the human advantage
On 05/01/2017 at 07:37, Robert Jasiek wrote:
> On 04.01.2017 22:08, "Ingo Althöfer" wrote:
>> humanity's last hope
>
> The "last hope" are theoreticians creating arcane positions far
> outside the NN of AlphaGo so that its deep reading would be
> insufficient compensation! Another chance is
On 04.01.2017 22:08, "Ingo Althöfer" wrote:
humanity's last hope
The "last hope" are theoreticians creating arcane positions far outside
the NN of AlphaGo so that its deep reading would be insufficient
compensation! Another chance is long-term, subtle creation and use of aji.
--
robert
I think it was mentioned that Master was playing all of its moves in almost
exactly 5-second increments, even trivial forcing move responses, which was
one of the things that led people to believe it was an AI. If that's true,
AlphaGo was basically playing under a time-handicap.
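For what it's worth, that kind of uniform per-move timing is easy to flag statistically. A minimal sketch (the move times below are made up, and the tolerance is an arbitrary choice, not anything DeepMind or the server used):

```python
import statistics

def looks_machine_timed(move_times, tol=0.25):
    """Return True if per-move times are suspiciously uniform.

    move_times: seconds spent on each move (hypothetical data).
    A human's thinking times vary widely; a fixed per-move clock
    produces a tiny spread around the mean.
    """
    if len(move_times) < 2:
        return False
    spread = statistics.pstdev(move_times)  # population std deviation
    return spread < tol

# A bot on a fixed ~5 s clock versus a human thinking variably:
bot = [5.0, 5.1, 4.9, 5.0, 5.0, 5.1]
human = [2.0, 31.0, 4.5, 120.0, 8.0, 1.0]
print(looks_machine_timed(bot))    # True
print(looks_machine_timed(human))  # False
```

Even trivial forcing-move responses taking the same ~5 seconds is exactly the pattern this catches: near-zero variance regardless of position complexity.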
On Wed, Jan 4,