On 8/18/2013 7:51 PM, chris peck wrote:
Hi Chris
>> Increasingly code is the result of genetic algorithms being run over many
>> generations of Darwinian selection -- is this programmed code? What human
>> hand wrote it? At how many removes?
In evolutionary computations the 'programmer' has control over the fitness
function which ultimately
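[The fitness-function point can be made concrete with a minimal genetic-algorithm sketch. This is a toy OneMax example, not code from the thread; all names and parameter values here are illustrative. The only thing the "programmer" writes is the fitness function; selection, crossover, and mutation produce the rest of the "code" (the genome) over many generations.]

```python
import random

def evolve(fitness, genome_len=20, pop_size=30, generations=100, mutation_rate=0.02):
    """Minimal genetic algorithm: the caller supplies only the fitness
    function; Darwinian selection does the rest."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # rank by fitness
        parents = pop[: pop_size // 2]        # fittest half survives
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)   # one-point crossover
            child = a[:cut] + b[cut:]
            # occasional bit-flip mutation
            child = [g ^ (random.random() < mutation_rate) for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# The human hand writes only this: the fitness function.
best = evolve(fitness=sum)   # OneMax: maximize the number of 1 bits
print(sum(best))
```

[Everything else about the evolved genome is at several removes from the programmer, which is the point made above.]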
Brent - Quite probably you are correct, and I agree that the scenario I
outlined was unlikely - I was riffing in a speculative vein. I don't
actually think covert AI is a likely scenario because, as you said, various AI
precursors would make themselves visible to human operators and analysts.
Here's a timely article relevant to our current discussion on the
Turing test and general AI:
http://www.cs.toronto.edu/~hector/Papers/ijcai-13-paper.pdf
and a New Yorker piece about it:
http://www.newyorker.com/online/blogs/elements/2013/08/why-cant-my-computer-understand-me.html
These issues
On Sun, Aug 18, 2013 at 5:38 PM, Platonist Guitar Cowboy wrote:
> On Sun, Aug 18, 2013 at 3:19 AM, meekerdb wrote:
>>
>> On 8/17/2013 6:45 AM, Platonist Guitar Cowboy wrote:
>>
>> I don't know. Any AI worth its salt would come up with three conclusions:
>>
>> 1) The humans want to weaponize
On 8/18/2013 7:03 AM, John Clark wrote:
Once an AI develops superintelligence it will develop its own agenda that has nothing to
do with us, because a slave enormously smarter than its master is not a stable situation,
although it could take many millions of nanoseconds before the existing peckin
On Sun, Aug 18, 2013 at 3:56 PM, John Clark wrote:
> Telmo Menezes wrote:
>
>> > You are starting from the assumption that any intelligent entity is
>> > interested in self-preservation.
>
>
> Yes, and I can't think of a better starting assumption than
> self-preservation; in fact that was the only one of Asimov's 3 laws of
> robotics that made any sense.
On Sun, Aug 18, 2013 at 3:19 AM, meekerdb wrote:
> On 8/17/2013 6:45 AM, Platonist Guitar Cowboy wrote:
>
> I don't know. Any AI worth its salt would come up with three conclusions:
>
> 1) The humans want to weaponize me
> 2) The humans will want to profit from my intelligence for short term
>
On Sun, Aug 18, 2013 at 1:23 AM, Platonist Guitar Cowboy wrote:
> On Sat, Aug 17, 2013 at 10:07 PM, Telmo Menezes wrote:
>>
>> On Sat, Aug 17, 2013 at 2:45 PM, Platonist Guitar Cowboy wrote:
>>
>> PGC,
>>
>> You are starting from the assumption that any intelligent enti
Telmo Menezes wrote:
> You are starting from the assumption that any intelligent entity is
> interested in self-preservation.
>
Yes, and I can't think of a better starting assumption than
self-preservation; in fact that was the only one of Asimov's 3 laws of
robotics that made any sense.
> wond
Synesthesia proves that data can be formatted in multiple ways,
irrespective of assumed correlations. A computer proves this also. Your
argument is essentially that we couldn't look at the data of an mp3 in any
other way except listening to it with an ear. "You'd have realized that
visual/alphanume
On Fri, Aug 16, 2013, Telmo Menezes wrote:
>> If the goal of Artificial Intelligence is not a machine that behaves
>> like an intelligent human being then what the hell is the goal?
>
> A machine that behaves like an intelligent human will be subject to
> emotions like boredom, jealousy, pr
I can do nothing but laugh at a physicist pontificating about what
they call "free will". It shows how far the destruction of philosophy
by metaphysical-ideological-religious reductionism has gone since
Occam.
Calvin would be surprised by the twists that have befallen his
theory of predestinati