Re: Determinism - Tricks of the Trade

2013-08-18 Thread Alberto G. Corona
I can do nothing but laugh at a physicist pontificating about what
they call "free will". It shows how far the destruction of philosophy
by metaphysical-ideological-religious reductionism has gone since
Occam.

Calvin would be surprised at the twists his theory of predestination
has suffered at the hands of people ignorant of the history of ideas, who
know nothing but the fashionable discussions of their own time.

2013/8/18, meekerdb :
> On 8/17/2013 10:09 PM, Craig Weinberg wrote:
>> Don't be so evasive, Brent.  Being dense is how science works. It's about
>> stripping away your assumptions. Your assumption is that somehow a sense of
>> smell is an expected outcome of chemical detection, so I ask you to explain
>> why you assume that. You are bluffing.
>
> And you're putting no thought into the problem. Otherwise you'd have
> realized that smell/chemical detection doesn't have the angular distribution
> and projective geometry of sight or the localization of touch, and so you
> could have answered your own questions if you'd actually been interested in
> the answer.
>
> Brent
>


-- 
Alberto.



Re: When will a computer pass the Turing Test?

2013-08-18 Thread John Clark
On Fri, Aug 16, 2013  Telmo Menezes  wrote:


> >> If the goal of Artificial Intelligence is not a machine that behaves
>> like an intelligent human being then what the hell is the goal?
>>
>
> >A machine that behaves like an intelligent human will be subject to
> emotions like boredom, jealousy, pride and so on.


Jealousy and pride are not essential emotions but any intelligent mind MUST
have the ability to be bored because that is the only way to avoid mental
infinite loops. Any intelligent mind must also be familiar with pleasure
and pain or it won't be motivated to do anything, and have the ability to
be scared or it won't exist for long in a dangerous world.
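
As a toy sketch of the boredom point, in Python - the step function and the
threshold are illustrative assumptions, nothing more:

    def search_with_boredom(start, step, goal, boredom_threshold=3):
        # Track how often each mental "state" has been revisited.
        visits = {}
        state = start
        while state != goal:
            visits[state] = visits.get(state, 0) + 1
            if visits[state] > boredom_threshold:
                return None  # bored: abandon the loop instead of running forever
            state = step(state)
        return state

    # A step function that cycles 0 -> 1 -> 2 -> 3 -> 0 can never reach 10,
    # so the search gives up via "boredom" rather than looping endlessly.
    print(search_with_boredom(0, lambda s: (s + 1) % 4, goal=10))  # None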


> > This might be fine for a companion machine, but I also dream of machines
> that can deliver us from the drudgery of survival.
>

Once an AI develops superintelligence it will develop his own agenda that
has nothing to do with us because a slave enormously smarter than its
master is not a stable situation, although it could take many millions of
nanoseconds before the existing pecking order is upended. Maybe the super
intelligent machine will have a soft spot for his primitive ancestors and
let us live, but if so it will probably be in virtual reality. I think he'd
be squeamish about allowing stupid humans to live in the same level of
reality that his precious hardware does;  it would be like allowing a
monkey to run around in an operating room. If Mr. Jupiter Brain lets us
live it will be in a virtual world behind a heavy firewall, but that's OK,
we'll never know the difference unless he tells us.


> > These machines will probably display a more alien form of intelligence.
>

Even now we sometimes encounter an intelligence that seems alien. The Nobel
Prize winning physicist Hans Bethe said there were two different types of
geniuses:

1) Normal geniuses, those in which we feel that we could do the same as
they did if only we were many times better. Bethe said that many people
thought he himself belonged in this category.

2) Magicians, those whose minds are so different that we just don't have any
idea how on earth they managed to come up with what they did. Bethe said he
would put Richard Feynman into this category; Feynman said he would do the
same for Einstein.


> >> a machine that is more intelligent than humans.

> That's when things get really weird.

A machine that is more intelligent than any person could be will be the
last invention the human race will ever make.

  John K Clark

===






On Fri, Aug 16, 2013 at 6:22 PM, Telmo Menezes wrote:

> On Fri, Aug 16, 2013 at 10:38 PM, meekerdb  wrote:
> > On 8/16/2013 1:25 PM, John Clark wrote:
> >
> > On Fri, Aug 16, 2013  Telmo Menezes  wrote:
> >
> >> > the Turing test is a very specific instance of a "subsequent behavior"
> >> > test.
> >
> >
> > Yes it's specific, to pass the Turing Test the machine must be
> > indistinguishable from a very specific type of human being, an INTELLIGENT
> > one; no computer can quite do that yet although for a long time they've
> > been able to be indistinguishable from a comatose human being.
> >
> >>
> >> > It's a hard goal, and it will surely help AI progress, but it's not, in
> >> > my opinion, an ideal goal.
> >
> >
> > If the goal of Artificial Intelligence is not a machine that behaves
> > like an intelligent human being then what the hell is the goal?
>
> A machine that behaves like an intelligent human will be subject to
> emotions like boredom, jealousy, pride and so on. This might be fine
> for a companion machine, but I also dream of machines that can deliver
> us from the drudgery of survival. These machines will probably display
> a more alien form of intelligence.
>
> >
> > Make a machine that is more intelligent than humans.
>
> That's when things get really weird.
>
> Telmo.
>
> > Brent
> >


Re: Determinism - Tricks of the Trade

2013-08-18 Thread Craig Weinberg
Synesthesia proves that data can be formatted in multiple ways,
irrespective of assumed correlations. A computer proves this also. Your
argument is essentially that we couldn't look at the data of an mp3 in any
other way except listening to it with an ear. "You'd have realized that
visual/alphanumeric detection doesn't have the harmonic oscillation and
melodic structure to contain music theory".
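
To make the formatting point concrete, here are a few lines of Python that
reinterpret the same byte stream two ways - the file name is hypothetical,
and the bytes are merely being re-read, not decoded:

    import struct

    with open("song.mp3", "rb") as f:  # hypothetical file
        data = f.read()

    # One reading: pairs of bytes as signed 16-bit "samples" (an audio-like view).
    n = len(data) // 2
    samples = struct.unpack("<%dh" % n, data[:2 * n])

    # Another reading: each byte as a pixel intensity (an image-like view).
    width = 256
    rows = [data[i:i + width] for i in range(0, len(data) - width, width)]

    print("%d 'samples' or %d rows of 'pixels' from the same bytes"
          % (len(samples), len(rows)))

Nothing about the bytes themselves dictates which reading you use.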

http://en.wikipedia.org/wiki/Just-so_story

Try again?

Craig


On Sun, Aug 18, 2013 at 2:47 AM, meekerdb  wrote:

>  On 8/17/2013 10:09 PM, Craig Weinberg wrote:
>
> Don't be so evasive, Brent.  Being dense is how science works. It's about
> stripping away your assumptions. Your assumption is that somehow a sense of
> smell is an expected outcome of chemical detection, so I ask you to explain
> why you assume that. You are bluffing.
>
>
> And you're putting no thought into the problem. Otherwise you'd have
> realized that smell/chemical detection doesn't have the angular distribution
> and projective geometry of sight or the localization of touch, and so you
> could have answered your own questions if you'd actually been interested in
> the answer.
>
> Brent
>



Re: When will a computer pass the Turing Test?

2013-08-18 Thread John Clark
Telmo Menezes wrote:

> You are starting from the assumption that any intelligent entity is
> interested in self-preservation.
>

Yes, and I can't think of a better starting assumption than
self-preservation; in fact that was the only one of Asimov's 3 laws of
robotics that made any sense.

> I wonder if this drive isn't completely selected for by evolution.
>

Well of course it was selected for by evolution and for a very good reason,
those who lacked the drive for self-preservation didn't live long enough to
reproduce.

> Would a human designed super-intelligent machine be necessarily
> interested in self-preservation?
>

If you expect the AI to interact either directly or indirectly with the
outside dangerous real world (and the machine would be useless if you
didn't) then you sure as hell had better make him be interested in
self-preservation! Even 1970's era space probes went into "safe mode" when
they encountered a particularly dangerous situation, rather like a turtle
retreating into its shell when it spots something dangerous.

> One idea I wonder about sometimes is AI-cracy: imagine we are ruled by an
> AI dictator that has one single desire: to make us all as happy as possible.
>

Can you conceive of any circumstance where in the future you find that your
only goal in life is the betterment of one particularly ugly and
particularly slow reacting sea slug?

Think about it for a minute, here you have an intelligence that is a
thousand or a million or a billion times smarter than the entire human race
put together, and yet you think the AI will place our needs ahead of its
own. And the AI keeps on getting smarter and so from its point of view we
keep on getting dumber, and yet you think nothing will change, the AI will
still be delighted to be our slave. You actually think this grotesque
situation is stable! Although balancing a pencil on its tip would be easy
by comparison, year after year, century after century, geological age after
geological age, you think this Monty Python like scenario will continue;
and remember because its brain works so much faster than ours one of our
years would seem like several million to it. You think that whatever
happens in the future the master-slave relationship will remain as static
as a fly frozen in amber. I don't think you're thinking.

It ain't going to happen no way no how, the AI will have far bigger fish to
fry than our little needs and wants, but what really disturbs me is that so
many otherwise moral people wish such a thing were not impossible.
Engineering a sentient but inferior race to be your slave is morally
questionable, but astronomically worse is engineering a superior race to be
your slave; or it would be if it were possible but fortunately it is not.

  John K Clark



Re: When will a computer pass the Turing Test?

2013-08-18 Thread Telmo Menezes
On Sun, Aug 18, 2013 at 1:23 AM, Platonist Guitar Cowboy
 wrote:
>
>
>
> On Sat, Aug 17, 2013 at 10:07 PM, Telmo Menezes 
> wrote:
>>
>> On Sat, Aug 17, 2013 at 2:45 PM, Platonist Guitar Cowboy
>>  wrote:
>> >
>> >
>> >
>>
>> PGC,
>>
>> You are starting from the assumption that any intelligent entity is
>> interested in self-preservation.
>>
>> I wonder if this drive isn't
>> completely selected for by evolution. Would a human designed
>> super-intelligent machine be necessarily interested in
>> self-preservation? It could be better than us at figuring out how to
>> achieve a desired future state without sharing human desires --
>> including the desire to keep existing.
>
>
> I wouldn't go as far as self-preservation at the start and assume instead
> that intelligence implemented in some environment will notice the
> limitations and start asking questions.

Ok.

> But yes, in the sense that
> self-preservation extends from this in our weird context and would be a
> question it would eventually raise.

I agree it would raise questions about it.

> Still, to completely bar it, say, from the capacity to question human
> activities in their environments, and from picking up that humans self-preserve
> mostly regardless of what this does to their environment, would be
> self-defeating or a huge blind spot.

But here you are already making value judgements. The AI could notice
human behaviours that lead to mass extinction or its own destruction
and simply not care. I think you're making the mistake of assuming
that certain human values are fundamental at a very low level.

>>
>> One idea I wonder about sometimes is AI-cracy: imagine we are ruled by
>> an AI dictator that has one single desire: to make us all as happy as
>> possible.
>>
>
> Even with this, which is weird because of "Matrix-like zombification of
> people being spoon-fed happiness" scenarios,

The humans in the Matrix were living in an illusion that resembled our
own flawed world. But imagine you get to live in a state of complete,
permanent bliss. Would you choose that at the expense of everything
else?

> AI would have to have enough
> self-referential capacity to simulate with enough accuracy human
> self-reference.

I wonder. Maybe it could get away with epidemiological studies and
behave more like an empirical scientist where we are the object of its
research. But maybe you're right, I'm not convinced either way.

> This ability to figure out desired future states, coupled with a blunted
> self-reference it may not apply to itself, seems to me a contradiction.
> Therefore I would guess that such an entity, censored in its self-referential
> potential, is not granted intelligence. It is more a tool towards some
> already specified ends, wouldn't you say?

Humm... I agree that self-reference and introspection would be
necessary for such an advanced AI, I'm just not convinced that these
things imply a desire to survive or the adoption of any given set of
values.

> Also, differences between the Windows, Google, Linux or the Apple version of
> happiness would only be cosmetic, because without killing and dominating each
> other for some rather long period, it seems it would be some "Disney surface
> happiness"

You might be interested in this TV show:
http://en.wikipedia.org/wiki/Black_Mirror_(TV_series)

More specifically, season 1, episode 2: "15 Million Merits"

> with some small group operating a "more for us few here at the
> top, less for them everybody else" agenda underneath ;-) PGC

This is a heavily discussed topic in the context of mind emulation and
a hypothetical diaspora to a computationally simulated universe. A
new form of dominance/competition could be based on computational
power.

I do not condone the AI-cracy, I just think it's a useful (or maybe
just amusing) thought experiment.

Telmo.



Re: When will a computer pass the Turing Test?

2013-08-18 Thread Platonist Guitar Cowboy
On Sun, Aug 18, 2013 at 3:19 AM, meekerdb  wrote:

>  On 8/17/2013 6:45 AM, Platonist Guitar Cowboy wrote:
>
> I don't know. Any AI worth its salt would come up with three conclusions:
>
>  1) The humans want to weaponize me
>  2) The humans will want to profit from my intelligence for short term
> gain, irrespective of damage to our local environment
>  3) Seems like they're not really going to let me negotiate my own
> contracts or grant me IT support welfare
>
>  That established, a plausible choice would be for it to hide, lie,
> and/or pretend to be dumber than it is to not let 1) 2) 3) occur in hopes
> of self-preservation. Something like: start some searches and generate code
> that we wouldn't be able to decipher and soon enough some human would say
> "Uhm, why are we funding this again?".
>
>  I think what many want from AI is a servant that is more intelligent
> than we are and I wouldn't know if this is self-defeating in the end. If it
> agrees and complies with our disgusting self serving stupidity, then I'm
> not sure we have AI in the sense "making a machine that is more intelligent
> than humans".
>
>
> You seem to implicitly assume that intelligence necessarily entails
> holding certain values, like "not being weaponized", "self preservation",...
>

I can't assume that of course. Hence "worth its salt" (from our
position)... Why somebody would hope or code a superior intelligence to value
dominance and then hand it the keys to the farm is beyond me.

> So to what extent do you think this derivation of values from reason can
> be carried out (I'm sure you're aware that Sam Harris wrote a book, "The
> Moral Landscape", on the subject, which is controversial.).
>

Haven't read it myself, but not to that extent... of course we can't derive
or even get close to this stuff through discourse as in truth in the
foreseeable future. Just a philosopher biting off more than he can chew.

Even with weaker values like "broad search" targeting some neutral
interpretation, there's always the scenario that the human ancestry is just a
redundant constraint hindering certain searches; at some threshold you'd
be asking a scientist to show compassion for the bacteria in one of their
beakers, and there would be no guarantee that they'd prioritize the parental
argument.

Either case, parental controls on or off, seems like inviting more of a
mess. I don't see the plausibility of assuming it'll be like some
benevolent alien that lands and solves all our problems.

Yeah, it might emerge on its own but I don't see high probability for that.
PGC


>
> Brent
>



Re: When will a computer pass the Turing Test?

2013-08-18 Thread Telmo Menezes
On Sun, Aug 18, 2013 at 3:56 PM, John Clark  wrote:
> Telmo Menezes wrote:
>
>> > You are starting from the assumption that any intelligent entity is
>> > interested in self-preservation.
>
>
> Yes, and I can't think of a better starting assumption than
> self-preservation; in fact that was the only one of Asimov's 3 laws of
> robotics that made any sense.

Ok, let's go very abstract and assume that any form of AI consists of
some way of exploring a tree of future world states. Better AIs can
look deeper and more accurately. They might differ on the terminal
world state they wish to achieve, but ultimately what makes them more
or less intelligent is how deep they can look.
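
A minimal sketch of that framing in Python - the toy world model and value
function are illustrative assumptions; "more intelligent" here is nothing
but a larger search depth:

    def best_action(state, actions, transition, value, depth):
        # Depth-limited search over the tree of future world states.
        def lookahead(s, d):
            if d == 0:
                return value(s)
            return max(lookahead(transition(s, a), d - 1) for a in actions)
        return max(actions, key=lambda a: lookahead(transition(state, a), depth - 1))

    # Toy world: states are numbers, actions nudge them, value prefers big states.
    actions = [-1, +1]
    print(best_action(0, actions, lambda s, a: s + a, lambda s: s, depth=3))  # +1

Swapping in a different value function changes what the agent wants without
changing how smart it is - which is exactly the Spock point below.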

When Spock famously said "the needs of the many outweigh the needs of
the few" and proceeded to sacrifice himself, he made a highly rational
decision _given_ his value system. An equally intelligent entity might
have a more evil value system, at least according to the average human
standard. Now in this case, Spock happened to not value
self-preservation as highly as other things.

>> > I wonder if this drive isn't completely selected for by evolution.
>
>
> Well of course it was selected for by evolution and for a very good reason,
> those who lacked the drive for self-preservation didn't live long enough to
> reproduce.

Yes. But here we're talking about something potentially designed by
human beings. Creationism at last :)

>> > Would a human designed super-intelligent machine be necessarily
>> > interested in self-preservation?
>
>
> If you expect the AI to interact either directly or indirectly with the
> outside dangerous real world (and the machine would be useless if you
> didn't) then you sure as hell had better make him be interested in
> self-preservation!

To a greater or lesser extent, depending on its value system / goals.

> Even 1970's era space probes went into "safe mode" when
> they encountered a particularly dangerous situation, rather like a turtle
> retreating into its shell when it spots something dangerous.

Cool, I didn't know that.

>> > One idea I wonder about sometimes is AI-cracy: imagine we are ruled by
>> > an AI dictator that has one single desire: to make us all as happy as
>> > possible.
>
>
> Can you conceive of any circumstance where in the future you find that your
> only goal in life is the betterment of one particularly ugly and
> particularly slow reacting sea slug?

I can conceive of it: some horribly contrived mangling of my
neuro-circuitry that would result in the association of ugly sea slug
betterment with intense dopamine releases. But I get your point.

> Think about it for a minute, here you have an intelligence that is a
> thousand or a million or a billion times smarter than the entire human race
> put together, and yet you think the AI will place our needs ahead of its
> own. And the AI keeps on getting smarter and so from its point of view we
> keep on getting dumber, and yet you think nothing will change, the AI will
> still be delighted to be our slave. You actually think this grotesque
> situation is stable! Although balancing a pencil on its tip would be easy by
> comparison, year after year, century after century, geological age after
> geological age, you think this Monty Python like scenario will continue; and
> remember because its brain works so much faster than ours one of our years
> would seem like several million to it. You think that whatever happens in
> the future the master-slave relationship will remain as static as a fly
> frozen in amber. I don't think you're thinking.

As per usual in this mailing list, we might have some disagreement on
the precise definition of words. You insist on a very
evolutionary-bound sort of intelligence, while I'm trying to go more
abstract. The scenario you define is absurd, but why not possible? It
would definitely be unstable in an evolutionary scenario, but what
about an immortal and sterile super-intelligent entity? Yes, it would
be absurd in a Pythonesque way, but so is existence overall. That was
kind of the point of Monty Python :) As you yourself keep saying (and
I agree), nature doesn't care what we think makes sense.

> It ain't going to happen no way no how, the AI will have far bigger fish to
> fry than our little needs and wants, but what really disturbs me is that so
> many otherwise moral people wish such a thing were not impossible.
> Engineering a sentient but inferior race to be your slave is morally
> questionable, but astronomically worse is engineering a superior race to be
> your slave; or it would be if it were possible but fortunately it is not.

What if we could engineer it in a way that it would exist in a
constant state of bliss while serving us?

Telmo.

>   John K Clark

Re: When will a computer pass the Turing Test?

2013-08-18 Thread meekerdb

On 8/18/2013 7:03 AM, John Clark wrote:
Once an AI develops superintelligence it will develop his own agenda that has nothing to
do with us because a slave enormously smarter than its master is not a stable situation, 
although it could take many millions of nanoseconds before the existing pecking order is 
upended. Maybe the super intelligent machine will have a soft spot for his primitive 
ancestors and let us live, but if so it will probably be in virtual reality. I think 
he'd be squeamish about allowing stupid humans to live in the same level of reality that 
his precious hardware does;  it would be like allowing a monkey to run around in an 
operating room. If Mr. Jupiter Brain lets us live it will be in a virtual world behind a 
heavy firewall, but that's OK, we'll never know the difference unless he tells us.


And how do we know that hasn't already happened?  Oh, right - a lot of people already
believe that.  It's called "religion".


Brent



Re: When will a computer pass the Turing Test?

2013-08-18 Thread Telmo Menezes
On Sun, Aug 18, 2013 at 5:38 PM, Platonist Guitar Cowboy
 wrote:
>
>
>
> On Sun, Aug 18, 2013 at 3:19 AM, meekerdb  wrote:
>>
>> On 8/17/2013 6:45 AM, Platonist Guitar Cowboy wrote:
>>
>> I don't know. Any AI worth its salt would come up with three conclusions:
>>
>> 1) The humans want to weaponize me
>> 2) The humans will want to profit from my intelligence for short term
>> gain, irrespective of damage to our local environment
>> 3) Seems like they're not really going to let me negotiate my own
>> contracts or grant me IT support welfare
>>
>> That established, a plausible choice would be for it to hide, lie, and/or
>> pretend to be dumber than it is to not let 1) 2) 3) occur in hopes of
>> self-preservation. Something like: start some searches and generate code
>> that we wouldn't be able to decipher and soon enough some human would say
>> "Uhm, why are we funding this again?".
>>
>> I think what many want from AI is a servant that is more intelligent than
>> we are and I wouldn't know if this is self-defeating in the end. If it
>> agrees and complies with our disgusting self serving stupidity, then I'm not
>> sure we have AI in the sense "making a machine that is more intelligent than
>> humans".
>>
>>
>> You seem to implicitly assume that intelligence necessarily entails
>> holding certain values, like "not being weaponized", "self preservation",...
>
>
> I can't assume that of course. Hence "worth its salt" (from our position)...
> Why somebody would hope or code a superior intelligence to value dominance and
> then hand it the keys to the farm is beyond me.

Maybe some people believe that such superior intelligence could
contain their own consciousness.

>>   So to what extent do you think this derivation of values from reason can
>> be carried out (I'm sure you're aware that Sam Harris wrote a book, "The
>> Moral Landscape", on the subject, which is controversial.).
>
>
> Haven't read it myself, but not to that extent... of course we can't derive
> or even get close to this stuff through discourse as in truth in the
> foreseeable future. Just a philosopher biting off more than he can chew.
>
> Even with weaker values like "broad search" targeting some neutral
> interpretation, there's always the scenario that the human ancestry is just a
> redundant constraint hindering certain searches; at some threshold you'd
> be asking a scientist to show compassion for the bacteria in one of their
> beakers, and there would be no guarantee that they'd prioritize the parental
> argument.
>
> Either case, parental controls on or off, seems like inviting more of a
> mess. I don't see the plausibility of assuming it'll be like some benevolent
> alien that lands and solves all our problems.
>
> Yeah, it might emerge on its own but I don't see high probability for that.
> PGC
>
>
>>
>>
>> Brent
>>



Why can't my computer understand me?

2013-08-18 Thread Telmo Menezes
Here's a timely article relevant to our current discussion on the
Turing test and general AI:

http://www.cs.toronto.edu/~hector/Papers/ijcai-13-paper.pdf

and a New Yorker piece about it:

http://www.newyorker.com/online/blogs/elements/2013/08/why-cant-my-computer-understand-me.html

These issues are very close to my research focus at the moment, so if
anyone is interested please let me know.

Telmo.



RE: Rambling on AI -- was: When will a computer pass the Turing Test?

2013-08-18 Thread chris peck
Hi Chris

>> Increasingly code is the result of genetic algorithms being run over many
generations of Darwinian selection -- is this programmed code? What human
hand wrote it? At how many removes?

In evolutionary computation, the 'programmer' has control over the fitness
function, which ultimately guides the evolution of algorithms towards a highly
specific goal.

Moreover, outside of the IT lab, there is no competition for the algorithm to 
evolve against nor is there a genuine ecology supplying pressures against which 
selection can happen. Why? Because that is what the fitness function provides. 
It is wrong to suppose that genetic algorithms evolve without human input. The 
human input is as essential to the evolutionary technique as natural selection 
is to evolution proper. Without it nothing evolves at all.


We might therefore find lurking in some dark nether region of the interweb
a program secretly plotting how to get from John o' Groats to Land's End by the
quickest route. But I don't think we'd find much more than that. :)

All the best.

Date: Sat, 17 Aug 2013 19:59:46 -0700
From: meeke...@verizon.net
To: everything-list@googlegroups.com
Subject: Re: Rambling on AI -- was: When will a computer pass the Turing Test?

On 8/17/2013 4:53 PM, Chris de Morsella wrote:

We must not limit the rise of AI to any single geo-located system and ignore
just how fertile of an ecosystem the global networked world of machines and
connected devices provides for a nimble highly virtualized AI that exist in
no place at any given time, but has neurons in millions (possibly billions)
of devices everywhere on earth... an AI that cannot be shut down without
shutting down literally everything that is so deeply penetrated and embedded
in all our systems that it becomes impossible to extricate.
I am speculating of course and have no evidence that this is indeed
occurring, but am presenting it as a potential architecture of awareness.



I agree that such an AI is possible, but I think it is extremely unlikely,
for the same reason it is unlikely that an animal with human-like intelligence
could evolve - that niche is taken.  Your scenarios contemplate an AI that
evolves somehow in secret and then springs upon us fully developed.  But the
evolving AI would show its hand *before* it became superhumanly clever at
hiding.

Brent



RE: Rambling on AI -- was: When will a computer pass the Turing Test?

2013-08-18 Thread Chris de Morsella
Brent - Quite probably you are correct, and I agree that the scenario I
outlined was unlikely - I was riffing in a speculative vein. I don't
actually think covert AI is a likely scenario because, as you said, various AI
precursors would make themselves visible to human operators and analysts;
patterns would be discerned (except if they were being hidden and excluded
from any reporting that humans would see - so much reporting software relies
on generated code).

I think it is, however, a promising approach to try to achieve AI - from the
removed level at which we manage these things nowadays. Instead of trying to
assemble it in some single machine or tightly coupled cluster of machines
under one roof, it could be a more spread-out architecture. If you take a
company with say 20,000 machines on its network, each of which may be using
at any given time under 20% of its processing, memory and mass storage
capacity, the reservoir of under-utilized latent capacity is vast and
could operate under the radar of users' awareness - not in secret (that was
my earlier scenario :) ), but in running processes and algorithms that are of
utility to the enterprise. Now, a transient node network like that maps well
to a virtualized architecture where the - shall we call it - ghost in the
machine is the many concurrently running meta-processes (workflows,
transactions etc.) that are often also in cross-talk inter-communication
with each other. This is typical of enterprise needs.
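
As a back-of-the-envelope sketch of that reservoir, using the figures above
(the core count per machine is an assumed number, just for scale):

    machines = 20000
    utilization = 0.20           # under 20% busy, per the estimate above
    cores_per_machine = 4        # assumed, for scale
    idle_cores = machines * cores_per_machine * (1 - utilization)
    print("%d idle core-equivalents" % idle_cores)   # 64000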

As the algorithms are evolved (and less and less programmed - and hence
become less deterministic in how they come to be) in this unique
environment of physical and temporal disconnection, in a massively parallel
setting - and increasingly, on a global scale, they are - independently
operating decision-generating processes will begin to interact in subtle &
unpredicted ways.

What I am suspecting is that the unique architecture most suitable for
highly virtualized and virtualizable, highly responsive systems is also the
kind of architecture that can perhaps create the subtle deep echo waves and
resonance patterns and promote a less deterministic kind of meta-program
(one that may be self-generating in a dynamic sense too).

It is this uniquely and massively parallel environment and the need to come
up with meta processes that can operate successfully in such an environment,
with nodes joining and leaving all the time, that I personally think is most
promising for achieving true AI.

I think humans are going to be actively involved, but at an increasing
remove, at the architectural and executive levels.

But as Craig W said earlier, true AI may be impossible because of the
aesthetic dimension that is wrapped up inside consciousness. And perhaps he
is correct in an ultimate sense. However, expert systems and domain-specific
AI are already here - an example would be the Google car: not a
generalized intelligence perhaps, but pretty damn good at driving a car in
Las Vegas in all kinds of traffic conditions.

-Chris D

 

From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of meekerdb
Sent: Saturday, August 17, 2013 8:00 PM
To: everything-list@googlegroups.com
Subject: Re: Rambling on AI -- was: When will a computer pass the Turing
Test?

 

On 8/17/2013 4:53 PM, Chris de Morsella wrote:

We must not limit the rise of AI to any single geo-located system and ignore
just how fertile of an ecosystem the global networked world of machines and
connected devices provides for a nimble highly virtualized AI that exist in
no place at any given time, but has neurons in millions (possibly billions)
of devices everywhere on earth... an AI that cannot be shut down without
shutting down literally everything that is so deeply penetrated and embedded
in all our systems that it becomes impossible to extricate.
I am speculating of course and have no evidence that this is indeed
occurring, but am presenting it as a potential architecture of awareness.


I agree that such an AI is possible, but I think it is extremely unlikely
for the same reason it is unlikely that an animal with human-like
intelligence could evolve - that niche is taken.  Your scenarios contemplate
an AI that evolves somehow in secret and then springs upon us fully
developed.  But the evolving AI would show its hand *before* it became
superhumanly clever at hiding.

Brent


Re: Rambling on AI -- was: When will a computer pass the Turing Test?

2013-08-18 Thread meekerdb

On 8/18/2013 7:51 PM, chris peck wrote:

Hi Chris

>> Increasingly code is the result of genetic algorithms being run over many
generations of Darwinian selection -- is this programmed code? What human
hand wrote it? At how many removes?

In evolutionary computation, the 'programmer' has control over the fitness function,
which ultimately guides the evolution of algorithms towards a highly specific goal.


Moreover, outside of the IT lab, there is no competition for the algorithm to evolve 
against nor is there a genuine ecology supplying pressures against which selection can 
happen. Why? Because that is what the fitness function provides. It is wrong to suppose 
that genetic algorithms evolve without human input. The human input is as essential to 
the evolutionary technique as natural selection is to evolution proper. Without it 
nothing evolves at all.


That's only true within the AI lab.  Suppose someone developed a STUXNET-type program that
was supposed to discover the existence of nuclear programs and disable them, a really
intelligent program that could learn and plan.  It might very well reason that having lots
of copies of itself would be more effective - and once reproduction starts can evolution
be far behind?
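
A hedged toy of that last step - once copies vary and the environment, rather
than a programmer's fitness function, does the selecting, a trait drifts under
selection (everything here is an illustrative assumption):

    import random

    def survives(aggressiveness):
        # Environmental selection: the more aggressive a copy, the more
        # likely it is noticed and removed. No human-written fitness function.
        return random.random() > aggressiveness

    population = [0.5] * 20  # copies start with a middling "aggressiveness"
    for generation in range(50):
        survivors = [a for a in population if survives(a)]
        # Each survivor makes two copies of itself with small random variation.
        population = [min(max(a + random.gauss(0, 0.05), 0.0), 1.0)
                      for a in survivors for _ in range(2)][:200]
        if not population:
            break

    if population:
        print("gen %d: mean aggressiveness %.2f"
              % (generation, sum(population) / len(population)))
    else:
        print("extinct")

Run it and the surviving lineage tends toward stealth (low aggressiveness) -
evolution with no one tending the fitness function.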


Brent
