On 20 Sep 2014, at 6:22 am, LizR lizj...@gmail.com wrote:
Does this mean evolution is intelligent but (probably) not conscious?
The Blind Watchmaker
K
On 20 September 2014 03:01, Stephen Paul King stephe...@provensecure.com
wrote:
Dear Bruno,
I agree, this introduces the
On 22 September 2014 20:57, Kim Jones kimjo...@ozemail.com.au wrote:
On 20 Sep 2014, at 6:22 am, LizR lizj...@gmail.com wrote:
Does this mean evolution is intelligent but (probably) not conscious?
The Blind Watchmaker
Yes.
--
You received this message because you are subscribed to the Google Groups Everything List group.
Dear Stephen,
On 19 Sep 2014, at 17:01, Stephen Paul King wrote:
Dear Bruno,
I agree, this introduces the possibility that the inhibiting or
activation of gene aspect is the running of the particular
algorithm while the mutation and selection aspect might be seen as
a process on the
On 01 Sep 2014, at 17:57, Stephen Paul King wrote:
Hi Brent,
Have you seen any studies of Amoeba dubia that look into
what its genome is expressing? http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2933061/
seems to suggest to me the possibility that the genome is acting
as a
Dear Bruno,
I agree, this introduces the possibility that the inhibiting or
activation of gene aspect is the running of the particular algorithm
while the mutation and selection aspect might be seen as a process on the
space of algorithms.
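The two levels distinguished here (running a particular algorithm vs. mutation and selection acting on the space of algorithms) can be sketched in a toy model. Everything below is invented for illustration, not anything from the thread: the "genomes" are just lists of polynomial coefficients, and the target function and population sizes are arbitrary.

```python
import random

random.seed(1)

def run(program, x):
    # Level 1: "activating" a genome = executing the particular algorithm.
    # Here a program is just a list of polynomial coefficients.
    return sum(c * x ** i for i, c in enumerate(program))

def fitness(program):
    # Selection pressure: how closely the program's behaviour matches a target.
    return -sum(abs(run(program, x) - (3 * x + 1)) for x in range(10))

def mutate(program):
    # Level 2: mutation acts on the space of algorithms, not on any single run.
    p = list(program)
    p[random.randrange(len(p))] += random.uniform(-1, 1)
    return p

# Evolve a population of 20 two-coefficient programs.
population = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
initial_best = max(fitness(p) for p in population)
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                    # selection keeps the top half
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

final_best = max(fitness(p) for p in population)
```

Because survivors are carried over unchanged (elitism), the best fitness never decreases: selection operates on the population of algorithms while `run` is the only place any single algorithm is executed.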
On Fri, Sep 19, 2014 at 9:04 AM, Bruno Marchal
Does this mean evolution is intelligent but (probably) not conscious?
On 20 September 2014 03:01, Stephen Paul King stephe...@provensecure.com
wrote:
Dear Bruno,
I agree, this introduces the possibility that the inhibiting or
activation of gene aspect is the running of the particular
The process does seem, if we think of it this way, to be intelligent, yes.
But this is a definition of intelligence that most would not consider: An
intelligence is the collection of behaviors of a system that tend to
increase the number of possible future states.
My wording doesn't quite look
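This definition of intelligence as behaviours that tend to increase the number of possible future states resembles the causal-entropy/empowerment idea, and a minimal sketch can make it concrete. The 5x5 gridworld and two-step horizon below are invented purely for illustration:

```python
# Tiny gridworld: an agent at (x, y) on a 5x5 board; moves U/D/L/R or stay.
MOVES = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0), 'S': (0, 0)}

def step(pos, move):
    x, y = pos
    dx, dy = MOVES[move]
    nx, ny = x + dx, y + dy
    if 0 <= nx < 5 and 0 <= ny < 5:
        return (nx, ny)
    return pos  # bumping a wall leaves you in place

def reachable(pos, horizon):
    # Count distinct states reachable within `horizon` moves -- the quantity
    # this definition of intelligence would have behaviour tend to increase.
    states = {pos}
    for _ in range(horizon):
        states |= {step(s, m) for s in states for m in MOVES}
    return len(states)

# A corner forecloses options; the centre keeps them open.
corner, centre = reachable((0, 0), 2), reachable((2, 2), 2)  # 6 vs 13
```

Under this measure, moving toward the centre counts as the more "intelligent" behaviour simply because more futures remain open from there.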
On Sat, Sep 6, 2014 at 7:18 PM, Bruno Marchal marc...@ulb.ac.be wrote:
On 28 Aug 2014, at 13:33, Telmo Menezes wrote:
On Wed, Aug 27, 2014 at 11:11 PM, Platonist Guitar Cowboy
multiplecit...@gmail.com wrote:
Legitimacy of proof and evidence (e.g. for a set of cool algorithms
On 06 Sep 2014, at 18:56, John Clark wrote:
On Fri, Sep 5, 2014 at 4:41 PM, meekerdb meeke...@verizon.net wrote:
Hypatia was the deliberate target of a Christian mob incited by an
ally of Cyril and she was first kidnapped and then murdered in the
most gruesome way by having her skin
On 05 Sep 2014, at 06:40, Stephen Paul King wrote:
I agree, but I strongly suspect that one does not program an AGI;
we would grow it and teach it
Yes.
The fact that humans have a very long childhood reflects the fact that
nature got the point that children are intelligent, and
On 05 Sep 2014, at 22:12, meekerdb wrote:
On 9/5/2014 11:52 AM, Bruno Marchal wrote:
According to Harvard scholars the Romans invented Christianity to
keep the Jews in check:
http://www.bibliotecapleyades.net/sociopolitica/esp_sociopol_piso02a.htm
You mean according to conspiracy
On 05 Sep 2014, at 22:41, meekerdb wrote:
On 9/5/2014 12:18 PM, Bruno Marchal wrote:
On 02 Sep 2014, at 19:40, meekerdb wrote:
On 9/2/2014 9:40 AM, Bruno Marchal wrote:
On 25 Aug 2014, at 21:04, meekerdb wrote:
Bostrom says, If humanity had been sane and had our act
together globally,
On 05 Sep 2014, at 23:35, LizR wrote:
I don't know how you could do this in practice, but nature has
proved that intelligent beings can have their behaviour towards
other beings constrained in various ways. An obvious example is that
we care for our children. If one could built (or
From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of John Mikes
Sent: Saturday, September 06, 2014 1:27 PM
To: everything-list@googlegroups.com
Subject: Re: AI Dooms Us
Chris: and why on Earth would you exclude the communication of plants etc
@googlegroups.com] On Behalf Of John Mikes
Sent: Saturday, September 06, 2014 1:27 PM
To: everything-list@googlegroups.com
Subject: Re: AI Dooms Us
Chris: and why on Earth would you exclude the communication of plants etc.
from the broad meaning of language? (They don't have a blabbermouth
From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of Stephen Paul King
Sent: Sunday, September 07, 2014 11:43 AM
To: everything-list@googlegroups.com
Subject: Re: AI Dooms Us
Hi Chris,
Does it seem to you that there are two aspects
On 6 Sep 2014, at 10:03 am, LizR lizj...@gmail.com wrote:
PS why is a laser like a goldfish?
Because neither can whistle
K
On Fri, Sep 5, 2014 at 4:41 PM, meekerdb meeke...@verizon.net wrote:
Hypatia was the deliberate target of a Christian mob incited by an ally
of Cyril and she was first kidnapped and then murdered in the most gruesome
way by having her skin scraped off.
And for this Cyril was made a saint by
On 28 Aug 2014, at 13:33, Telmo Menezes wrote:
On Wed, Aug 27, 2014 at 11:11 PM, Platonist Guitar Cowboy multiplecit...@gmail.com
wrote:
Legitimacy of proof and evidence (e.g. for a set of cool algorithms
concerning AI, more computing power, big data etc), is an empty
question to ask,
From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of Stephen Paul King
We learn of each other by interacting; this becomes communication once
languages emerge...
Want to point out that important communication occurs in nature without what we
Chris: and why on Earth would you exclude the communication of plants etc.
from the broad meaning of language? (They don't have a blabbermouth).
JM
On Sat, Sep 6, 2014 at 3:13 PM, 'Chris de Morsella' via Everything List
everything-list@googlegroups.com wrote:
From:
On 5 September 2014 16:40, Stephen Paul King stephe...@provensecure.com
wrote:
I agree, but I strongly suspect that one does not program an AGI; we
would grow it and teach it
I see we agree on that.
On 5 September 2014 16:41, Stephen Paul King stephe...@provensecure.com
wrote:
We learn of each other by interacting; this becomes communication once
languages emerge...
Great, I've been saying that too.
On 5 September 2014 16:42, Stephen Paul King stephe...@provensecure.com
wrote:
Nah, I get what you mean. Connecting an AGI to a body is one way of
teaching it to recognize us, but do we really want to do that?
I have no idea what we want, I was just presenting a thought experiment.
My basic
Hi LizR,
Exactly, we are the 'same sort of thing'. :-) It seems that only scifi
writers actively explore this idea
http://en.wikipedia.org/wiki/Code_of_the_Lifemaker. The academics are
stuck in the mode of thinking that somehow 'intelligence' can only arise if
intentionally created by other
On Thu, Sep 4, 2014 at 6:09 AM, Telmo Menezes te...@telmomenezes.com
wrote:
Intelligence is clearly a process that can be bootstrapped -- we know
this from biology.
Yes, adults tend to be smarter than infants and infants are smarter than
one-celled zygotes.
What I don't understand is
On Fri, Sep 5, 2014 at 10:57 AM, John Clark johnkcl...@gmail.com wrote:
On Thu, Sep 4, 2014 at 6:09 AM, Telmo Menezes te...@telmomenezes.com
wrote:
Intelligence is clearly a process that can be bootstrapped -- we know
this from biology.
Yes, adults tend to be smarter than infants and
AFAIK, if the AGI and humanity are not competing for the same resources, no
conflict need arise...
On Fri, Sep 5, 2014 at 11:08 AM, Terren Suydam terren.suy...@gmail.com
wrote:
On Fri, Sep 5, 2014 at 10:57 AM, John Clark johnkcl...@gmail.com wrote:
On Thu, Sep 4, 2014 at 6:09 AM, Telmo
http://wiki.lesswrong.com/wiki/Paperclip_maximizer
On Fri, Sep 5, 2014 at 11:13 AM, Stephen Paul King
stephe...@provensecure.com wrote:
AFAIK, if the AGI and humanity are not competing for the same resources,
no conflict need arise...
On Fri, Sep 5, 2014 at 11:08 AM, Terren Suydam
Hi Terren,
Ah, nice link. Thank you. Does the assumption of a finite and fixed set
of resources necessarily match the real world?
If an AGI's computation can occur on any active and evolving
network of sufficient complexity, would the paperclip argument hold?
ISTM that overall resources
One other remark.
From the previously linked article:
This may seem more like super-stupidity than super-intelligence. For
humans, it would indeed be stupidity, as it would constitute failure to
fulfill many of our important terminal values, such as life, love, and
variety. The AGI won't revise
I think it would be a purely academic exercise (as in, disconnected from
any practical consequences) to argue about the kinds of AGIs that could
have access to infinite resources.
Rejecting Yudkowsky's argument on the basis that reality *might* be
infinite seems like an odd move to me. If you
There is also the case of many AGIs competing, cooperating and colluding
with each other...
On Fri, Sep 5, 2014 at 11:35 AM, Terren Suydam terren.suy...@gmail.com
wrote:
I think it would be a purely academic exercise (as in, disconnected from
any practical consequences) to argue about the
On 02 Sep 2014, at 19:26, Richard Ruquist wrote:
On Tue, Sep 2, 2014 at 12:40 PM, Bruno Marchal marc...@ulb.ac.be
wrote:
On 25 Aug 2014, at 21:04, meekerdb wrote:
Bostrom says, If humanity had been sane and had our act together
globally, the sensible course of action would be to
On 02 Sep 2014, at 19:40, meekerdb wrote:
On 9/2/2014 9:40 AM, Bruno Marchal wrote:
On 25 Aug 2014, at 21:04, meekerdb wrote:
Bostrom says, If humanity had been sane and had our act together
globally, the sensible course of action would be to postpone
development of superintelligence
On 9/5/2014 11:52 AM, Bruno Marchal wrote:
According to Harvard scholars the Romans invented Christianity to keep the Jews
in check:
http://www.bibliotecapleyades.net/sociopolitica/esp_sociopol_piso02a.htm
You mean according to conspiracy theorist John Duran who lives in California and has
On 9/5/2014 12:18 PM, Bruno Marchal wrote:
On 02 Sep 2014, at 19:40, meekerdb wrote:
On 9/2/2014 9:40 AM, Bruno Marchal wrote:
On 25 Aug 2014, at 21:04, meekerdb wrote:
Bostrom says, If humanity had been sane and had our act together globally, the
sensible course of action would be to
I don't know how you could do this in practice, but nature has proved that
intelligent beings can have their behaviour towards other beings
constrained in various ways. An obvious example is that we care for our
children. If one could built (or otherwise cause to come into being) an AI
with a
Hi LizR,
Ah, so making sure that the AI has feedback loops built in so that
there are consequences (short and long term) for dumb behavior might be
a good idea. One way of doing this is ensuring that they cannot be
self-immortal and must reproduce to recover a form of immortality of their
PS why is a laser like a goldfish?
On 6 September 2014 10:45, Stephen Paul King stephe...@provensecure.com
wrote:
Hi LizR,
Ah, so making sure that the AI has feedback loops built in so that
there are consequences (short and long term) for dumb behavior might be
a good idea. One way of doing this is ensuring that they
On 9/5/2014 5:03 PM, LizR wrote:
On 6 September 2014 10:45, Stephen Paul King stephe...@provensecure.com
wrote:
Hi LizR,
Ah, so making sure that the AI has feedback loops built in so that there are
consequences (short and long term) for dumb
Thank you Brent,
A quick search by way of Google verifies what you say.
Richard
On Fri, Sep 5, 2014 at 4:12 PM, meekerdb meeke...@verizon.net wrote:
On 9/5/2014 11:52 AM, Bruno Marchal wrote:
According to Harvard scholars the Romans invented Christianity to keep
the Jews in check:
On 4 September 2014 17:02, Stephen Paul King stephe...@provensecure.com
wrote:
If the resources available to the OverLords would allow the sharing
to be cost-free, then it would make no difference; otherwise
(In Childhood's End the *Overlords *were the race who helped other
races to
On Wed, Sep 3, 2014 at 6:38 PM, John Clark johnkcl...@gmail.com wrote:
On Wed, Sep 3, 2014 at 7:54 AM, meekerdb meeke...@verizon.net wrote:
on Bruno's theory, consciousness is a binary attribute, all-or-nothing.
Intelligence has degrees
If that is true (and I'm not saying it is) then we
On Wed, Sep 3, 2014 at 7:56 PM, John Clark johnkcl...@gmail.com wrote:
On Wed, Sep 3, 2014 'Chris de Morsella' via Everything List
everything-list@googlegroups.com wrote:
a human baby is a plastic template for the individual to emerge in
And those 1000 lines of Lisp are a plastic
On 4 September 2014 22:09, Telmo Menezes te...@telmomenezes.com wrote:
What I don't understand is how people expect to have a human-level AI
(many degrees of freedom) and then also be able to micro-manage it.
You can't, of course. Every parent discovers that.
On Thu, Sep 4, 2014 at 12:10 PM, LizR lizj...@gmail.com wrote:
On 4 September 2014 22:09, Telmo Menezes te...@telmomenezes.com wrote:
What I don't understand is how people expect to have a human-level AI
(many degrees of freedom) and then also be able to micro-manage it.
You can't, of
Hi,
I am looking for any papers on the effects of allowing neural networks
to couple to each other
On Thu, Sep 4, 2014 at 4:16 AM, LizR lizj...@gmail.com wrote:
On 4 September 2014 17:02, Stephen Paul King stephe...@provensecure.com
wrote:
Are the resources available to the
Hi Telmo,
What I don't understand is how people expect to have a human-level AI
(many degrees of freedom) and then also be able to micro-manage it.
Exactly! A mind can only function in effective isolation. Control disallows
this as control involves coupling to the mechanisms of mind.
On Thu,
Hi,
OTOH, one can control the available resources of the AI (children)...
On Thu, Sep 4, 2014 at 6:16 AM, Telmo Menezes te...@telmomenezes.com
wrote:
On Thu, Sep 4, 2014 at 12:10 PM, LizR lizj...@gmail.com wrote:
On 4 September 2014 22:09, Telmo Menezes te...@telmomenezes.com wrote:
On 5 September 2014 00:38, Stephen Paul King stephe...@provensecure.com
wrote:
Hi,
OTOH, one can control the available resources of the AI (children)...
Depending on how clever the AI is. Proteus IV and Colossus found ways to
stop people pulling the plug (unlike HAL).
And of course you
Hi LizR,
I will repeat my question: What makes us think that the AGI will be
aware that we exist?
On Thu, Sep 4, 2014 at 8:21 PM, LizR lizj...@gmail.com wrote:
On 5 September 2014 00:38, Stephen Paul King stephe...@provensecure.com
wrote:
Hi,
OTOH, one can control the available
The entire universe as a sim? Could even an AI handle it?
Sent from AOL Mobile Mail
-Original Message-
From: Telmo Menezes te...@telmomenezes.com
To: everything-list everything-list@googlegroups.com
Sent: Thu, Sep 4, 2014 05:16 AM
Subject: Re: AI Dooms Us
On 5 September 2014 12:58, Stephen Paul King stephe...@provensecure.com
wrote:
Hi LizR,
I will repeat my question: What makes us think that the AGI will be
aware that we exist?
Surely that depends on circumstances? If an AI is created and educated by
people then it will at least be aware
By the way, one possible scenario would be that the AI is provided with a
body - we could imagine that it's attached via radio, say, to an android
that is apparently human. To make this scenario deliberately extreme, for
the sake of argument, if the AI only interacts with the world via this
But you seem to assume that it has awareness of people beyond the sensor
data + computations that it can access and generate. Where did the property
of people come from?
Consider the case where the Google thing discovered cats from
processing YouTube data. Why do we think that it's
Sure, that would set up synchronization of sensory data input streams, but
it does not address my question: How does the AGI come to interpret those
data streams in a way that is compatible with ours?
If we build the robot body with EMF excitation sensors that operate in
the same range as ours
That's all we do... process sensor data and make complicated inferences
about those features of our experience we refer to as people (and
everything else). Of course, we undergo a great deal of training to get
there, and much of the training is done by people. To Liz's point,
purposefully designed
Cool! Terren, you grok what I'm trying to say. Thank you!!! We are freaking
AGI ourselves, operating machines made with biomolecules...
The big realization that I have had is that we have no means to
determine that the content of experience of any other AGI matches ours. All
that we can figure
On 5 September 2014 15:13, Stephen Paul King stephe...@provensecure.com
wrote:
But you seem to assume that it has awareness of people beyond the sensor
data + computations that it can access and generate. Where did the property
of people come from?
I'm not assuming it just happens. I'm
On 5 September 2014 15:18, Stephen Paul King stephe...@provensecure.com
wrote:
Sure, that would set up synchronization of sensory data input streams, but
it does not address my question: How does the AGI come to interpret those
data streams in a way that is compatible with ours?
Well, how
On 5 September 2014 16:08, Stephen Paul King stephe...@provensecure.com
wrote:
We are freaking AGI ourselves, operating machines made with biomolecules...
Sorry, I thought it was obvious that's what I was saying, too, when I
pointed out that an AGI could be connected to androids. Obviously
I agree, but I strongly suspect that one does not program an AGI; we
would grow it and teach it
On Fri, Sep 5, 2014 at 12:15 AM, LizR lizj...@gmail.com wrote:
On 5 September 2014 15:13, Stephen Paul King stephe...@provensecure.com
wrote:
But you seem to assume that it has awareness of
We learn of each other by interacting; this becomes communication once
languages emerge...
On Fri, Sep 5, 2014 at 12:16 AM, LizR lizj...@gmail.com wrote:
On 5 September 2014 15:18, Stephen Paul King stephe...@provensecure.com
wrote:
Sure, that would set up synchronization of sensory data
Nah, I get what you mean. Connecting an AGI to a body is one way of
teaching it to recognize us, but do we really want to do that?
On Fri, Sep 5, 2014 at 12:18 AM, LizR lizj...@gmail.com wrote:
On 5 September 2014 16:08, Stephen Paul King stephe...@provensecure.com
wrote:
We are freaking
On 9/2/2014 10:35 PM, 'Chris de Morsella' via Everything List wrote:
From: everything-list@googlegroups.com [mailto:everything-list@googlegroups.com] On
Behalf Of John Clark
Sent: Tuesday, September 02, 2014 6:58 AM
To: everything-list@googlegroups.com
Subject: Re: AI Dooms Us
From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of meekerdb
Sent: Wednesday, September 03, 2014 4:54 AM
To: everything-list@googlegroups.com
Subject: Re: AI Dooms Us
On 9/2/2014 10:35 PM, 'Chris de Morsella' via Everything List wrote
On Wed, Sep 3, 2014 at 7:54 AM, meekerdb meeke...@verizon.net wrote:
on Bruno's theory, consciousness is a binary attribute, all-or-nothing.
Intelligence has degrees
If that is true (and I'm not saying it is) then we can immediately conclude
that Bruno's theory is wrong because we know for a
2014-09-03 18:38 GMT+02:00 John Clark johnkcl...@gmail.com:
On Wed, Sep 3, 2014 at 7:54 AM, meekerdb meeke...@verizon.net wrote:
on Bruno's theory, consciousness is a binary attribute, all-or-nothing.
Intelligence has degrees
If that is true (and I'm not saying it is) then we can
On Wed, Sep 3, 2014 'Chris de Morsella' via Everything List
everything-list@googlegroups.com wrote:
a human baby is a plastic template for the individual to emerge in
And those 1000 lines of Lisp are a plastic template for the Jupiter Brain
to emerge in.
All of that living experience and
From: John Clark johnkcl...@gmail.com
To: everything-list@googlegroups.com
Sent: Wednesday, September 3, 2014 10:56 AM
Subject: Re: AI Dooms Us
On Wed, Sep 3, 2014 'Chris de Morsella' via Everything List
everything-list@googlegroups.com wrote
To: everything-list@googlegroups.com
Subject: Re: AI Dooms Us
Hi Chris,
I agree. What we see in the current development is, literally,
evolution - I would not say that it is Darwinian per se as it is not
smooth or continuous. It looks more like a punctuated equilibrium over many
Stephen, we have not communicated for quite a while. Why would you think we
know more than - *what?* - *nothing* indeed and assume circumstances
according to our whim (mindset?).
(To Liz: who said those Aliens are benevolent?)
We still use our present terms in postulating a far bigger world as our
On 4 September 2014 07:51, John Mikes jami...@gmail.com wrote:
Stephen, we have not communicated for quite a while. Why would you think we
know more than - *what?* - *nothing* indeed and assume circumstances
according to our whim (mindset?).
(To Liz: who said those Aliens are benevolent?)
Hi John,
why would have want the Zookeepers intelligence from the Earthlings?
Did you mean, Why would the Zookeepers want intelligence from
Earthlings? Why to compute things for them, of course! Distributed networks
running algorithms that evolve are very good at finding solutions to
On 4 September 2014 13:38, Stephen Paul King stephe...@provensecure.com
wrote:
Hi John,
why would have want the Zookeepers intelligence from the Earthlings?
Did you mean, Why would the Zookeepers want intelligence from
Earthlings? Why to compute things for them, of course!
Umm, not really. It is exploitation.
On Wed, Sep 3, 2014 at 9:43 PM, LizR lizj...@gmail.com wrote:
On 4 September 2014 13:38, Stephen Paul King stephe...@provensecure.com
wrote:
Hi John,
why would have want the Zookeepers intelligence from the Earthlings?
Did you mean, Why would
On 4 September 2014 13:45, Stephen Paul King stephe...@provensecure.com
wrote:
Umm, not really. It is exploitation.
Only if you aren't absorbed. Otherwise you'd only be exploiting yourself.
Humans interacting with each other form very nice (in terms of
expressiveness http://en.wikipedia.org/wiki/Expressive_power) adaptive
networks.
On Wed, Sep 3, 2014 at 9:45 PM, Stephen Paul King
stephe...@provensecure.com wrote:
Umm, not really. It is exploitation.
On Wed, Sep 3, 2014 at
Umm, explain: Absorbed. I'm not grokking it...
On Wed, Sep 3, 2014 at 9:46 PM, LizR lizj...@gmail.com wrote:
On 4 September 2014 13:45, Stephen Paul King stephe...@provensecure.com
wrote:
Umm, not really. It is exploitation.
Only if you aren't absorbed. Otherwise you'd only be exploiting
Zerg http://starcraft.wikia.com/wiki/Overmind! ?
On Wed, Sep 3, 2014 at 9:46 PM, LizR lizj...@gmail.com wrote:
On 4 September 2014 13:45, Stephen Paul King stephe...@provensecure.com
wrote:
Umm, not really. It is exploitation.
Only if you aren't absorbed. Otherwise you'd only be
On 4 September 2014 13:47, Stephen Paul King stephe...@provensecure.com
wrote:
Umm, explain: Absorbed. I'm not grokking it...
You become part of it.
On 4 September 2014 13:48, Stephen Paul King stephe...@provensecure.com
wrote:
Zerg http://starcraft.wikia.com/wiki/Overmind! ?
Well, quite. I believe the name comes from Childhood's End although
obviously Olaf Stapledon was writing about it (and influencing Clarke)
decades earlier than the
Childhood's End is on my top 20 best scifi books ever list... Umm, I
disagree with the idea that the ultimate aim of life in Star Maker (iirc) was
to merge into a single mind, only to the extent that it is actually impossible
(there is a proven theorem to this effect) for this to happen. It always
goes the
OTOH, becoming capable of exploiting computational resources that are
free (note the scare quotes) is always optimal. Being able to obtain
solutions to problems without having to use up one's own resources is always
a good thing (for the Overlords at least).
On Wed, Sep 3, 2014 at 9:53 PM, LizR
On 4 September 2014 14:06, Stephen Paul King stephe...@provensecure.com
wrote:
OTOH, becoming capable of exploiting computational resources that are
free (note the scare quotes) is always optimal. Being able to obtain
solutions to problems without having to use up one's own resources is always
a
Right! Damping down random fluctuations in one's computer is an
optimization move.
Oh! You're thinking in more Borg terms, re: absorption
On Wed, Sep 3, 2014 at 10:25 PM, LizR lizj...@gmail.com wrote:
On 4 September 2014 14:06, Stephen Paul King stephe...@provensecure.com
wrote:
But something is amiss! Why would the OverLords wish to share their largess
with us?
On Wed, Sep 3, 2014 at 10:25 PM, LizR lizj...@gmail.com wrote:
On 4 September 2014 14:06, Stephen Paul King stephe...@provensecure.com
wrote:
OTOH, becoming capable of exploiting computational resources
On 4 September 2014 14:30, Stephen Paul King stephe...@provensecure.com
wrote:
Right! Damping down random fluctuations in one's computer is an
optimization move.
Oh! You're thinking in more Borg terms, re: absorption
I'm thinking in terms of Childhood's End.
On 4 September 2014 14:31, Stephen Paul King stephe...@provensecure.com
wrote:
But something is amiss! Why would the OverLords wish to share their
largess with us?
Why wouldn't they?
If the resources available to the OverLords would allow the sharing
to be cost-free, then it would make no difference; otherwise
On Wed, Sep 3, 2014 at 10:37 PM, LizR lizj...@gmail.com wrote:
On 4 September 2014 14:31, Stephen Paul King stephe...@provensecure.com
wrote:
But
I have to say I find the whole thing amusing. Tegmark even suggested we
should be spending one percent of GDP trying to research this terrible
threat to humanity and wondered why we weren't doing it. Why not? Because,
unlike global warming and nuclear weapons, there is absolutely no sign of
One day, a printout of this email will be found among the post apocalyptic
wreckage by one of the few remaining humans and they will enjoy the first
laugh they've had in a year.
Just kidding. I have no idea how to calibrate this threat. I'm pretty
skeptical, but some awfully smart people are
, satellites,
nuclear power, all came from war, a very emotional process indeed!
-Original Message-
From: Pierz pier...@gmail.com
To: everything-list everything-list@googlegroups.com
Sent: Tue, Sep 2, 2014 7:22 am
Subject: Re: AI Dooms Us
I have to say I find the whole thing amusing. Tegmark
On Mon, Sep 1, 2014 at 2:45 PM, 'Chris de Morsella' via Everything List
everything-list@googlegroups.com wrote:
Amazing, isn’t it? The elegance of self-assembling processes that can
do so much with so little input.
Yes, very amazing!
I doubt 1000 lines of computer code is a large
On Mon, Sep 1, 2014 at 5:03 PM, Stephen Paul King
stephe...@provensecure.com wrote:
The chicken or the egg problem is not hard to solve; just figure out how
to get something that is a little bit like both and has an evolution path
into one or the other.
That's why origin of life theorists
On Mon, Sep 1, 2014 at 6:43 PM, 'Chris de Morsella' via Everything List
everything-list@googlegroups.com wrote:
Can a single complex multi-cellular organism be understood or defined
completely without also viewing it in its larger multi-species context?
Nothing can be understood completely
On Mon, Sep 1, 2014 at 6:01 PM, Stephen Paul King
stephe...@provensecure.com wrote:
Hi Telmo,
Access to resources seems to only allow for reproduction and
continuation. For an AGI to act on the world it has to be able to use
those resources in a manner that implies that it can sense the
1 - 100 of 187 matches