I'm going to stay on-topic here - no really! - but try to weave in other
themes that the inscribed-matter message hypothesis and the
post-Singularity ET hypothesis have raised.

And I'm going to start with what I think is a pretty reasonable
assumption: Europa may actually fit the profile of a very average
life-bearing world in the universe.  Most life in the universe
may originate in oceans on the moons of gas giants.

After all, we're chalking up new gas-giant discoveries almost
like clockwork these days.  And in our own system, we're
still discovering new (albeit small) gas-giant satellites.  If
our own system is any indication, most gas giants have
lots of moons, and lots of ice.  Tidal forces on such
moons generate what look like reasonable conditions for
the perpetuation of life: geothermal heat, liquid water.
A distribution of gas-giant distances from stars will
virtually guarantee that on some of the possible Europoids,
the water will be covered by radiation-shielding ice.
These may or may not be good conditions for the natural
emergence of life, however; I don't have an opinion
on that.

This assumption would seem to argue against SETI even in a
universe of living Europas.  It may still argue for the
emergence of intelligence - perhaps extinction events from
impacts drive the emergence of intelligent life wherever
they don't wipe it out, and an impact on an ice-shrouded
world would drive it just as well.
However, it may also argue against naturally emerging intelligence
that knows there's a universe above the ice.  My point is
that it's much harder to do science underwater, shut
out from the sky.  Newton could produce unified laws of
motion under gravitation because he could watch trajectories
of projectiles and compare them with the movements of planets,
for example.  Then consider the advantages of harnessing fire
in enabling a move toward understanding the world - hardly
accessible to creatures who might never even see fire.  Perhaps
I'm wrong about this - perhaps it might just take them
much longer.

Still, if there's something to the hypothesis of the relative
efficiency of inscribed matter for bulk information transfer,
a universe of living Europoids presents opportunities, not
just problems.  Perhaps a large number of the few Earthlike
planets also have Europoids in the same system.  Somewhere
in the universe, someone else has probably hypothesized
what I'm about to say: seeding Europoids may be the
best chance of communicating with those Earthlike planets.
One might package a very stable ecosystem genome - robust
against even major asteroid and cometary collision events -
and send it out to plant itself in Europoids.  The 'artifact' might
encode the message in the life itself, in 'junk DNA' - great data
capacity, great endurance of message, fairly survivable, and
eventually conspicuous by virtue of life's tendency to proliferate.
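
Just to make the mechanics concrete: at 2 bits per nucleotide,
even a modest 100-megabase stretch of quiet DNA would hold
about 25 megabytes per cell, copied into every organism in
the ocean.  Here's a minimal sketch of such an encoding
(Python; the names are my own inventions, and the 3x
repetition code is a toy stand-in for whatever serious
error correction a real designer would use against mutation):

from collections import Counter

BASES = "ACGT"                        # 2 bits of payload per base
B2I = {b: i for i, b in enumerate(BASES)}

def encode(message: bytes, copies: int = 3) -> str:
    """Pack bytes into DNA, repeating each base `copies` times."""
    dna = []
    for byte in message:
        for shift in (6, 4, 2, 0):    # big-endian 2-bit chunks
            dna.append(BASES[(byte >> shift) & 0b11] * copies)
    return "".join(dna)

def decode(dna: str, copies: int = 3) -> bytes:
    """Majority-vote each run of `copies` bases, repack into bytes."""
    groups = [dna[i:i + copies] for i in range(0, len(dna), copies)]
    crumbs = [B2I[Counter(g).most_common(1)[0][0]] for g in groups]
    return bytes(
        (crumbs[i] << 6) | (crumbs[i + 1] << 4)
        | (crumbs[i + 2] << 2) | crumbs[i + 3]
        for i in range(0, len(crumbs), 4)
    )

seq = encode(b"WE WERE HERE")
garbled = seq[:5] + ("A" if seq[5] != "A" else "C") + seq[6:]
assert decode(garbled) == b"WE WERE HERE"   # survives a point mutation

The redundancy is the 'endurance' part of the claim: any lone
copying error gets outvoted on decode, and real codes could do
far better at lower overhead.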

Our first hint of a message may be in the ecosystem
itself.  One possibility is that it hosts intelligence of a
very high order, but very patient intelligence.  Perhaps
it's born knowing its purpose - encoded instinct.  It
just sits and waits for us to come to it, because it can
reasonably assume that we'll be curious about Europa
in any case.  However, it can't assume that anyone will
come to visit.  It will have to poke its head up out of the
ice from time to time, and see how any Earthlike planet
is coming along.  Another possibility is that the ecosystem
is designed to prevent the natural emergence of intelligence.
Its job is just to provide a continuously-refreshed
message.  I lean toward this second hypothesis.
I think it offers more predictability of message
transmission.  If the aliens want to send some
biological template capable of intelligence, they might
just encode it in quiet DNA sections, and let us breathe
life into it.  A final hypothesis: with intelligence and
knowledge of its mission pre-supplied, the seeded
organisms surmount the obstacles of leaving Europoid
planets and inhabit the system they've been sent to.
I find this unlikely, in our case - why aren't we
in contact with them right now?

I think if we have an ET-seeded Europa with a genomic
message, it's a passive system.  It may be *latently*
active, but that depends on us.

OK, now: how does the Singularity Crisis fit into this?

Penrose's arguments against deterministic AI may be
relevant to this SETI discussion.  Since he published
The Emperor's New Mind, people have looked at
putting quantum principles to work in computation,
so we may be seeing a different future than the
one Penrose attempted to debunk - a future in which
we have AI that computes intelligent behavior based
on quantum-mechanical problem-solving principles,
much as he suggests our own nervous systems
work to produce our intelligence.

However, even if we get quantum-computing intelligence, we
still have some big issues.  Penrose points out that AI as
traditionally envisioned permits teleportation - intelligence
and consciousness portable to new platforms.  Can
teleportation be done if intelligence requires
wave-function collapse, as he proposes?  Can we 'upload'?
Could an alien civilization send us an embodiable (or
simulable) representative, if not their entire population?
Would we accept them as conscious beings after enough
observation and communication?  And: would we be
right to do so, or just fooled into that conclusion?

Intelligence in a simulated creature (whether of our own
devising or from an alien transmission) is something we
might become convinced of through Turing Tests.  But
what about consciousness?  SETI is "Search for Extraterrestrial
Intelligence", and the discovery of such an intelligence may
slake scientific curiosity on one account, but SETI is still
potentially a very different proposition than "Search for
Extraterrestrial Consciousness."  The impossibility of
brisk two-way conversation may render most questions
moot - after all, if we get a message exhibiting intelligence,
it will probably contain the history of arguably-conscious
organisms, even if these organisms are long extinct, or long
since absorbed into some post-Singularity intelligence.
Receiving an individual consciousness in a signal, and
'running the program' (not a new idea by any means),
may not settle the matter in many people's minds.

I believe these questions become theological at some point,
and I choose that word carefully.  I don't say 'philosophical',
I say theological.  The naturalistic world view dies hard because
WE die hard - as Buddhism teaches, all sentient creatures suffer
and all fear death.  But even Buddhism, in more forthrightly
setting out reasonable axioms, doesn't escape the palliative
superstition of an afterlife that we find in most religions: the
survival of consciousness (in some essential form) through reincarnation
is a Buddhist dogma.  So even a civilization that has biologically
solved most of the problems of suffering and death, and has
dispensed with consciousness-perpetuation dogma, may pull
back from the brink of 'uploading'.  It may impose limits on
computation to prevent AIs from taking control.  It might permit
individuals to upload, but deny them certain rights and full
status, given the perpetual question of whether they are actually
conscious beings or just very convincing simulations of
conscious beings.  And it may tolerate some residual suffering
and death from newly emerging diseases and from accidents.
Those few tragedies would be the price to pay for a general
(albeit imperfect) assurance of perpetuating individual
consciousness in the only form in which that civilization
(or the vast majority of its citizens) can solidly believe.

Consciousness may be a mystery everywhere in the universe,
not just on our planet.  The belief that it must be embodied
in the originating substrate, and can't be transferred to
another material substrate, is a hard one to shake.  A
civilization that purposely averts Singularity will encode
and enforce prohibitions that are tantamount to a belief
in an individual soul.  The belief may be phrased in agnostic
terms - i.e., that it's really a matter of NOT knowing, and
staying on the 'safe' side, in some meaning of the word 'safe'
that still threatens occasional individual suffering and death -
but it is still, in all its social effects, a dogmatic belief system.

I told you this would get theological.

I hypothesize that civilizations that go this route - preferring
the residual risks of suffering and death to the unknowns
of subsumption into Singularity - will tend to be more interested
in inscribed matter for communication, and to inscribe ecosystems
for Europoid waterworlds.  They don't know if consciousness
can exist in machines, so they don't try to teleport conscious entities.
They also may believe that it's dangerous to send technologies
that enable Singularity - any such signal might pose the risk of
extinguishing the civilizations they are trying to contact, whether
by hypothetical loss of consciousness in the subsumption into
Singularity, or by destabilization of civilizations that are approaching
the Singularity issues at their own rate, and which may solve the
problems better in their own way.  As well, they may have qualms
about sending life-seeds that can result in independently-
evolved consciousness.  Their societies may be informed
not just by a Thou Shalt Not Create Machine Intelligence
commandment, but also by a Thou Shalt Not Create Suffering Mortal
Life commandment, and by general principles of nonintervention
in circumstances where outcomes can't be predicted
accurately, much less controlled.  (After all, would you
sign an unbreakable contract to send your infant child
up on a Shuttle flight?)  They may feel comfortable
sending only sub-sentient life that can't evolve into
sentient life.  They may choose not to send code for organisms
pre-programmed with instinctive knowledge of their origins and
purpose, because ... well, how could they do that without
violating their ethics?  Could they choose to evolve themselves
somehow to become creatures who would find transmission
to Europoids (possibly to dead-end systems) acceptable?
Might that not take many generations, posing some population
issues in a civilization that has almost defeated death?  In any
case, could they predict long-term survival at the other end
of the trip?

Well, I may be way out on the teetering edge of cantilevered
assumptions here.  But it's an intriguing possibility, I think:
Icepick as a SETI mission, among other (admittedly more
plausible) exploration goals.  If we got there, got under
the ice, and found even just microbial life, that's a start.
If some (or all) of it is life that appears to have started
despite what look like some forbidding conditions, that's
another step.  If it looks like it hasn't evolved much given
the amount of time it seems to have been around, that's yet
another step.  And if it has stable, quiet 'junk' DNA, maybe
it's got plenty to tell us.  And maybe that DNA also
has a map telling us how to find the thing that
brought it to Europa, ET's Icepick, its transportation
job done millions and millions of years before we evolved,
its antimatter fuel long since spent. It might contain
inscribed matter that still carries even more message
bits than the DNA.  And it might actually be a message
we could partly understand - a message that post-Singularity
ET intelligences might hypothesize, but still might
never bother to seek out and read for themselves.
However, there might also be a part we couldn't understand,
something encrypted.  It might come with instructions
for how to set up the conditions for propagating the
originating civilization, including the encrypted genome
for alien individuals, but only under the purview of
some subsentient device that determines whether the
conditions have been met before the alien genome
can be decrypted and used to make newborn aliens.
That's an interesting design problem, but a virtually
immortal species might have plenty of time to work
out the bugs.  Whether or not we turn on the machine
might be left to us to decide.
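
The gatekeeping step, at least, sketches easily: store no key
at all, and instead derive it from the observed conditions
themselves, quantized into coarse bands, so the genome simply
cannot be decrypted until the environment matches what the
senders specified.  A toy illustration, again in Python - the
condition names, band widths, and XOR keystream are all my
own stand-ins, not a claim about real cryptographic practice:

import hashlib

def condition_key(readings: dict) -> bytes:
    """Hash quantized sensor readings into a 32-byte key.

    `readings` maps condition names to (value, band_width).
    Values falling in the same band yield the same key, so small
    measurement noise doesn't matter, but wrong conditions yield
    a useless key.
    """
    canon = "|".join(
        f"{name}:{int(value // band)}"
        for name, (value, band) in sorted(readings.items())
    )
    return hashlib.sha256(canon.encode()).digest()

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """Toy cipher: stretch the key with SHA-256 in counter mode."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# The senders encrypt against the conditions they require ...
required = {"surface_temp_K": (288.0, 30.0), "free_O2_pct": (21.0, 10.0)}
sealed = keystream_xor(b"<alien genome here>", condition_key(required))

# ... and the device can decrypt only from *measured* conditions.
measured = {"surface_temp_K": (291.3, 30.0), "free_O2_pct": (20.6, 10.0)}
assert keystream_xor(sealed, condition_key(measured)) == b"<alien genome here>"

The appeal of the design is that there's no stored secret and no
decision logic worth subverting: the environment itself is the
password.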


-michael turner
[EMAIL PROTECTED]

----- Original Message ----- 
From: "Reeve, Jack W." <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Saturday, September 04, 2004 3:50 AM
Subject: RE: Rose's Web site


> 
> In Roger Penrose's book, "The Emperor's New Mind", he presents the
> argument that true AI, rooted in digital computer systems, is not
> possible.  I've borrowed an explanation of it from Kelley Ross:  The
> Emperor's "new clothes," of course, were no clothes. The Emperor's "New
> Mind," we then suspect, is nothing of the sort as well. That computers
> as presently constructed cannot possibly duplicate the workings of the
> brain is argued by Penrose in these terms: all digital computers
> now operate according to algorithms, rules which the computer follows
> step by step. However, there are plenty of things in mathematics that
> cannot be calculated algorithmically. We can discover them and know them
> to be true, but clearly we are using some devices of calculation
> ("insight") that are not algorithmic and that are so far not well
> understood -- certainly not well enough understood to have computers do
> them instead. This simple argument is devastating. 
> 
> A few years back, I found Penrose's conjecture comforting.  My comfort
> has since changed to a shaky hope, the first step, I suspect, on the
> short road to dread.  The
> computer-attaining-human-brain-interconnectivity singularity appears to
> be perhaps 20 years away, so barring an accident or random illness, I'll
> see the other side of it.  But I fear I may have gotten a glimpse of it,
> and it looked to me like the intersection of the c-a-h-b-I singularity
> and the look-I-can-build-myself machine singularity represents the apex
> of an immense pyramid ultimately constrained only within a light cone.
> And that impossibly minuscule speck holding the apex of this vast
> pyramid is all of humanity.  I don't see us as much more than an
> artifact.
> 
> So, I shakily hope Roger's right.
> 
> In the meantime, how 'bout that Europa!  Smoothest ball of old comet
> entrails in the solar system, or what?
> 
> Jack W. Reeve
> 
> -----Original Message-----
> From: Michael Turner [mailto:[EMAIL PROTECTED] 
> Sent: Friday 03 September 2004 07:12 
> To: [EMAIL PROTECTED]
> Subject: Re: Rose's Web site
> 
> 
> 
> I should make some of the underpinnings of my reasoning
> a little clearer.
> 
> If you look at the Drake Equation, it really pops out at you: the
> lifespan of civilizations hugely dominates the probability of contact.
> The first one we hear from is likely to have been around for a very long
> time.
> 
> If you look at electronics and at Moore's Law, it also pops
> out at you: if Moore's Law continues, the period between
> when a civilization becomes capable of receiving and processing possible
> SETI signals and when it has computer power that dwarfs human
> information processing power (the Singularity) is going to be a mere
> eyeblink in time for civilizations that endured long enough to have a
> high probability of contact.  What's a mere century or two out of tens
> of thousands of years?
> 
> If you assume both that AI is possible and that it will exceed human
> intelligence almost inevitably if information technology improves along
> a Moore's Law trend line, it makes sense that signals would be geared
> toward communications between superhuman intelligences, and not geared
> toward anything less.  It's just not economical to address .01% or less
> of the potential audience.
> 
> There are a couple kinks in this theory, I admit, and I just wrote
> something very long describing them.  Maybe I'll dump the whole thing
> later, but here's a summary.
> 
> One exception: the PROSPECT of the emergence of super
> machine intelligence may in fact be the number one cause
> of infant mortality for advanced technological civilizations.
> I won't go into why I think so, I'll just say it: I think it has the
> potential to be profoundly destabilizing.
> 
> One function of one part of the signal might be to steer pre-Singularity
> cultures through a step-by-step plan to get them past the cultural
> crises that the Singularity prospect, as it approaches, usually
> threatens, but still get them through to Singularity.  If communicating
> civilizations usually emerge out of biology into Singularity, and if the
> point of the signal is to propagate advanced intelligence, then reducing
> civilization infant mortality rates may be the dominant motive in
> alien communication efforts.  Beyond that, they may have their own
> reasons to communicate with post-Singularity intelligences, but we're
> not too likely to understand what they are.  They'll post their
> crisis-management manual early and often on the galactic network, hoping
> it reaches every pre-Singularity civilization as soon as possible.
> 
> One more exception: some post-Singularity intelligences
> may not give a rat's ass whether biological life survives contact with
> their signal, so long as that life reliably produces other
> post-Singularity AIs before any resulting cultural-crisis holocaust.  They
> may just want to grow, as a hedge against supernova wipeouts, and may
> communicate just enough to get themselves bootstrapped up and out of the
> way of our species' holocaust.
> 
> Yet one more exception: there may be civilizations
> that stabilized without Singularity, for whatever reason,
> and they may still be on a level of intelligence where
> they'd find us interesting.  At the moment I don't see
> this one as very likely.
> 
> I'm sure there are a few more, but they probably
> relate to scenarios in which contact is technologically possible, but
> Singularity, for some reason, is not.
> 
> -michael turner
> [EMAIL PROTECTED]
> 
> 
> 
> ----- Original Message ----- 
> From: "Charlls Quarra" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Friday, September 03, 2004 12:28 PM
> Subject: RE: Rose's Web site
> 
> 
> > 
> >   
> > > "Any sufficiently advanced technology is
> > > indistinguishable from white
> > > noise."
> > 
> > 
> >  That argument presumes that the civilization in
> > question doesn't want to be detected.  There are no a
> > priori reasons to assume so.  It's true that there are
> > no reasons to assume they want to be detected, but
> > that happens to be the main assumption in the ongoing
> > search in this respect.
> > 
> >  If we take mankind as an example, then one would
> > have to argue that the intention to communicate
> > with "less advanced" societies is there.  Think of how
> > much effort is put into understanding dolphin and whale
> > motivations and psychology, though they don't satisfy the
> > definition of "technology-driven" societies.  Of course
> > dolphins cannot catch our radio signals.  But they understand
> > more than one would assume once one *tries* to get the
> > message to them.
> 

==
You are subscribed to the Europa Icepick mailing list:   [EMAIL PROTECTED]
Project information and list (un)subscribe info: http://klx.com/europa/
