OT: the "LifeBox"

Jones Beene wrote:

> ...
> 
> Basically and IMHO, at some point in the evolution of mankind -
> around the year 2012 <g> if earth does not self-immolate, we will
> have affordable (terabyte, terahertz, son-of-Xbox) computers
> advanced enough to capture an individual's total thought-process,
> personality, educational slant, day-by-day biography,
> like-and-dislike, quirks, and insight ... a machine which will
> grow and mature with every individual from childhood to (going
> offline physically) IOW - a soul but different from anything
> imaginable in ancient scripture, yet surprisingly comforting -
> after you live with the concept for several days.
> 
> I would love to have access to the "lifebox" of not only my father,
> departed now for 40 years or grandparents and so on - but also to
> non-relatives. Imagine going online and spending time with long
> departed great thinkers or personal heroes.
> 
> ...
> 
> Shalom,
> 
> 
> Jones


Hello, Jones.

Not to burst your bubble or anything but before we can even think of
capturing a human's thought processes, etc. and no matter what other
hardware we have available, we are going to need computers that are
several orders of magnitude smarter than we are.  Right now we don't have
a clue how to build computers that are close to being as smart as a
two-year-old.

Artificial Intelligence researchers have been working on this since the
late 50s.  Their predictions of success are usually set about 15-20 years
in the future, paralleling hot fusion researchers' repeated predictions
of when the first practical fusion power reactors will come on-line.

But let's say instead that we do -- somehow -- figure out how to build
such superintelligent computers ("superintelligences").  Within a short time --
months or weeks -- they'll be able to do all the neat stuff Rucker and you
propose and much more as well.  They're the last things humans will ever
need to invent.

Rucker probably refers to this point in time as the "Technological
Singularity" or just "The Singularity".  (If he doesn't, shame on him.  I
note the term isn't in his table of contents.)  People have debated what
the Singularity means and its philosophical implications for the future.

And many people fear that at the cusp of the Singularity our
superintelligent computers will somehow run amok and destroy humanity. 
Reams of text have been generated discussing this, e.g., "Why would they
want to?" and suggestions about implementing Asimov's "Three Laws of
Robotics".  All this seems to me to completely ignore the real danger:

A superintelligence is (among other things) a weapon-generating machine. 
The first group or groups to get access to these machines will act to
dominate humanity, if only to prevent others from doing so.  One such
group will succeed.

It is likely that, during this conflict -- or to prevent challenges to
the victor -- most of the people on earth will be murdered.  Even if not,
the group that succeeds will either find a way to completely police the
most minute human action or use superintelligence-generated methods such
as nanotech to take direct control of all human brains.

Maybe we'll all be convinced that Christ Jesus has returned to reign, or
perhaps the 12th Imam instead.  (I suppose both are possible
simultaneously.)  In any event, after superintelligence has yielded total
hegemony to one group, "crimethink" (as defined by Orwell) will be
impossible and the "singularity" (whatever it might mean) will also be
impossible.

It is dangerous to put infinite power into the hands of a fallible species.

And maybe that's true for "new energy" as well.

Fortunately, as I said above, we have no idea how to build a
superintelligent computer.

Summing up, no, Jones, it is unlikely that a LifeBox will ever be built or
that any human being will ever be able to visit the essence of a deceased
loved one inside the bowels of a computer.  Sorry.

-Walter
