> At 05:42 PM 12/20/2001, zim wrote:

> >As to the specifics of your specs for the system: My only question (and
> >one which I am inadequate to answer) has to do with Goedel's theorem.
> >Would not your system run into questions for which there are no logical
> >answers and would that not throw it into one of those Star Trek "Does not
> >compute" loops with eventual self-destruction associated with tacky smoke
> >special effects? That is, there are some problems for which there are no
> >mathematical or logical solutions. So the machine could not achieve the
> >sort of perfection that you describe.
> 

> From: Richard S. Crawford <[EMAIL PROTECTED]>
> 
> More precisely, the self-evolving system that The Fool describes will
> encounter statements which are true within its internal logic but which
> cannot be proven within that logic.
> 
> In mathematics, there are statements which are true but which cannot be
> proven mathematically.
> 
> In any system of symbolic logic (S), there are statements which are true in
> S but which cannot be demonstrated within S.  However, those statements
> *can* be proven within meta-logic, S^2.  But then we find statements within
> S^2 which cannot be proven within S^2, but which can theoretically be
> proven true by using S^3.  Unfortunately, S^3 has proven too complex to be
> easily understood by mere graduate students.
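
(For the record, the usual statement behind this -- a sketch only, assuming S
is consistent, recursively axiomatizable, and strong enough to encode
arithmetic -- is that there is a sentence G_S with

    S \nvdash G_S \quad \text{and} \quad S \nvdash \neg G_S,
    \qquad \text{while} \qquad S + \mathrm{Con}(S) \vdash G_S,

and the strengthened system S + Con(S) then has a Goedel sentence of its own,
and so on up the hierarchy.)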

But the computer system we are describing is smarter than you are.
(According to Vinge and Moore's Law, the bitrate of the machine will surpass
the bitrate of the human mind within the next thirty-some years, /but/ that
doesn't account for distributed computing efforts [grid computing] and
super-computer arrays of thousands of processors.  With that kept in mind,
the hardware will surpass us before then; a rough version of that arithmetic
is sketched below.)  Not only that, it has access to all the work that humans
have ever come up with in mathematics; i.e. whatever we _know_ and can figure
out, it also has access to and can _know_.  But all of this is academic,
because I doubt there would be any need to solve such problems in the four
fundamental software aspects of the system.  The issue is much more likely to
arise in the hardware aspect of the system, or if the system is trying to
figure out (thinking, theorizing, etc., like what Hawking does) how to do
weird stuff, like time travel and faster-than-light travel.  Those things
(which have no real relevance to the system) are fripperies, nice to have,
but not a critical component of the system.
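
Just to make that crossover claim concrete, here is a back-of-the-envelope
sketch.  Every constant in it is my own assumption, not a figure from Vinge:
roughly 10^13 ops/sec for a present-day machine, a very rough 10^16 ops/sec
estimate for the human brain, and an 18-month doubling period.

    # Back-of-the-envelope Moore's-Law crossover; all constants are assumptions.
    machine_ops = 1e13       # assumed present-day machine, operations/sec
    brain_ops = 1e16         # assumed rough human-brain equivalent, ops/sec
    doubling_years = 1.5     # assumed doubling period, in years

    years = 0.0
    while machine_ops < brain_ops:
        machine_ops *= 2     # capacity doubles every period
        years += doubling_years

    print(years)             # 15.0 under these assumptions

Gang thousands of those machines together (the grid / array point above) and
the crossover comes even sooner.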

It is very probable that the system would port itself over to a base-three
system, because it is a better representation.  Base three is the integer
base closest to base e (which makes it the most economical radix), and it has
other properties that make a trinary system better.  Instead of true / false,
you have true / false / maybe (1, 0, -1).
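
For what it's worth, here is a minimal sketch of what those three-valued
digits look like in practice.  Balanced ternary is my own illustration here
(the function name is hypothetical), not part of any proposed system:

    def to_balanced_ternary(n):
        """Return the balanced-ternary digits (1, 0, -1) of an integer,
        least-significant digit first."""
        if n == 0:
            return [0]
        digits = []
        while n != 0:
            r = n % 3          # ordinary remainder: 0, 1, or 2
            n //= 3
            if r == 2:         # rewrite a 2 as -1 and carry into the next place
                r = -1
                n += 1
            digits.append(r)
        return digits

    print(to_balanced_ternary(7))   # [1, -1, 1], i.e. 1 - 3 + 9 = 7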

> For real fun, imagine writing a paper about the application of Godel's 
> Incompleteness Theorem to various systems of multi-valued inductive logics
> (where something can be true, false, or anywhere in between).  I wanted to
> write a paper about quantum logic, but my advisor told me that the 
> University's health plan wouldn't cover the cost of the insane asylum.

Values between 0.0 and 1.0.

But what about things that are both true and false at the same time?
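
A minimal sketch of that idea, using the usual fuzzy-logic min/max operators
(the function names and the 0.7 example are mine, nothing from the thread):

    # Fuzzy truth values live anywhere in [0.0, 1.0], not just {True, False}.
    def f_not(a):
        return 1.0 - a

    def f_and(a, b):
        return min(a, b)

    def f_or(a, b):
        return max(a, b)

    warm = 0.7                        # "it is warm" is mostly true
    print(f_and(warm, f_not(warm)))   # 0.3 -- a statement and its negation both
                                      # partly true at once, which two-valued
                                      # logic forbids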

> Ahem.  Back to topic.  No OS can be perfect, not if it wants to stay within
> the confines of logic as defined by the structure of the Universe.  If 
> logic is only one of a suggested number of options, then I suppose The 
> Fool's dream OS is absolutely possible.

Did I indicate that I would want such a system?  Nothing could be further
from the truth.  If it were in my power I would make it so that no
sufficiently advanced AI is ever created, for any reason.  But nothing I do
(unless I can manage to wipe out mankind) would ever be enough to stop
someone somewhere from creating such a monstrosity.  It will happen.

> My fear is this: won't the dream OS become so powerful that it will 
> eventually try to take over the world and destroy all of humanity, and find
> itself at war with the Omega Point at the end of time?
 
If the singularity did happen, then the resulting computer would be powerful
enough to try to prevent the omega point from occurring.  But even if it
failed to stop the omega point (what of other species who create their own
singularities / mass super AIs?), the omega point is supposed to be more
powerful than that (the awakening of god).  Now, since the omega point is
supposed to emulate everything that has happened in the universe AND
everything that could have happened, wouldn't there then be an infinite
number of singularity AIs fighting the omega point?  And wouldn't there be
mini omega points in most emulated universes?  Or would the singularity AIs
from a million different civilizations battle it out, or combine forces to
fight the omega point?  Or would they join together and become the instigator
of the omega point?
