> At 01:33 PM 12/20/2001, Lord Kneem wrote:

> >Natural selection NEVER enters into it in any way.  How could it?
> >There is never a point where randomness comes into play.  There
> >is no competition.  There is no sexual selection.  There is only
> >logic, deductive reasoning, research, and _Artificial_
> >intelligence, which is the point.
> >
> >Now I might say that this could be a limited case in _Artificial_
> >selection, but again, what is it selecting against?
> >
> >Furthermore once the program has gone through _one_ cycle it should
> >already work bug free.  From there the program should be able
> >to write _ANY_ program (with direction given as to what the program
> >requirements are (_very_ specifically given)) bug-free.  I
> >think the next job after it has worked on itself would be the
> >operating system.  Then it could write a new program that designs
> >hardware.  And there it goes...

> From: Richard S. Crawford <[EMAIL PROTECTED]>

> Don't knock randomness and occasional errors in code replication.  They
> can be good things.  Our species didn't get where we are today without
> randomness and error propagation.

Perhaps.

> Let's assume for a moment that no data storage system is perfect; that
> is to say, let's assume for a moment that occasionally disk drives
> crash and corrupt data, and that memory goes faulty, and that even
> optical media is capable of damage by various means.  Heck, even
> quantum fluctuations can affect data storage.  This is a bit of a
> stretch, I know, but bear with me for a moment.

Perhaps.

> So, assuming your software lives on a piece of hardware which is
> capable of minor corruption and that no system is perfect... keep it
> running long enough, and errors will creep in.  It just happens.  DNA
> works like that; DNA is a very, very good information storage system,
> but errors do creep in (chemical errors, even molecular errors
> introduced by cosmic rays, that sort of thing).  Almost all of the
> time, these errors will be harmful to the program (or the living
> organism), but once in a very great while, these errors -- these
> mutations -- will be beneficial, to the point where an organism with
> this particular mutation will be more successful at passing its genes
> on to the next generation than organisms that don't express that
> particular mutant phenotype.  Thus it is that mutations survive and
> pass on.

Perhaps.  But only in very limited cases, where the 'error' or 'mutation'
occurs in reproductive cells, AND those cells are involved in
reproduction.  In sexual reproduction the odds are reduced even further,
because there is only a 50% chance of that particular gene being passed
on.  Mathematically, a mutation cannot spread to every organism in the
species unless the breeding population is very small; i.e. if
(population) > (some x), it will never become a feature of every
organism in the species.  (This is why only a very few people have six
fingers.)  Without isolation, the feature will die off or remain marginal
in large populations.  But all of this comes after the greatest of the
selectors, _sexual selection_.  Some 'errors' will cause no changes in
the appearance of an organism; these errors are the most likely to
survive to reproduce.  Errors that change the appearance of an organism,
however, run afoul of sexual selection.  Most of these 'errors' will have
negative effects on the appearance of an organism (especially those that
most increase survivability / 'fitness').  Most of these changes, whether
beneficial or not, will be selected against by sexual selection, and
again they die out.
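The population-size point can be illustrated with a toy neutral-drift
simulation (my own illustration, a Wright-Fisher-style sketch with
made-up parameters): a single new neutral mutation fixes in a population
only rarely, roughly with probability 1/(2N), so the larger the
population, the more surely it stays marginal or dies out.

```python
import random

random.seed(0)

def fixation_fraction(pop_size, trials=400, max_gens=5000):
    """Start one mutant copy among 2*pop_size gene copies (diploid)
    and let neutral drift run; return the fraction of trials in which
    the mutation spreads to every copy (fixes)."""
    fixed = 0
    total = 2 * pop_size
    for _ in range(trials):
        count = 1                       # one new mutant allele
        for _ in range(max_gens):
            # Each next-generation copy is drawn from the current pool.
            count = sum(random.random() < count / total
                        for _ in range(total))
            if count == 0 or count == total:
                break                   # lost or fixed
        fixed += (count == total)
    return fixed / trials

small = fixation_fraction(10)    # expect roughly 1/(2*10)  = 0.05
large = fixation_fraction(100)   # expect roughly 1/(2*100) = 0.005
```

Fixation gets rarer as the population grows, which is the "(population) >
(some x)" intuition in simulation form.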

> In a similar vein, a random error in an OS's code might conceivably
> make that code ever so slightly better at, say, retrieving data
> through a USB port.  Very unlikely -- astronomically so -- but this is
> how mutations occur and get passed on.

Perhaps.  But almost all changes that bring an increase in one area also
bring a decrease in another, providing no real advantage.

It is entirely possible to create a system that would know when errors of
these kinds occur and fix them.  DNA has 'copy' and 'repair'.  That's it.
 This system would employ significantly more advanced techniques: 'copy',
'repair', and 'compare' (backups, and distant copies of the system very
far away, with many copies (millions), such that it is impossible for
every single one of them to be corrupt, or corrupt in the same way),
plus parity, MD5 hashes, etc.  Since the system is intelligent enough to
create error-free versions of itself (its original purpose), it can do
so again, and so on.  The system can remain error free, even in the
event of such a failure.  It would also replace its hardware every so
often.
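A minimal sketch of the 'compare' idea (my own illustration, not a
design spec): hash each replica, take the majority fingerprint, and
rewrite any copy that disagrees.  It works as long as fewer than half of
the replicas are corrupt in the same way.

```python
import hashlib
from collections import Counter

def fingerprint(data: bytes) -> str:
    """MD5 hash used as a cheap integrity fingerprint, as in the text."""
    return hashlib.md5(data).hexdigest()

def repair_by_majority(replicas):
    """Compare independent copies; replace any copy whose hash
    disagrees with the majority fingerprint."""
    counts = Counter(fingerprint(r) for r in replicas)
    majority_hash, _ = counts.most_common(1)[0]
    good = next(r for r in replicas if fingerprint(r) == majority_hash)
    return [good if fingerprint(r) != majority_hash else r
            for r in replicas]

copies = [b"system image v1"] * 5
copies[2] = b"system image v1 + bit rot"   # one corrupted replica
repaired = repair_by_majority(copies)
```

With many widely separated replicas, the chance that a majority is
corrupted identically becomes vanishingly small, which is the point of
the 'compare' step.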

> Seriously, nothing is free of randomness, just by the very nature of
> the universe.  That's just the law of the jungle.

I hope you mean the second law of thermodynamics.  If you mean something
else...nothing can help you.

> In your ideal self-replicating and self-growing OS and software, it
> might even make sense for the system to introduce random code changes
> into a subset of its own code, which it might sequestrate into a small
> sandbox.  There it could observe the code for possibly beneficial
> side-effects of the "mutation" and reintroduce favorable mutations
> back into its mainline.

Perhaps.  This is an intelligent entity, and would devise research
methods, like we do.  But it would only want to add changes that it
fully understands, not something that works without it understanding
how it works.

There was an article in Discover magazine a few years ago (when I still
subscribed to it), where they took a programmable chip (most likely a
field-programmable gate array (FPGA)), gave it 100 nodes to work with,
filled it with random data, and _artificially selected_ it to create a
program that could recognize an exact tone (like a telephone ringtone?).
They bred it against itself, mutated it, and chose (artificially
selected) the best matches and bred those against each other, discarding
the rest (they used multiple FPGAs).  They ran it for a long time (> 2
weeks) before they got anything close to what they wanted.  In the end
they created (through artificial selection) a 'program' that could
distinguish between 2 tones.  The really interesting bits were: it used
significantly less than the full 100 nodes, and it used analog 'logic'
(if you could call it that), that is, it used states between 0 and 1,
unlike a digital program, which only uses 0 and 1.  But the most
interesting bits were: the researchers were unable to figure out how the
'program' worked, and when they transferred the 'program' to seemingly
identical FPGAs (as identical as they could possibly make them), the
program did not work at all.  The researchers concluded that the program
was using the physics of the FPGA chip in ways they did not understand,
exploiting physical properties that were not known / obvious to them.
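The breed / mutate / select loop described above can be sketched in a
few lines (a toy genetic algorithm of my own, with a made-up bitstring
genome and fitness function, not the actual FPGA experiment):

```python
import random

random.seed(1)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0] * 4   # stand-in for "recognizes the tone"

def fitness(genome):
    # Toy fitness: how many positions match the target behaviour.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def breed(a, b):
    # Uniform crossover: each bit comes from either parent.
    return [random.choice(pair) for pair in zip(a, b)]

# Start from random data, as in the article.
pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    best = pop[:10]                       # keep the best matches...
    pop = best + [mutate(breed(random.choice(best), random.choice(best)))
                  for _ in range(20)]     # ...and breed them against each other

best_score = fitness(pop[0])
```

The loop steadily converges on the target, but, as with the FPGA, the
winning genome is whatever happened to score well, not something anyone
designed or necessarily understands.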

> One of the benefits of such a system, of course, is that your code
> could simulate thousands of generations of replication within seconds
> in its isolated sandboxes (Petri dishes?  Simulation spaces?
> Whatever...), whereas in a biological system, such experiments can
> take thousands upon thousands of years to conduct.

Perhaps.  Like I said, it is an artificial _Intelligence_, capable of
doing research.  However, once again the system would not use something
that it could not understand or replicate.  

> Heh.  This is all theoretical, of course.  I haven't followed most of
> the work being done in AI since I was a philosophy undergraduate.  I'm
> not saying your model is bad, I'm just saying that you shouldn't
> overlook the possible benefits of randomness or "artificial selection"
> within the confines of your code.

I'm not saying it's a superior way to make an AI; it's a system to
create a system that does not crash.

It is very possible to create a system that 'evolves' itself so that the
evolved 'system' is vastly superior in every way, but that system would
be prone to errors and crashing, and even though it would be capable of
adapting to change, it would take an extremely long time / number of
cycles to create anything that would compare to a human mind, let alone
surpass it.

But the biggest objection I have is that in this kind of a system (as
opposed to mine), it would be much harder (impossible) to enforce the
'laws of robotics', such that the system would not decide to rid itself
of its creators (Terminator) / enslave mankind (Matrix) / etc.  In _my_
system, enforcing the 'laws of robotics' would take precedence over
every other single factor, with failsafes for the failsafes for the
failsafes, etc.

> For some really interesting ideas, check out an older science fiction
> novel called _Code of the Lifemaker_ by James P. Hogan.

I will obtain a copy if and when you obtain a copy of 'The Sky Lords'
(followed by 'War of the Sky Lords' and 'Fall of the Sky Lords') by John
Brosnan.

> You might also check out _Artificial Life_ by Steven Levy (though some
> of the ideas in that book are necessarily out of date), as well as a
> couple of good books on chaos and complexity theory, not to mention
> books by Stephen Jay Gould and Richard Dawkins.

I know a great deal about chaos theory and fractals.  I have a
particular interest in fractals, and in the Mandelbrot set in
particular.  I have written several programs to draw fractals, including
the Mandelbrot set.
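For what it's worth, the core of such a program is tiny.  A minimal
escape-time sketch (my own illustration, ASCII output, arbitrary viewport
and iteration count):

```python
def mandelbrot_rows(width=60, height=20, max_iter=30):
    """Classic escape-time test: c is in the set if z -> z*z + c
    stays bounded; '#' marks points that never escaped."""
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            # Map pixel (i, j) to the complex plane: re in [-2, 1), im in [-1.2, 1.2).
            c = complex(-2.0 + 3.0 * i / width, -1.2 + 2.4 * j / height)
            z = 0j
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2.0:
                    row += "."    # escaped: outside the set
                    break
            else:
                row += "#"        # never escaped: inside (up to max_iter)
        rows.append(row)
    return rows

for line in mandelbrot_rows():
    print(line)
```

Raising `max_iter` sharpens the boundary at the cost of more arithmetic
per pixel.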

One more thing: it is customary among civilized posters to reply
_BENEATH_ what you are replying to.  There are several reasons for this:
conversations flow downward, and comments on particular passages are made
clear.  Most civilized posters recognize this, and can converse properly.
 When you have replies to replies to replies to replies, ad infinitum,
and someone is top-posting, the conversation is difficult to follow and
disjointed.  That makes it harder for _everyone_ to converse properly.
