At 12:51 29/01/05 -0800, Hal Finney wrote:
> On 28 Jan 2005 Hal Finney wrote:
> >I suggest that the answer is that accidental instantiations only
> >contribute an infinitesimal amount, compared to the contributions of
> >universes like ours.
Stathis Papaioannou replied:
> I don't understand this conclusion. A lengthy piece of code (whether it
> represents a moment of consciousness or anything else) is certainly less
> likely to be accidentally implemented on some random computer than on the
> computer running the original software. But surely the opposite is the case
> if you allow that all possible computer programs "run" simply by virtue of
> their existence as mathematical objects. For every program running on a
> biological or electronic computer, there must be infinitely many exact
> analogues and every minor and major variation thereof running out there in
I'm afraid I don't understand your argument here. I am using the Schmidhuber concept that the measure of a program is related to its size and/or information complexity: that shorter (and simpler) programs have greater measure than longer ones. Do you agree with that, or are you challenging that view?
My point was then that we can imagine a short program that can "naturally" evolve consciousness, whereas to create consciousness "artificially" or arbitrarily, without a course of natural evolution, requires a huge number of bits to specify the conscious entity in its entirety.
You mention infinity; are you saying that there is no meaningful difference between the measure of programs, because each one has an infinite number of analogs? Could you explain that concept in more detail?
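The length-based measure in question can be made concrete with a small sketch: under a Schmidhuber-style prior, a binary program of length l gets weight 2^(-l), so a short program outweighs a long one exponentially. The bit-strings below are invented placeholders, not real programs.

```python
# Toy illustration of a length-based measure: each binary program of
# length l gets prior weight 2^(-l), so shorter programs dominate.
# The "programs" here are hypothetical stand-ins, not real machine code.

def weight(program_bits: str) -> float:
    """Prior weight of a program under the 2^(-length) measure."""
    return 2.0 ** (-len(program_bits))

programs = {
    "evolve": "0110",      # short program: observers arise by evolution
    "specify": "01" * 40,  # long program: an observer specified bit by bit
}

w = {name: weight(bits) for name, bits in programs.items()}
total = sum(w.values())
measure = {name: wi / total for name, wi in w.items()}

# The 4-bit program outweighs the 80-bit one by a factor of 2^76.
print(w["evolve"] / w["specify"])
```

On this view the "naturally evolved" consciousness inherits the short program's measure, while the arbitrarily specified one pays for every bit of its description.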
I am not sure that I understand what you do with that measure on programs.
I prefer to look at infinite coin generations (that is, infinitely reiterated
self-duplications) and to put the measure on infinite sets of alternatives.
Those infinite sets of relative alternatives *are* themselves generated by
simple programs (like the UD). Now we cannot know in which computational
history we belong; more exactly, we "belong" to an infinity of computational
histories (indistinguishable up to now). (These could all be repetitions of
your simple program.)
But to make a correct prediction we should average over all the
computational histories going through our current states.
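The UD idea itself admits a minimal sketch: start program n at tick n and advance every program started so far by one step each tick, so that every program in the enumeration receives unboundedly many steps without any single one blocking the rest. The generator "programs" below are hypothetical stand-ins.

```python
# Minimal sketch of a dovetailer over an (assumed) enumeration of
# programs, each represented here as an endless Python generator.

from itertools import count

def program(pid):
    """Hypothetical stand-in 'program' pid, yielding its successive states."""
    for step in count():
        yield (pid, step)

def dovetail(max_ticks):
    """On tick n, start program n and advance every started program
    by one step, interleaving all of them fairly."""
    running = []
    trace = []
    for n in range(max_ticks):
        running.append(program(n))   # bring a new program into the pool
        for proc in running:
            trace.append(next(proc)) # one step of each running program
    return trace

trace = dovetail(4)
steps0 = sum(1 for pid, _ in trace if pid == 0)
steps3 = sum(1 for pid, _ in trace if pid == 3)
print(steps0, steps3)  # the earliest program has been stepped most often
```

Run long enough, the trace passes through every state of every program, which is exactly why a measure must then be sought on the histories it generates rather than on the dovetailer itself.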
Your measure could explain why simple and short subroutines persist
everywhere, but what we must do is extract the "actual" measure, the one
apparently given by QM, from an internal measure on all relatively
consistent continuations of our (unknown!) probable computational state.
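Extracting such a relative measure amounts to conditionalizing: restrict the absolute measure to the histories consistent with the current state and renormalize. A toy sketch, with invented histories and weights:

```python
# Toy sketch: a relative (conditional) measure over the continuations of a
# current state, obtained by renormalizing an absolute measure on histories.
# The histories and weights below are invented for illustration only.

histories = {
    ("a", "b", "c"): 0.4,
    ("a", "b", "d"): 0.1,
    ("a", "x", "y"): 0.5,
}

def relative_measure(prefix):
    """Measure on histories, conditioned on those extending `prefix`."""
    consistent = {h: w for h, w in histories.items()
                  if h[:len(prefix)] == prefix}
    total = sum(consistent.values())
    return {h: w / total for h, w in consistent.items()}

# From the state ("a", "b"), only two continuations remain,
# with relative weights 0.8 and 0.2.
print(relative_measure(("a", "b")))
```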
This is independent of the fact that some short programs could play the
role of initiators of something persisting. Perhaps a quantum dovetailer?
But to proceed by taking comp seriously, this too should be justified from
within.
Searching for a measure on the computational histories instead of on the
programs can not only be justified by thought experiments, but can also be
given a neat mathematical definition. Also, a "modern" way of talking about
the Many Worlds is in terms of relative consistent histories.
But the histories emerge from within. This too must be taken into account.
It can change the logic. (And actually changes it according to the lobian