Russell Standish wrote:
>> Hi Juergen,
>> I would like to nuance the last post I sent to you.
>> First, I see in other posts of yours that your
>> computable real numbers are *limit* computable. It still
>> seems to me possible to diagonalize against those,
>> although it is probably less trivial.
>> But I think it isn't really relevant to our present discussion,
>> because the continuum I am talking about appears in the first
>> person discourse of the machines. So it is better to keep
>> discussing the main point: the relevance of the first person
>> point of view, with comp, when we are searching for a TOE.
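As an aside, the diagonalization mentioned above is easy to sketch. This is a toy illustration of the classical diagonal construction against an ordinary (total) digit enumeration; as said, the limit-computable case is subtler. The enumeration `toy` below is purely hypothetical, just a stand-in for a listing of reals.

```python
def diagonal(enumeration, n):
    """Cantor's diagonal: enumeration(i, j) is the j-th binary digit
    of the i-th real in the list.  The n-th digit of the diagonal real
    flips the n-th digit of the n-th real, so the diagonal real
    differs from every real in the enumeration."""
    return 1 - enumeration(n, n)

# Toy enumeration: the i-th "real" has j-th digit (i + j) % 2.
toy = lambda i, j: (i + j) % 2
digits = [diagonal(toy, n) for n in range(8)]
print(digits)  # differs from row n at position n, for every n
```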
>It seems to me that the cardinality of UD*, or whether UD* is a
>continuum or not, is rather irrelevant. My understanding is that the UD
>argument implies a first person indeterminacy, i.e. every first person
>experience will have access to a random oracle.
All right. I guess you agree that such a random oracle also appears
with iterative self-duplication, which itself appears in UD*.
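That iterated self-duplication can be sketched in a few lines of Python (a toy illustration of my own, not a proof): the third-person description below is fully deterministic, yet almost every individual first-person history, a string over the two branch labels W and M, is statistically typical, i.e. looks like the output of a random oracle.

```python
from itertools import product

def all_histories(n):
    """Third-person view: iterated self-duplication is deterministic.
    After n binary duplications, *every* W/M history exists in the tree."""
    return ["".join(bits) for bits in product("WM", repeat=n)]

def looks_typical(h, tol=0.2):
    """Crude randomness proxy: roughly balanced W/M frequencies."""
    return abs(h.count("W") / len(h) - 0.5) <= tol

n = 12
hs = all_histories(n)
typical = sum(looks_typical(h) for h in hs)
print(f"{len(hs)} histories after {n} duplications;")
print(f"{typical / len(hs):.1%} are near-balanced, i.e. look 'random'")
```

So from inside, an observer following one branch sees an (apparently) random bit string, even though nothing random happens in the third-person description.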
>I think the argument goes something like this:
>1) UD algorithms will have high measure in the space of all
>computations, much higher than a direct implementation of a conscious AI
>(assuming such things exist).
Hopefully so. Intuitively so. Not so easy to prove. Note also that if
you implement a conscious AI, it will itself be embedded in UD*, from
its own point of view, and it will also have access to such a random
oracle.
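For concreteness, the dovetailing itself is a simple algorithm: interleave the execution of all programs so that each one receives unboundedly many steps, even though some never halt. A toy sketch (the "programs" here are just Python generators, standing in for a real enumeration of machines):

```python
def dovetail(programs, rounds):
    """Toy universal dovetailer: in round k, start program k (if any)
    and advance every already-started program by one step, so each
    program gets unboundedly many steps."""
    started, trace = [], []
    for k in range(1, rounds + 1):
        if k <= len(programs):
            started.append(programs[k - 1]())   # start program k
        for i, g in enumerate(started):
            step = next(g, None)                # one more step of program i
            if step is not None:
                trace.append((i, step))
    return trace

# Example "programs": one that never halts, one that halts quickly.
def loop_forever():
    n = 0
    while True:
        yield n
        n += 1

def halt_after_three():
    yield from range(3)

trace = dovetail([loop_forever, halt_after_three], rounds=6)
print(trace)
```

The non-halting program does not block the halting one: both appear interleaved in the trace.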
>2) Therefore, it is more likely that a conscious AI will find itself
>embedded in the output of a UD, with access to a random oracle
That's what I was saying! And that conscious AI will even find itself
in the output of an immaterial UD in Plato's heaven.
>(Of course my viewpoint is that consciousness _requires_ access to a
>random oracle, making conclusion 2 even stronger, but it is not
>necessary for the argument).
Consciousness _requires_ access to a random oracle in order to have
relatively stable histories, perhaps through the phase randomisation
of the white rabbits (cf. my recent paper).