Jason Resch-2 wrote:
>> Taking the universal dovetailer, it could really mean everything (or
>> nothing), just like the sentence "You can interpret whatever you want
>> into
>> this sentence..." or like the stuff that monkeys type on typewriters.
> A sentence (any string of information) can be interpreted in any possible
> way, but a computation defines/creates its own meaning.  If you see a
> particular step in an algorithm adds two numbers, it can pretty clearly be
> interpreted as addition, for example.
A computation can't define its own meaning, since it only manipulates
symbols (that is the definition of a computer), and symbols need a meaning
outside of themselves to make sense.

Jason Resch-2 wrote:
>> Jason Resch-2 wrote:
>> >
>> >>
>> >> Also, the universal dovetailer can't select a computation. So if I
>> write
>> >> a
>> >> program that computes something specific, I do something that the UD
>> >> doesn't
>> >> do.
>> >>
>> >
>> > But you, as the one writing a specific program, is an element of the
>> UD.
>> First, you presuppose that I am contained in a computation.
>> Secondly, that's not true. There are no specific programs in the UD. The
>> UD itself is a specific program, and there is nothing in it that
>> delineates one program from the others.
> Each program has its own separate, non-overlapping, contiguous memory
> space.
This may be true from your perspective, but if you actually run the UD, it
just uses its own memory space.
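To make this point concrete, here is a minimal sketch in Python (my own toy construction, not Bruno's actual UD — the "programs" are stand-in generators rather than enumerated Turing machines): a dovetailer interleaves the steps of all programs, so from the outside there is one process and one memory space, and the "separate" programs exist only as bookkeeping entries inside it.

```python
# Toy dovetailer sketch (illustrative only): interleave the execution of
# an unbounded family of programs, one step at a time.

def make_program(n):
    """Stand-in for the n-th program: counts up to n, then halts."""
    count = 0
    while count < n:
        count += 1
        yield count  # one 'step' of this program

def dovetail(phases):
    """Phase k: start program k, then run one step of programs 0..k."""
    running = []  # each entry is one simulated program's state
    trace = []
    for k in range(phases):
        running.append(make_program(k))
        for prog in running:
            try:
                trace.append(next(prog))  # one step of one program
            except StopIteration:
                pass  # this program has already halted
    return trace

# The dovetailer itself is a single program in a single memory space;
# the interleaved step order is its only output.
print(dovetail(4))  # → [1, 1, 2, 1]
```

Whether the bookkeeping entries count as programs with "their own" memory, or just as data of the one program that is the dovetailer, is exactly the disputed question.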

Jason Resch-2 wrote:
>> Jason Resch-2 wrote:
>> >
>> >  The UD contains an entity who believes it writes a single program.
>> No! The UD doesn't contain entities at all. It is just a computation. You
>> can only interpret entities into it.
> Why do I have to?  As Bruno often asks, does anyone have to watch your
> brain through an MRI and interpret what it is doing for you to be
> conscious?
Because there ARE no entities in the UD, per its definition. It only contains
symbols that are manipulated in a particular way. The definitions of the UD,
of a universal Turing machine, or of computers in general contain no
reference to entities.

So you can only add that to its working in your own imagination.

It is like how 1+1=2 doesn't say anything about putting an apple into a bowl
with an apple already in it. You can interpret that into it, and it's not
necessarily wrong, but it is not part of the equation.
Similarly, you can interpret entities into the UD, and that is also not
necessarily wrong, but the entities still are not part of the UD.

Jason Resch-2 wrote:
>> Jason Resch-2 wrote:
>> >
>> >> It is similar to claiming that it is hard to find a text that is not
>> >> derived
>> >> from monkeys bashing on type writers, just because they will produce
>> >> every
>> >> possible output some day.
>> >>
>> >> Intelligence is not simply blindly going through every possibility but
>> >> also
>> >> encompasses organizing them meaningfully and selecting specific ones
>> and
>> >> producing them in a certain order and producing them within a certain
>> >> time
>> >> limit.
>> >>
>> >
>> > And there are processes that do this, within the UD.
>> No. It can't select a computation because it includes all computations.
>> To
>> select a computation you must exclude some compuations, and the UD can't
>> do
>> that (since it is precisely going through all computations)
> So it selects them all, and excludes nothing.  How is this a meaningful
> limitation?
> If you look at two entities, X, and Y.  X can do everything Y can do, and
> more, but Y can only do a subset of what X does.  You say that X is more
> limited than Y because it can't do only what Y does.
That's absolutely correct. A human who (tries to) eat all of the food in the
supermarket is more limited (and dumber) than a human who just does a subset
of this, picking the food they want and eating that. The former human is
dead, or at least has to visit the hospital; the latter is alive and well.

Less is indeed more, in many cases.

Jason Resch-2 wrote:
>> Jason Resch-2 wrote:
>> >
>> >   The UD is an example
>> > that programs can grow beyond the intentions of the creator.
>> I don't dispute that at all. I very much agree that computers rise beyond
>> the intentions of their users (because we don't actually know what the
>> program will actually do).
> Okay.
> Do you believe a computer program could evolve to be more intelligent than
> its programmer?
No, not in every way; yes, in many ways. Computers already have, to some
degree. If we take IQ as a measure of intelligence, there are already
computers that score better than the vast majority of humans.


Really, it is not about intelligence in this sense at all. It is more about
awareness, or universal intelligence.

Jason Resch-2 wrote:
>> Jason Resch-2 wrote:
>> >
>> >  The UD itself
>> > isn't intelligent, but it contains intelligences.
>> I am not even saying that the UD isn't intelligent. I am just saying that
>> humans are intelligent in a way that the UD is not (and actually the
>> opposite is true as well).
> Okay, could you clarify in what ways we are more intelligent?
> For example, could you show a problem that can a human solve that a
> computer with unlimited memory and time could not?
Say you have a universal Turing machine with the alphabet {0, 1}.
The problem is: change one of the symbols of this Turing machine to 2.

Given that it is a universal Turing machine, it is supposed to be able to
solve that problem. Yet because it doesn't have access to the right level,
it cannot do it.
This is an example of direct self-manipulation, which Turing machines are
not capable of (with regard to their alphabet, in this case).
You could of course create a model of that Turing machine within the Turing
machine and change the alphabet of the model, but since this was not the
problem in question, it is not the right solution.
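The level distinction can be illustrated with a toy model (my own construction in Python — a real Turing machine is defined by tuples, not classes, but the point about levels is the same): a machine with a fixed alphabet can hold and edit a *description* of a machine on its tape, yet none of its own operations can change its own alphabet.

```python
# Toy illustration of the level distinction: the machine's alphabet is
# fixed at the machine's own level; only models-as-data can be changed.

class ToyMachine:
    def __init__(self, alphabet):
        self.alphabet = set(alphabet)  # fixed at the machine's own level
        self.tape = []                 # tape contents are data

    def write(self, symbol):
        """The only write operation the machine has."""
        if symbol not in self.alphabet:
            raise ValueError("symbol outside this machine's alphabet")
        self.tape.append(symbol)

outer = ToyMachine({"0", "1"})

# The machine can hold a *model* of a machine as data and freely change
# the model's alphabet:
model = {"alphabet": {"0", "1"}}
model["alphabet"].add("2")  # easy: this is just manipulating data

# But it cannot perform the same change on itself through its own
# operations: writing '2' is simply not an operation it has.
try:
    outer.write("2")
except ValueError:
    pass

assert "2" in model["alphabet"]       # the model changed
assert "2" not in outer.alphabet      # the machine itself did not
```

Changing `outer.alphabet` directly is of course possible for *us*, at the Python level — which is the point: the change happens at a level the machine itself has no access to.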

Or take the problem "manipulate your own code if you are a program; solve
1+1 if you are human (computer and human meaning what the average human
considers computer and human)", posed to a program written in a
Turing-universal programming language without the ability of
self-modification. The best it could do is manipulate a model of its own
code (but this was not the problem in question).
Yet we can simply solve the problem by answering 1+1=2 (since we are humans
and not computers, in the opinion of the majority).

Sent from the Everything List mailing list archive at Nabble.com.
