What Gödel discovered was that the set of true statements of mathematics
(integer arithmetic) cannot be derived from any finite (indeed, any
recursively enumerable) set of axioms. He also invented a way to discover
new axioms by means of a mechanical procedure, diagonalization, which even
the most basic interpreted program can perform. That was the end of
Hilbert's program.
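
To make the diagonal move concrete, here is a toy sketch in Python (my
own illustration; the finite list of functions is a stand-in for "all
the functions the current axioms prove total", not a real proof
system):

    def diagonalize(functions):
        # Given total functions f_0, f_1, ..., build g(n) = f_n(n) + 1.
        # g differs from every f_n at argument n, so it escapes any
        # fixed enumeration. This is the step that even a basic
        # interpreted program can perform mechanically.
        def g(n):
            return functions[n](n) + 1
        return g

    fs = [lambda n: 0, lambda n: n, lambda n: n * n]
    g = diagonalize(fs)
    print([g(i) for i in range(3)])  # [1, 2, 5]: differs from f_i at i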

What Penrose and others did was to find a *particular* (although quite
direct) translation of the Gödel theorem into an equivalent problem in
terms of a Turing machine, one where the machine does not perform the
diagonalization and the set of axioms cannot be extended.
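
A rough sketch of that alternative translation, in the spirit of the
self-extending prover described in my message quoted below. Here
godel_sentence and provable are hypothetical stand-ins, not a real
prover API; provable models a sound but bounded proof search:

    def godel_sentence(axioms):
        # Stand-in for the diagonal construction of "this sentence is
        # not provable from `axioms`".
        return ("G", tuple(axioms))

    def provable(sentence, axioms):
        # Stand-in for bounded proof search. A consistent system never
        # proves its own Goedel sentence, so we model that outcome.
        return False

    axioms = ["PA"]  # start from, say, Peano Arithmetic
    for step in range(3):
        g = godel_sentence(axioms)
        if not provable(g, axioms):
            axioms.append(g)  # adopt the new axiom, as a mathematician
                              # adopts Con(T)
    print(len(axioms))  # 4: the axiom set has grown

Note that Gödel's theorem is not violated: each extended system has a
new undecidable sentence of its own. What the theorem forbids is
finishing the process, not extending it.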

2012/8/24 Alberto G. Corona <agocor...@gmail.com>

> Honestly, I do not find the Gödel theorem a limitation for computers. I
> think that Penrose and others did a valid translation from the Gödel
> theorem to a problem about a Turing machine. But this translation can be
> done in a different way.
>
> It is possible to design a program that modifies itself by adding new
> axioms, including the diagonalizations, so that the number of axioms can
> grow as needed. This is routinely done for equivalent problems in
> rule-based expert systems or in ordinary interpreters (aided by humans) in
> complex domains. Reduced to integer arithmetic, a Turing machine that
> implements a mathematical proof system at the deep level, that is, as an
> interpreter where new axioms can be added automatically through
> diagonalization, may expand the set of known deductions by incorporating
> those new axioms. This is not prohibited by the Gödel theorem. What is
> prohibited is to know all true statements in this domain. But that applies
> to humans as well. So a computer can realize that an axiom is absent from
> its initial set and add it, just like humans do.
>
> I do not see in this a limitation for human free will. I wrote about this
> before. The notion of free will based on the deterministic nature of
> physics or computation is a degenerate, false problem which is an
> obsession of the Positivists. Search for "degenerate" and "Positivism" in
> this list to find my opinion about that, if you are interested.
>
>
> 2012/8/24 Jason Resch <jasonre...@gmail.com>
>
>>
>>
>> On Thu, Aug 23, 2012 at 1:18 PM, benjayk
>> <benjamin.jaku...@googlemail.com> wrote:
>>
>>>
>>>
>>> Jason Resch-2 wrote:
>>> >
>>> >> Taking the universal dovetailer, it could really mean everything (or
>>> >> nothing), just like the sentence "You can interpret whatever you want
>>> >> into
>>> >> this sentence..." or like the stuff that monkeys type on typewriters.
>>> >>
>>> >>
>>> > A sentence (any string of information) can be interpreted in any
>>> > possible way, but a computation defines/creates its own meaning.  If
>>> > you see that a particular step in an algorithm adds two numbers, it
>>> > can pretty clearly be interpreted as addition, for example.
>>> A computation can't define its own meaning, since it only manipulates
>>> symbols (that is the definition of a computer),
>>
>>
>> I think it is a rather poor definition of a computer.  Some have tried to
>> define the entire field of mathematics as nothing more than a game of
>> symbol manipulation (see
>> http://en.wikipedia.org/wiki/Formalism_(mathematics) ).  But if
>> mathematics can be viewed as nothing but symbol manipulation, and
>> everything can be described in terms of mathematics, then what is not
>> symbol manipulation?
>>
>>
>>> and symbols need a meaning
>>> outside of them to make sense.
>>>
>>
>> The meaning of a symbol derives from the context of the machine which
>> processes it.
>>
>>
>>>
>>>
>>> Jason Resch-2 wrote:
>>> >
>>> >>
>>> >>
>>> >> Jason Resch-2 wrote:
>>> >> >
>>> >> >>
>>> >> >> Also, the universal dovetailer can't select a computation. So if I
>>> >> >> write a program that computes something specific, I do something
>>> >> >> that the UD doesn't do.
>>> >> >>
>>> >> >
>>> >> > But you, as the one writing a specific program, are an element of
>>> >> > the UD.
>>> >> First, you presuppose that I am contained in a computation.
>>> >>
>>> >> Secondly, that's not true. There are no specific programs in the UD.
>>> >> The UD itself is a specific program, and there is nothing in it that
>>> >> delineates one program from the others.
>>> >>
>>> >
>>> > Each program has its own separate, non-overlapping, contiguous memory
>>> > space.
>>> This may be true from your perspective, but if you actually run the UD it
>>> just uses its own memory space.
>>>
>>>
>> Is your computer only running one program right now or many?
>>
>>
>>>
>>> Jason Resch-2 wrote:
>>> >
>>> >>
>>> >> Jason Resch-2 wrote:
>>> >> >
>>> >> >  The UD contains an entity who believes it writes a single program.
>>> >> No! The UD doesn't contain entities at all. It is just a computation.
>>> >> You can only interpret entities into it.
>>> >>
>>> >>
>>> > Why do I have to?  As Bruno often asks, does anyone have to watch your
>>> > brain through an MRI and interpret what it is doing for you to be
>>> > conscious?
>>> Because there ARE no entities in the UD per its definition. It only
>>> contains
>>> symbols that are manipulated in a particular way.
>>
>>
>> You forgot the processes, which are interpreting those symbols.
>>
>> The spikes of neural activity in your optic nerve are just symbols, but
>> given an interpreter (your visual cortex and brain) those symbols become
>> quite meaningful.
>>
>>
>>> The definitions of the UD
>>> or of a universal Turing machine or of computers in general don't contain a
>>> reference to entities.
>>>
>>>
>> The definition of this universe doesn't contain a reference to human
>> beings either.
>>
>>
>>> So you can only add that to its working in your own imagination.
>>>
>>>
>> I think I would still be able to experience meaning even if no one was
>> looking at me.
>>
>>
>>> It is like 1+1=2 doesn't say anything about putting an apple into a bowl
>>> with an apple already in it. You can interpret that into it, and it's not
>>> necessarily wrong, but it is not part of the equation.
>>> Similarly, you can interpret entities into the UD and that is also not
>>> necessarily wrong, but the entities then still are not part of the UD.
>>>
>>>
>>> Jason Resch-2 wrote:
>>> >
>>> >
>>> >>
>>> >> Jason Resch-2 wrote:
>>> >> >
>>> >> >> It is similar to claiming that it is hard to find a text that is
>>> >> >> not derived from monkeys bashing on typewriters, just because they
>>> >> >> will produce every possible output some day.
>>> >> >>
>>> >> >> Intelligence is not simply blindly going through every possibility;
>>> >> >> it also encompasses organizing possibilities meaningfully, selecting
>>> >> >> specific ones, and producing them in a certain order and within a
>>> >> >> certain time limit.
>>> >> >>
>>> >> >
>>> >> > And there are processes that do this, within the UD.
>>> >> No. It can't select a computation because it includes all
>>> >> computations. To select a computation you must exclude some
>>> >> computations, and the UD can't do that (since it is precisely going
>>> >> through all computations).
>>> >>
>>> >>
>>> > So it selects them all, and excludes nothing.  How is this a meaningful
>>> > limitation?
>>> >
>>> > Consider two entities, X and Y.  X can do everything Y can do, and
>>> > more, but Y can only do a subset of what X does.  You say that X is
>>> > more limited than Y because it can't do only what Y does.
>>> That's absolutely correct. A human that (tries to) eat all of the food
>>> in the supermarket is more limited (and dumber) than a human that just
>>> eats a subset of it, picking the food it wants. The former human is
>>> dead, or at least will have to visit the hospital; the latter is alive
>>> and well.
>>>
>>> Less is indeed more, in many cases.
>>>
>>>
>>> Jason Resch-2 wrote:
>>> >
>>> >>
>>> >> Jason Resch-2 wrote:
>>> >> >
>>> >> >   The UD is an example
>>> >> > that programs can grow beyond the intentions of the creator.
>>> >> I don't dispute that at all. I very much agree that computers rise
>>> >> beyond the intentions of their users (because we don't actually know
>>> >> what the program will do).
>>> >>
>>> >>
>>> > Okay.
>>> >
>>> > Do you believe a computer program could evolve to be more intelligent
>>> > than its programmer?
>>> No, not in every way. Yes, in many ways. Computers already have, to some
>>> degree. If we take IQ as a measure of intelligence, there are already
>>> computers that score better than the vast majority of humans.
>>>
>>> http://www.sciencedaily.com/releases/2012/02/120214100719.htm
>>
>>
>> Interesting.  Although my suspicion is they just programmed the
>> http://oeis.org/ database into it, and sorted the sequences by how well
>> known they were.
>>
>>
>>>
>>>
>>> Really it is not at all about intelligence in this sense. It is more
>>> about
>>> awareness or universal intelligence.
>>>
>>>
>>> Jason Resch-2 wrote:
>>> >
>>> >>
>>> >> Jason Resch-2 wrote:
>>> >> >
>>> >> >  The UD itself
>>> >> > isn't intelligent, but it contains intelligences.
>>> >> I am not even saying that the UD isn't intelligent. I am just saying
>>> >> that humans are intelligent in a way that the UD is not (and actually
>>> >> the opposite is true as well).
>>> >>
>>> >>
>>> > Okay, could you clarify in what ways we are more intelligent?
>>> >
>>> > For example, could you show a problem that a human can solve but a
>>> > computer with unlimited memory and time could not?
>>> Say you have a universal Turing machine with the alphabet {0, 1}.
>>> The problem is: change one of the symbols of this Turing machine to 2.
>>>
>>
>> Your example defines a problem to not be solvable by a specific
>> entity, not by Turing machines in general.  Let's say there were an
>> android next to this other Turing machine with a tape with 1's and 0's on
>> it.  The android could write a 2 on it just as easily as any human could.
>> Now of course the Turing machine with the tape might lack this
>> capability, but that is a limitation of that particular incarnation of a
>> Turing machine.
>>
>> Equivalent example: You may be unable to conduct brain surgery on
>> yourself, but this does not mean humans (or Turing machines) are incapable
>> of performing brain surgery.
>>
>>
>>>
>>> Given that it is a universal Turing machine, it is supposed to be able
>>> to solve that problem. Yet because it doesn't have access to the right
>>> level, it cannot do it.
>>>
>>> It is an example of direct self-manipulation, which Turing machines are
>>> not capable of (with regard to their alphabet in this case).
>>>
>>
>> Neither can humans change fundamental properties of our physical
>> incarnation.  You can't decide to turn one of your neurons into a magnetic
>> monopole, for instance, but this is not the kind of problem I was referring
>> to.
>>
>> To avoid issues of level confusion, it is better to think of problems
>> with informational solutions, since information can readily cross levels.
>>  That is, some question is asked and some answer is provided.  Can you
>> think of any question that is only solvable by human brains, but not
>> solvable by computers?
>>
>>
>>> You could of course create a model of that Turing machine within that
>>> Turing machine and change its alphabet in the model, but since this was
>>> not the problem in question this is not the right solution.
>>>
>>> Or pose the problem "manipulate your own code if you are a program,
>>> solve 1+1 if you are human (computer and human meaning what the average
>>> human considers computer and human)" to a program written in a
>>> Turing-universal programming language without the ability to modify
>>> itself. The best it could do is manipulate a model of its own code (but
>>> this wasn't the problem). Yet we can simply solve the problem by
>>> answering 1+1=2 (since we are human and not computers in the opinion of
>>> the majority).
>>>
>>>
>> These are certainly creative examples, but they are games of language.  I
>> haven't seen any fundamental limitation that can't be trivially reflected
>> back and applied as an equivalent limitation of humans.
>>
>> Jason
>>
>
>
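
P.S. For the UD sub-thread quoted above, a minimal dovetailer sketch in
Python (my own toy construction, not Bruno's actual UD). It interleaves
the steps of every program in an enumeration, each with its own private
state; in that sense the UD "selects them all" while each program still
runs in its own memory space:

    def dovetail(program_for, phases):
        # program_for(i) must return program i as an (initial_state,
        # step_function) pair. Each phase admits one more program and
        # then advances every admitted program by one step.
        fns, states = [], []
        for phase in range(phases):
            init, fn = program_for(len(fns))
            fns.append(fn)
            states.append(init)
            for i, fn in enumerate(fns):
                states[i] = fn(states[i])  # private state per program
        return states

    # Example enumeration: program i adds i to its counter each step.
    print(dovetail(lambda i: (0, lambda s, i=i: s + i), phases=5))
    # [0, 4, 6, 6, 4]: program i ran 5-i steps, each in its own state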
