On 04 Sep 2012, at 22:40, benjayk wrote:



Bruno Marchal wrote:

Right. It makes only first person sense to PA. But then RA has
succeeded in making PA alive, and PA could a posteriori realize that
the RA level was enough.
Sorry, but it can't. It can't even abstract itself out to see that
the RA
level "would be" enough.

Why?
No system can reason as if it did not exist, because to be coherent it would
then have to cease to reason.

Why? You just seem to reason that if you don't exist you would cease to reason.
But I don't see the relevance of this to what I said.


If PA realizes that RA is enough, then this can only mean that RA + its own
realization about RA is enough.

Yes, that is why PA can believe that RA is its ontological source, despite being epistemologically much stronger than RA.





Bruno Marchal wrote:

I see you doing this all the time; you take some low level that can be made sense of by something transcendent of it and then claim that the low level is enough.

For the ontology. Yes.
I honestly never understood what you mean by ontology and epistemology.

Ontology is what we take as existing at the base level. In my favorite theory what exists is simply 0, s(0), etc.
And nothing else.

Put it differently: it is what the variables used in the theory represent. ExP(x) means that there is some number verifying P.
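
To illustrate (a toy sketch only, in Python, with names of my own choosing): the ontological objects are just the terms 0, s(0), s(s(0)), ..., and ExP(x) asserts that some such term verifies P.

# Toy sketch: the ontological objects are just the numerals 0, s(0), s(s(0)), ...
def num(n):
    """Build the numeral s(s(...s(0)...)) with n applications of s."""
    term = "0"
    for _ in range(n):
        term = "s(" + term + ")"
    return term

def exists(P, bound=1000):
    """Bounded check of ExP(x): is there some numeral (up to 'bound') verifying P?"""
    return any(P(num(n)) for n in range(bound))

# Example: Ex (x = s(s(0))), i.e. there is a number verifying P(x) := "x is 2".
print(exists(lambda t: t == num(2)))   # True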

Epistemological existence is about the memory content of such numbers, resulting from their complex interactions with other numbers. In the math part, they are handled by prefixing modalities, and have shapes like

[]Ex[]P(x), or

[]<>Ex []<>P(x)

and more complex ones. Note that those are still arithmetical sentences, as all the modalities used here admit purely arithmetical interpretations.
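
For instance, reading the box as Gödel's provability predicate Bew (one of the arithmetical interpretations alluded to here, and sketching only roughly, glossing over the coding of the bound variable):

  \Box\, \exists x\, \Box P(x) \;\leadsto\; \mathrm{Bew}\big(\ulcorner \exists x\, \mathrm{Bew}(\ulcorner P(\dot{x}) \urcorner) \urcorner\big)

so the modal sentence unfolds into an ordinary (if long) arithmetical sentence.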




For
me it seems that it is exactly backwards. We need the 1-p as the ontology,
because it is what necessarily primitively exists from the 1-p view.

... from the 1p views.

But when we search for a "scientific theory", we bet on some sharable reality beyond the 1p view, be it a physical universe or an arithmetical one.


Arithmetic is one possible epistemology.

And assuming comp, it is one possible epistemology.



I don't even get what it could mean that numbers are ontologically real, as we know them only as abstractions (so they are epistemology). If we try to talk as if numbers are fundamentally real - independent of things - we can't
even make sense of numbers.

?
I can. One number, two numbers, three numbers, etc.



What is the abstract difference between 1 and 2, for example?

1

:)


What is the
difference between 0s and 0ss?

0s



What's the difference between the true
statement that 1+1=2 and the false statement that 1+2=2?

You just named it. The first is true, the second is false.
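
And that difference is even mechanically checkable. A toy verifier (purely illustrative, my own notation, in Python) already separates the two sentences:

# Toy sketch: evaluate closed equations over successor numerals mechanically.
def value(term):
    """Count the s's: value("s(s(0))") == 2."""
    return term.count("s(")

def plus(a, b):
    return value(a) + value(b)

# 1+1=2 versus 1+2=2, written with the numerals s(0) and s(s(0)):
one, two = "s(0)", "s(s(0))"
print(plus(one, one) == value(two))   # True:  s(0)+s(0)    equals s(s(0))
print(plus(one, two) == value(two))   # False: s(0)+s(s(0)) is not s(s(0))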


How is any of it more meaningful than any other arbitrary string of symbols?

T#gtti Hyz# uuuu8P^ii ?





We can only make sense of them as we see that they refer to numbers *of
objects* (like for example the string "s").

OK.


If we don't do that we could as well embrace axioms like 1=2 or 1+1+1=1 or
1+9=2343-23 or 1+3=*?ABC or  whatever else.

OK.





Bruno Marchal wrote:

Strangely you agree for the 1-p viewpoint. But given that's what you *actually* live, I don't see how it makes sense to then proceed as if there were a meaningful 3-p point of view where this isn't true. This "point of view" is really just an abstraction occurring in the 1-p view.

Yes.
If this is true, how does it make sense to think of the abstraction as ontologically real and the non-abstraction as mere epistemology? It seems like total nonsense to me (sorry).

Because the abstraction provides a way to make sense of how 3p numbers get 1p views and abstract their own idea of what numbers are.

NUMBERS ====> CONSCIOUSNESS ====> PHYSICAL REALM ====> HUMAN ====> HUMAN'S CONCEPTION OF NUMBERS





Bruno Marchal wrote:



Bruno Marchal wrote:

With comp, to make things simple, we are high level programs. Their doing is 100% emulable by any computer, by definition of programs and computers.
OK, but in this discussion we can't assume COMP. I understand that you take it for granted when discussing your paper (because it only makes sense in that context), but I don't take it for granted, and I don't consider it plausible, or honestly even meaningful.

Then you have to tell me what is not Turing emulable in the
functioning of the brain.
*everything*!

You point here to their material constitution. That begs the question.


Rather, show me *what is* Turing emulable in the brain.

The chemical reactions, the neuronal processing, etc. Anything described in any book on the brain.
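
For example, the kind of neuron model found in those books is trivially emulable. A deliberately crude sketch in Python (a textbook leaky integrate-and-fire neuron, parameters chosen only for illustration; no claim that this is the right substitution level):

# Crude sketch: a leaky integrate-and-fire neuron, stepped digitally (Euler method).
def simulate(current=1.5, threshold=1.0, leak=0.1, dt=0.001, steps=5000):
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (current - leak * v)   # membrane potential drifts toward current/leak
        if v >= threshold:               # reaching threshold: spike and reset
            spikes += 1
            v = 0.0
    return spikes

print(simulate())   # number of spikes in 5 seconds of simulated time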


Even
according to COMP, nothing is, since the brain is material and matter is not
emulable.

Right. But that matter exists only in the 1p plural view, not in the ontology.




As I see it, the brain as such has nothing to do with emulability. We can do simulations, sure, but these have little to do with an actual brain, except
that they mirror what we know about it.

It seems to me you are simply presuming that everything that's relevant in the brain is Turing emulable, even though according to your own assumption nothing about the brain really is Turing emulable.

... about the physical constitution of the brain. OK.





Bruno Marchal wrote:

Also, I don't take comp for granted, I assume it. It is quite different.

I am mute on my personal beliefs, except they change all the time.

But you seem to believe that comp is inconsistent or meaningless, and yet you don't make your point.
I don't know how to make it more clear. COMP itself leads to the conclusion that our brains fundamentally can't be emulated, yet it starts with the
assumption that they can be emulated.

At some level. You forget the key point.


We can only somehow try to rescue COMP's consistency by postulating that what the brain is doesn't matter at all, only what an emulation of it would be like.

Yes. The brain's constitution does not matter. Comp is functionalism at some level.


I genuinely can't see the logic behind this at all.

I think this is due to the identification of mind and brain that you are making, but with comp the brain is a mental commodity, like the body. It does not create consciousness; it only relativizes it in complex contexts.






Bruno Marchal wrote:


In which way does one thing substitute another thing if actually the correct interpretation of the substitution requires the original? It is like saying "No, you don't need the calculator to calculate 24.3^12. You can substitute it with pen and paper, where you write down 24.3^12 = X and then insert the result of the calculation (using your calculator) as X."
If COMP does imply that interpreting a digital Einstein needs a real Einstein (or more), then it contradicts itself (because in this case we can't *always* say YES doctor: there would be no original left to interpret the emulation).
Really it is quite a simple point. If you substitute the whole universe with an emulation (which is possible according to COMP)

It is not.
You are right, it is not, if we take the conclusions of your reasoning into
account. Yet COMP itself strongly seems to suggest it. That's the
contradiction.

? Comp is "there exists a level such that I survive an emulation made at that level". Then it makes the whole of observable reality, including consciousness, not Turing emulable. It might seem weird, but I don't see a contradiction yet.




Bruno Marchal wrote:

If there was something outside the universe
to interpret the simulation, then this would be the level on which
we can't
be substituted (and if this would be substituted, then the level
used to
interpret this substitution couldn't be substituted, etc....).
In any case, there is always a non-computational level, at which no
digital
substitution is possible - and we would be wrong to say YES with
regards to
that part of us, unless we consider that level "not-me" (and this
doesn't
make any sense to me).


Indeed we are not our material body. We are the result of the activity
of the program supported by that body. That's comp.

I don't have a clue why you believe this is senseless or inconsistent.
For one thing, with COMP we postulate that we can substitute a brain with a
digital emulation ("yes doctor"),

At some level.



yet the brain

The material brain.


and every possible
substitution can't be purely digital according to your reasoning (since
matter is not digital).

Change of matter is not important if it preserves the right functionality at some level.
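
A trivial illustration of that point (a toy analogy only, in Python, not a claim about brains): two programs with entirely different internal constitutions, yet indistinguishable at the chosen functional level.

# Two different "constitutions", the same function at the input/output level.
def factorial_iterative(n):
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def factorial_recursive(n):
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

# At this level of description, replacing one by the other changes nothing observable.
assert all(factorial_iterative(n) == factorial_recursive(n) for n in range(10))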


So if we do a substitution, it could only be a semi-digital or a non-digital substitution, but then your whole reasoning falls apart (the steps assume you are solely digital).

COMP is simply contradictory, unless we take for granted your result (we are already only arithmetical, and so no substitution really takes place; yes doctor is just a metaphor for "I am digital"), but then it is tautological and your reasoning is merely an explanation of what it means if we are digital.

No, it predicts that the logic of observables is not Boolean, and so it is not tautological. It explains that the whole field of physics is given by self-referential modalities in arithmetic; so it is testable, and thus not tautological.




Of course we could engage in stretching the meaning of words and argue that COMP says "functionally correct substitution", meaning that it also has to be correctly materially implemented. But in this case we can't derive anything from this, because a "correct implementation" may actually require a biological brain or even something more.

The consequences will go through as long as a level of substitution exists.


Bruno


Actually I don't think you have any problem understanding that on an intellectual level. More probably you just don't want to lose your "proof", because it seems to be very important to you (you have defended it in thousands of posts). But honestly, this is just ego and has nothing to do with a genuine search for truth.

benjayk




http://iridia.ulb.ac.be/~marchal/



