Dear Bruno,

I agree with you (almost) completely that "we" (bio-beings) are computers,
except for the *diminishing factor* we HAVE to include in a "computer" as a
machine of knowable components and capabilities, observed WITHIN our
perspectives as of yesterday.
Your term "universal computer" may fit better: an infinite 'machine' with
an infinite domain of capabilities, of which we (may) select the aspects we
DO know of... (That may be MY version as I understand (or don't) it: the
"humanized", size-reduced description.)
"Computer" BTW is called in other languages something like 'calculational
machine' which separates it sharply from the more subtle sense of
'computing in English (I think even more in French) as closer to "mentally
put together" straight from the Latin origin. The calculational aspect - I
think - dates back to Babbage way before Turing. GAI applies series of
thoughts to 'compute' instead of numbers (sorry!) and 'meanings' are the
result. (Nevertheless I consider AI still a humanly limited art, since it
starts from what we can
observe and deduce and arrives at - similarly - what we can observe and
deduce (even if surprised).)

The "bio" - indeed one of the two science-domains we know the least of (the
other is neurology/psych) includes infinite networks of influences, applies
infinite inputs and we observe only part of them: the "perceived reality"
part. E.g. a cell does not end at its outer membrane and those
characteristics WE apply. It reacts to wider physical domains and
not-so-physical procedures as well.
In my agnostic view I do not presume what kind of 'items' populate the
infinite (beyond our models) complexity of everything (call it: existence)
what kind of relations they may have what we translate in our ignorance as
"our world" (call it:* physical*).
We cannot even look beyond our limited models of known items/aspects of
yesterday. We (conventional science) explain them all in the framework of
our knowledge base (of yesterday) and improve on THAT whenever we 'get'
something more to it.

Don't let yourself be dragged into a narrower vision just to be able to
agree, please. I say openly: I dunno (not Nobel-stuff, I admit).

John Mikes

On Tue, Nov 29, 2011 at 12:44 PM, benjayk wrote:

> Bruno Marchal wrote:
> >
> >> I only say that I do not have a perspective of being a computer.
> >
> > If you can add and multiply, or if you can play Conway's Game of
> > Life, then you can understand that you are at least a computer.
> So, then I am a computer or something more capable than a computer? I
> have no doubt that this is true.
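> (For concreteness, here is a minimal sketch of one Game of Life step in
> Python; the set-of-live-cells representation and the names are my own
> illustrative assumptions, not anything from Bruno's argument. The point
> is only that anyone who can carry out these simple rules by hand is, in
> that narrow sense, performing a computation.)
>
>     # One step of Conway's Game of Life on a set of live (x, y) cells.
>     from itertools import product
>
>     def step(live):
>         # Count the live neighbours of every cell adjacent to a live one.
>         counts = {}
>         for (x, y) in live:
>             for dx, dy in product((-1, 0, 1), repeat=2):
>                 if (dx, dy) != (0, 0):
>                     cell = (x + dx, y + dy)
>                     counts[cell] = counts.get(cell, 0) + 1
>         # A cell is alive next step iff it has 3 live neighbours, or has
>         # 2 live neighbours and is already alive (the B3/S23 rule).
>         return {c for c, n in counts.items()
>                 if n == 3 or (n == 2 and c in live)}
>
>     glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
>     print(step(glider))  # the glider's next generation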
> Bruno Marchal wrote:
> >
> >> When I look at myself, I see (in the center of my attention) a
> >> biological being, not a computer.
> >
> > Biological beings are computers. If you feel you are more than a
> > computer, then tell me what.
> Biological beings are not computers. Obviously a biological being is not
> a computer in the sense of a physical computer. It is also not an
> abstract digital computer (even according to COMP it isn't), since a
> biological being is physical and "spiritual" (meaning related to
> subjective conscious experience beyond physicality and computability).
> Neither physicality nor spirituality can be reduced to computations.
> Neither can they be derived from them. Your reasoning doesn't work (for
> the reasons I already gave and clarify below).
> And no, there is no need for any evidence for some non-Turing-emulable
> infinity in the brain. We just need non-Turing-emulable finite stuff in
> the brain, and that's already there. No one has yet succeeded in
> emulating the brain, and we can only assume something can be substituted
> by an emulation if we show that it can be.
> That seems quite unlikely, since even very simple objects like a stone
> can't be emulated. If we simulate a stone, we just simulate our
> description of it; we can't actually touch it and use it.
> BTW, I am not saying this non-turing emulable stuff is some mysterious
> primitive matter that actually no one can show the existence of. It is
> consciousness, and you can see for yourself that it exists.
> Bruno Marchal wrote:
> >
> >>
> >>
> >> Bruno Marchal wrote:
> >>>
> >>>> It's harder to distinguish yourself from other simulated selves
> >>>> than from other biological selves, because of the natural
> >>>> biological barriers that we have, that computers lack.
> >>>
> >>> Ah?
> >> I can see that I am physically/biologically separate from you,
> >
> > You cannot see that.
> ???
> Of course I can see that. We don't share the same brain and body,
> relatively speaking. Of course we can't be separate in any ultimate way
> (even just according to QM), but I am not claiming that.
> Bruno Marchal wrote:
> >
> >> while we could both be simulated on one computer, without any clear
> >> physical dividing barrier.
> >
> > My whole point is that once we assume comp, the word "physical" can
> > no longer be taken for granted.
> No, that's not your only point as you present it. You say that, assuming
> COMP, experience is related only to a measure on the computations.
> You can't just assume that the only options are computational
> immaterialism and materialism.
> Bruno Marchal wrote:
> >
> > You seem to *presuppose* a primary physical universe (Aristotle). I do
> > not.
> I don't either. Frankly I wonder why you think that, given that I have
> taken a very obvious non-material standpoint in our discussions thus far.
> It somehow seems like you pretend that all opinions except your own and
> those of your favorite opponents (the ones you can easily refute) do not
> exist.
> Honestly, I am quite stupid to discuss with someone who just chooses to
> plainly ignore everything that doesn't fit into his own preconceived
> notions of what his critic is saying.
> It is quite strange to say over and over again that I haven't studied
> your arguments (I have, though obviously I can't understand all the
> details, given how complicated they are), while you don't even bother to
> remember the most fundamental premise of my argumentation
> (non-materialism). It is as if I said to you: "Oh, it seems to me you
> just presuppose that we are material computers; that's why your argument
> works."
> Your argument may work against materialism (I am not sure; I don't take
> materialism seriously anyway - frankly materialism is a joke, since
> materialists are not even capable of saying what matter is supposed to
> be), but you don't take into account any of the alternatives that can be
> taken more seriously (any sort of non-materialism).
> It very much seems you presuppose a purely material or computational
> ontology.
> Bruno Marchal wrote:
> >
> >>
> >>
> >> Bruno Marchal wrote:
> >>>
> >>>> We can only say YES if we assume there is no self-referential loop
> >>>> between my instantiation and my environment (my instantiation
> >>>> influences what world I am in, the world I am in influences my
> >>>> instantiation, etc...).
> >>>
> >>> Why? Such loops obviously exist (statistically), and the relative
> >>> proportion statistics remain unchanged when doing the substitution
> >>> at the right level. If such a loop plays a role in consciousness,
> >>> you have to enlarge the digital "generalized" brain. Or comp is
> >>> wrong, 'course.
> >> I think it is self-refuting if we do not already take the conclusion
> >> for granted (saying YES only based on the faith that we are already
> >> purely digital).
> >> Imagine substituting our whole generalized brain (let's say the milky
> >> way). Then you cannot have access to the fact that the whole milky
> >> way was substituted,
> >
> > In the reasoning we use the fact that you are told in advance. That
> > you cannot see the difference is the comp assumption.
> Ah, OK. If you can't notice you are being substituted, the very
> statement that you are being substituted is meaningless. If I can't know
> or believe (based on any kind of evidence) that I am being substituted,
> what do we base the statement that we are being substituted on? It is as
> arbitrary as saying that I am the pink unicorn.
> Bruno Marchal wrote:
> >
> >> because otherwise the whole milky way would have to appear to be a
> >> computer running a simulation of the milky way, making our experience
> >> drastically different (which is not possible, given that our
> >> experience should remain invariant). But if we don't have access to
> >> the fact/the way that we are being substituted, it makes no sense to
> >> say YES, because we can't even say whether we are being substituted.
> >> If a substitution is not taking place subjectively, the question of
> >> saying YES becomes meaningless (making COMP meaningless).
> >
> > Of course not. You talk like a doctor who would provide an artificial
> > brain without asking the permission of the patient. Then comp entails
> > that, if the doctor chooses the right subst level, the patient will
> > not see the difference. But that's part of the point.
> If the patient can't see the difference, the doctor is of no help, since
> he will be the same after the operation as before. If his brain was
> damaged, the doctor will make the computer simulate a damaged brain -
> what a big success!
> So the only option that is remotely rational is to say NO (since if he
> says YES he has nothing to gain but much to lose); that's why saying YES
> is close to meaningless. It is as meaningful as saying yes to a magician
> who transforms you into a pink unicorn that will experience things the
> same way you did.
> If we still say YES, we just have faith that nothing will happen, even
> though it is pretty clear that something will happen. If we have that
> faith, we believe in arbitrary mysterious occurrences. You can't derive
> anything from that. In particular, you can't derive that we survived due
> to the instantiation of the right computations.
> Bruno Marchal wrote:
> >
> >> The only way we could know we are being substituted is if there is
> >> something other than the milky way to communicate with (which can see
> >> we are being substituted).
> >
> > Yes. Like the doctor.
> But we have no basis whatsoever to believe the statement of the doctor
> who substituted you, unless he gives you evidence that you actually DID
> change, and in that case your experience can't remain invariant (because
> you become aware that your brain has changed).
> When the doctor says he substituted you, he either lies, or believes
> that substitution=non-substitution, or he just asserts that he
> substituted the way he interfaces with you (or simulates you) - in which
> case we ourselves remain unsubstituted.
> If you say we take the doctor on faith, then fine, you base your whole
> argument on absolute blind faith. Unfortunately we could then just as
> well base the argument on "1+1=3" or "there are pink unicorns in my room
> even though I don't notice them", so it's worthless. Note, I agree it is
> not meaningless to say YES or NO to a substitution, just not in the
> particular way you need it for your argument.
> Bruno Marchal wrote:
> >
> >> But then we have no reason to suspect that this other will remain
> >> invariant, because from its perspective we have shifted from being
> >> the milky way to being a computer running a simulation of a milky
> >> way, which is such a big difference that it will inevitably totally
> >> change its response (to the point of not being the same other / the
> >> same relative world anymore - a totally different interaction is
> >> taking place).
> >
> > You beg the question. Assuming comp, he will say "thanks doctor, I
> > feel better now".
> No, he can't say that, since, as you just wrote yourself, *he can't
> notice the difference*. It is stupid to say thanks to a doctor who
> didn't change anything.
> Bruno Marchal wrote:
> >
> >> Or we just *believe* we are being substituted (for whatever reason)
> >> and say YES to that, without any evidence we actually are being
> >> substituted, but then we are not saying YES to an actual substitution
> >> but to the conclusion (I am just a digital machine that is already
> >> equal to the substitution).
> >
> > Please just study the proof and tell me what you don't understand. I
> > don't see the relevance of the paragraph above, nor can I see what you
> > are arguing about.
> I studied your proof. Of course your proof works if you assume the
> conclusion at the start or assume something nonsensical (like saying YES
> to a substitution that doesn't subjectively happen). My point is that
> either you are just proving your assumption (we say YES due to a belief
> that we are digital, that is, we say YES because we already are
> digitally substituted), or your proof doesn't work (because actually the
> patient will notice he has been substituted, that is, he didn't survive
> a substitution, but a change of himself - if he survives).
> I guess I will abandon the discussion if in the next post you again
> don't bother to respond to anything essential I said. Apparently you are
> dogmatically insisting that everyone who criticizes your argument
> doesn't understand it and is wrong, and that therefore you don't
> actually have to inspect what they are saying. If this is the case, a
> discussion is quite futile. Up to now I just had faith that you know
> better than that and will sooner or later give an actual response, but
> now I am not so sure anymore.
> Bruno Marchal wrote:
> >
> >> Either way, our experience doesn't remain invariant, or we have no
> >> way to state we are being substituted (making COMP meaningless).
> >
> > This point is not valid. We can say "yes" for a substitution in
> > advance. Then, in that case, just surviving a fatal brain illness will
> > make the difference.
> But you just said that this can't happen, because he himself will
> subjectively remain unchanged. His fatal brain illness will still be
> there, because we have to include it in the substitution. Otherwise you
> are not substituting, you are changing him. And in this case he will
> "survive" as what he changed into (even if this is just a collection of
> misfiring transistors). But then we obviously don't know whether he
> really survives in any sense of the word, and if so, in what sense he
> survived (since this depends on the way we changed him).
> Bruno Marchal wrote:
> >
> >>
> >> How is that not a reductio ad absurdum?
> >> The only situation where COMP may be reasonable is if the substitute
> >> is very similar in a way beyond computational similarity - which we
> >> can already confirm due to digital implants working.
> >
> > The apparent success of digital implants confirms that we don't need
> > to go beyond computational similarity.
> It doesn't, because the surrounding neurons may make additional
> connections to interpret the computations that are happening. This just
> works as long as the neurons can make enough new connections to fill the
> similarity gap.
> Bruno Marchal wrote:
> >
> >> This would make COMP work in a quite special-case scenario, but be
> >> wrong in general.
> >
> > It is hard to follow you.
> I am not saying anything very complicated. It is only hard to follow
> because you are insisting on some theoretical situation which is
> nonsensical in reality.
> If you do insist that we say YES in the way you would like us to, we
> either say YES to your conclusion, or we just say YES to something that
> doesn't happen (which doesn't allow any conclusion to be drawn).
> benjayk
