On Sep 7, 10:36 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 06 Sep 2011, at 22:30, Craig Weinberg wrote:
>
> >>> What accounts for substitution level?
>
> >> It is the level where your local constituents can be replaced by a
> >> digital device without changing your private experience.
>
> > That doesn't account for the phenomena, it just defines the meaning of
> > the term.
>
> The phenomenon, in this case, is simply "no change in my consciousness
> other than the usual one, when I do the thing".

But I mean: why would the substitution level be at any particular
level rather than another? Why is it a fixed level rather than a
relativistic continuum?

>
>
>
> >> Of course
> >> only God knows it.
>
> > What accounts for that? Why should this factor be completely
> > inscrutable if it's a natural function of arithmetic?
>
> It is intuitively inscrutable (as it can be argued with thought
> experiments), and it corresponds (assuming DM) with inscrutable
> arithmetical relations. We know today that most arithmetical relations
> have inscrutable components, with some reasonable definition of
> inscrutability.
>

That's a good point. I feel like we're looking at different parts of
the same elephant here. I'm modeling (1-p essence) ∞ (3-p existence),
but if arithmetic is only the computable tip of the inscrutable
iceberg (which is how I'm interpreting what you're saying), then we
may be describing the same thing. Unless you are saying that
inscrutable non-comp is arithmetic also, in which case I think you
could only be talking about the same thing I am, only I like calling
it 'sense' rather than arithmetic.

>
>
> >> The account of the experiencer (of the
> >> substitution) does not count, as he may suffer from anosognosia.
>
> > Accounts of non-experiencers equally do not count as they may suffer
> > from HADD/prognosia.
>
> Sure.
>
>
>
> >>> Is it a hard threshold whereby
> >>> Pinocchio becomes a real boy suddenly, or is it a gradient of
> >>> escalating qualia?
>
> >> By definition, if the reconstituted person witnesses some new feelings,
> >> like having lost something, or having a headache, or whatever, it
> >> means that the level of substitution was not well chosen. People will
> >> accept such a substitution when their friends report only
> >> slight secondary phenomena, like brief nausea or something.
>
> > But what is the threshold at which a reconstituted person feels
> > anything at all?
>
> It does not exist. The person always feels something.

I agree, which supports my perspective that the person actually IS
'feelings'.

> I guess you ask
> what is the threshold for such a machine to manifest genuinely the
> phenomenon of feeling something? Here comp gives a clear answer: we
> cannot have such a criterion. We can have plausibility criteria, and
> that is what we use in everyday life. But such everyday-life
> experience suggests we will always have difficulty deciding if some
> entity is conscious or not. In the case of lower animals, plants, and
> tomorrow's machines, some people will argue that they don't feel, like
> some people argue that some higher animals or even some humans do not
> feel, or not sufficiently to be considered genuine persons (like
> with some forms of coma).

Fair enough. Sort of climbing the ladder from virus to host cell.
Guess there doesn't need to be a particular rung that signifies a new
perceptual range, but there does seem to be an important difference
(to us) between the perceptual range at the bottom rung and the top
rung of the ladder.

>
> > Is it a sudden instantiation of fully formed
> > awareness in a machine, or does the machine individually activate and
> > gradually integrate autonomous modules of quasi-awareness into a
> > psyche?
>
> The 3- machine-body, be it a brain or a silicon chip, is not a person,
> and does not have consciousness.
> The 1-machine-soul is always conscious (that is counter-intuitive, and
> is still an open problem with mechanism, to be sure).

I would see the 1-machine soul as always 'aware' or 'sensorimotive' -
potentially i/o addressable, but not necessarily 'conscious' or
cognitively self-addressable. My only challenge to your view then is
that I see a hierarchical/holarchical schema that separates different
levels of machine-soul, some of which tend toward the machine end, and
some which push toward the soul end, and that there is a difference
which is not accounted for by complexity alone.

I'm not saying that difference is a sentimental prejudice like a pro-
life or racist impulse, but rather an important feature of the 1-p
sensorimotive continuum as it scales up through higher perceptual-
relativity inertial frames. Otherwise we would have no more feeling
for crushing a person's head than for popping a balloon, and machines
would behave like bacteria or fungi right out of the box.

>
>
>
> >>> Either way seems insufficient for the same reason
> >>> that vanishing or absent qualia seem unlikely.
>
> >> Why? Chalmers makes clear that what would be astonishing, is fading
> >> qualia with no change in the behavior.
>
> > Because qualia appearing without some self-generated behavioral
> > precursor would be just as astonishing.
>
> Relatively to us, sure. But when someone says "yes" to the doctor,
> there has been a self-generated behavioral precursor (indeed the whole
> life of the person undergoing the cut-and-paste experience).

Sure, yeah. I was thinking more of an artificial person created 'in
utero' so to speak.
>
>
>
> >> Absent qualia and vanishing
> >> sensations already occur in many consciousness pathologies, in
> >> general due to brain trouble, like with Alzheimer's.
>
> > Right, but the human sensations do not seem to spontaneously appear in
> > inorganic phenomena.
>
> Nor do they spontaneously appear in organic phenomena. They need some
> long computations beforehand.

It's hard to say. I think that the long computations would correlate
to sensations on a cellular level, so that when human sensations
appear, it corresponds to gestalt human (person) calculations coming
online.

>
> > There has never been a computer which suddenly
> > expressed fear of being turned off, nor has there been any sign that a
> > computer will ever evolve by itself into something that could behave
> > that way.
>
> That seems to me to be a gratuitous affirmation. Even today computers
> inherit from a long history, and they evolve quickly (with or despite
> humans, that is not entirely clear: today the humans do not want
> intelligent machines. I am not sure they even want intelligent
> children; intelligence is always a threat for authorities).

Still, I think there is something about the native character of
machines and computers thus far which should not be so easily
dismissed. That logic, computation, and mechanism have always been
commonly associated with being 'cold', 'sterile', and lacking feeling
is, I think, not entirely due to superstition or stereotype. I think
there is an important element there that is underrepresented by comp:
a bias which skews more Kraftwerk and less Tom Waits, lacking
concrete, visceral anchoring. This is what I'm saying is locked up
within the experiences of the descendants of carnivorous meat that
cannot be simulated in semiconductor glass.

It may not be locked up literally in the substance of meat proteins
themselves or of muscle tissue, but the arithmetic is deeper, bassier,
more cruel and predatory than polite silicon can instantiate on its
own. I don't know that these feelings/perceptual levels can be
articulated adequately through arithmetic, but if they can, I think we
will have to acknowledge the phenomenological difference in the first
place in order to understand it. We will need to know how to make
numbers bleed and die if they are ever to live and breathe.
>
>
>
> >> If the copy of the
> >> brain is too gross, the survivor might lose a lot. Now, a "prolife"
> >> surgeon might well give a very gross digital brain to someone,
> >> without
> >> its consent, by arguing that the life of his patient is sacred
> >> (instead of the more computationalist *quality* of life notion).
> >> Fading qualia does not apply here.
>
> >>> If Pinocchio
> >>> spontaneously opens his eyes one day as a fully realized human
> >>> being,
> >>> that would have odd subjective problems (do they project a simulated
> >>> history in their memory or do they know that they came into
> >>> existence
> >>> today but know everything about the world and their own lives?)
>
> >> UDA illustrates the comp answer to all such questions. The memory of
> >> the past is always a construction of the current brain. What counts
> >> are all the logico-arithmetical relations encoded in the locally
> >> genuine machinery.
>
> > So he would never know that he was just born.
>
> Well, he could if you tell him and provide some explanation. But
> only his body is new; his personal life can be an older thing,
> supervening on a series of bodies.
>
> > I suppose there's
> > precedent for that kind of thing in hypnotic suggestion, etc. I think
> > that sense has a way of differentiating tangible experience from
> > memory or hallucination,
>
> How? You will need some non-comp magic here.

It's in the way that sensorimotive experience is entangled. You can
feel that something is missing even if you don't know what it is. You
can remember that there's something you can't remember. It's because
there are so many millions of sensorimotive connections and
interconnections. At any given time we are assuming that our working
set is our world, but it's just the top of the stack. We may not be
able to tell that we are dreaming while we are dreaming, but when we
are awake we can usually tell that we are awake.

That extra certainty is not linked to any specific content - i.e., we
are not checking for continuity errors in the structural determinism
of our world to validate it; rather, it is implicit in more of a social-
moral-somatic consequential gravity which permeates our experience.
Our body tells our nervous system that it's actually walking around in
an unforgivingly serious world. That sense of seriousness is not
perfect by any means, and one of the things that recreational drugs do
is to dampen that sense of gravity so that our motive force can
overcome the sensory gridlock that holds it in check.

It's a sense that accumulates with age, with cultural age, with group
transmission, moral amplifications through specific personalities,
etc. Not all of us are equally sensitive, or "sane". We eventually learn
to be sane even in our dreams, where we don't need to be. Is it a
consequence of information density? Would a dream that is long enough
eventually become a world? I don't think it would. There is something
about the length of sense over time - the constancy that rests upon
non-comp insensitivity. It's the vertiginous dread of significance in
the face of its opposite, entropy and annihilation, that maybe cannot
be emulated. It's not magic; it's just not something that can be added
to nothingness to make it something. No Higgs Boson seasoning makes it
real, gives it mass, it's the reality itself, the essential-
existential, unbearable lightness of being that is heavy.

>
> > even though our conscious cognitive version
> > of that can be compromised.
>
> Yes. Always. We can never be sure that we are not dreaming, or that
> reality is not an arithmetical video game. Assuming comp, we can be
> close to sure that we are in (an infinity of) "arithmetical video
> games". This has a non trivial structure, making comp empirically
> refutable.

I think if you take that (infinity of arithmetical video games) and
multiply it (or divide it) by its opposite (finite non-arithmetic
tangible realities) then you get my model. Sense = Essence × (or ÷)
Existence. I think that dividing the set of possible video games by
the set of impossible concrete realities actually yields a greater
infinity than the arithmetic. It's arithmetic masked through collapsed
insensate density of non-information to arrive at a third entity which
is a surprise even to itself.

>
> > I think there is a fundamental difference
> > between simulation and genuine experience, and that it is neither
> > rooted in arithmetic nor physics but in the connection between the
> > two.
>
> There is a basic difference between one simulation and one genuine
> experience indeed. The genuine experience needs somehow an infinity of
> emulation (existing in arithmetic).

I'm not compelled to agree that it's only a matter of the quantitative
requirements of identical emulation equaling the genuine. I think
that's part of it for sure, but there is also an actual non-emulable
vector that is assigned by virtue of its being a specific instance or
set of timespace-massenergy coordinates within the expanding (or
timespace 'involving') singularity. Even though A=A on one level, each
instance of each pattern in the cosmos represents a hole in the
singularity with a unique signature that cannot be forged (there's
nothing else to forge it with except other parts of the
singularity...which are all connected/merged behind the mask of
timespace.)

>
>
>
> >>> or do
> >>> they gradually come online with morbid in-between states of tortured
> >>> semi-consciousness without means to express or relieve their
> >>> discomforts?
>
> >> That can happen with brain disease too. I guess the pioneer of
> >> immortality will not have an easy beginning in afterlife. This is not
> >> even for after tomorrow.
>
> > How do you know that the arithmetic doesn't have to be run from the
> > beginning (conception or birth) in real time? If you grew a perfect
> > adult clone, it would still be a newborn infant psychologically. The
> > fact that the adult psyche is not passed on from mother to child in
> > the womb makes me think that genuine experience is required to
> > generate significance of a certain qualitative character.
>
> You need billions of years of complex struggles to generate a genome.
> But you need just some hours of love + 9 months to copy it.
> Consciousness and life are easier to explain as relations between
> elements instead of primitive elements.

Copying a brain digitally into a blank brain should similarly be
easier than running a digital psyche on a computer.

>
>
>
> >>>>> You would agree though that a ventriloquist does not transfer
> >>>>> the ability to feel, see, and understand to his dummy, I assume,
> >>>>> so
> >>>>> doesn't that mean that the difference between a wooden dummy and a
> >>>>> machine capable of human feeling is just a matter of degree of
> >>>>> complexity.
>
> >>>> No. The dummy should behave the same in presence and absence of the
> >>>> ventriloquist. But even more, the "dummy" body should do the right
> >>>> computations.
>
> >>> To me, the computations are the ventriloquist. They are just a way
> >>> for
> >>> the ventriloquist to save his act on disk, so that they can be
> >>> executed at a later time through the dummy.
>
> >> You confuse a particular program, with a universal one, having the
> >> same self-referential ability as you and me.
>
> > Our self-referential ability is an aspect of our awareness though. I
> > can't see a reason to assume that a universal program's self-reference
> > would be a form of awareness, any more than my image in the mirror is
> > an emulation of me.
>
> "Emulation" does not apply to a mirror, which uses elementary
> reflections (bouncing photons, Descartes' laws, etc.).
> A machine's self-reference concerns a machine which dynamically
> emulates and transforms itself. It is a very different phenomenon
> based on very different fields. The analogy with the mirror should
> not be taken too seriously.

I think the mirror analogy is useful as a reductio ad absurdum though.
We can see that the image bears excellent reverse fidelity optically,
but we are not tempted to say that this behavior has any experience at
all. The mirror image behaves perfectly in its environment, and so I
would say it constitutes a form of simplistic UM execution. If I could
only get a mirror to show you the image of someone else and hook it
up to two racks of impressive servers, how could you tell that I had
not built an optical AGI person? Just because we put a lot of
sophisticated arithmetic in between the original and the emulation
doesn't mean that the emulation process isn't empty of feeling. A
mirror is a generator of optical p-zombies.

>
>
>
> >>>>> If so, I think to claim that explains qualia almost
> >>>>> completely is not only premature, but, to my mind, somewhat
> >>>>> deceptive.
> >>>>> It's a con. (Sorry, not accusing you personally - just the
> >>>>> presumption
> >>>>> of the position).
>
> >>>> The theory explains why numbers develop many sorts of beliefs.
> >>>> Some of
> >>>> them being lived as self-referentially true but non communicable,
> >>>> or
> >>>> non provable. They also follow axioms or theorems in theories of
> >>>> qualia done independently of comp.
>
> >>> It sounds promising, but without an example it's oblique to me. It's
> >>> critical acclaim of an idea that I haven't been able to get out of
> >>> the
> >>> intriguing packaging. Isn't there some natural language example you
> >>> can give me of the theory - without variables or big picture
> >>> generalizations?
>
> >> I am specialized in theory and big picture. Examples abound: look
> >> around you.
>
> > When I look around me I see a world that makes sense as a concrete
> > experience, not as an arithmetic abstraction.
>
> All numbers among those being computational state of machines and
> having a long computation/history provably say this too.

Why do they, though? What is gained by arithmetic pretending to be
non-arithmetic? Why can't it all just stay in condensed machine-language
quantitative notations instead of somehow inventing 'experience' and
qualia out of nowhere?

>
>
>
> >>> Can you tell me a story about one particular number
> >>> and why it has developed a belief, or one particular qualia that is
> >>> explained by a particular computation theory?
>
> >> Yes. Take the life of Craig Weinberg as an example. Assuming comp,
> >> this illustrates a point of the theory: machines have necessarily an
> >> hard time to conceive that comp might be true. Comp explains why such
> >> an intuition is correct. In Plotinian term this is because some of
> >> our
> >> beliefs are true, or connected to the one-without-name.
>
> > Craig Weinberg used to assume comp though. What changed?
>
> Craig Weinberg did not understand the comp mind-body problem, and
> decided on another theory, which is still compatible with comp, except
> that for some unknown reason he wants it to be non-comp a priori.
> Perhaps some machine bit you?

lol, machines bite me all the time. I used to make my first computer
do math in an endless loop on screen to punish it. I don't know why
you want my theory to be non-comp, I say that sense is both comp and
non-comp. It's the variance x invariance between the two.

> Some other people abandon comp after understanding UDA, because they
> *want* a primitively material universe (but that is wishful thinking,
> because there is no evidence that there is a primitive universe, nor
> that comp is untrue).

Yeah I don't need it to be primitively material, but I don't need it
to be primitively immaterial either. It's the overlap that is mundane
reality and the underlap that is profound esoterica.

>
>
>
> >>>>>> I think the hard problem is 99% solved, and 100% metasolved. And
> >>>>>> given
> >>>>>> that the solution predicts how matter appears and behave, the
> >>>>>> only
> >>>>>> thing to do to get the whole picture is to derive physics from
> >>>>>> self-
> >>>>>> reference/machine's theology. This might lead to a refutation of
> >>>>>> comp,
> >>>>>> or to a refutation of the classical theory of knowledge
> >>>>>> (although I
> >>>>>> doubt this can be possible).
>
> >>>>> I think that the way it approaches the hard problem is itself
> >>>>> self-
> >>>>> referential. By equating consciousness with computation to begin
> >>>>> with,
> >>>>> it makes sense that computation can be used to find itself to be
> >>>>> the
> >>>>> source of consciousness. To me, the facts that consciousness is
> >>>>> private and non-computable are the least descriptive possible
> >>>>> aspects of it.
>
> >>>> The theory explains the role of consciousness: it speeds up UMs
> >>>> relatively to other UMs.
>
> >>> That concurs with my ideas too. Cumulative entanglement is a way of
> >>> encapsulating or recapitulating computation (sort of literally
> >>> 'coming
> >>> to a head') - but, I don't think it gets to the heart of the
> >>> matter at
> >>> all. It doesn't address the qualitative quality of qualia.
>
> >> That is the whole point of the theory. Modal logic makes it possible to
> >> handle qualitative features, and arithmetical self-reference offers
> >> the (variate) modal logics on a plateau, and this by distinguishing
> >> the communicable parts from the non communicable parts (and even the
> >> first person singular parts from the first person plural parts).
>
> >> But with all this, it would have been still possible that those
> >> qualia
> >> are epiphenomenal. The point here was to explain that the theory
> >> gives
> >> a role (and thus a 3-functional-role, of the kind capable of being
> >> selected by evolution) to consciousness (the quality) in the probable
> >> worlds/computations. So consciousness is not epiphenomenal. So comp
> >> explains the quality, the non communicability of the quality, and
> >> provides to consciousness a role in the beyond-cosmic competition
> >> between all the UMs and LUMs. They can also recognize themselves, as
> >> UMs or LUMs, and climb toward [no-name], from reality layers to
> >> reality layers.
>
> > Why would encapsulating information make sense in 3-p though? If the
> > computation already exists as is, what wants it to be re-presented in
> > a different way through awareness?
>
> Why? The fact is: numbers do that all the time. Universal numbers
> encapsulate themselves and all other UMs, infinitely often. Why?
> Because this follows from addition and multiplication.
> From the inside, the reason is different, and more akin to the idea that
> truth (God) is a universal soul attractor, in the long run.

The truth is a universal soul attractor in my view because the
singularity can only appear to break itself into pieces by involuting
timespace. In the long run, it can only be what it is. The idea of
addition and multiplication encapsulating something though is a
sensorimotive strategy of a human mind. Scaling up a computation to a
billion bits doesn't involve any actual encapsulation - each one of
those billion bits has to do something, it's only our understanding of
it and our control of it through programming algorithms that has an
encapsulated re-presentation.

>
> >>> To say that
> >>> consciousness has a role in a machine universe is putting the cart
> >>> before the horse. It is the machine that has a role in supporting
> >>> consciousness.
>
> >> That is what most mechanists say. But they are ultimately wrong. It
> >> is
> >> the consciousness of the (L)UMs which select the consistent
> >> continuations, and this is concomitant with the deepening of the
> >> stories, and the "body apparitions".
> >> I recall that physics has become a branch of machine psychology, if
> >> the UDA reasoning is valid.
>
> > What makes you so sure that it's the machine that has consciousness
> > and not consciousness that perceives mechanism (among other things)?
>
> ?
> I was just saying that it is consciousness which perceives the mechanism.

You're saying it is the consciousness *of the (L)UMs*. Why aren't the
(L)UMs a figment of the consciousness of the material medium hosting
them?

>
>
>
> >>> To say that consciousness has a role in computation is
> >>> to say that a screenplay has a role within a movie set, but that the
> >>> stage and props are primitives from which movieness arises.
>
> >> ?
>
> > I'm saying that a theater exists to play movies for an audience. Your
> > view seems to me to be saying that if you build a room with seats and
> > a projector that a movie will appear when you turn out the lights.
>
> ?
> (you can't extrapolate from metaphors).

boo. I thought that was a pretty good one actually?

>
>
>
> >> Consciousness selects the histories (like in the WM duplication), and
> >> in each history, it speeds-up the computations (like in engineering).
>
> > I agree consciousness recapitulates through its selections, but there
> > would be no point in recapitulating anything if the genuine experience
> > weren't significant to begin with.
>
> Right.
>
> > Does one computation have any more
> > inherent significance than another?
>
> Certainly not, from the 3-perspective, and certainly yes for most 1-
> perspectives.
>
>
>
> >>>>> It diminishes the relevance of how significance is achieved
> >>>>> through
> >>>>> qualia, minimizes the intensity of biological commitment to
> >>>>> survival
> >>>>> and things like the difference between pain and pleasure.
>
> >>>> I have no clue why you say so.
>
> >>> Because numbers don't have to care about anything.
>
> >> Ah?
> >> For 3-numbers, that is obvious? Nor does a brain or anything third-
> >> person describable. So if I say that a machine thinks, or that a
> >> number thinks, I am always talking of the first person associated
> locally and relatively with it. In that sense, numbers and machines can
> think, trivially (in the comp theory).
> >> So here, you are just saying that comp is false, but without
> >> providing
> >> an argument.
>
> > Why do first person local numbers have to care about anything?
>
> Because they are naturally attracted by the good. First the personal
> good, then for higher purpose.
> The personal good itself might come from very simple routines like "do
> whatever you can for eating and avoiding being eaten". But this, when
> encoded can be something akin to the Mandelbrot set, or the UD,
> leading to extremely complex 3-internal situations, and to even more
> surprises from the 1-perspectives.

Isn't the 'good' just what is defined arbitrarily by the rules of its
initial state?

>
>
>
> >>>>> I don't see
> >>>>> that a number can be spectacularly painful. Unless you're talking
> >>>>> about a particular arithmetic configuration that explains misery
> >>>>> and
> >>>>> ecstasy or blue versus red?
>
> >>>> I don't see any problem here, other than mathematical questions.
> >>>> You
> >>>> can't refute Newton physics by saying that it cannot predict
> >>>> weather.
>
> >>> But shouldn't you refute the use of Newtonian physics to predict
> >>> weather when people suggest that there is no problem in doing it?
>
> >> Well, I should refuse (not refute) the use of Newton for weather
> >> forecasting, because it would not be affordable. But the behavior of
> >> the weather does not refute Newton's laws, for some level of
> >> description.
>
> > The experience of seeing in color doesn't violate Maxwell's equations
> > at some level of description either,
>
> Wow. That's comp!
>
> > but neither do field equations of
> > any kind anticipate the basic visual qualities of colors themselves.
>
> I guess you are right. But arithmetic is full of processes leading to
> states having no shorter history or explanation. So again, that fits
> very well with the comp hypothesis. In this case, of basic visuals,
> you need Maxwell+self-reference+deep history.
>
I still don't see color coming out of the function of self-reference
and deep history alone, although those are part of it. There needs to
be that unique signifier that comes from something being a hole in the
whole singularity. It can't arise out of just generic, color-like
impressions - it is visual specificity incarnate. The visible spectrum
is a truth anchor that has no one-to-one correlation to any other
sense. It can't be translated into sound or electromagnetic
equations. Qualia are subtractive - holes in the whole.

Craig
