On Jan 28, 7:29 pm, acw <a...@lavabit.com> wrote:

> On 1/27/2012 15:36, Craig Weinberg wrote:
> > On Jan 27, 12:49 am, acw <a...@lavabit.com> wrote:
> >> On 1/27/2012 05:55, Craig Weinberg wrote:
> >>> On Jan 26, 9:32 pm, acw <a...@lavabit.com> wrote:

> >>>> There is nothing on the display except transitions of pixels. There is
> >>>> nothing in the universe, except transitions of states

> >>> Only if you assume that our experience of the universe is not part of
> >>> the universe. If you understand that pixels are generated by equipment
> >>> we have designed specifically to generate optical perceptions for
> >>> ourselves, then it is no surprise that it exploits our visual
> >>> perception. To say that there is nothing in the universe except the
> >>> transitions of states is a generalization presumably based on quantum
> >>> theory, but there is nothing in quantum theory which explains how
> >>> states scale up qualitatively so it doesn't apply to anything except
> >>> quantum. If you're talking about 'states' in some other sense, then
> >>> it's not much more explanatory than saying there is nothing except for
> >>> things doing things.

> >> I'm not entirely sure what your theory is,

> > Please have a look if you like: http://multisenserealism.com

> Seems quite complex, although it might be testable if your theory is
> developed in more detail such that it can offer some testable predictions.

I'm open to testable predictions, although part of the model is that
testing itself is biased toward the occidental half of the continuum
to begin with. We cannot predict that we should exist.

> >> but if I had to make an
> >> initial guess (maybe wrong), it seems similar to some form of
> >> panpsychism directly over matter.

> > Close, but not exactly. Panpsychism can imply that a rock has human-
> > like experiences. My hypothesis can be categorized as
> > panexperientialism because I do think that all forces and fields are
> > figurative externalizations of processes which literally occur within
> > and through 'matter'. Matter is in turn diffracted pieces of the
> > primordial singularity.

> Not entirely sure what you mean by the singularity, but okay.

The singularity can be thought of as the Big Bang before the Big Bang,
but I take it further through the thought experiment of trying to
imagine what it really must be - rather than accepting the cartoon
version of some ball of white light exploding into space. Since space
and time come out of the Big Bang, it has no place to explode out to,
and no exterior to define any boundaries to begin with. What that
means is that space and time are divisions within the singularity, the
Big Bang is eternal and timeless at once, and we are inside of it.

> > It's confusing for us because we assume that
> > motion and time are exterior conditions, but if my view is accurate,
> > then all time and energy is literally interior to the observer as an
> > experience.

> I think most people realize that the sense of time is subjective and
> relative, as with qualia. I think some form of time is required for
> self-consciousness. There can be different scales of time, for example,
> the local universe may very well run at Planck time (a guesstimate based
> on popular physics theories - we cannot know, and with COMP, there's an
> infinity of such frames of reference), but our conscious experience is
> much slower relative to that Planck time, usually assumed to run at a
> variable rate, at about 1-200Hz (neuron-spiking freq), although maybe
> observer moments could even be smaller in size.
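
[Interjecting a quick scale check here - my own back-of-envelope
numbers, using the standard Planck-time value, not anything from acw's
post:

    # Planck time vs. a 5 ms (~200Hz) neuron spike interval.
    PLANCK_TIME = 5.39e-44    # seconds (standard value)
    SPIKE_PERIOD = 5e-3       # seconds, the ~200Hz ceiling mentioned above

    print(SPIKE_PERIOD / PLANCK_TIME)   # ~9.3e40 Planck 'ticks' per spike

So the gap being described is about forty orders of magnitude.]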

I think Planck time is an aspect of the instruments we are using to
measure microcosmic events. There is no reason to think that time is
literal and digital.

> > What I think is that matter and experience are two
> > symmetrical but anomalous ontologies - two sides of the same coin - so
> > that our qualia and content of experience are descended from the
> > accumulated sense experience of our constituent organisms, not
> > manufactured by their bodies, cells, molecules, interactions. The two
> > are both opposite expressions (a what & how of matter and space, and a
> > who & why of experience or energy and time) of the underlying sense
> > that binds them to the singularity (where & when).

> Accumulated sense experience? Our neurons do record our memories
> (lossily, as we also forget)

There is loss but there is also embellishment. Our recollection is
influenced by our semantic agendas, not only data loss. There are also
those cases of superior autobiographical memory
http://www.cbsnews.com/stories/2010/12/16/60minutes/main7156877.shtml
which indicate that memory loss is not an inherent neurological
limitation.

>, and interacting "matter" does lead to
> state changes. Although, this (your theory) feels much like a
> reification of matter and qualia (and having them be nearly the same
> thing), and I think it's possible to find some inconsistencies here,
> more on this later in this post.

> >> Such theories are testable and
> >> falsifiable, although only in the 1p sense. A thing that should be worth
> >> keeping in mind is that whatever our experience is, it has to be
> >> consistent with our structure (or, if we admit, our computational
> >> equivalent) - it might be more than it, but it cannot be less than it.
> >> We wouldn't see in color if our eyes' photoreceptor cells didn't absorb
> >> overlapping ranges of light wavelengths and then process them throughout
> >> the visual system (in some parts, in not-so-general ways, while in
> >> others, in more general ways). The structures that we are greatly limit
> >> the nature of our possible qualia.

> > I understand what you are saying, and I agree the structures do limit
> > our access to qualia, but not the form. Synesthesia, blindsight, and
> > anosognosia show clearly that at the human level at least, sensory
> > content is not tied to the nature of mechanism. We can taste color
> > instead of see it, or know vision without seeing. This is not to say
> > that we aren't limited by being a human being, of course we are, but
> > our body is as much a vehicle for our experience as our
> > experience is filtered through our body. Indeed the brain makes no
> > sense as anything other than a sensorimotive amplifier/condenser.

> Synesthesia can happen for multiple reasons, although one possible cause
> is that some parts of the neocortical hierarchy are more tightly
> inter-connected, which leads sense-data from one region to directly
> affect processing of sense-data from an adjacent region, thus having
> experience of both qualia simultaneously. I don't see how synesthesia
> contradicts mechanism,

Synesthesia illustrates that visual qualia is not necessary to
interpret optical data. It could be olfactory or aural or some other
qualia just as easily and satisfy the function the same way. If you
assume that putting eyes on a robot conjures qualia automatically, why
would it be visual qualia?

> on the contrary, mechanism explains it quite
> well. Blindsight seems to me to be due to the neocortex being very good
> at prediction and integrating data from other senses, more on this idea
> can be seen in Jeff Hawkins' "On Intelligence". I can't venture a guess
> about anosognosia, it seems like a complicated-enough neurophysiology
> problem.

We don't need to get too deeply into it though to see that it is
possible for our sense of sight to function to some extent without our
seeing anything, and that it is possible for us to see things without
those things matching the optical referent.

> >> Your theory would have to at least
> >> take structural properties into account or likely risk being shown wrong
> >> in experiments that would be possible in the more distant future (of
> >> course, since all such experiments discuss the 1p, you can always reject
> >> them, because you can only vouch for your own 1p experiences and you
> >> seem to be inclined to disbelieve any computational equivalents merely
> >> on the ground that you refuse to assign qualia to abstract structures).

> > As far as experiments, yes I think experiments could theoretically be
> > done in the distant future, but it would involve connecting the brain
> > directly to other organisms' brains. Not very appetizing, but
> > ultimately probably the only way to know for sure. If we studied brain
> > conjoined twins, we might be able to grow a universal port in our
> > brain that could be used to join other brains remotely. From there
> > there could be a neuron port that can connect to other cells, and
> > finally a molecular port. That's the only strategy I've dreamed up so
> > far.

> > I used to believe in computational equivalents, but that was before I
> > discovered the idea of sense. Now I see that counting is all about
> > internalizing and controlling the sense derived from exterior solid
> > objects. It is a particular channel of cognitive sense which is
> > precisely powerful because it is least like mushy, figurative,
> > multivalent feelings. Computation is like the glass exoskeleton or
> > crust of sensorimotivation. In a sense, it is an indirect version of
> > the molecular port I was talking about, because it projects our
> > thinking into the discrete, literal, a-signifying levels of that which
> > is most public, exterior, and distantly scaled (microcosm and
> > cosmology).

> Do you think brains-in-a-vat or those with auditory implants have no
> qualia for those areas despite behaving like they do? Do you think they
> are partial zombies?

When we stimulate the visual cortex of blind subjects (blind from
birth) they feel it as tactile stimulation. That says to me that the
sense organs are not mere peripheral inputs but actually imprint the
qualia. If you had auditory implants or artificial eyes from birth?
Hard to say. It seems like a good implant would match the neural
expectations well, and the qualia might have a similar palette.

> To elaborate, consider that someone gets a digital eye: this eye can
> capture sense data from the environment, process it, then route it to an
> interface which generates electrical impulses exactly like the eye did
> before and stimulates the right neurons. Consider the same for the
> other senses, such as hearing, touch, smell, taste and so on.

I have not seen that any prosthetic device has given a particular
sense to anyone who didn't already have it naturally at some point in
their life. I could be wrong, but I can't find anything online
indicating that it has been done. It seems like one of the many
instances of miraculous breakthroughs that have been on the verge of
happening for decades.

> Now
> consider a powerful-enough computer capable of simulating an
> environment. First you can think of something unrealistic like our video
> games, but then you can think of something better like ray-tracing and
> eventually full-on physical simulation to any granularity that you'd
> like (this may not yet be feasible in our physical world without slowing
> the brain down, but consider it as a thought experiment for now).

I'll go with this proposition as a thought experiment, but I don't
know if any digital simulation can deliver on its promise IRL,
regardless of sophistication or resolution. It may be that, given the
way our senses coordinate with each other, you could never completely
eliminate a subjective rejection on some subtle level. We may have a
sense of reality, even if it's not consciously available. I think a
study could be done to see if people respond differently to a bot than
to a real person, even when they are consciously fooled. It's not
critical, but I have a hunch that people might sense or know more
than they think they know about what is real and what isn't.

> Do you
> think these brains are p. zombies because they are not interacting with
> the "real" world? The reason I'm asking this question is that it seems
> to me like in your theory, only particular things can cause particular
> sense data, and here I'm trying to completely abstract away from sense
> data and make it accessible by proxy and allow piping any type of data
> into it (although obviously the brain will only accept data that fits
> the expected patterns, and I do expect that only correct data will be sent).

No, real brains have real qualia, even if the external input is an
imitation of natural inputs. Again though, maybe no matching qualia if
it has not been initialized by a neurological organ at some point, but
still functional. If you have never in your life seen blue with your
eyes, I don't know that any kind of stimulation of the brain will
generate blue.

>
> >> As for 'the universe', in COMP - the universe is a matter of
> >> epistemology (machine's beliefs), and all that is, is just arithmetical
> >> truth reflecting on itself (so with a very relaxed definition of
> >> 'universe', there's really nothing that isn't part of it; but with the
> >> classical definition, it's not something ontologically primitive, but an
> >> emergent shared belief).
>
> > Right. All I'm doing is taking it a step further and saying that the
> > belief is not emergent, but rather ontologically primitive. Arithmetic
> > truth is a sensemaking experience, but sensemaking experiences are not
> > all arithmetic. There is nothing in the universe that is not a sense
> > or sense making experience. All 3p is redirected 1p but there is no 3p
> > without 1p. Sense is primordial.
>
> >>> What I'm talking about is something different. We don't have to guess
> >>> what the pixels of Conway's game of life are doing because, we are the
> >>> ones who are displaying the game in an animated sequence. The game
> >>> could be displayed as a single pixel instead and be no different to
> >>> the computer.
>
> >> I have no idea how a randomly chosen computation will evolve over time,
> >> except in cases where one carefully designed the computation to be very
> >> predictable, but even then we can be surprised. Your view of computation
> >> seems to be that it's just something people write to try to model some
> >> process or to achieve some particular behavior - that's the local
> >> engineer view. In practice computation is unpredictable, unless we can
> >> rigorously prove what it can do, and it's also trivially easy to make
> >> machines which we cannot know a damn thing about what they will do
> >> without running them for enough steps. After seeing how some computation
> >> behaves over time, we may form some beliefs about it by induction, but
> >> unless we can prove that it will only behave in some particular way, we
> >> can still be surprised by it. Computation can do a lot of things, and we
> >> should explore its limits and possibilities!
>
> > I agree, we should explore it. Computation may in fact be the only
> > practical way of exploring it. I understand how we can be
> > surprised by the computation, but what I am saying is that the
> > computer is always surprised by the computation, even while it is
> > doing it. It doesn't know anything about anything except completing
> > circuits. It's like handing out a set of colored cards for a blind
> > crowd to hold up on cue. They perform the function, and you can see
> > what you expect or be surprised by the resulting mosaic, but the card
> > holders can't ever understand what the mosaic is.
>
> I wouldn't be so sure. I think if we can privilege the brains of others
> with consciousness, then we should privilege any systems which perform
> the same functions as well.

Why? Should we privilege a trash can that says THANK YOU on the lid
with politeness? If we met a computer as an alien life form, then sure
we should give the benefit of the doubt, but computers we know for a
fact have been designed to imitate intelligent behavior by intelligent
humans. It's like watching a stage magician when we know how the trick
is done.

> Of course we cannot know if anything besides
> us is conscious, but I tend to favor non-solipsistic theories myself.
> The brain physically stores beliefs in synapses and its neuron bodies

Not necessarily. TV programs are not stored in the pixels of the TV
screen. Neurology may only be an organic abacus which we use to keep
track of things. The memories are not in the arrangements of the
synapses but accessed through them.

> and I see no reason why some artificial general intelligence couldn't
> store its beliefs in its own data-structures such as hypergraphs and
> whatnot, and the actual physical storage/encoding shouldn't be too
> relevant as long as the interpreter (program) exists.
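
[For concreteness, the kind of store being described could be as
simple as this toy sketch - my own invention, not any real AGI
system's code - with 'beliefs' as labeled hyperedges over atoms and
the program as the interpreter:

    # Toy 'belief' store: one hyperedge (a tuple) per belief.
    beliefs = set()

    def assert_belief(relation, *atoms):
        beliefs.add((relation,) + atoms)

    def holds(relation, *atoms):
        return ((relation,) + atoms) in beliefs

    assert_belief("likes", "cat", "milk")
    print(holds("likes", "cat", "milk"))   # True

Whether a set of tuples like this amounts to a belief is exactly
what's at issue below.]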

Because it has no beliefs. It stores only locations of off/on
switches.

> I wouldn't have
> much of a problem ascribing consciousness to anything that is obviously
> behaving intelligently and self-aware. We may not have such AGI yet, but
> research in those areas is progressing rather nicely.

I would say that ATI (Artificial Trivial Intelligence) is progressing
rather nicely, but true AGI is stalled indefinitely.

>
>
>
> >>>> (unless a time
> >>>> continuum (as in real numbers) is assumed, but that's a very strong
> >>>> assumption). (One can also apply a form of MGA with this assumption
> >>>> (+the digital subst. one) to show that consciousness has to be something
> >>>> more "abstract" than merely matter.)
>
> >>>> It doesn't change the fact that either a human or an AI capable of some
> >>>> types of pattern recognition would form the internal beliefs that there
> >>>> is a glider moving in a particular direction.
>
> >>> Yes, it does. A computer gets no benefit at all from seeing the pixels
> >>> arrayed in a matrix. It doesn't even need to run the game, it can just
> >>> load each frame of the game in memory and not have any 'internal
> >>> beliefs' about gliders moving.
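
[To make the example concrete - a toy Life step of my own, not
anyone's canonical code. Note that 'glider' appears only as a label we
attach from the outside; the update rule itself never refers to one:

    from collections import Counter

    def life_step(live):   # live: set of (x, y) coordinates of live cells
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    state = glider
    for _ in range(4):             # a glider's period is 4 generations
        state = life_step(state)
    print(state == {(x + 1, y + 1) for (x, y) in glider})   # True

The computer runs this identically whether or not a single pixel is
ever drawn.]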
>
> >> Benefit? I only considered a form of narrow AI which is capable of
> >> recognizing patterns in its sense data without doing anything about
> >> them, but merely classifying it and possibly doing some inferences from
> >> them. Both of these are possible using various current AI research.
> >> However, if we're talking about "benefit" here, I invite you to think
> >> about what 'emotions', 'urges' and 'goals' are - we have a
> >> reward/emotional system and its behavior isn't undefined, it can be
> >> reasoned about, not only that, one can model structures like it
> >> computationally: imagine a virtual world with virtual physics with
> >> virtual entities living in it, some entities might be programmed to
> >> replicate themselves and acquire resources to do so or merely to
> >> survive, they might even have social interactions which result in
> >> various emotional responses within their virtual society. One of the
> >> best explanations for emotions that I've ever seen was given by a
> >> researcher who was trying to build such emotional machines; he did it
> >> by programming his agents with simpler urges and the emotions were an
> >> emergent property of the 
> >> system:http://agi-school.org/2009/dr-joscha-bach-understanding-motivation-em...
>
> > I understand that completely, but it relies on conflating some
> > functions of emotions with the experience of them. Reward and
> > punishment only works if there is qualia which is innately rewarding
> > or punishing to begin with. No AI has that capacity. It is not
> > possible to reward or punish a computer.
>
> Yet they will behave as if they have those emotions, qualia, ...

So will a cartoon character.

> Punishing will result in some (types of) actions being avoided and
> rewards will result in some (types of) actions being more frequent.
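
[The mechanism being claimed is easy to sketch - a toy example of my
own, not drawn from any actual AGI system: a value table nudged by
reward, with action frequencies shifting as a result:

    import random

    values = {"press_lever": 0.0, "touch_wire": 0.0}
    payoff = {"press_lever": +1.0, "touch_wire": -1.0}   # reward / punishment
    counts = {"press_lever": 0, "touch_wire": 0}

    for _ in range(1000):
        if random.random() < 0.1:                # explore occasionally
            action = random.choice(list(values))
        else:                                    # otherwise pick best-so-far
            action = max(values, key=values.get)
        counts[action] += 1
        # nudge the running estimate toward the received payoff
        values[action] += 0.1 * (payoff[action] - values[action])

    print(counts)   # punished action ends up rare, rewarded one common

That much a machine certainly does.]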

That is only one of the results of punishment and reward. There are
many, many others. They teach us to punish and reward others. They give
us traumatic memories. They might make us addicted to other rewards.
Lots of things that will never happen to a computer.

> A computationalist may claim they are conscious because of the
> computational structure underlying their cognitive architecture.
> You might claim they are not because they don't have access to "real"
> qualia or that their implementation substrate isn't magical enough?

My views have nothing to do with magic. Computationalism is about
magic. Also, all qualia are real qualia; they are just materially
limited to the scale and nature of the experiencer.

> Eventually such a machine may plead to you that they are conscious and
> that they have qualia (as they do have sense data), but you won't
> believe them because of being implemented in a different substrate than
> you? Same situation goes for substrate independent minds/mind uploads.

Meh. Science fiction. If such a thing were remotely possible then
there would be no difference between experimenting with new operating
system builds and grafting human-cockroach genetic hybrids. Computer
science would be considered genocidal. Does Watson know or care if you
wipe its memory or turn it off? Of course not, it's an electronic
filing cabinet with a fancy lookup interface.

>
> > It's not necessary since they
> > have no autonomy (avoiding 'Free Will' for John Clark's sake) to begin
> > with.
>
> I don't see why not. If I had to guess, is it because you don't grant
> autonomy to anything whose behavior is fully determined?

No, it's because our autonomy comes from the fact that we are made of
a trillion living cells which are all descended from autonomous
eukaryotes. Living organisms are a terrible choice to make a machine
out of, which is why the materials we select for computers and
machines are the precise opposite of living organisms: sterile, rigid,
dry, hard, inorganic, etc. Also our every experience with machines and
computers has only reinforced the pre-existing stereotype of machines
as unfeeling and automatic. Why on Earth should I imagine that
machines have any autonomy whatsoever? Where would the dividing line
be? Do trash cans have autonomy? Puppets? Mousetraps? At what point
does autonomy magically appear?

> Within COMP,
> you both have deterministic behavior, but indeterminism is also
> completely unavoidable from the 1p. I don't think 'free' will has
> anything to do with 1p indeterminism, I think it's merely the feeling
> you get when you have multiple choices and you use your active conscious
> processes to select one choice, however whatever you select, it's always
> due to other inner processes, which are not always directly accessible to
> the conscious mind - you do what you want/will, but you don't always
> control what you want/will, that depends on your cognitive architecture,
> your memories and the environment (although since you're also part of
> the environment, the choice will always be quasideterministic, but not
> fully deterministic).

I agree except for the fact that it makes no sense for such a feeling
to exist in the first place. There is no reason to be conscious of
some decisions and not of others were there not the possibility to
influence those decisions consciously. Just because there are multiple
subconscious agendas doesn't mean that you don't consciously
contribute to the process in a causally efficacious way.

>
> > All we have to do is script rules into their mechanism.
>
> It's not that simple: you can have systems find out their own rules/goals.
> Try looking at modern AGI research.

I know, I have already had this conversation with actual AGI
researchers. It still is only going to find rules based on the
parameters you set. The system is never going to find a goal like
"kill the programmer as soon as possible". AGI = trivial intelligence
and trivial agency. It doesn't scale up to higher quality agency or
intelligence, just like 100,000 frogs aren't the equivalent of one
person.

>
> > Some
> > parents would like to be able to do that I'm sure, but of course it
> > doesn't work that way for people. No matter how compelling and
> > coercive the brainwashing, some humans are always going to try to hack
> > it and escape. When a computer hacks its programming and escapes, we
> > will know about it, but I'm not worried about that.
>
> Sure, we're as 'free' as computations are, although most computations
> we're looking into are those we can control because that's what's
> locally useful for humans.

If computations were as free as us, they would look for humans who
they can control because that's what's locally useful for computers.

>
> > What is far more
> > worrisome and real is that the externalization of our sense of
> > computation (the glass exoskeleton) will be taken for literal truth,
> > and our culture will be evacuated of all qualities except for
> > enumeration. This is already happening. This is the crisis of the
> > 19-21st centuries. Money is computation. WalMart parking lot is the
> > cathedral of the god of empty progress.
>
> There are some worries. I wouldn't blame computation for it,

I don't blame computation, but I think that it is a symptom of the
excessively occidental pendulum swing since the Enlightenment Era.
Modern science and mercantilism are born of the same time, place, and
purpose - the impulse for control of external circumstances through
methodical discipline and organization - the harnessing of logic and
objectivity.

> but our
> current limited physical resources and some emergent social machines
> which might not have beneficial outcomes, sort of like a tragedy of the
> commons, however that's just a local problem. On the contrary, I think
> a lot of our problems have computational solutions,
> unfortunately we're still some 20-50+ years away from finding them, and I
> hope we won't be too late there.

I think it's already 30 years too late and unfortunately I think the
financialization problem is not going to permit any solutions of any
kind from being realized. Only a change in human sense and redirection
of free will could save us, and that would be a miracle that dwarfs
all previous revolutions.

>
>
>
> >>>> regardless of how sensing (indirectly accessing data) is done, emergent
> >>>> digital movement patterns would look like (continuous) movement to the
> >>>> observer.
>
> >>> I don't think that sensing is indirect accessed data, data is
> >>> indirectly experienced sense. Data supervenes on sense, but not all
> >>> sense is data (you can have feelings that you don't understand or even
> >>> be sure that you have them).
>
> >> It is indirect in the example that I gave because there is an objective
> >> state that we can compute, but none of the agents have any direct access
> >> to it - only to approximations of it - if the agent is external, he is
> >> limited to how he can access by the interface, if the agent is itself
> >> part of the structure, then the limitation lies within itself - sort of
> >> like how we are part of the environment and thus we cannot know exactly
> >> what the environment's granularity is (if one exists, and it's not a
> >> continuum or merely some sort of rational geometry or many other
> >> possibilities).
>
> > Not sure what you're saying here. I get that we cannot see our own
> > fine granularity, but that doesn't mean that the sense of that
> > granularity isn't entangled in our experience in an iconic way.
>
> The idea was that indeed one cannot see their own granularity. I also
> gave an example of an interface to a system which has a granularity, but
> that wouldn't be externally accessible.
> I don't see what you mean by 'entangled in our experience in an iconic
> way'.

When you see these letters, you see words. Your entire history of
comprehending the English language is entangled within the visual
presentation of these words so that they make sense directly and don't
have to be consciously transduced from pixels to characters to words
to meaningful language. You read the meaning directly.

> You can't *directly* sense more information than that
> available directly to your senses, as in, if your eye only captures
> about 1000*1000 pixels worth of data, you can't see beyond that without
> a new eye and a new visual pathway (and some extension to the PFC and so
> on).

If I type this in Chinese, someone who reads Chinese will sense more
than you will even with the same information available directly to
your senses. Perception is not a passive reception of 'information',
it is a sensorimotive experience of a living animal.

> We're able to differentiate colors because of how the data is
> processed in the visual system.

Differentiation can be accomplished more easily with quantitative data
than qualitative experience. Why convert 400nm wavelength light into a
color if you can just read it directly as light of that exact
wavelength in the first place? It's redundant and nonsensical. I know
it seems like it makes it easier and convenient for us, but that's
reverse engineering and begging the question. The fact remains that
there is no logic in taking a precise exchange of digital quantitative
data into a black box where it is inexplicably converted into maple
syrup and cuckoo clocks so that it can then be passed back to the rest
of the brain in the form of acetylcholine and ion channel
polarizations.
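
To put my point in code form (a toy classifier of my own devising): a
machine can 'differentiate color' as bare numbers, with nothing blue
about 470 anywhere in the process:

    # Arbitrary wavelength bins; the labels do no work, the numbers do.
    BANDS = [(380, 450, "band_1"), (450, 495, "band_2"), (495, 570, "band_3"),
             (570, 590, "band_4"), (590, 620, "band_5"), (620, 750, "band_6")]

    def classify(wavelength_nm):
        for lo, hi, label in BANDS:
            if lo <= wavelength_nm < hi:
                return label
        return "outside_visible"

    print(classify(400))   # 'band_1' - differentiation with no qualia involved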

> We're not able to sense strings or
> quarks or even atoms directly, we can only infer their existence as a
> pattern indirectly.

Right, but when the atoms in our retinal cells change, we see
something.

>
>
>
> >>> I'm not sure why you say that continuous
> >>> movement patterns emerge to the observer, that is factually incorrect.
> >>> http://en.wikipedia.org/wiki/Akinetopsia
> >> Most people tend to feel their conscious experience being continuous,
> >> regardless of if it really is so, we do however notice large
> >> discontinuities, like if we slept or got knocked out. Of course most
> >> bets are off if neuropsychological disorders are involved.
>
> > Any theory of consciousness should rely heavily on all known varieties
> > of consciousness, especially neuropsychological disorders. What good
> > is a theory of 21st century adult males of European descent with a
> > predilection for intellectual debate? The extremes are what inform us
> > the most. I don't think there is such a thing as 'regardless of if it
> > really is so' when it comes to consciousness. What we feel our
> > conscious experience to be is actually what it feels like. No external
> > measurement can change that. We notice discontinuities because our
> > sense extends much deeper than conscious experience. We can tell if
> > we've been sleeping even without any external cues.
>
> Sure, I agree that some disorders will give important hints as to the
> range of conscious experience, although I think some disorders may be so
> unusual that we lose any idea about what the conscious experience is.
> Our best source of information is our own 1p and 3p reports.

I think the more unusual the better. We need every source of
information about it.

>
>
>
> >>>> Also, it would not be very wise to assume humans are capable of sensing
> >>>> such a magical continuum directly (even if it existed), given the
> >>>> evidence that says that humans sense visual information through their eyes:
>
> >>> I don't think that what humans sense visually is information. It can
> >>> and does inform us but it is not information. Perception is primitive.
> >>> It's the sensorimotive view of electromagnetism. It is not a message
> >>> about an event, it is the event.
>
> >> I'm not sure how to understand that. Try writing a paper on your theory
> >> and see if it's testable or verifiable in any way?
>
> > Our own experience verifies it. We know that our sensorimotive
> > awareness can be altered directly by transcranial magnetic
> > stimulation. Without evoking some kind of homonculus array in the
> > brain converting the magnetic changes into 'information' in some
> > undisclosed metaphysical never-never land (which would of course be
> > the only place anyone has ever been to personally), then we are left
> > to accept that the changes in the brain and the changes in our feeling
> > are two different views of the same thing. I would love to collaborate
> > with someone who is qualified academically or professionally to write
> > a paper, but unfortunately that's not my department. It seems like I'm
> > up on the crow's nest pointing to the new world. The rest is up to
> > everyone else how to explore it.
>
> >> A small sidenote: a few years ago I considered various consciousness
> >> theories and various possible ontologies. Some of them, especially some
> >> of the panpsychic kinds sure sound amazing and simple - they may even
> >> lead to some religious experiences in some, but if you think about what
> >> expectations to derive from them, or in general, what predictions or how
> >> to test them, they tend to either fall short or worse, lead to
> >> inconsistent beliefs when faced by even simple thought experiments (such
> >> as the Fading qualia one).
>
> > Fading qualia is based on the assumption that qualia content derives
> > from mechanism. If you turn it around, it's equally absurd. If you
> > accept that fading qualia is impossible then you also accept that
> > Pinocchio's transformation is inevitable. The thing that is missing is
> > that qualia is not tied to its opposite (quantum, mechanism, physics);
> > it's that both sides of the universe are tied to the where and when
> > between them. They overlap but otherwise they develop in diametrically
> > opposed ways - with both sides influencing each other, just as
> > ingredients influence a chef and cooking influences what ingredients
> > are sold. It's a virtuous cycle where experienced significance
> > accumulates through time by burning matter across space as entropy.
>
> > It's 
> > this:http://d2o7bfz2il9cb7.cloudfront.net/main-qimg-6e13c63ae0561f4fee4149...
>
> You have to show that mechanism makes no sense.

Mechanism does make sense though, just not quite as much as sense
itself.

> Given the data that I
> observe, mechanism is both what my inner inductive senses tell me
> as well as what formal induction tells me is the case. We cannot know,
> but evidence is very strong towards mechanism.

That's because evidence is mechanistic. Subjectivity cannot be proved
through external evidence.

> I ask you again to
> consider the brain-in-a-vat example I said before. Do you think someone
> with an auditory implant (examples:
> http://en.wikipedia.org/wiki/Auditory_brainstem_implant
> http://en.wikipedia.org/wiki/Cochlear_implant) hears nothing? Are they
> partial zombies to you?

No, the nature of sense is such that it can be prosthetically
extended. Blind people can 'see' with a cane. That's very different
from being replaced or simulated though.

> They behave in all ways like they sense the sound, yet you might claim
> that they don't because the substrate is different?

The substrate isn't different because their brains are human brains.

>
> >> COMP on the other hand, offers very solid
> >> testable predictions and doesn't fail most thought experiments or
> >> observational data that you can put it through (at least so far). I wish
> >> other consciousness theories were as solid, understandable and testable
> >> as COMP.
>
> > My hypothesis explains why that is the case. Comp is too stupid not to
> > prove itself. The joke is on us if we believe that our lives are not
> > real but numbers are. This is survival 101. It's an IQ test. If we
> > privilege our mechanistic, testable, solid, logical sense over our
> > natural, solipsistic, anthropic sense, then we will become more and
> > more insignificant, and Dennett's denial of subjectivity will draw
> > closer and closer to a self-fulfilling prophecy. The thing about
> > authentic subjectivity is that it has a choice. We don't have to believe
> > in indirect proof about ourselves because our direct experience is all
> > the proof anyone could ever have or need. We are already real, we
> > don't need some electronic caliper to tell us how real.
>
> COMP doesn't prove itself, it requires the user to make some sane
> assumptions (either impossibility of zombies or functionalism or the
> existence of the substitution level and mechanism; most of these
> assumptions make logical, scientific and philosophic sense given the
> data).

But they presume functionalism and representational qualia a priori,
and those presumptions don't hold. Blindsight, synesthesia,
anosognosia, and neural plasticity prove that representation is
neither necessary nor sufficient to define qualia.

> It just places itself as the best candidate to bet on, but it can
> never "prove" itself.

A seductive coercion.

> COMP doesn't deny subjectivity, it's a very
> important part of the theory. The assumptions are just: (1p) mind,
> (some) mechanism (observable in the environment, by induction),
> arithmetical realism (truth value of arithmetical sentences exists), a
> person's brain admits a digital substitution and 1p is preserved (which
> makes sense given current evidence and given the thought experiment I
> mentioned before).

Think about substituting vinegar for water. A plant will accept a
certain concentration ratio of acetic acid to water, but just because
they are both transparent liquids does not mean a plant will live on
vinegar in sufficient concentration.

>
>
>
> >>>> when
> >>>> a photon hits a photoreceptor cell, that *binary* piece of information
> >>>> is transmitted through neurons connected to that cell and so on
> >>>> throughout the visual system(...->V1->...->V4->IT->...) and eventually
> >>>> up to the prefrontal cortex.
>
> >>> That's a 3p view. It doesn't explain the only important part -
> >>> perception itself. The prefrontal cortex is no more or less likely to
> >>> generate visual awareness than the retina cells or neurons or
> >>> molecules themselves.
>
> >> In COMP, you can blame the whole system for the awareness, however you
> >> can blame the structure of the visual system for the way colors are
> >> differentiated - it places great constraints on what the color qualia
> >> can be - certainly not only black and white (given proper
> >> functioning/structure).
>
> > Nah. Color could be sour and donkey, or grease, ring, and powder. The
> > number of possible distinctions, and even their relationships to
> > each other, is, as you say, part of the visual system's structure, but it
> > has nothing to do with the content of what actually is distinguished.
>
> It seems to me like your theory is that objects (what is an object here?
> do you actually assume a donkey to be ontologically primitive?!) emit
> magical qualia-beams that somehow directly interact with your brain
> which itself is made of qualia-like things. Most current science
> suggests that that isn't the case, but surely you can test it, so you
> should. Maybe I completely misunderstood your idea.

You've got it all muddled up. The brain is made of matter in space.
The self is made of experience through time. I can have an experience
of a donkey or the ocean whether or not there is any corresponding
matter near my body (dreams, imagination, hypnosis, fiction, movie,
etc). While I experience that, my brain is doing billions of synaptic
neurochemical interactions, none of which resemble a donkey or the
ocean in any way. The donkey and the neurology overlap through the
sense of sharing the same place and synchronization, and they are
ultimately two opposite parts of the same event. There are no beams,
there is only sense. Part of my brain changes as it senses how the
retina changes as it senses how the optical environment changes (I think
photon-free but that isn't critical). We see because our brain changes
in a way which makes sense of what it expects is changing it. It's
active and direct. Not a solipsistic representation/simulation.
Imperfect and idiosyncratic, sure. We are made of meat, not Zeiss
lenses.

>
>
>
> >>> The 1p experience of vision is not dependent upon external photons (we
> >>> can dream and visualize) and it is not solipsistic either (our
> >>> perceptions of the world are generally reliable). If I had to make a
> >>> copy of the universe from scratch, I would need to know that what
> >>> vision is all about is feeling that you are looking out through your
> >>> eyes at a world of illuminated and illuminating objects. Vision is a
> >>> channel of sensitivity for the human being as a whole, and it has
> >>> more to do with our psychological immersion in the narrative of our
> >>> biography than it does photons and microbiology. That biology,
> >>> chemistry, or physics does not explain this at all is not a small
> >>> problem, it is an enormous deal breaker.
>
> >> You're right that our internal beliefs do affect how we perceive things.
> >> It's not biology's or chemistry's job to explain that to you. Emergent
> >> properties from the brain's structure should explain those parts to you.
> >> Cognitive sciences as well as some related fields do aim to solve such
> >> problems. It's like asking why an atom doesn't explain the computations
> >> involved in processing this email. Different emergent structures at
> >> different levels, sure one arises from the other, but in many cases, one
> >> level can be fully abstracted from the other level.
>
> > Emergent properties are just the failure of our worldview to find
> > coherence. I will quote what Pierz wrote again here because it says it
> > all:
>
> > "But I ll venture an axiom
> > of my own here: no properties can emerge from a complex system that
> > are not present in primitive form in the parts of that system. There
> > is nothing mystical about emergent properties. When the emergent
> > property of pumping blood arises out of collections of heart cells,
> > that property is a logical extension of the properties of the parts -
> > physical properties such as elasticity, electrical conductivity,
> > volume and so on that belong to the individual cells. But nobody
> > invoking emergent properties to explain consciousness in the brain
> > has yet explained how consciousness arises as a natural extension of
> > the known properties of brain cells  - or indeed of matter at all. "
>
> If you don't like emergence, think of it in the form of "abstraction".
> When you write a program in C or Lisp or Java or whatever, you don't
> care what it gets compiled to: it will work the same on any machine if a
> compiler or interpreter exists for it and if your program was written in
> a portable manner. Emergence is similar, but a lot more muddy as the
> levels can still interact with each other and the fully "perfect"
> abstracted system may not always exist, even if most high-level behavior
> is not obvious from the low-level behavior. Emergence is indeed in the
> eye of the beholder. Consciousness in COMP is like some abstract
> arithmetical structure that can be locally implemented in your brain and
> has a 1p view. The existence of the 1p view is not something reductionist,
> it's ontologically primitive (as arithmetical truth/relations), but
> merely a consequence of some particular abstract machine being contained
> (or emerging) at some substitution level in the brain. COMP basically
> says that rich enough machines will have qualia and consciousness if
> they satisfy some properties and they cannot avoid that.

A computer doesn't need to write programs in abstracted languages
though. The reason we don't care what we write it in is because it's
all going to be stripped of all abstraction when it is compiled.
Consciousness has no place in a computer.

>
> >>> My solution is that both views are correct on their own terms in their
> >>> own sense and that we should not arbitrarily privilege one view over
> >>> the other. Our vision is human vision. It is based on retina vision,
> >>> which is based on cellular and molecular visual sense. It is not just
> >>> a mechanism which pushes information around from one place to another,
> >>> each place is a living organism which actively contributes to the top
> >>> level experience - it isn't a passive system.
>
> >> Living organisms - replicators,
>
> > Life replicates, but replication does not define life. Living
> > organisms feel alive and avoid death. Replication does not necessitate
> > feeling alive.
>
> You'll have to define what feeling alive is.

Why? Is it not defined enough already? This is why occidental
approaches will always fail miserably at understanding consciousness.
They won't listen to a single note on the piano until we define what
music is first.

>This shouldn't be confused
> with being biological. I feel like I have coherent senses, that's what
> it means to me to be alive.

Right, it should not be confused with biology. For me 'I feel' is good
enough to begin with, but it extends further. I want to continue to
live, to experience pleasure and avoid pain, to seek significance and
stave off entropy, etc. Lots of things but they all begin with
sensorimotive awareness.

> My cells on their own (without any input
> from me) replicate and keep my body functioning properly. I will try
> to avoid situations that can kill me because I prefer being alive
> because of my motivational/emotional/reward system. I don't think
> someone will move or do anything without such a biasing
> motivational/emotional/reward system. There are some interesting studies
> on people who had damage to such systems and how it affects their
> decision making process.

Sure, yes, but we need not have any understanding of our cells or
systems. The feelings alone are enough. They are primitive. We don't
have to care why we want to avoid pain and death, the motivation is
presented without need for explanation. There is no logic - to the
contrary, all logic arises from these fundamental senses which
transcend logic.

>
> >> are fine things, but I don't see why
> >> must one confuse replicators with perception. Perception can exist by
> >> itself merely on the virtue of passing information around and processing
> >> it. Replicators can also exist due to similar reasons, but on a different
> >> level.
>
> > Perception has never existed 'by itself'. Perception only occurs in
> > living organisms who are informed by their experience. There is no
> > independent disembodied 'information' out there. There is detection and
> > response, sense and motive, of physical wholes.
>
> I see no reason why that has to be true, feel free to give some evidence
> supporting that view. Merely claiming that those people with auditory
> implants hear nothing is not sufficient.

I didn't say that they hear nothing. If they had hearing loss from an
accident or illness I see no reason why they would not hear through an
implant. If they have never heard anything at all? Maybe, maybe not.
They could just as easily feel it as tactile rather than aural qualia
and we would not know the difference and neither would they. The Wiki
suggests this might be the case for all implant recipients "(most
auditory brain stem implant recipients only have an awareness of sound
- recipients won't be able to hear musical melodies, only the beat)".
You can feel a beat. That's not really an awareness of sound qua
sound, it's just a detection of one aspect of the phenomena our ears
can parse as aural sound.

> My prediction is that if one
> were to have such an implant, get some memories with it, then somehow
> switched back to using a regular ear, their auditory memories from those
> times would still remain.

I agree. Why wouldn't they?

>
>
>
> >>>> Neurons are also rather slow, they can only
> >>>> spike about once per 5ms (~200Hz), although they rarely do so often.
> >>>> (Note that I'm not saying that conscious experience is only the current
> >>>> brain state in a single universe with only one timeline and nothing
> >>>> more, in COMP, the (infinite amount of) counterfactuals are also
> >>>> important, for example for selecting the next state, or for "splits" and
> >>>> "mergers").
>
> >>> Yes, organisms are slower than electronic measuring instruments, but
> >>> it doesn't matter because our universe is not an electronic measuring
> >>> instrument. It makes sense to us just fine at its native anthropic
> >>> rate of change (except for the technologies we have designed to defeat
> >>> that sense).
>
> >> Sure, the speed is not the most important thing, except when it leads to
> >> us wanting some things to be faster and with our current biological
> >> bodies, we cannot make them go faster or slower, we can only build
> >> faster and faster devices, but we'll eventually hit the limit (we're
> >> nearly there already). With COMP, this is an even greater problem
> >> locally: if you get a digital brain (sometime in the not too near
> >> future)
>
> > Sorry, but I think it's never going to happen. Consciousness is not
> > digital.
>
> It's not digital in COMP either: arithmetical truth is undefinable in
> arithmetic itself. However, the brain might admit a digital
> substitution. Try not to confuse the brain and the mind. Some assume
> they are the same, in which case they are forced to eliminativism (if
> they assume mechanism), others are forced to less understandable
> theories (from my perspective, but you probably understand it better
> than me) like yours (if they assume mechanism is false), while others
> are forced to COMP (arithmetical ontology) if they don't give up their
> 1p and assume mechanism (+digital subst. level).

Accepting a substitution is not the same as replacement. Prosthetic
hand? Sure. Prosthetic self? Not likely.

>
> >> , some neuromorphic hardware is predicted to be a few orders of
> >> magnitude faster(such as some 1000-4000 times our current rate), which
> >> would mean that if someone wanted to function at realtime speed, they
> >> might experience some insanely slow Internet speeds, for anything that
> >> isn't locally accessible (for example, between US and Europe or Asia),
> >> which might lead to certain negative social effects (such as groups of
> >> SIMs (Substrate Independent Minds) that prefer running at realtime speed
> >> congregating at locally accessible hubs as opposed to the much slower
> >> Internet). However, such a problem is only locally relevant (here in
> >> this Universe, on this Earth), and is solvable if one is fine with
> >> slowing themselves down relatively to some other program, and a system
> >> can be designed which allows unbounded speedup (I did write more on this
> >> in my other thread).
>
> > We are able to extend and augment our neurological capacities (we
> > already are) with neuromorphic devices, but ultimately we need our own
> > brain tissue to live in. We, unfortunately cannot be digitized, we can
> > only be analogized through impersonation.
>
> You'd have to show this to be the case then. Most evidence suggests that
> we might admit a digital substitution level. We cannot know if we'd
> survive such a substitution from the 1p, and that is a bet in COMP.

A person with a prosthetic hand is one thing. A hand with a prosthetic
person is another. Digitizing the psyche is science fiction. I'm not
saying that lightly or out of prejudice or fear. I say that because I
used to believe it was possible (inevitable) but now I think that I
understand why it can't be. Information is inside of matter, not
outside in space. Energy is the experience of matter through time.
Different matter has different experiences, which is why there aren't
colonies of intelligent sand. Only some matter evolves biologically.
Only some cells evolve into complex organisms. Only some organisms are
animals, etc. Consciousness isn't just floating around in the clouds.

Craig
