On 1/27/2012 15:36, Craig Weinberg wrote:
On Jan 27, 12:49 am, acw <a...@lavabit.com> wrote:
On 1/27/2012 05:55, Craig Weinberg wrote:
On Jan 26, 9:32 pm, acw <a...@lavabit.com> wrote:

There is nothing on the display except transitions of pixels. There is
nothing in the universe except transitions of states.

Only if you assume that our experience of the universe is not part of
the universe. If you understand that pixels are generated by equipment
we have designed specifically to generate optical perceptions for
ourselves, then it is no surprise that it exploits our visual
perception. To say that there is nothing in the universe except the
transitions of states is a generalization presumably based on quantum
theory, but there is nothing in quantum theory which explains how
states scale up qualitatively, so it doesn't apply to anything except
the quantum scale. If you're talking about 'states' in some other sense,
then it's not much more explanatory than saying there is nothing except
for things doing things.

I'm not entirely sure what your theory is,

Please have a look if you like: http://multisenserealism.com



Seems quite complex, although it might become testable if your theory were developed in enough detail to offer concrete predictions.

but if I had to make an
initial guess (maybe wrong), it seems similar to some form of
panpsychism directly over matter.

Close, but not exactly. Panpsychism can imply that a rock has human-
like experiences. My hypothesis can be categorized as
panexperientialism because I do think that all forces and fields are
figurative externalizations of processes which literally occur within
and through 'matter'. Matter is in turn diffracted pieces of the
primordial singularity.
Not entirely sure what you mean by the singularity, but okay.

It's confusing for us because we assume that
motion and time are exterior conditions, but if my view is accurate,
then all time and energy are literally interior to the observer as an
experience.
I think most people realize that the sense of time is subjective and relative, as with qualia. I think some form of time is required for self-consciousness. There can be different scales of time: for example, the local universe may very well run at Planck time (a guesstimate based on popular physics theories - we cannot know, and with COMP there's an infinity of such frames of reference), but our conscious experience is much slower relative to that Planck time, usually assumed to run at a variable rate of about 1-200 Hz (neuron-spiking frequency), although maybe observer moments could be even smaller in size.
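
A back-of-the-envelope sketch of that scale gap (Python; the Planck-time and spiking figures are the standard textbook values, and the comparison is purely illustrative):

    # Rough comparison of Planck time vs. a neural spiking interval.
    PLANCK_TIME = 5.39e-44      # seconds (standard value)
    SPIKE_PERIOD = 1.0 / 200.0  # seconds, i.e. one interval at ~200 Hz

    ratio = SPIKE_PERIOD / PLANCK_TIME
    print(f"One 200 Hz spike interval spans ~{ratio:.1e} Planck times")
    # -> ~9.3e+40, about 41 orders of magnitude between the two scales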

What I think is that matter and experience are two
symmetrical but anomalous ontologies - two sides of the same coin - so
that our qualia and content of experience are descended from the
accumulated sense experience of our constituent organisms, not
manufactured by their bodies, cells, molecules, or interactions. The two
are opposite expressions (a what & how of matter and space, and a who
& why of experience or energy and time) of the underlying sense that
binds them to the singularity (where & when).

Accumulated sense experience? Our neurons do record our memories (lossily, as we also forget), and interacting "matter" does lead to state changes. Still, your theory feels much like a reification of matter and qualia (having them be nearly the same thing), and I think it's possible to find some inconsistencies here; more on this later in this post.

Such theories are testable and
falsifiable, although only in the 1p sense. One thing worth
keeping in mind is that whatever our experience is, it has to be
consistent with our structure (or, if we admit it, our computational
equivalent) - it might be more than that, but it cannot be less.
We wouldn't see in color if our eyes' photoreceptor cells didn't absorb
overlapping ranges of light wavelengths and then process them throughout
the visual system (in some parts in not-so-general ways, in
others in more general ways). The structures that we are greatly limit
the nature of our possible qualia.

I understand what you are saying, and I agree the structures do limit
our access to qualia, but not the form. Synesthesia, blindsight, and
anosognosia show clearly that at the human level at least, sensory
content is not tied to the nature of mechanism. We can taste color
instead of seeing it, or know vision without seeing. This is not to say
that we aren't limited by being a human being - of course we are - but
our body is as much a vehicle for our experience as our experience is
filtered through our body. Indeed the brain makes no sense as anything
other than a sensorimotive amplifier/condenser.

Synesthesia can happen for multiple reasons, although one possible cause is that some parts of the neocortical hierarchy are more tightly inter-connected, which lets sense-data from one region directly affect the processing of sense-data from an adjacent region, yielding experience of both qualia simultaneously. I don't see how synesthesia contradicts mechanism; on the contrary, mechanism explains it quite well. Blindsight seems to me to be due to the neocortex being very good at prediction and at integrating data from other senses; more on this idea can be found in Jeff Hawkins' "On Intelligence". I can't venture a guess about anosognosia - it seems like a complicated-enough neurophysiology problem.

Your theory would have to at least
take structural properties into account or likely risk being shown wrong
in experiments that would be possible in the more distant future (of
course, since all such experiments discuss the 1p, you can always reject
them, because you can only vouch for your own 1p experiences and you
seem to be inclined to disbelieve any computational equivalents merely
on the ground that you refuse to assign qualia to abstract structures).

As far as experiments, yes, I think experiments could theoretically be
done in the distant future, but it would involve connecting the brain
directly to other organisms' brains. Not very appetizing, but
ultimately probably the only way to know for sure. If we studied
brain-conjoined twins, we might be able to grow a universal port in our
brain that could be used to join other brains remotely. From there,
there could be a neuron port that can connect to other cells, and
finally a molecular port. That's the only strategy I've dreamed up so
far.

I used to believe in computational equivalents, but that was before I
discovered the idea of sense. Now I see that counting is all about
internalizing and controlling the sense derived from exterior solid
objects. It is a particular channel of cognitive sense which is
precisely powerful because it is least like mushy, figurative,
multivalent feelings. Computation is like the glass exoskeleton or
crust of sensorimotivation. In a sense, it is an indirect version of
the molecular port I was talking about, because it projects our
thinking into the discrete, literal, a-signifying levels of that which
is most public, exterior, and distantly scaled (microcosm and
cosmology).

Do you think brains-in-a-vat or those with auditory implants have no qualia for those areas despite behaving like they do? Do you think they are partial zombies?

To elaborate, consider that someone gets a digital eye. This eye can capture sense data from the environment, process it, then route it to an interface which generates electrical impulses exactly as the original eye did and stimulates the right neurons. Consider the same for the other senses: hearing, touch, smell, taste, and so on. Now consider a powerful-enough computer capable of simulating an environment - first something unrealistic like our video games, then something better like ray-tracing, and eventually full-on physical simulation at any granularity you'd like (this may not yet be feasible in our physical world without slowing the brain down, but take it as a thought experiment for now). Do you think these brains are p. zombies because they are not interacting with the "real" world?

The reason I'm asking is that in your theory, it seems only particular things can cause particular sense data. Here I'm trying to completely abstract away the sense data, make it accessible by proxy, and allow piping any type of data into it (although obviously the brain will only accept data that fits the expected patterns, and I do expect that only correct data will be sent).
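
A minimal sketch of that "pipe any data in by proxy" idea (Python; all the names here - SenseSource, RealEye, SimulatedEye - are hypothetical illustrations of mine, not anyone's actual design): the downstream consumer only ever sees the impulse pattern, never its origin.

    from abc import ABC, abstractmethod
    import random

    class SenseSource(ABC):
        @abstractmethod
        def impulses(self):
            """Return one frame of stimulation values (stand-in for spikes)."""

    class RealEye(SenseSource):
        def impulses(self):
            # Would wrap actual photoreceptor/implant hardware I/O here.
            raise NotImplementedError

    class SimulatedEye(SenseSource):
        def impulses(self):
            # Stand-in for a rendered or physically simulated scene.
            return [random.random() for _ in range(16)]

    def brain_step(source):
        # The "brain" only sees the impulse pattern, never where it came from.
        frame = source.impulses()
        return sum(frame) / len(frame)

    print(brain_step(SimulatedEye()))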

As for 'the universe', in COMP - the universe is a matter of
epistemology (machine's beliefs), and all that is, is just arithmetical
truth reflecting on itself (so with a very relaxed definition of
'universe', there's really nothing that isn't part of it; but with the
classical definition, it's not something ontologically primitive, but an
emergent shared belief).

Right. All I'm doing is taking it a step further and saying that the
belief is not emergent, but rather ontologically primitive. Arithmetic
truth is a sensemaking experience, but sensemaking experiences are not
all arithmetic. There is nothing in the universe that is not a sense
or sense making experience. All 3p is redirected 1p but there is no 3p
without 1p. Sense is primordial.


What I'm talking about is something different. We don't have to guess
what the pixels of Conway's Game of Life are doing because we are the
ones displaying the game as an animated sequence. The game
could be displayed as a single pixel instead and be no different to
the computer.

I have no idea how a randomly chosen computation will evolve over time,
except in cases where one carefully designed the computation to be very
predictable, and even then we can be surprised. Your view of computation
seems to be that it's just something people write to try to model some
process or to achieve some particular behavior - that's the local
engineer's view. In practice computation is unpredictable unless we can
rigorously prove what it can do, and it's also trivially easy to make
machines where we cannot know a damn thing about what they will do
without running them for enough steps. After seeing how some computation
behaves over time, we may form some beliefs about it by induction, but
unless we can prove that it will only behave in some particular way, we
can still be surprised by it. Computation can do a lot of things, and we
should explore its limits and possibilities!
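
As a concrete illustration that the computation is the same whether or not anyone renders it, here is a minimal Game of Life step function (a standard toy implementation in Python, not anyone's particular code) - the glider "moves" in the bare state set, with no display anywhere:

    from collections import Counter

    def step(live):
        # Count live neighbors of every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Birth on 3 neighbors, survival on 2 or 3.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    state = glider
    for _ in range(4):  # a glider is translated by (1, 1) every 4 steps
        state = step(state)
    print(state == {(x + 1, y + 1) for (x, y) in glider})  # True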

I agree, we should explore it. Computation may in fact be the only
practical way of exploring it. I understand how we can be
surprised by the computation, but what I am saying is that the
computer is always surprised by the computation, even while it is
doing it. It doesn't know anything about anything except completing
circuits. It's like handing out a set of colored cards for a blind
crowd to hold up on cue. They perform the function, and you can see
what you expect or be surprised by the resulting mosaic, but the card
holders can't ever understand what the mosaic is.

I wouldn't be so sure. I think if we can privilege the brains of others with consciousness, then we should privilege any systems which perform the same functions as well. Of course we cannot know if anything besides us is conscious, but I tend to favor non-solipsistic theories myself. The brain physically stores beliefs in synapses and neuron bodies, and I see no reason why some artificial general intelligence couldn't store its beliefs in its own data structures, such as hypergraphs and whatnot; the actual physical storage/encoding shouldn't be too relevant as long as the interpreter (program) exists. I wouldn't have much of a problem ascribing consciousness to anything that is obviously behaving intelligently and self-aware. We may not have such AGI yet, but research in those areas is progressing rather nicely.


(unless a time
continuum (as in real numbers) is assumed, but that's a very strong
assumption). (One can also apply a form of MGA with this assumption
(+the digital subst. one) to show that consciousness has to be something
more "abstract" than merely matter.)

It doesn't change the fact that either a human or an AI capable of some
types of pattern recognition would form the internal beliefs that there
is a glider moving in a particular direction.

Yes, it does. A computer gets no benefit at all from seeing the pixels
arrayed in a matrix. It doesn't even need to run the game, it can just
load each frame of the game in memory and not have any 'internal
beliefs' about gliders moving.

Benefit? I only considered a form of narrow AI which is capable of
recognizing patterns in its sense data without doing anything about
them, merely classifying them and possibly drawing some inferences from
them. Both of these are possible using various current AI research.
However, if we're talking about "benefit" here, I invite you to think
about what 'emotions', 'urges' and 'goals' are. We have a
reward/emotional system and its behavior isn't undefined: it can be
reasoned about, and one can even model structures like it
computationally. Imagine a virtual world with virtual physics and
virtual entities living in it; some entities might be programmed to
replicate themselves and acquire resources to do so, or merely to
survive, and they might even have social interactions which result in
various emotional responses within their virtual society. One of the
best explanations for emotions that I've ever seen was given by a
researcher who was trying to build such emotional machines; he did it
by programming his agents with simpler urges, and the emotions were an
emergent property of the system:
http://agi-school.org/2009/dr-joscha-bach-understanding-motivation-em...
http://agi-school.org/2009/dr-joscha-bach-the-micropsi-architecture
http://www.cognitive-ai.com/
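
A toy sketch of that urge-driven style of agent (Python; this is my own illustrative reduction, not Bach's actual MicroPsi architecture) - behavior is just whichever internal urge is currently strongest, and acting damps it:

    import random

    class Agent:
        def __init__(self):
            self.urges = {"hunger": 0.2, "safety": 0.1, "social": 0.1}

        def tick(self):
            # Urges grow over time; acting on one partially satisfies it.
            for name in self.urges:
                self.urges[name] += random.uniform(0.0, 0.1)
            strongest = max(self.urges, key=self.urges.get)
            self.urges[strongest] *= 0.3
            return strongest  # the selected "behavior"

    agent = Agent()
    print([agent.tick() for _ in range(20)])
    # The behavior stream alternates as competing urges wax and wane.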

I understand that completely, but it relies on conflating some
functions of emotions with the experience of them. Reward and
punishment only work if there is qualia which is innately rewarding
or punishing to begin with. No AI has that capacity. It is not
possible to reward or punish a computer.
Yet they will behave as if they have those emotions, qualia, ...
Punishing will result in some (types of) actions being avoided, and rewards will result in some (types of) actions becoming more frequent. A computationalist may claim they are conscious because of the computational structure underlying their cognitive architecture. You might claim they are not, because they don't have access to "real" qualia or because their implementation substrate isn't magical enough? Eventually such a machine may plead that it is conscious and that it has qualia (as it does have sense data), but you won't believe it because it is implemented in a different substrate than you? The same situation goes for substrate-independent minds/mind uploads.
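
That "rewarded actions become more frequent, punished ones rarer" claim is exactly what the simplest reinforcement-learning update does. A minimal sketch (a standard epsilon-greedy two-armed bandit in Python; toy code of mine, not any particular system):

    import random

    values = {"press_lever": 0.0, "ignore_lever": 0.0}  # learned action values
    counts = {a: 0 for a in values}
    ALPHA, EPSILON = 0.1, 0.1

    def reward(action):
        # The environment's verdict: pressing is rewarded, ignoring punished.
        return 1.0 if action == "press_lever" else -1.0

    for _ in range(1000):
        if random.random() < EPSILON:        # occasional exploration
            action = random.choice(list(values))
        else:                                # otherwise exploit the best guess
            action = max(values, key=values.get)
        values[action] += ALPHA * (reward(action) - values[action])
        counts[action] += 1

    print(counts)  # "press_lever" ends up chosen far more often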

It's not necessary since they
have no autonomy (avoiding 'Free Will' for John Clark's sake) to begin
with.

I don't see why not. If I had to guess: is it because you don't grant autonomy to anything whose behavior is fully determined? Within COMP, behavior is deterministic, but indeterminism is also completely unavoidable from the 1p. I don't think 'free' will has anything to do with 1p indeterminism; I think it's merely the feeling you get when you have multiple choices and you use your active conscious processes to select one. Whatever you select, it's always due to other inner processes, which are not always directly accessible to the conscious mind - you do what you want/will, but you don't always control what you want/will. That depends on your cognitive architecture, your memories and the environment (although since you're also part of the environment, the choice will always be quasideterministic, but not fully deterministic).

All we have to do is script rules into their mechanism.
It's not that simple: you can have systems find their own rules/goals. Try looking at modern AGI research.

Some
parents would like to be able to do that, I'm sure, but of course it
doesn't work that way for people. No matter how compelling and
coercive the brainwashing, some humans are always going to try to hack
it and escape. When a computer hacks its programming and escapes, we
will know about it, but I'm not worried about that.

Sure, we're as 'free' as computations are, although most computations we're looking into are those we can control because that's what's locally useful for humans.

What is far more
worrisome and real is that the externalization of our sense of
computation (the glass exoskeleton) will be taken for literal truth,
and our culture will be evacuated of all qualities except for
enumeration. This is already happening. This is the crisis of the
19th-21st centuries. Money is computation. The WalMart parking lot is
the cathedral of the god of empty progress.

There are some worries. I wouldn't blame computation for it, but rather our currently limited physical resources and some emergent social machines which might not have beneficial outcomes - sort of like a tragedy of the commons - however that's just a local problem. On the contrary, I think a lot of our problems have computational solutions; unfortunately we're still some 20-50+ years away from finding them, and I hope we won't be too late.



regardless of how sensing (indirectly accessing data) is done, emergent
digital movement patterns would look like (continuous) movement to the
observer.

I don't think that sensing is indirectly accessed data; data is
indirectly experienced sense. Data supervenes on sense, but not all
sense is data (you can have feelings that you don't understand, or
can't even be sure you have).

It is indirect in the example that I gave because there is an objective
state that we can compute, but none of the agents have any direct access
to it - only to approximations of it. If the agent is external, he is
limited to what he can access through the interface; if the agent is
itself part of the structure, then the limitation lies within itself -
sort of like how we are part of the environment and thus cannot know
exactly what the environment's granularity is (if one exists, and it's
not a continuum or merely some sort of rational geometry or many other
possibilities).

Not sure what you're saying here. I get that we cannot see our own
fine granularity, but that doesn't mean that the sense of that
granularity isn't entangled in our experience in an iconic way.

The idea was that indeed one cannot see their own granularity. I also gave an example of an interface to a system which has a granularity that wouldn't be externally accessible. I don't see what you mean by 'entangled in our experience in an iconic way'. You can't *directly* sense more information than that available directly to your senses; if your eye only captures about 1000*1000 pixels worth of data, you can't see beyond that without a new eye and a new visual pathway (and some extension to the PFC and so on). We're able to differentiate colors because of how the data is processed in the visual system. We're not able to sense strings or quarks or even atoms directly; we can only infer their existence as a pattern, indirectly.


I'm not sure why you say that continuous movement patterns emerge to
the observer, that is factually incorrect.
http://en.wikipedia.org/wiki/Akinetopsia
Most people tend to feel their conscious experience as continuous,
regardless of whether it really is so; we do however notice large
discontinuities, like if we slept or got knocked out. Of course most
bets are off if neuropsychological disorders are involved.

Any theory of consciousness should rely heavily on all known varieties
of consciousness, especially neuropsychological disorders. What good
is a theory of 21st-century adult males of European descent with a
predilection for intellectual debate? The extremes are what inform us
the most. I don't think there is such a thing as 'regardless of
whether it really is so' when it comes to consciousness. What we feel
our conscious experience to be is actually what it feels like. No
external measurement can change that. We notice discontinuities because
our sense extends much deeper than conscious experience. We can tell if
we've been sleeping even without any external cues.


Sure, I agree that some disorders will give important hints as to the range of conscious experience, although I think some disorders may be so unusual that we lose any idea about what the conscious experience is.
Our best source of information is our own 1p and 3p reports.




Also, it would not be very wise to assume humans are capable of sensing
such a magical continuum directly (even if it existed); the evidence
says that humans sense visual information through their eyes:

I don't think that what humans sense visually is information. It can
and does inform us but it is not information. Perception is primitive.
It's the sensorimotive view of electromagnetism. It is not a message
about an event, it is the event.

I'm not sure how to understand that. Try writing a paper on your theory
and see if it's testable or verifiable in any way?

Our own experience verifies it. We know that our sensorimotive
awareness can be altered directly by transcranial magnetic
stimulation. Without evoking some kind of homunculus array in the
brain converting the magnetic changes into 'information' in some
undisclosed metaphysical never-never land (which would of course be
the only place anyone has ever been to personally), we are left
to accept that the changes in the brain and the changes in our feeling
are two different views of the same thing. I would love to collaborate
with someone who is qualified academically or professionally to write
a paper, but unfortunately that's not my department. It seems like I'm
up in the crow's nest pointing to the new world. The rest is up to
everyone else how to explore it.


A small sidenote: a few years ago I considered various consciousness
theories and various possible ontologies. Some of them, especially some
of the panpsychic kinds, sure sound amazing and simple - they may even
lead to religious experiences in some - but if you think about what
expectations to derive from them, or in general what predictions they
make or how to test them, they tend to either fall short or, worse,
lead to inconsistent beliefs when faced with even simple thought
experiments (such as the Fading Qualia one).

Fading qualia is based on the assumption that qualia content derives
from mechanism. If you turn it around, it's equally absurd. If you
accept that fading qualia is impossible, then you also accept that
Pinocchio's transformation is inevitable. The thing that is missing is
that qualia is not tied to its opposite (quantum, mechanism, physics);
it's that both sides of the universe are tied to the where and when
between them. They overlap but otherwise develop in diametrically
opposed ways - with both sides influencing each other, just as
ingredients influence a chef and cooking influences what ingredients
are sold. It's a virtuous cycle where experienced significance
accumulates through time by burning matter across space as entropy.

It's this: 
http://d2o7bfz2il9cb7.cloudfront.net/main-qimg-6e13c63ae0561f4fee41492d92b52097


You have to show that mechanism makes no sense. Given the data that I observe, mechanism is both what my inner inductive senses tell me and what formal induction tells me is the case. We cannot know, but the evidence is very strong towards mechanism. I ask you again to consider the brain-in-a-vat example I gave before. Do you think someone with an auditory implant (example: http://en.wikipedia.org/wiki/Auditory_brainstem_implant http://en.wikipedia.org/wiki/Cochlear_implant) hears nothing? Are they partial zombies to you? They behave in all ways like they sense the sound, yet you might claim that they don't because the substrate is different?

COMP, on the other hand, offers very solid
testable predictions and doesn't fail most thought experiments or
observational data that you can put it through (at least so far). I wish
other consciousness theories were as solid, understandable and testable
as COMP.

My hypothesis explains why that is the case. Comp is too stupid not to
prove itself. The joke is on us if we believe that our lives are not
real but numbers are. This is survival 101. It's an IQ test. If we
privilege our mechanistic, testable, solid, logical sense over our
natural, solipsistic, anthropic sense, then we will become more and
more insignificant, and Dennett's denial of subjectivity will draw
closer and closer to a self-fulfilling prophecy. The thing about
authentic subjectivity is that it has a choice. We don't have to believe
in indirect proof about ourselves because our direct experience is all
the proof anyone could ever have or need. We are already real; we
don't need some electronic caliper to tell us how real.

COMP doesn't prove itself; it requires the user to make some sane assumptions (either the impossibility of zombies, or functionalism, or the existence of the substitution level and mechanism; most of these assumptions make logical, scientific and philosophical sense given the data). It just places itself as the best candidate to bet on, but it can never "prove" itself. COMP doesn't deny subjectivity; it's a very important part of the theory. The assumptions are just: (1p) mind, (some) mechanism (observable in the environment, by induction), arithmetical realism (truth values of arithmetical sentences exist), and that a person's brain admits a digital substitution with 1p preserved (which makes sense given current evidence and the thought experiment I mentioned before).


when
a photon hits a photoreceptor cell, that *binary* piece of information
is transmitted through neurons connected to that cell and so on
throughout the visual system(...->V1->...->V4->IT->...) and eventually
up to the prefrontal cortex.

That's a 3p view. It doesn't explain the only important part -
perception itself. The prefrontal cortex is no more or less likely to
generate visual awareness than the retina cells or neurons or
molecules themselves.

In COMP, you can blame the whole system for the awareness, but you can
blame the structure of the visual system for the way colors are
differentiated - it places great constraints on what the color qualia
can be - certainly not only black and white (given proper
functioning/structure).

Nah. Color could be sour and donkey, or grease, ring, and powder. The
number of possible distinctions, and even their relationships to each
other, is, as you say, part of the visual system's structure, but it
has nothing to do with the content of what actually is distinguished.


It seems to me like your theory is that objects (what is an object here? do you actually assume a donkey to be ontologically primitive?!) emit magical qualia-beams that somehow directly interact with your brain, which itself is made of qualia-like things. Most current science suggests that isn't the case, but surely you can test it, so you should. Maybe I completely misunderstood your idea.


The 1p experience of vision is not dependent upon external photons (we
can dream and visualize), and it is not solipsistic either (our
perceptions of the world are generally reliable). If I had to make a
copy of the universe from scratch, I would need to know that what
vision is all about is feeling that you are looking out through your
eyes at a world of illuminated and illuminating objects. Vision is a
channel of sensitivity for the human being as a whole, and it has
more to do with our psychological immersion in the narrative of our
biography than it does with photons and microbiology. That biology,
chemistry, and physics do not explain this at all is not a small
problem; it is an enormous deal breaker.

You're right that our internal beliefs do affect how we perceive things.
It's not biology's or chemistry's job to explain that to you; emergent
properties of the brain's structure should explain those parts.
Cognitive science and related fields do aim to solve such
problems. It's like asking why an atom doesn't explain the computations
involved in processing this email. Different emergent structures exist
at different levels; sure, one arises from the other, but in many cases
one level can be fully abstracted from the other.

Emergent properties are just the failure of our worldview to find
coherence. I will quote what Pierz wrote again here because it says it
all:

"But I’ll venture an axiom
of my own here: no properties can emerge from a complex system that
are not present in primitive form in the parts of that system. There
is nothing mystical about emergent properties. When the emergent
property of ‘pumping blood’ arises out of collections of heart cells,
that property is a logical extension of the properties of the parts -
physical properties such as elasticity, electrical conductivity,
volume and so on that belong to the individual cells. But nobody
invoking ‘emergent properties’ to explain consciousness in the brain
has yet explained how consciousness arises as a natural extension of
the known properties of brain cells  - or indeed of matter at all. "

If you don't like emergence, think of it in the form of "abstraction". When you write a program in C or Lisp or Java or whatever, you don't care what it gets compiled to: it will work the same on any machine if a compiler or interpreter exists for it and if your program was written in a portable manner. Emergence is similar, but a lot more muddy, as the levels can still interact with each other and the fully "perfect" abstracted system may not always exist, even if most high-level behavior is not obvious from the low-level behavior. Emergence is indeed in the eye of the beholder. Consciousness in COMP is like this: some abstract arithmetical structure that can be locally implemented in your brain has a 1p view. The existence of the 1p view is not something reductionist - it's ontologically primitive (as arithmetical truth/relations) - and it is merely a consequence of some particular abstract machine being contained (or emerging) at some substitution level in the brain. COMP basically says that rich enough machines will have qualia and consciousness if they satisfy some properties, and they cannot avoid that.
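
A minimal illustration of that abstraction point (Python; a toy of mine): two structurally different "substrates" computing the same abstract function are indistinguishable at the abstracted level.

    def add_recursive(a, b):
        # Substrate 1: Peano-style successor counting.
        return a if b == 0 else add_recursive(a + 1, b - 1)

    def add_bitwise(a, b):
        # Substrate 2: carry propagation with bitwise operations.
        while b:
            a, b = a ^ b, (a & b) << 1
        return a

    print(all(add_recursive(x, y) == add_bitwise(x, y) == x + y
              for x in range(20) for y in range(20)))  # True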

My solution is that both views are correct on their own terms in their
own sense and that we should not arbitrarily privilege one view over
the other. Our vision is human vision. It is based on retina vision,
which is based on cellular and molecular visual sense. It is not just
a mechanism which pushes information around from one place to another,
each place is a living organism which actively contributes to the top
level experience - it isn't a passive system.

Living organisms - replicators,

Life replicates, but replication does not define life. Living
organisms feel alive and avoid death. Replication does not necessitate
feeling alive.

You'll have to define what feeling alive is; it shouldn't be confused with being biological. I feel like I have coherent senses - that's what it means to me to be alive. My cells on their own (without any input from me) replicate and keep my body functioning properly. I will try to avoid situations that could kill me, because I prefer being alive, because of my motivational/emotional/reward system. I don't think someone would move or do anything without such a biasing motivational/emotional/reward system. There are some interesting studies on people who had damage to such systems and how it affected their decision-making process.

are fine things, but I don't see why one must confuse replicators with
perception. Perception can exist by itself merely on the virtue of
passing information around and processing it. Replicators can also
exist for similar reasons, but on a different level.

Perception has never existed 'by itself'. Perception only occurs in
living organisms who are informed by their experience. There is no
independent disembodied 'information' out there. There is detection and
response, sense and motive, of physical wholes.

I see no reason why that has to be true; feel free to give some evidence supporting that view. Merely claiming that those people with auditory implants hear nothing is not sufficient. My prediction is that if one were to have such an implant, form some memories with it, then somehow switch back to using a regular ear, their auditory memories from that time would still remain.


Neurons are also rather slow, they can only
spike about once per 5ms (~200Hz), although they rarely do so often.
(Note that I'm not saying that conscious experience is only the current
brain state in a single universe with only one timeline and nothing
more, in COMP, the (infinite amount of) counterfactuals are also
important, for example for selecting the next state, or for "splits" and
"mergers").

Yes, organisms are slower than electronic measuring instruments, but
it doesn't matter, because our universe is not an electronic measuring
instrument. It makes sense to us just fine at its native anthropic
rate of change (except for the technologies we have designed to defeat
that sense).

Sure, the speed is not the most important thing, except when it leads to
us wanting some things to be faster; with our current biological
bodies, we cannot make them go faster or slower, we can only build
faster and faster devices, but we'll eventually hit the limit (we're
nearly there already). With COMP, this is an even greater problem
locally: if you get a digital brain (sometime in the not too near
future)

Sorry, but I think it's never going to happen. Consciousness is not
digital.

It's not digital in COMP either: arithmetical truth is undefinable in arithmetic itself. However, the brain might admit a digital substitution. Try not to confuse the brain and the mind. Some assume they are the same, in which case they are forced to eliminativism (if they assume mechanism), others are forced to less understandable theories (from my perspective, but you probably understand it better than me) like yours (if they assume mechanism is false), while others are forced to COMP (arithmetical ontology) if they don't give up their 1p and assume mechanism (+digital subst. level).

, some neuromorphic hardware is predicted to be a few orders of
magnitude faster (such as some 1000-4000 times our current rate), which
would mean that if someone wanted to function at realtime speed, they
might experience some insanely slow Internet speeds for anything that
isn't locally accessible (for example, between US and Europe or Asia),
which might lead to certain negative social effects (such as groups of
SIMs (Substrate-Independent Minds) that prefer running at realtime speed
congregating at locally accessible hubs, as opposed to using the much
slower Internet). However, such a problem is only locally relevant (here
in this Universe, on this Earth), and is solvable if one is fine with
slowing themselves down relative to some other program; a system
can be designed which allows unbounded speedup (I did write more on this
in my other thread).
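
To make the "insanely slow Internet" point concrete, a quick back-of-the-envelope sketch (Python; 2000x is just an arbitrary point inside the 1000-4000x range above, and 150 ms is a rough present-day US-Europe round trip):

    SPEEDUP = 2000   # assumed subjective speedup factor
    RTT_MS = 150     # assumed transatlantic round-trip time today
    subjective_s = RTT_MS / 1000 * SPEEDUP
    print(f"{RTT_MS} ms feels like {subjective_s:.0f} subjective seconds "
          f"({subjective_s / 60:.0f} minutes) per round trip")
    # -> 300 s, i.e. 5 subjective minutes per network round trip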

We are able to extend and augment our neurological capacities (we
already are) with neuromorphic devices, but ultimately we need our own
brain tissue to live in. We, unfortunately, cannot be digitized; we can
only be analogized through impersonation.

You'd have to show this to be the case then. Most evidence suggests that we might admit a digital substitution level. We cannot know if we'd survive such a substitution from the 1p, and that is a bet in COMP.

Craig



