Perception contradicts mechanism directly.
My view is that the universe is that contradiction. The inherent polarization of it is such that it cannot be resolved and that it must be resolved. That is the engine of the cosmos. On the micro and macrocosmic levels (relative to us), the polarity is arithmetic, but on the mesocosmic level (isomorphic to us) the polarization is blurred, ambiguous, and figurative. That's another polarity entirely, but they arise from each other logically.
Isn't it obvious that different levels of perception yield different novel possibilities?
That a ripe peach does something that a piece of charcoal doesn't?
That yellow is different from just a bluer kind of red?
I believe that the sensations you describe are equivalent to certain computations.
What is equivalent? Is an apple equivalent to an orange? It's a matter of pattern recognition. If you recognize a common pattern, you can project equivalence, but objectively, there is no equivalent to yellow. You either see it or it does not exist for you. No computation can substitute for that experience. It has no equivalent. It can be created in people who can see yellow by exposure to certain optical conditions, but also by maybe pushing on your eyeball or falling asleep. Yellow is associated with various computations, but it is not itself a computation. It is a sensorimotive subjective presence.
Perhaps your "sensorimotive subject" supervenes on these computations.
If it did, then why have yellow at all? Why not just have the
computations?
Thus consciousness and computation are higher-level phenomena, and accordingly can be equivalently realized by different physical media, or even as functions which exist platonically in number theory.
Human consciousness is a higher level phenomenon of neurological awareness, which is a higher level phenomenon of biology, genetics, chemistry, and physics.
I think you are on to something with this.
Cool. If you do end up getting what I'm talking about, it's possible that you'll find it pretty interesting. All of this bickering over AGI and zombies is really not at all what I'm here to talk about. Speculating on the consciousness of non-human subjects is really the least valuable implication of my hypothesis. What my idea lets you do is look out of your own eyes and see what you actually see (meaning, image, feeling) without compulsively translating it intellectually into the opposite of what it is (generic, arithmetic mechanism). Then you can get a firm handle on what the difference is, why it's important, and how they can coexist without one disqualifying the other.
My hope is that there is a threshold where it is possible for someone to reach a supersaturated tipping point and crystallize an understanding of what I'm talking about, like those 'When You See It...' memes (http://static.black-frames.net/images/when-you-see-it_____________.jpg). Once you realize that what we perceive is both fact and fiction, and that both fact and fiction are themselves a matter of perception, then it gives you the freedom to appreciate the cosmos as it is, in all its true demented genius, rather than as a theoretical construct to support the existence of fact at the expense of fiction (or vice versa).
It is also a lower level phenomenon of
anthropology, zoology, ecology, geology, and astrophysics-cosmology.
Some psychological functions can be realized by different physical media; some physical functions, like producing epinephrine, can be realized by different psychological means (a movie or a book, memory, conversation, etc.).
How do you get 'pieces' to 'interact' and obey 'rules'? The rules have to make sense in the particular context, and there has to be a motive for that interaction, i.e. sensorimotive experience.
If there were no hard rules, life could not evolve.
'Hard rules' can only arise if the phenomena they govern have a way of being concretely influenced by them. Otherwise they are metaphysical abstractions. The idea of 'rules' or 'information' is a human intellectual analysis. The actual thing that it is would be sensorimotive experience.
Are you advocating subjective idealism or phenomenalism now?
I'm advocating a sense monism encapsulation of existential-essential
pseudo-dualism.
Could you please restate this using words with a conventional meaning?
I'm advocating a universe based entirely on sense, sense being the
unresolvable tension between, yet unity among, subjective experiences
and objective existence.
No. Just the fact of not occupying the same space as my body makes it different.
Not different in any way you could notice.
I would notice if someone that looked exactly like me was standing
somewhere else besides where I'm standing.
If everything in the universe were shifted to the left by 10 meters, would this universe be different?
That's not possible, since space is inside of the universe, not
outside of it. Space is an abstraction we use to understand the
relation between objects.
Would it affect your consciousness in any noticeable way?
The idea of two separate things being 'identical' is a function of pattern recognition. Identical to whom?
There is of course a strong correlation between physical and psychological phenomena of a human mind/body, but that correlation is not causation. Psychological properties can be multiply realized in physical properties,
This means you think that different physical forms can have identical psychological forms. E.g., a computer can have the experience of red.
If the computer was made out of something that can experience red,
then sure.
The human experience of perceiving red is equivalent to a certain
computation.
What computation would that be? If I arrange milk bottles so that they fall over in a pattern which is equivalent to that computation, will the milk bottles see red?
No, the mind which supervenes on the computation of the milk bottles will experience red.
A mind arises from a collection of milk bottles? Automatically? Does
it think about anything other than the one momentary experience of red
that occurs somehow from bottles knocking each other down in some
particular configuration?
Will I see red if I look at the milk bottles?
No.
How can you seriously entertain that as a reality?
You won't see red when you look at a neuron involved in the processing of that sensory data, nor will the individual neurons which serve as the basis for that processing know the experience of red.
I agree. So what is it exactly that does know the experience of red?
Entertaining the idea of milk bottles having a private experience is no more a leap than entertaining the idea that the cells in your brain can do the same.
On one level that's true, since we have no direct access to what other things experience, but it doesn't mean that it's very likely that the experience it has could ever be comparable to that of our brain cells. If it were, there would be no reason to have brain cells at all. We could just be a giant amoeba or pile of sand and have any experience possible, human or otherwise. Instead of needing eyes we could just drill a hole in our skull. Something makes humans different from non-humans; I think that it's related to the experiences of organisms over time as well as the consequences of the physical conditions local to their bodies.
This computation could be performed by any kind of matter that can be arranged into a functional Turing machine. This computation also exists in mathematics already.
I'm confident that no computation generated by a Turing machine is equivalent to seeing red.
We should have an answer in a few decades, when you can ask those with digital brains what color a ripe strawberry is.
Promises promises.
but physical properties can be multiply realized in psychological properties as well. Listening to the same song will show up differently in the brain of different people, and differently even in the same person over time, but the song itself has an essential coherence and invariance that makes it a recognizable pattern to all who can hear it. The song has some concrete properties which do not supervene meaningfully upon physical media.
Different physical properties can be experienced differently, but that's not what supervenience is about. Rather it says that two physically identical brains experiencing the same song will have identical experiences.
Identical is not possible, but the more similar one thing is physically to another, the more likely that their experiences will also be more similar. That's not the only relevant issue though. It depends what the thing is. Comparing a cube of sugar to another cube of sugar is different from comparing twins or triplets of human beings. The human beings are elaborated to a much more unpredictable degree. It's not purely a matter of complexity and probability; there is more sensorimotive development which figures into the difference. We have more of a choice. Maybe not as much as we think, and maybe it's more of a feeling that we have more choice, but nevertheless, the feeling remains that smashing a person's head is different from smashing a coconut.
I hope you don't speak from experience. ;-)
If the universe was only arithmetic, what would be the difference?
The difference between a primitively physical universe, or the difference between a coconut and a human's head?
The difference between committing murder and making a Pina Colada.
Logic gates in a computer can detect and change according to their detection. If this ability forms the "atom" of experience, then by extension, computers possess the appropriate building blocks to build any form of experience.
Maybe, but I think that the computer might have to assemble those
building blocks into the experiences of actual living cells and
organisms first.
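To pin down what "detect and change according to their detection" means at the gate level, here is a minimal sketch (my own construction, not anything from your post): an SR latch built from two cross-coupled NAND gates. The latch "detects" its set/reset inputs, and its stored state changes and then persists accordingly; that is about as small as a detect-and-change unit gets.

    # Minimal sketch: an SR latch from two cross-coupled NAND gates.
    # The latch "detects" its inputs and its stored state changes
    # accordingly, then persists until the next detection.

    def nand(a, b):
        return not (a and b)

    class SRLatch:
        def __init__(self):
            self.q = False
            self.q_bar = True

        def step(self, s, r):
            # Active-low inputs: s=False sets the latch, r=False resets it.
            # Iterate the cross-coupled gates until the outputs settle.
            for _ in range(4):
                self.q = nand(s, self.q_bar)
                self.q_bar = nand(r, self.q)
            return self.q

    latch = SRLatch()
    print(latch.step(s=False, r=True))   # set   -> True
    print(latch.step(s=True, r=True))    # hold  -> True (state persists)
    print(latch.step(s=True, r=False))   # reset -> False

Whether such a unit is an "atom" of experience is exactly what is in dispute; the sketch only shows what the physical building block does.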
What about copying it, by translating this evolved history into a different physical substrate?
Possible, and I have considered that, but it's kind of like faux
antiques or pre-distressed jeans. I'm not sure that it works that way.
Having a book on your shelf isn't the same thing as having read it,
and it's certainly not the same thing if that book is your
autobiography and you haven't lived it.
Our consciousness is a community of a specific kind of organic subjective agent. They perform logical functions but they are not limited to them. It's like saying we could build the Great Barrier Reef out of Play-Doh... maybe in theory, but not really.
Nothing about the physical functions of the brain, neurons, or electrons we observe suggests the existence of a mind.
The particles in the brain model their external reality,
In what way? Where is this model located?
In the patterns of the neuron firings:
http://www.youtube.com/watch?v=MElU0UW0V3Q
That's cool technology, but the model being used is developed by
researchers. The patterns of the neurons are the experiences of the
person, not a model of them. They are the physical presentation that
corresponds to the psychological presentation. We have to reverse
engineer a Rosetta Stone of code equivalence to match up our first
person experience with the third person measurements. Without the
first person reports, there would be no suggestion of a mind.
The forest could be modeling intergalactic p0rn for all we know, but without any experience of the result of that 'model', we can't really say that is what the brain is doing at all. The brain is just living cells doing the things that living cells do.
analyze patterns, process sensory information, digest it, share it with other regions, and enable the body to better adapt and respond to its environment.
The immune system does that too. The digestive system. Bacteria do that.
For all you know, those systems could be conscious. The Craig Weinberg I am communicating with on this list is not Craig Weinberg's immune system, so I have no way to ask your immune system if it is conscious.
Oh, I agree. I think that awareness of different sorts is in everything, but it wouldn't automatically be that way just to fulfill functional purposes. Even if there were a functional advantage, there isn't any functional material process which would or could discover awareness if it wasn't already a built-in potential.
These behaviors and functions suggest the existence of a mind to me.
Only because you have a mind and you are reverse engineering it. If a child compares live brain tissue under a microscope to pancreas tissue or bacteria under a microscope, they would not necessarily be able to guess which one was 'modeling' a TV show and which was just producing biochemistry.
If you zoom in on anything too much you crop out all the context. If you zoomed in to the point where all you could see is a silicon atom, you would have no idea if it is part of an integrated circuit or a grain of sand on a beach.
So what context would you have to zoom out from or in to before the existence of a mind presents itself in the absence of any pre-existing notion of 'mind'? Like what pattern besides red would make you see red if you had never seen it?
A similar computation and state compared to that of human subjects who report experiencing the sight of red.
What would that be though? What is similar to red but not a color?
The suggestion of a mind is purely imaginary, based upon
a particular interpretation of scientific observations.
When we build minds out of computers it will be hard to argue that that interpretation was correct.
Ah yes. Promissory Materialism. Science will provide. I'm confident
that the horizon on AGI will continue to recede indefinitely like a
mirage, as it has thus far. I could be wrong, but there is no reason
to think so at this point.
If you told any AI researcher in the 70s of the accomplishments from the links I provided, they would break out the champagne bottles. The horizon is not receding; rather, you are in the slowly warming pot, not noticing it is about to boil.
I do think there is a lot of great science and technology coming out of it, but I think we are no closer to true artificial general intelligence than we were in 1975. We just understand more about emulating certain functions of intelligence. When we approach it from a 1-p:3-p sense-based model rather than a 3-p computation model, I think we will have the real progress which has eluded us thus far.
I think your analogy is in error. You cannot compare the strip of metal to the trillion cell organism. The strip of metal is like a red-sensing cone in your retina. It is merely a sensor which can relay some information. How that information is interpreted then determines the experience.
Aren't you just reiterating what I wrote? "because a strip of metal is so different from a trillion cell living being"
What I mean is that the metal strip is not the mind, and should not be equated with one. It is more like a temperature-sensitive nerve ending. A thermostat with the appropriate additional computational functions could feel, sense, be aware, think, be conscious, care, etc.
or it could just compute and report a-signifying data.
(I doubt we share the same sense of humor with thermostats either).
In contrast, we understand what temperature means to us and why we care about it.
An appropriately designed machine could care about it too.
Why do you think that a machine can care about something?
We do. And we are molecular machines.
We are also sentient human beings. It's only the subjective view of the thing as a whole that cares, not the vibrating specks that make up the tubes and filaments of the monkey body.
I can agree with this. Going slightly further, the composition of those tubes and filaments should make no difference in what the thing as a whole might be capable of feeling.
We know that it makes some difference, because diseases which change
the flexibility of those tubes or permittivity of those filaments make
differences in what we as a whole are capable of feeling. Why wouldn't
it? Why would a machine executed in semiconductor glass be any more
effective at reproducing the anguish of a suffering animal than a pile
of finely chopped scallions would be at running a spreadsheet
application? Why doesn't matter matter?
"There cannot be a Microsoft Windows difference without an Intel
chip
difference". To say that Windows determines what the chip does you
would say that Intel and AMD chips both supervene upon Windows. It
seems backwards at first but it sort of makes sense, sort of a
synonym
for 'rely upon'. It's still kind of an odious and pretentious way to
say something pretty straightforward, so I try to just say what I
mean
in simpler terms.
I see, it is defined confusingly. I can also see it interpreted as follows: the state of the Microsoft Word program cannot change without a change in the state of the underlying computer hardware, but not all changes in the computer hardware correspond to changes in the state of the program.
Right, I can see that interpretation too. That's why I hate reading
philosophy, haha.
and reduces our cognition to an unconscious chemical reaction.
If I say all of reality is just a thing, have I really reduced it?
It depends what you mean by a 'thing'.
Does it?
Of course. If I say that an apple is a fruit, I have not reduced it as much as if I say that it's matter.
How you choose to describe it doesn't change the fact that it is an apple.
I think the exact opposite. There is no such fact. It's only an apple to us. It's many things to many other kinds of perceivers on different scales. An apple is a fictional description of an intangible, unknowable concordance of facts.
Likewise, saying the brain is a certain type of chemical reaction does not devalue it. Not all chemical reactions are equivalent, nor are all arrangements of matter equivalent. With this fact, I can say the brain is a chemical reaction, or a collection of atoms. Neither of those statements is incorrect.
I don't have a problem with that. You could also say the brain is a
certain type of hallucination.
Explaining something in no way reduces anything unless what you really value is the mystery.
I'm doing the explaining. You're the one saying that an explanation is not necessary.
Your explanation is that there is no explanation.
Not really.
An explanation, if it doesn't make new predictions, should at least make the picture more clear, providing a more intuitive understanding of the facts.
I think that mine absolutely does that.
Also, I think it is incorrect to call it an "unconscious chemical reaction". It definitely is a "conscious chemical reaction". This is like calling a person a "lifeless chemical reaction".
Then you are agreeing with me. If you admit that chemical reactions themselves are conscious,
Some reactions can be.
then you are admitting that awareness is a molecular sensorimotive property and not a metaphysical illusion produced by the brain.
Human awareness has nothing to do with whatever molecules may be feeling, if they feel anything at all.
Then you are positing a metaphysical agent which supervenes upon molecules to accomplish feeling (which is maybe why you keep accusing me of doing that).
Yes, the mind is a computation which does the feeling and it supervenes on the brain.
Why does the computation need to do any feeling?
When a process is aware of information it must have awareness.
I can be aware of Chinese subtitles, but I have no awareness of Chinese. A CD player can play a sad song for us, but that doesn't mean that it makes the CD player sad. Every physical thing has some kind of 'awareness' or sensorimotive content, however primitive, but computation itself does not necessarily have its own existence. It's just a text in the context of our awareness. A cartoon character doesn't have any feelings. It can be seen to respond to its cartoon environment, but it's not the awareness of the cartoon you are watching; it's the awareness of the cartoonist, the producer, the writer, the animator that you are watching.
Why have we not seen a single information processing system indicate
any awareness beyond that which it was designed to simulate?
Watson was aware of the Jeopardy clue being asked, was it not?
No. Watson is just a massive array of semiconductors eating power and crapping out zillions of hierarchically distilled results. It's an intelliformed organization, not an intelligent organism. It doesn't care if it's right or wrong or how well it understands the clue; it's just going to run its meaningless algorithms on the meaningless data it's being fed. No different from a doll that cries when you pick it up. There may be a mercury switch that detects being picked up, and there may be a chip that detects the mercury switch and plays the audio sample of crying, but there is no sense making going on between the two things. The doll as a whole doesn't know anything.
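The doll's wiring can be sketched in a few lines (names are hypothetical, just to mirror the description above). What the sketch shows is that the coupling between detection and response is arbitrary wiring:

    # Sketch of the doll as described: switch -> chip -> audio sample.

    def mercury_switch(tilted):
        # "Detects" being picked up: a bead of mercury closes a contact.
        return tilted

    def play_sample(name):
        print(f"playing audio sample: {name}")

    def doll_controller(tilted):
        if mercury_switch(tilted):
            play_sample("crying.wav")  # could as easily trigger anything else

    doll_controller(tilted=True)  # -> playing audio sample: crying.wav

Nothing in the switch knows about crying, and nothing in the sample player knows about being picked up; whatever "sense making" there is lives only in the wiring someone chose.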
The Herbivores in the simulation I posted yesterday are aware of nearby predators and their color.
They are designed to simulate something, so they do. How does that
constitute indicating an awareness beyond their design?
What kind of awareness does a book have without a reader? Information is something I used to assume could exist on its own, but now it's like a glaring red Emperor's New Clothes to me. A brick is nothing but 'information' and information is really the brick. Um, yeah. I understand the appeal, but it's a figment of a 21st century Occidental imagination.
How does it come to affect physical things?
Because the aware systems we are familiar with are supervening on physical objects.
So because awareness needs physical objects, that means objects are
affected by awareness? But then somehow that doesn't mean that human
awareness affects our neurological behaviors?
Changes in states of the mind are reflected by physical changes.
That's what I've been saying, but you insist that it's only changes in
the mind which are reflections of physical changes and not the other
way around. You say that if the mind's changes affect the physical
processes then it has to be magic.
If that were the case then you could never have a computer emulate it without exactly duplicating that biochemistry. My view makes it possible to at least transmit and receive psychological texts through materials as communication and sensation, but your view allows the psyche no existence whatsoever. It's a complete rejection of awareness into metaphysical realms of 'illusion'.
I think you may be mistaken that computationalism says awareness is an illusion. There are some eliminative materialists who say this, but I think they are in the minority of current philosophers of mind.
How would you characterize the computationalist view of awareness?
A process to which certain information is meaningful. Information is meaningful to a process when the information alters the states or behaviors of said process.
What makes something a process?
Rules, change, self-reference.
What makes something a rule,
Some invariant relation in some context.
So then invariance, relation, and context are more primitive than
rules. Which is the same conclusion I reach. Those are actually
synonyms for my three sense elements: invariance = essence, relation =
existence (sense²), context = Sense (sense³). Invariance =
sensorimotive coherence. Relation = sensorimotive-electromagnetic
variance of coherence and incoherence. Context = Relativity of
perception and perception of relativity. Inertial frames. The key
difference though is that I see the primitive unit of sense as an
*experience* from which the concept of invariance is derived. The
experience has no name, it's just isness. Self. A non-computable
vector of orientation.
or a change,
When one thing varies with another.
or a self, or a reference?
Self-reference is when one thing's definition refers to itself, recursively or iteratively.
Those describe the meaning of the terms, but not the physics of the
phenomenon. How does a 'self' come to 'refer' to something?
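For what it's worth, the definition you gave has a direct counterpart in code, which at least pins down what 'recursively or iteratively' means mechanically (a minimal sketch, my own example rather than anything from the thread):

    # Recursive self-reference: factorial's definition refers to factorial.
    def factorial(n):
        return 1 if n == 0 else n * factorial(n - 1)

    # Iterative self-reference: a state redefined in terms of its prior self.
    x = 1
    for _ in range(5):
        x = 2 * x  # x is defined by reference to x

    print(factorial(5), x)  # 120 32

Whether that mechanical sense of 'refer' amounts to a 'self' is the question being argued here.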
Are all processes equally meaningful?
No.
What makes the difference between something that is aware and
something that is not?
Minimally, if that thing possesses or receives information and is changed by it. Although there may be more required.
We are changed by inputs and outputs all the time that we are not
aware of.
There may be other conscious parts within us which are disconnected from the conscious part of us which does the talking and typing. For example, your cerebellum performs many unconscious calculations affecting motor control, but is it really unconscious? Perhaps its information patterns and processing are merely not connected to the part of the brain which performs speech. Similarly, a bisected brain becomes two minds by virtue of their disconnection from each other.
I agree, but it doesn't explain why the inputs and outputs we are
aware of are different from those we are not aware of.
For those we are not aware of, there is no integration into the
computational state of high dimensionality which includes most of the
functions and processes of the cortex.
Right, but what determines what gets integrated and what doesn't?
OK, but the Taj Mahal is mainly made of stone. Either way, the dynamics of either one won't ever get you closer to predicting the shape of the Taj Mahal than anything else would.
The stone model doesn't describe those that designed or built it, while the atomic model would.
I don't follow. The atomic model predicts India?
Human consciousness is a specific Taj Mahal of sensorimotive-electromagnetic construction. The principles of its construction are simple, but that simplicity includes both pattern and pattern recognition.
Pattern and pattern recognition, information and information processing. Are they so different?
Very similar, yes, but to me information implies a-signifying
Could you define "a-signifying" for me?
Meaning that the information has no meaning to the system processing it. A pattern of pits on a CD is a-signifying to the listener, and the music being played is a-signifying to the stereo. In each case, fidelity of the text is retained, but the content of the text is irrelevant outside of the context of its appropriate system. A TV set isn't watching TV, it's just scanning lines. That's information: handling data generically without any relevant experience.
This is the difference between a recording (or information being sent over a wire) and information being processed (in which it has particular meaning by virtue of the context and difference it makes in the processing).
You're still hallucinating 'information' into wires. There's no objective information there to the wire other than atomic collisions. Information is just a way of saying external assistance to sense-making. Whether the text has meaning in a particular context or not depends on the relation between the two. A machine can't make sense of feelings; it can only make sense of its intended measurements in terms of objective measurements. There is no private subjectivity going on. It's all accessible publicly.
The self-driving Google car's cameras transmit raw input data that possesses no meaning, but the software that determines that it sees a car or a stop sign generates meaning from this information. "Stop sign *means* we need to decelerate."
No, the software doesn't know what a car or a stop sign is; it just presents an instruction set to a microprocessor that switches on the circuit leading to the actuator that happens to lead to the accelerator (it could lead to a toaster or a nuclear missile). Optical patterns which satisfy the software's description of stop signs cause a circuit to close. There is no meaning or choice involved. Turning on a water faucet doesn't mean anything to the plumbing. There are consequences on a physical level, but none that leads on its own to psychology.
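That claim can be made concrete in a few lines (all names here are hypothetical, not Google's actual software): the matching step maps optical patterns to a branch, and the branch closes whatever circuit it happens to be wired to. Nothing in the match depends on what the actuator does.

    # Sketch: pattern match -> whichever actuator the branch is wired to.

    def matches_stop_sign(pixels):
        # Stand-in for the real detector; any feature test would do here.
        return pixels == "red octagon"

    def brake_actuator():
        print("decelerating")

    def toaster_actuator():
        print("toasting")

    def controller(pixels, actuator):
        if matches_stop_sign(pixels):
            actuator()  # the "meaning" lives in the wiring, not the match

    controller("red octagon", brake_actuator)    # -> decelerating
    controller("red octagon", toaster_actuator)  # -> toasting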
A choice is being made from the 3-p view, but that isn't the one that matters. The computer has no knowledge of its choices. It's just executing an instruction set.
It does have knowledge. What you ascribe to having no knowledge of the decision is the underlying basis of the computation. Similarly, your neurons (individually) have no idea of what stock you are purchasing or selling at the time you do. Only you (the higher level process) does. It is the same with a computer-supported mind.
The difference is that our higher level processes arise autopoietically from our neurology. A computer has our higher level processes imposed on semiconductors which have no capacity to develop their own higher level processes, which is precisely why these kinds of materials are used. Making a computer out of living hamsters in a maze is not going to be very reliable. Hamsters have more of their own agenda. Their behavior is less predictable. Humans even more so.
Craig