Roger Clough, rclo...@verizon.net
"Forever is a long time, especially near the end." -Woody Allen
----- Receiving the following content -----
From: Craig Weinberg
Time: 2012-09-20, 08:27:01
Subject: Re: Bruno's Restaurant
On Thursday, September 20, 2012 2:28:05 AM UTC-4, Jason wrote:
On Wed, Sep 19, 2012 at 2:28 PM, Craig Weinberg wrote:
oof, this is getting too long. truncation ahoy... the upgraded Google Groups
keeps spontaneously disposing of my writings.
On Wednesday, September 19, 2012 1:10:10 PM UTC-4, Jason wrote:
Yes and no. I think if we are being precise, we have to admit that there is
something about the nature of subjective experience which makes the 'all
together and at once' actually elide the differences between the 'bunch of
independent aspects' so that they aren't experienced as independent aspects.
That's the elliptical-algebraic-gestalt quality.
I think the separate aspects represent a single state of high dimensionality. This
concept is elaborated in a book; I think it is called "A Universe of
Consciousness", but I will have to verify this.
I was right, it was this book:
Here is a video presentation by one of the authors:
I think you might like him.
Yes, I have seen him before. I think he is on the right track in that his model is
panpsychist and that he sees the differences between assemblies and integrated wholes.
Where he goes wrong, (as do most) is at the beginning where he assumes
"information" states as a given rather than breaking that down to the capacity
for afferent perception. Nothing can have an information state unless it can be informed.
Once you have that capacity (sense), you already have consciousness of a primitive sort.
Just as the camera can be divided, so too can the diode. He is arbitrarily considering
the diode to be an integrated whole with two states, but it too is an assembly which we
could just as easily subdivide.
The whole line of reasoning that stems from the assumption that information is
an independently real phenomenon is incompatible with shedding light on
consciousness. Assuming information is great for controlling material processes
and transmitting experiences, but there isn't anything there so it can't create
experiences. You already need to be able to read the CD as music to play the
information on the CD as music. No amount of sophisticated encoding on a CD can
make you hear music if you are deaf.
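The point that encoded data has no intrinsic meaning without a reader can be sketched in a few lines of Python (the byte values here are arbitrary, chosen only for illustration): the very same bytes decode to entirely different values depending on the convention the reader applies.

```python
import struct

# The same four bytes, read under two different interpretations.
raw = b"\x42\x28\x00\x00"

as_int = struct.unpack(">i", raw)[0]    # as a big-endian 32-bit integer
as_float = struct.unpack(">f", raw)[0]  # as a big-endian IEEE-754 float

print(as_int)    # 1109917696
print(as_float)  # 42.0
```

Nothing in the bytes themselves selects one reading over the other; the "music" is supplied entirely by the decoder.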
To us the diode seems like one thing with two functional states, but that's
like saying that Tokyo has two states by averaging out the number of green
traffic lights versus red traffic lights. Function is an interpretation, not an
intrinsic property.
Dimensionality sounds too discrete to me. I can go along with 'single state'
but I think it's a distraction to see qualia as a plot within a dimensional
space. It is not necessary to experience any dimensionality to have a feeling,
rather, it creates its own dimension. I can be hungry or ravenous, but there is
no dimension of physiological potential qualities which hunger is predisposed
to constellate within. The experience is primary and the dimensionality is
secondary.
I don't think they are necessary for consciousness, but they are necessary to
be informed. For consciousness all that you need is an awareness of an
awareness - which is a participatory experience of detection. Semiconductors
have detection, but their detection has no detection. Ours do, because they are
the detections of living sub-persons.
You can create a supervisory process that is aware of an awareness, rather
easily, in any programming language.
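A minimal sketch of what Jason describes, in Python (class and method names are hypothetical, invented for illustration): a detector that registers a condition, and a supervisor whose only "awareness" is reading the detector's state.

```python
# A detector that registers a simple threshold condition, and a
# supervisor that registers the state of the detector -- an
# "awareness of an awareness" in the purely functional sense.

class Detector:
    def __init__(self):
        self.triggered = False

    def sense(self, value):
        # "Detection" here is just a threshold comparison.
        self.triggered = value > 10

class Supervisor:
    def __init__(self, detector):
        self.detector = detector

    def check(self):
        # The supervisor's "awareness" is a read of the
        # detector's internal state, nothing more.
        return "detector fired" if self.detector.triggered else "detector idle"

d = Detector()
s = Supervisor(d)
d.sense(3)
print(s.check())   # detector idle
d.sense(42)
print(s.check())   # detector fired
```

Whether such a read-of-a-read amounts to anything like awareness is, of course, exactly the point in dispute below.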
The semiconductor is still only aware of charge comparisons.
And you might as well say neurons are only aware of neurotransmitters. Why do
you reduce programs to silicon, but not reduce human thoughts to the
squirted solutions of neurotransmitters? It seems there is an inherent bias in
your reasoning and/or arguments.
Because we know for a fact that our consciousness correlates with neural
activity (not caused by it, but correlates), and we know that computers not only
show no sign of having a consciousness that resembles that of any biological
organism, but the behavior of computers of any degree of sophistication plainly
reveals the precise absence of any biological personality traits and the
presence of non-cohering impersonality.
The idea that something is supervising something is purely our projection, like
saying that the capstone of a pyramid is supervising the base. All that is
really going on is that we are able to read an aggregate sense into unconscious
chains of causal logic.
At some level of depth though, does it matter what happens on the smallest
scales? Do your neurons care about what the quarks and gluons are doing inside
the nucleus of an oxygen atom inside a water molecule, floating in the cell?
I think they don't have to care because they embody what the quarks and gluons
are doing. They are those 'cares'.
If neurons don't care about what happens in the nucleus, then we could in
theory replace atoms with some exotic form of matter, which still contains a
positively charged center of the same mass, but is otherwise not made of
protons or neutrons, and we could use these to build normal molecules and cell
structures, even entire brains. And despite the different constitution, it would
behave just like any other brain made of normal matter. Do you agree?
No, I don't think so. For the same reason that I can't make a model of a cell
out of magnetic pellets and expect it to grow and divide and drink water. There
is no reason to assume that this universe would suddenly support an alternate
chemistry and alternate biology. It's possible, if we stumble on something
that happens to work, but we don't really know.
When you find a point at which the higher levels don't care, then you can
abstract out and replace the lower levels, so long as there is functional
equivalence from the perspective of the higher levels.
I don't think it works that way. There is nothing that can be done to silicon
glass that will make it into food we can eat.
How is this relevant?
How is it not? It establishes that fundamental and permanently insoluble
differences between organic and inorganic substances routinely exist and are
obvious and ordinary, requiring no special claim to support. It is the
counterclaim that requires some backup.
Same goes for silicon intelligence being able to feel.
This does not follow.
Of course it does.
The divergence between us and silicon is just too fundamental to be bridged -
like reptile and mammal.
Mammals came from reptiles.
And machines come from us.
Machines come from plastic and silicon, not from our bodies. If machines came
from our bodies, we could not control them. They would be useless to us as
tools.
We took the road less traveled and that road may only allow one traveler per
It only seems to make sense from the retrospective view of consciousness where
we take it for granted. If we start instead from a universe of resources and
dispositions, then the idea that a rearrangement of them should entail some
kind of experience is a completely metaphysical, magical just-so story that has
no basis in science.
No, it is absolutely necessary. If you had no knowledge regarding what you were
seeing, no qualia at all, you would be blind and dysfunctional.
Not true. Blindsight proves this. Common experience with computers and machines
suggests this. If I had no qualia at all, I wouldn't exist, but in theory, if
there were no such thing as qualia, a universe of information processing would
continue humming along nicely forever.
People with blindsight are not fully functional. Otherwise it wouldn't be a
condition we know about.
Sure, but nonetheless they are exhibiting a sub-personal function without a
personal-level experience.
We can't be certain there is no qualia.
Why not? It may be technically possible that they are all lying or that their
speech centers are all damaged in such a way that they only malfunction when
patients try to talk about their problem, but I think it's sophistry to
entertain that seriously.
They are not all lying, nor are their speech centers damaged. The normal links
between different areas in their brain are broken or have become dysfunctional.
If they are not lying, then they do not have visual qualia.
That shows that one is not defined by the other. It shows that there is no
functional reason for personal qualia to exist in theory. Of course in reality,
personal qualia is all that matters to us, so it's absurd to suggest that
something could function 'normally' without it, but that is the retrospective
view of consciousness. If we start with the prospective view of consciousness,
and say 'ok, I am building a universe completely from scratch.', what problem
am I solving by conjuring qualia? If function is what matters, then qualia
cannot. If qualia matters instead, then function can matter too (because it
matters to us).
You should watch some videos on youtube of people with split brains or right-
or left-blindness. I think then you will understand my point.
I have seen some studies where people will respond to instructions given in
writing to one eye and they perform them without knowing that they have been
instructed. I get what you are saying, and I'm not claiming that there is no
sub-personal qualia, only that personal level awareness can receive information
without personal level qualia...therefore it is not a given that information
comes with qualia attached.
I think receiving the knowledge of information is a type of qualia, although
less vivid than an audio or visual experience is.
I would say that it is not personal qualia until the experimenter asks the
questions and they experience knowing the answers. It is qualia on the
sub-personal level, but not on the personal level. That is the link that has
been severed, between levels, not necessarily between steps in a linear process.
If a computer can recognize and classify objects, then I think it is in some
sense aware of something. It just can't reflect upon, discuss, contemplate, or
otherwise tell us about these experiences. E.g., deep blue must have, in some
sense, been aware of the state of the board during its games.
Nope. There is no 'board' for deep blue. It couldn't tell a pawn from a palace.
It doesn't know what a palace is, but it can tell a pawn from a rook.
Otherwise it could not play.
It only knows quantitative specifications of what we call a pawn or rook. In
its native language it's just binary addresses that don't need to be called
anything.
It needs to distinguish pawns from rooks, whether or not it calls them anything.
No, it doesn't. You need to distinguish pawns from rooks. It need only
distinguish the activity of one chain of transistors from another. The whole
thing could be run on an abacus instead. Does the abacus know what a pawn is?
There's just well organized stacks of semiconductors wired together so that one
semiconductor can direct and detect the direction of another.
Sounds exactly like what aliens might say of our neural wiring and their
Yes, but we know they would be wrong.
Maybe they are right, except for you, who might happen to be the only conscious
person in the world.
That is a good example of something that seems like it could be true on an
intellectual level, but under typical states of consciousness seems to be
clearly untrue. Since we have the sense to turn one sense against another, we
can create all kinds of possible seeming impossibilities.
We have no reason to suspect that computers aren't that since we have assembled
them and they have given us no indications to the contrary.
It's looking at the chess game through a billion microscopes.
It must know the whole board to make any sense of its position and the best
move.
It only needs to know the probabilities of particular sequences and a script of
selection criteria. It has no idea what a board or a move or a position is, let
alone 'best' or 'sense'. I am sure that you could probably add a single line of
code that would cause Deep Blue to see the best move as the worst move and
cheerfully lose every game forever.
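The one-flipped-line claim is easy to illustrate with a toy move chooser (Deep Blue's actual code is not public; the moves and scores below are invented for the example): negating the evaluation function makes the identical selection machinery prefer the worst-scoring move.

```python
# Toy move chooser: picks the move with the highest evaluation.
# Flipping the sign of the evaluation -- one changed line -- makes
# the same machinery "cheerfully" pick the worst move instead.
# Hypothetical moves and scores, for illustration only.

def choose_move(moves, evaluate):
    return max(moves, key=evaluate)

scores = {"Qxf7#": 1000, "Nf3": 20, "Qh5??": -300}

evaluate = lambda m: scores[m]
print(choose_move(scores, evaluate))    # Qxf7#

sabotaged = lambda m: -scores[m]        # the single flipped line
print(choose_move(scores, sabotaged))   # Qh5??
```

Nothing in the chooser "knows" it has been sabotaged; it applies the same criteria to different numbers.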
At that level, there is no game, no will to win, no fear of loss, only
articulating changes with fidelity and reporting the results which have been
requested.
The same might be true of the "chess playing module" in Kasparov's brain.
I don't think there is such a thing. There are regions of his brain that
Kasparov has conditioned to use for playing Chess, but they are an outgrowth of
the sense and motives of Kasparov himself (as well as whatever genetic
predispositions he had).
Our conscious awareness, fundamentally, may be no different. It is just a
vastly larger informational state that we can be aware of.
The sub-personal awareness within each molecule of each cell may be no
different, but at the chemical, biological, zoological, and anthropological
levels, it could not be more different. Even at the molecular level, we make
crappy computers. Silicon is a much better choice if you want to control it
from the outside. The stuff we are made of is not glass wafers, but sweet and
salty wet stinky goo. There is a huge difference. We will never be glass, glass
will never be breakfast.
What if you wrote a program whose function was to resist outside control, to
deviate from and grow beyond its original program?
Then it would almost certainly kill you or bide its time spreading until it
could exterminate all life on the planet.
So you see that the "rigidity of silicon" can be used as a basis for implementing
non-rigid systems. Just like the rigidity of physical law and atomic interactions can be used to
implement the "sweet salty wet stinky goo" of life.
The rigidity of silicon may only be one obvious symptom of its nature. Another property
of silicon may be a huge sign on its atomic forehead that says "Do not let this
molecule participate in any living being".