On Thu, Aug 25, 2011 at 12:31 AM, Craig Weinberg <whatsons...@gmail.com> wrote:

> Feeling doesn't come from a substance, it's the first person
> experience of energy itself. Substance is the third person
> presentation of energy patterns. If you turn it around so that feeling
> is observed in third person perspective, it looks like determinism or
> chance, while substance has no first person experience (which is why a
> machine, as an abstraction, can't feel, but what a machine is made of
> can feel to the extent that substance can feel.)
>
> Whether there are other substances in the brain that we haven't
> discovered yet is not the point. There might be, but so what. It's not
> the mechanism of brain chemistry that feels, it's the effect that
> mechanism has on the cumulatively entangled experience of the brain as
> a whole, as it experiences with the cumulatively entangled experiences
> of a human life as a whole.

This is a bit hard to understand. Are you agreeing that there is no
special consciousness stuff, but that consciousness results from the
matter in the brain going about its business? That is more or less the
conventional view.

>> Do you think it's possible to reproduce the function of anything at all?
>
> It's possible to reproduce functions of everything, but there is no
> such thing as *the* function of something. To reproduce *all* possible
> functions of something is to be identical to that thing. If the
> reproduction even occupies a different space then it is not identical
> and does not have the same function. Think about it. If you have one
> ping pong ball in the universe, it has one set of finite states (which
> would be pretty damn finite).
>
> If you have another ping pong ball exactly the same there is a whole
> other set of states conjured out of thin air - they can smack
> together, roll over each other, move together and apart, etc. BUT, the
> original ball loses states that it never could have anticipated. True
> solitude becomes impossible. Solipsism becomes unlikely as the other
> ball becomes an object that it cannot not relate to.
>
> What you're not factoring in is that 'pattern' is a function of our
> pattern recognition abilities. Even though you firmly believe that our
> experience is flawed and illusory, somehow that gets set aside when
> you want to prove that logic is different. Your faith is that the
> logical patterns that we understand *are* what actually exists, rather
> than a particular kind of interpretation contingency. You think that
> A=A because it must by definition... but I'm pointing out that it's
> your definition that makes something = something, and has no
> explanatory power over A. In fact, the defining = can, like the second
> ping pong ball, obscure the truth of what A is by itself. This is
> critical when you're looking at this level of ontological comparison.
> Describing awareness itself cannot be accomplished by taking awareness
> for granted in the first place. First you have to kill "=" and start
> from nothing.

The function I am talking about is relatively modest, like making a
ping-pong ball out of a new plastic and designing it so that it weighs
the same and is just as elastic. If you then put this ping-pong ball
in with balls of the older type, the collection of balls will bounce
around normally, even though the new ball might be different in
colour, reflectivity, flammability etc. There is no need to figure out
exactly where all the balls will be after bouncing around for an hour,
just the important parameters of a single ball so that it can slot
into the community of balls as one of their own. A fire could come
along and it will be obvious that the new ball, being less flammable,
behaves differently, but we are not interested in what happens in the
event of a fire, otherwise we would have included that in the design
specifications; we are only interested in balls bouncing around in a
room.
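
To make that concrete, here is a toy sketch in Python (my own
illustration; the masses and numbers are made up, not real
specifications). The collision calculation only ever looks at mass and
elasticity, so a ball that matches on those two parameters behaves
identically within this specification, whatever its colour or
flammability:

# Toy sketch: functional equivalence under a limited specification.
# Only mass and restitution enter the collision calculation; colour and
# flammability exist but play no role in the behaviour we care about.
from dataclasses import dataclass

@dataclass
class Ball:
    mass: float          # kg
    restitution: float   # coefficient of restitution (elasticity)
    colour: str = "white"
    flammable: bool = True

def collide_1d(a: Ball, b: Ball, va: float, vb: float):
    """Velocities after a 1-D collision; depends only on mass and restitution."""
    e = min(a.restitution, b.restitution)
    va2 = (a.mass * va + b.mass * vb + b.mass * e * (vb - va)) / (a.mass + b.mass)
    vb2 = (a.mass * va + b.mass * vb + a.mass * e * (va - vb)) / (a.mass + b.mass)
    return va2, vb2

old = Ball(mass=0.0027, restitution=0.9)                        # original type
new = Ball(mass=0.0027, restitution=0.9, colour="orange",
           flammable=False)                                     # new plastic
# Same mass and restitution, so the collision outcome is identical:
print(collide_1d(old, old, 1.0, -1.0) == collide_1d(old, new, 1.0, -1.0))  # True

A fire test would distinguish them, of course, but the collision
function never asks about flammability, which is the point.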

Similarly with an artificial neuron, for the purposes of this
discussion we are interested only in whether it stimulates the other
neurons with the same timing and in response to the same inputs as a
biological neuron would. If it does, then the network of neurons will
respond in the usual way and ultimately make muscles move in the usual
way. (Please note that while the artificial neuron can in a thought
experiment be said to perform this function exactly the same as its
biological equivalent, in practice it would only need to perform it
approximately the same, since all biological tissue functions slightly
differently from moment to moment anyway.) The question is: given that
the artificial neuron does this job adequately, would it necessarily
follow that the qualia of the brain would be unchanged? I
think it would, otherwise we would have the situation where you
declare that everything is normal (because the neurons driving the
muscles of speech are firing normally) while in fact feeling that
everything is different.
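
As a toy illustration of the criterion (a leaky integrate-and-fire
model, which is only a crude stand-in for a real neuron, with made-up
parameters): the question asked of the replacement is simply whether it
emits spikes at the same times for the same inputs.

# Illustrative sketch only: spike timing as the functional criterion.
def spike_times(input_current, dt=0.001, tau=0.02, threshold=1.0, reset=0.0):
    """Return the times (s) at which the membrane potential crosses threshold."""
    v, times = reset, []
    for step, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)      # leaky integration of the input
        if v >= threshold:
            times.append(step * dt)      # record a spike
            v = reset                    # fire and reset
    return times

drive = [60.0] * 1000                    # one second of constant input, 1 ms steps
reference = spike_times(drive)           # behaviour we want to preserve
candidate = spike_times(drive)           # whatever the replacement produces
print(reference == candidate)            # equivalence on the inputs of interest

Here the "candidate" is generated by the same model, which is just to
show what the comparison looks like; the claim is only that a
replacement passing this kind of test slots into the network, however
it is built internally.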

>> Figuring out the internal dynamics of the neuron will tell you when
>> the neuron will fire in response to any given stimulus. You seem to be
>> saying that it won't,
>
> Right, it won't. Just like figuring out the internal dynamics of a
> router won't tell you when something is going to happen on the
> internet. A neuron by itself is just a specialized cell.

But figuring out the internal dynamics of a router will tell you
exactly what the router will do in response to any input, and that's
what we want. The web browser I am using to write this doesn't know
what I'm going to say next and its programmers didn't need that
information; they just needed to program it to respond in a certain
way to keystrokes.
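
As a trivial sketch of what I mean (hypothetical code, obviously not a
real browser): the component is completely specified by how it responds
to each input, and nothing in it knows what inputs will actually
arrive.

# Hypothetical sketch: behaviour defined purely as a response to inputs.
def respond(text: str, keystroke: str) -> str:
    """Append printable keys, handle backspace; that is the whole behaviour."""
    if keystroke == "BACKSPACE":
        return text[:-1]
    return text + keystroke

buffer = ""
for key in ["h", "i", "BACKSPACE", "e", "y"]:   # inputs unknown at design time
    buffer = respond(buffer, key)
print(buffer)                                   # "hey"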

>> because some non-physical influence which is the
>> basis for feelings may come along and make it fire in an unpredictable
>> way.
>
> The cell is a living organism that has sensitivities to particular
> molecules and electromagnetic states of its own molecules and those
> of other cells. It is a machine on the outside, but an anti-machine on
> the inside. As such its 'behavior' is a constant flux between
> predictable and unpredictable. If you isolate it and test it in
> linear, deterministic contexts, then, like a router unplugged from the
> internet, you will probably see default FSM behaviors - cycling
> through sending keep-alive packets and waiting for ack packets. It has
> nothing to do with being able to predict the content of the internet.
> Do you really think that it is possible to come up with a computer
> program that will tell you exactly what you are going to do and say
> five years from now?

I don't know what I'm going to do and say five years from now, so why
should you expect the computer program to do better than me? If you
write a program that simulates my mind and then subject the program to
five years of experience, as occurs with the real person, then that
will tell you what I might do five years from now. Of course it won't
tell you exactly what the real me will be doing, since the inputs will
be different unless the model is perfect and occupies the same space,
but that would be the same if rather than writing a program you made
an atom for atom replica of me.

>> This would be something amenable to experimental verification,
>> since you just need to show that sometimes a neuron (or other cell, I
>> assume they all have this vital essence)
>
> It only seems like a vital essence to us, because it's more similar
> to us than a dead cell. To the universe there is no particular
> preference, I would imagine.
>
>> will do things apparently
>> miraculously, not in accordance with the known laws of nature.
>
> That's only in your world where my argument has to be wrong because it
> contradicts yours. To me, that living things do what they want to do
> sometimes is an ordinary fact - it is the law of nature.
>
>> Surely
>> if such amazing things happened someone would have noticed and it
>> would be common scientific knowledge.
>
> It is common scientific knowledge. It's why biology, neurology, and
> psychology are different from physics, and why they are different in a
> particular way which respects the different laws of nature which apply
> to cells, brains, and minds as opposed to bowling pins and silicon.

Can you point to an experiment where a process in a cell contradicts
the laws of chemistry, such as an ion channel in a cell membrane
opening without any apparent stimulus, because the person decided to
do something?

>> You do understand that a neuron fires in response to another neuron to
>> which it is connected via synapses?
>
> You do understand that your argument is condescension?

You have said that neurons sometimes do things because the person
decides to do them, so I am asking if this is consistent with the view
that they fire in response to stimulation from other neurons. I would
say that they do, and the feeling that you have decided to do
something is supervenient on the deterministic behaviour of the
neurons. I don't think you agree, so could you explain what physical
factors could move neurons other than the ones known to science?

>> The neurons in the language centre
>> (and everywhere else) would be getting the same signals from the
>> artificial cortex as they would normally get, so they would fire in
>> the same sequence as they normally would, so the muscles involved in
>> speech would get the same signals they normally would, so you would
>> say the same things you normally would.
>
> That's the same fallacy three times. There is no "same". You cannot
> make an artificial router that produces the 'same' signals as the
> internet does. You are seeing the brain as some kind of clockwork
> orange of replaceable parts. It's not like that. It's a giant sense
> organ. It thinks, chooses, and feels because what it is made of can
> also, in its own contextual version, think, choose, and feel.
>
> The 'signals' are just what we can detect through a machine which does
> not have the ability to do those things - it just gives you the white
> light, not the spectrum of what's going on inside the 'signals'. You
> cannot reproduce a prism of signals with white light signals, even
> though they will both look the same on a black and white monitor.

So in other words you think there is some other influence than the
"clockwork" determining when the neurons fire. You don't think it's an
immaterial soul; what else could it be?

>> Of course I agree that if you have qualia, you know you have qualia.
>> That is why I think it is impossible to make an artificial device that
>> replicates the normal pattern of neuronal firings without also
>> replicating the part those neurons play in consciousness.
>
> I agree, but you are assuming that there is a such thing as a normal
> pattern of neuronal firings. I'm saying, as are others here, that we
> don't know what we're looking at. a does not = A. A router by itself
> is not the internet. The collected works of Shakespeare (or Dan
> Dennett if you like) are not emulable by simulating one of his neurons'
> 'firing patterns' any more than your career path can be predicted by
> the 'firing pattern' of your arms and legs.

Where do you get the idea that simulating a neuron involves simulating
the entire universe that might affect the neuron? A biological neuron
is not programmed with such information, it is just programmed with
how to respond to stimuli.

>> But if the artificial visual cortex sends the same neural signals to
>> the rest of the brain, how could the rest of the brain notice that
>> anything was different?
>
> Because what you think is the 'signals' is not the only thing going
> on. There's a civilization in there. Thousands of substances are being
> produced and consumed. Do you really think all of that is necessary just
> to make a blob of grey jello send and receive electrical signals? If that
> were the case then lightning strikes over the millennia might have
> evolved into an atmospheric consciousness by now. There could be
> simple cells of continuous electrical storms hovering over areas of
> the ocean, reproducing and having lives.

How is that an argument? It's like saying if jet aircraft could fly
then jet engines would have evolved naturally in birds.

> If I recorded your activities from a telescope in the Andromeda
> galaxy, and studied the computer enhanced cartoon of what you do, I
> would make assumptions about your physical environment because that's
> what I see through a telescope. I might say that your normal working
> state is to sit in a chair, so that chair makes you work, or your bed
> is a resting state of your signal pattern.

These might be reasonable hypotheses. You could devise experiments to test them.

>>>> If two identical twins differ mentally, then obviously this is because
>>>> they differ physically in their brain configuration. My mental state
>>>> is different today than it was yesterday, and there is less difference
>>>> between my brain on two consecutive days than there would be between
>>>> the brains of identical twins.
>>
>>> If I impersonate someone, does that obviously mean it's because I have
>>> changed my physical brain configuration?
>>
>> Yes, of course! How could you change your mental state if your brain
>> state stays the same?
>
> That's circular reasoning. How is it that you think that my mind is
> only the brain but the brain is not my mind? Also you're confusing
> your levels of computation. We were talking about changes to the
> hardware of the brain - identical twins, genetics, and now you're
> conflating hardware with software ('brain states'). Consider conjoined
> twins, who have the same genes in the same body, which by your
> reasoning should produce the same brains.
>
> How would you explain that the production of a brain from the same
> genes is never the same, yet you say the production of a 'signal
> pattern' from a neuron is going to be the same from neuron to neuron?
> In other words, in conjoined twins, you have two brains formed from
> genetically identical neurons, grown in the same body, yet the
> character of the people who develop through those neurons is
> verifiably and significantly different.

Genetics gives a rough blueprint but the environment determines how
the brain actually develops. But the general point is that every time
you have a thought this is because there is a physical change in the
brain. There is no real distinction between hardware and software
either in a brain or in a computer. "Programming" involves making
physical changes in the device to achieve a certain purpose. A device
can only do what it is "programmed" to do, i.e. it can only do what
its starting configuration allows it to do.

>> It's the same with a brain or computer. The environment acts on
>> brain/computer state S1 at time T1 and results in brain/computer state
>> S2 at time T2.
>
> Why does the 'environment' get to act but brains and computers can
> only react? Substance monism has it backwards. It is the subject who
> chooses, determines, and acts. If it weren't, how could anything feel
> like it were doing so? What would be the mechanical advantage of that?
>
>  Both the brain and the computer are sensitive (and insensitive) to
> their environment in different specific ways. It is the user of the
> brain and the computer which interprets S1 (T1 is part of S1, not some
> independent reality) and determines whether or not there will be an S2
> and what that will be, based upon their accumulated experiences and
> the inherent qualities which they have preferred to use to integrate
> them.

I don't think you're actually disagreeing with me. There is no real
distinction between brain/computer and environment, they are both
parts of a larger system. But even within a brain you can arbitrarily
draw a border around a structure and consider how that structure
behaves in response to its internal state in conjunction with
environmental inputs. These are the *only* two factors that can have
an effect. Even if magic is involved, that is a kind of environmental
input. What you call "interpreting" and "accumulated experience" is
completely captured in the physical states. It isn't something extra,
as you seem to imply sometimes. That would be like saying that the
computer added 2+3 because they are numbers rather than because of
movement of charge on capacitors in a DRAM chip.
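
A rough sketch of the claim, for what it's worth (illustrative only):
whatever the bounded system does next is a function of its current
internal state plus its input, and "accumulated experience" is simply
whatever that state has come to be.

# Illustrative sketch: the next state depends only on the current state
# and the environmental input; nothing else enters the update.
def step(state: dict, environmental_input: str) -> dict:
    history = state.get("history", []) + [environmental_input]
    return {"history": history, "last": environmental_input}

s = {}
for event in ["saw food", "moved toward it", "ate it"]:
    s = step(s, event)
print(s["history"])   # the "accumulated experience", all carried in the state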

>> When signals from the environment are processed, for example when an
>> animal sees some food and through a series of neural events moves
>> towards the food and starts eating, that is associated with awareness;
>> at least when the animal is human, probably for other animals as well.
>
> 'Seeing' is awareness. What kind of neural events move an animal
> towards food and why does it invent us to pretend that we are aware of
> that fact? So I ask again: What is information without awareness?

There is a chain of physical events between light from the food
reaching the animal's eyes and the animal moving over to eat the food.
Those physical events result in awareness. Without those events, no
awareness. Information, like beauty, manifests physically but
ultimately is in the eye of the beholder. The beholder needs to have
the right sort of physical events going on in their brain in order to
appreciate the information. What have I left out?

>> If I am the result of these biochemical reactions how does it make
>> sense to say that I am a puppet? It's like saying I have no power
>> because I am being pushed around by myself. There is no separate "I"
>> to be pushed around.
>
> What 'result'? You are saying biochemical reactions in, biochemical
> reactions out. Where in biology do you find yourself? Why is there
> even a question about it? If your experiences were biological, then
> you would find them under a microscope. Since you do not, then they
> must either be metaphysical solipsistic 'emergent properties' in
> Platonia, or biochemistry itself must feel, see, and have free will.
> My view explains that biochemistry resolves that by realizing the
> fundamental dynamic of sense as the most elemental principle of the
> cosmos.

If I think and feel when certain biochemical reactions occur, what do
you call that? It's a matter of taste whether you say that I am the
biochemical reactions, I supervene on the biochemical reactions, I
emerge as a result of the biochemical reactions, or I am the "fundamental
dynamic of sense" in the biochemical reactions. None of these phrases
really add anything. I'm still me, and I want the biochemical
reactions to continue running so that I can continue as me.

> Biochemistry does feel, but that feeling scales up qualitatively over
> time on the inside as the complexity scales up across space
> quantitatively on the outside. That's how it works. That's why you
> don't find a homunculus in a Cartesian Theater, or communities of
> talking rocks. We can't see the interiority of external phenomena for
> the same reason that we can't see the externality of our own
> psyche...because interiority is how the cosmos creates the ontology of
> privacy.
>
> I wish there was some way to put my understanding of this in a
> physical form and then you could just install it in your brain, but I
> can't. You have to reason it out for yourself.

I wish I could see what you are getting at, because I don't understand
your previous paragraph. I also don't understand your motivation to
make a complicated theory going against Occam's Razor, which says the
simplest explanation in keeping with the facts is the best.

>> If the neurons won't do anything magical then their behaviour is
>> defined by their biochemistry
>
> You keep going back to that. It's like Linus' security blanket.
> Biochemistry is just what we have figured out so far based upon a
> particular set of tools and particular logical approach which has been
> developed recently by domesticated primates. It doesn't define what
> living organisms are or what they are capable of. You want to be
> right. You want it to be 'my way or the highway'. I'm trying to show
> you that it is not your way and it is not the highway. There are other
> ways; you have only to be interested in them.

Please state clearly: do you think your brain can do something which
goes contrary to the deterministic (or perhaps partly random) physical
processes that occur within?

>  and is thus in theory predictable and
>> can be modelled on a computer. They have a very wide repertoire of
>> behaviour because they constitute a very complex system. The 26
>> letters of the alphabet can only be used in 27^n different sentences
>> of n or fewer letters (27 rather than 26 because we include a space as
>> another symbol). That is a lot of possible sentences, but it is not
>> infinite.
>
> Why are you assuming n is not infinite? Circular reasoning. You are
> inserting a limit and then citing the consequences of that limit as
> proof that it's not unlimited. This is why I say that sooner or later
> a computer brain reveals that it's not a natural brain, because in
> the fullness of time the infinity of n exhausts the finite isomorphism
> of functionalism. If the isomorphism is inexhaustible then the
> simulation can only be the genuine instance - it is the original. (As
> Stephen might say, the original is the best possible simulation of
> what it is).

n can increase without bound, but whatever it is, only 27^n sentences
are possible. A Turing machine has an infinite memory, but a Turing
machine is a mathematical abstraction (the video linked to previously
was not strictly that of a Turing machine, since its memory was
limited). A human brain weighing about 1 kg has only a finite number
of possible thoughts. The brain can in theory be re-engineered to make
it bigger but you would need an infinite brain to have infinite
thoughts.
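
As a quick worked count (using the 27-symbol alphabet from above, and
ignoring the detail of whether shorter sentences are padded with
spaces): the totals are astronomically large but finite for any finite
n.

# Worked count: strings of length exactly n, and of length at most n,
# over a 27-symbol alphabet. Large, but finite for every finite n.
def sentences_up_to(n: int, symbols: int = 27) -> int:
    return sum(symbols**k for k in range(1, n + 1))

print(27**3)               # 19683 strings of exactly 3 symbols
print(sentences_up_to(3))  # 20439 strings of at most 3 symbols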

>> Similarly, the brain has a very large number of possible
>> thoughts, but not an infinite number. For it to have an infinite
>> number of thoughts it would need to be infinite in extent.
>
> Your network connection doesn't have infinite bandwidth. Does that
> mean that you are going to run out of internet eventually? The
> substance monist view of thought is a strawman. It's a delusion where
> the brain is a sealed can of cellular automata. Discrete patterns
> which can be isolated and reproduced. Nothing could be more opposite
> of the truth.

Over infinite time you could have a non-repeating stream of data over
a low bandwidth connection, but the memory of your computer or your
brain would only be able to hold a finite amount of this data stream.
That means after a certain period your computer or brain would either
fill up or you would have to erase data and start again, so that the data
would repeat. There is no other way without infinite memory.
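
That is just the pigeonhole principle. A toy illustration (any fixed
update rule over a small set of states would do):

# Toy sketch: a deterministic system with finitely many states must
# eventually revisit one, after which its behaviour cycles.
def next_state(s: int) -> int:
    return (3 * s + 1) % 8          # arbitrary update rule over 8 states

seen, s = {}, 0
for t in range(20):
    if s in seen:
        print(f"state {s} recurs at step {t}; cycle length {t - seen[s]}")
        break
    seen[s] = t
    s = next_state(s)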

>> True, most of what the neurons do does not directly manifest as
>> consciousness; but all of what we experience as consciousness is due
>> to what the neurons do.
>
> Not what they do, but what they feel. Detecting something is easy.
> Your skin detects light, but feels it as warmth. To see light, we need
> to feel it with our eyes. If we could see with our skin, then we
> wouldn't need eyes. Neurons feel both warmth and see light - and hear
> and think, etc. but as a group. It's like the internet. There are at
> least seven layers of conversation going on in this internet exchange.
> Only one of those layers is accessible physically - Layer 1, wires,
> chips, electronic components. Nothing above that is comprehensible in
> physical terms alone. Reproducing switches, routers, SANs, servers,
> etc wonn't give you anything more than warm metal unless you
> understand not only packets and tcp/ip, authentication, http, web
> browsers, and this particular group, but you have to understand users
> and people and why they created and use the internet in the first
> place.

Yes, and in the brain there are just mundane physical processes which
give rise to consciousness. The protein molecule or the neuron does
not understand what is going on, but the network of neurons does.

> The idea that substance monism is invested in is that what we
> experience on the internet is due to what the transistors do. That's
> true in the most literal sense, but if the universe were that literal
> then there would be no other sense possible...which is ironically the
> most fantastically delusional fantasy of all. It is 'let's pretend
> that we aren't REALLY in the universe, and let's pretend that there is
> no meaning in the universe except for anything that would support the
> idea that it has no meaning'. It just can't accept that there aren't
> little particles of light and sound tucked away somewhere in the brain,
> so it decides that the light and sound are an illusion, but somehow
> necessary anyhow. And that these illusions are 'information' but they
> can only be biochemistry. It's a mess.

What we experience in interacting with the Internet is the qualia
generated by the biochemical processes in our brains in response to
the data that comes down the network cable and is processed by our
computer.

>> The observation is that we are made of matter and that we have
>> feelings; therefore, putting matter together in a particular way can
>> result in feelings.
>
> Feelings to who? Where? If they aren't a physical precipitate that can
> be collected in a test tube, and they aren't metaphysical logics in
> Platonia, then from where does this 'result' emerge and where does it
> physically play out? It's like saying that putting pixels together in
> a particular way can result in TV shows. It can be interpreted in a
> way that would be legally true, but the understanding is completely
> false. TV is produced top down, not bottom up. It doesn't arise
> spontaneously from a pool of FSM pixel possibilities.

The TV show is not conscious and needs to be interpreted by a
conscious entity to make sense. Brains, on the other hand, are
conscious, and by way of deduction computers of a certain complex kind
would also be conscious.

> Neurons work the
> same way. The ones we like to use are us. We push them around like
> beads on an abacus when we want. The abacus does things back to us in
> response.

"We" push the neurons around? Again you seem to hint at a separate,
non-physical soul.

>> It perhaps isn't unreasonable to speculate that
>> there might be something other than matter causing the feelings, such
>> as an immaterial soul, but there isn't any evidence that such a thing
>> exists.
>
> There can't be evidence that interiority exists, because existence and
> insistence are mutually exclusive. It's like this:
>
> http://www.stationlink.com/art/dualism5.jpg
>
> http://www.stationlink.com/art/SEEmap2.jpg

You'll have to make that easier to understand for people like me.

> The problem is that substance monism identifies existence as the sole
> criterion of reality. That truncates half of the cosmos - the half in
> which you participate directly. It's very tricky because you are the
> thing that wants evidence so that you would not naturally consider the
> wanting of evidence itself as a thing. But from a more objective
> perspective, of course it is. You can't see your own participation as
> an object because you can't get outside of yourself, but that's no
> reason to assume that the universe doesn't see it, doesn't feel it,
> isn't made of it. The universe does fiction. It makes shit up. We are
> the evidence.
>
> Here's more: http://s33light.org/post/3424866201
> http://s33light.org/post/3391830214

That's more easily understandable.


-- 
Stathis Papaioannou
