On Aug 23, 9:21 am, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Mon, Aug 22, 2011 at 2:51 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
> > But the problem I have with your idea of functional equivalence is
> > that you seem to treat it as an objective property when the reality is
> > that equivalence is completely contingent upon what functions you are
> > talking about. AC power works until there is a blackout. Batteries
> > work in a blackout but they wear out. They are different ways of
> > achieving the same effect as far as getting what we want out of a
> > radio amplifier, but there is no absolute 'functional equivalence'
> > property out there beyond our own motives and senses.
>
> Well yes, "functional equivalence" does depend on the function you are
> talking about. The function I am talking about for neurons is the
> ability to stimulate other neurons. An artificial neuron should
> stimulate the neurons to which it is connected with the same timing as
> the biological neuron it replaces. It doesn't have to be exactly the
> same, just close enough.

The ability to stimulate other neurons is not that hard. We can use
transcranial magnetic stimulation to do that already. The problem is
not with augmenting the brain with additional resources, it's with
replacing the parts of the brain that are actually who we are. You can
have input and output (maybe) emulation, but there's nothing in the
middle doing the feeling, experiencing, and understanding the meaning
of those inputs and outputs. That feeling is what determines the
outputs. There's no external 'timing' that controls our thoughts and
actions and those of our neurons, the timing arises out of first hand
experience - voluntary choices about unpredictable situations and
conflicting priorities.

> > Replicate, but not emulate. If you want to make muscles out of
> > something physical, what makes you so sure that it's possible to make
> > emotion without physical neurotransmitters? Our consciousness is human
> > meat. It exists nowhere else but in the context of living, healthy,
> > human tissue. That fact should be of interest to us. Sure, the things
> > that we do have patterns which we can neurologically abstract into
> > 'logic' and apply that logic to other substances, but that doesn't
> > ever have to mean that we can turn other substances into a living
> > human or buffalo psyche.
>
> If it's possible to reproduce basic neuronal function as described
> above without neurotransmitters then it should be possible to
> reproduce consciousness without neurotransmitters, as explained many
> times.

I know it's been explained many times, because I have explained just
as many times why it's not true. There is no such thing as a 'basic
neuronal function', but if there were, it would be made of
neurotransmitters. A neuron is a living organism within the context of a
massive civilization of living organisms, so what you suggest is like
saying that if it's possible to reproduce basic human function then it
should be possible to reproduce the major cities of the world without
humans.

>
> > No, it doesn't work like that. You're assuming that all neurons do is
> > the same thing over and over again. It's like saying you can write a
> > program that watches activity of cnn.com for a while and then replaces
> > the news stories with better news.
>
> Neurons will do the same thing over and over again, if they are in the
> same initial state and receive the same inputs. A neuron is a finite
> state machine. Given certain inputs at the dendrites, it either will
> or will not send an action potential down its axon. The trick is to
> work out the state transition table describing its behaviour.

People will do the same thing over and over again too, given similar
initial states and inputs, but that doesn't mean such repetition is
what makes a person what they are. Behaviorism is important to
understanding some aspects of what people or animals or neurons do,
but it tells us nothing about how they feel. A pen and paper are
finite state machines too, but they
have infinite possible outputs together. Figuring out the alphabet of
the languages of neurons is great, but it doesn't get you any closer
to replacing the speakers of that language with a timing belt and a
spark plug.
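For concreteness, the finite-state-machine picture of a neuron described above can be sketched in a few lines of Python. This is a toy model with made-up weights and a made-up threshold, purely to illustrate what a "state transition table" collapsing to a fire/don't-fire rule would look like, not a claim about real neurons:

```python
# Toy "neuron as finite state machine" sketch. The weights and
# threshold are invented for illustration; real neurons are not
# two-state devices.

class FSMNeuron:
    """Fires iff the weighted sum of dendritic inputs crosses a threshold."""

    def __init__(self, weights, threshold):
        self.weights = weights
        self.threshold = threshold

    def step(self, inputs):
        # The entire 'state transition table' reduces to one rule here:
        # given these inputs, either send an action potential or don't.
        total = sum(w * x for w, x in zip(self.weights, inputs))
        return total >= self.threshold  # True = spike down the axon

n = FSMNeuron(weights=[0.5, 0.5, -1.0], threshold=0.6)
print(n.step([1, 1, 0]))  # both excitatory inputs active -> True
print(n.step([1, 1, 1]))  # the inhibitory input cancels them -> False
```

The dispute in this thread is precisely over whether anything essential survives this reduction.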

> >> So are you agreeing that that the artificial neuron can in theory
> >> replicate almost 100% of the behaviour of the biological neuron? What
> >> would that be like if your entire visual cortex were replaced?
>
> > It depends 50% on what you replace it with. If it was nothing but pure
> > behavioral logic, maybe it would rapidly be compensated for with
> > synesthesia. You would get the same knowledge you do from vision, but
> > the qualia would be like using a GPS made of sound, feeling, smell,
> > taste, balance, etc. Your brain would learn to use the device.
>
> But your brain would be convinced that this ersatz vision was real,
> since it would be getting the same signals in the same sequence from
> the artificial visual cortex. For example, your language centre would
> be forced to say that your vision was perfectly normal.

You're assuming that the 'language center' is a monolithic logical
device rather than a community of tens of millions of autonomous
living entities. They aren't forced to say anything. They aren't
getting the same signals, at best they are getting tv screen images
instead of windows to the outside world, but more likely there is
nothing left in the visual cortex to make any images at all. The
rest of the brain gets pre-digested nutrient sludge instead of
expertly prepared meals. We can't tell the difference from the
neurology, just like we can't tell the difference between the color of
x-rays and the color of gamma rays, but that doesn't mean that the
brain can't tell the difference.

> In that case,
> how do you know right now that you don't have GPS-like qualia
> rather than the normal ones?

These 'how do you know you're not blind now?' kinds of questions are
silly to me. It just means that you don't take consciousness seriously
and value third person views over the first person source of those
views in all cases and at all times. How do you know we're having a
conversation on the internet? How do you know that you exist? Meh.
Sophistry. We don't have to know what our experience is in objective
terms (if it even has any). That doesn't stop us from being able to
tell if we've gone deaf or blind.

We know from our experience that sound qualia are different from
visual qualia. We know from accounts of synesthesia that these qualia are not
hardwired to the sense organs and that in fact optical stimulation can
be interpreted through the qualia of flavor, etc. This should give us
a clue that there is a qualitative difference which is independent
from function, otherwise we should not be able to notice the
difference between tasting food and seeing sound - the sense it makes
would be the same. It's not just an automatic property of the universe
that a wavelength of light looks like something rather than tasting
like something. The wavelength of light, in fact, has no
external properties whatsoever. It's all about how the receiver
interprets the transmitter.

>
> > As far as I know, even memories of visual qualia would be inaccessible
> > if the visual parts of the brain were damaged. If not the brain could
> > maybe recover partial visual qualia by re-associating colors and forms
> > with the memories of colors and forms (which as you see is not the
> > same thing: staring at the sun is not possible for long, whereas you
> > can imagine staring at the sun as long as you want. I assume.)
>
> Yes, even the memories of visual qualia would be gone but you wouldn't
> know it, and you would describe everything you thought you could
> remember normally.

Why do you think that the mind is mindless? That it senses nothing? The
visual cortex is made of neurons just like the optic nerve and the
prefrontal cortex (not the exact same neurons or types of neurons, but
still neurons). If any of them sense or make sense then they all make
some kind of sense. You would not be forced to describe memories which
you could no longer visualize, but you could find that you have access
to that information if the prosthesis was done well. How well the
prosthesis is done would determine what form that information would be
in - whether it would feel remote and command line based, or whether
it would be a super enhanced visual-esque modeling laboratory. I'm not
sure that you could get actual vision out of it though. You would need
new stem cells in there I think to specialize in feeling color and
shape.

> >> It seems to me that biology is sufficient since if you exactly
> >> replicate the biology, you would replicate awareness.
>
> > That's not the case. An identical twin is close to a biological
> > replicate, but the awareness is not at all 'replicated'. They will share
> > some personality traits but are by no means the same person. My own
> > dad has an identical twin who has a very different personality and
> > life path than he has, so I can verify that.
>
> If two identical twins differ mentally, then obviously this is because
> they differ physically in their brain configuration. My mental state
> is different today than it was yesterday, and there is less difference
> between my brain on two consecutive days than there would be between
> the brains of identical twins.

If I impersonate someone, does that obviously mean it's because I have
changed my physical brain configuration?

> > What biology gives you is access to awareness. Two computers can have
> > the same hardware, but entirely different contents on their HD and
> > entirely different users who put that content there.
>
> A change in content on the HD changes the computer physically.

But the change does not emerge from the HD or computer itself. It is
caused by the actions of the user for the user's natural language
semantic reasons, not for computer scientific reasons.

> >> >  It's hugely anthropocentric to say "We are the magic monkeys that
> >> > think we feel and see when of course we could only be pachinko
> >> > machines responding to complex billiard-ball-like particle impacts".
>
> >> Of course that's what we are. This is completely obvious to me and I
> >> have to make a real effort to understand how you could think
> >> otherwise.
>
> > You think that we are magic? That Homo sapiens invented awareness by
> > accident in a universe that has no awareness? That to me is like
> > saying that nuclear power plants invented radiation.
>
> No, I think awareness happens when certain types of information
> processing happen.

Haha. What is information without awareness?

> >> So, you agree: any device that acts just like a neuron has to be a
> >> neuron. It doesn't have to look like a neuron, it could be a different
> >> colour for example, it just has to behave like a neuron. Right?
>
> > No. My answer is not going to change. If it doesn't look like a
> > neuron, then there must be SOME difference. Whatever difference that
> > is could either result itself in different interiority (≠Ψ) which
> > results in different behavior pattern (BP) accumulations over time
> > (let's call that Z factor {≠Ψ->Sum(≠BP/Δt)} ), or the visible
> > difference (≠v) could be the tip of the iceberg of subtle
> > compositional differences which result in the same Z thing. It could
> > be a different color with no Z factor - it depends on why it's a
> > different color.
>
> It won't be *exactly* the same but if it's close enough it will do the
> job. We look at a bird and we make an aircraft; what we are interested
> in is a machine that flies, and it doesn't matter that the aircraft
> lacks feathers.

But the attempts at simulating birds failed completely. Airplanes
don't flap their wings. It took a much more basic understanding of
physics to grasp lift and drag, and it will take a much more elemental
understanding of sensory input and output to replicate human
consciousness. What you are saying about emulating consciousness by
computation is like saying 'if we record the wing flaps exactly, then
we should be able to make artificial birds out of concrete or glass'.

> >> If they are my opinions then I have control over them, don't I?
>
> > Only if you subscribe to a view like mine. In your view, you are
> > clearly stating that your opinions are biochemical processes, and
> > therefore any semantic conception of them is strictly metaphysical and
> > somewhat illusory.
>
> My opinions are determined by biochemical processes. If the
> biochemistry in my brain were different then my opinions would be
> different. Where's the problem with that?

The problem is that you are a blind powerless puppet of microscopic
masters you have no connection with, and your every thought and
experience is a meaningless delusion. You have not explained why this
conversation exists biochemically, or how 'you' come to 'imagine' that
you are 'participating' in it.

> > Relevant to who? (To paraphrase Suicidal Tendencies), how can You say
> > what My brain's best interests are? Until we know how to make the
> > color blue from scratch or find the mathematical ingredient that makes
> > a joke funny, we can't even come close to saying that we can know what
> > is relevant to the production of awareness. The brain appears to be
> > not much more than a big soft colony of coral. Nothing any cell does
> > looks like it can wind up being funny or blue.
>
> No, but put all the neurons together and they can.

What puts them together and where does this put-togetherness reside?

> >>If a person with a brain
> >> prosthesis can have a normal conversation with you for an hour on a
> >> wide range of topics, showing humour and emotion and creativity, that
> >> would say something about how well the prosthesis was working. So what
> >> if the prosthesis is a different colour and weighs a little bit more
> >> compared to the original?
>
> > In a real life medical situation, if a prosthesis was developed that
> > appeared to work by the reports of the subjects themselves and the
> > people around them, I would of course give it the benefit of the
> > doubt. I'm not asserting with certainty that no such appliance can
> > ever be developed. Philosophically however, there is no
> > epistemological support for it, since we can't observe someone else's
> > qualia. Like members of an isolated tribe being shown television for
> > the first time, our assumption that there are people in the television
> > set might be unfounded.
>
> We can't observe their qualia but we guess that they have qualia from
> the way they behave, even before we open up their heads to see if they
> have a brain like ours.

Sure, we do infer their qualia by what we observe of them, but so
what? That just means we assume they are like us because we don't know
any better. I know people who say that they are color blind, but I
would not have guessed.

> > Because of that, if we are to determine the course of development of
> > artificial neurology, in the face of compelling reasons to the
> > contrary, I would make biological replication at the genetic level a
> > top priority and computational simulation a distant second. Unless and
> > until we have any success whatsoever in creating a device which seems
> > on casual inspection to possess free will and feeling out of an
> > inorganic material, it's really only academic. If someone thinks that
> > there is no significant difference between living organisms and
> > computer programs, then let them prove it, even on the most basic
> > level.
>
> What is the basic difference in your view, and what sort of proof would 
> suffice?

The basic difference is the ability to feel. Literally proving it
would require a brain implant that remotes to the device, but I would
be very impressed if a machine could convincingly answer personal
questions like 'what do you want' or 'what's bothering you', and if it
could continue to converse fluently about those answers and reveal a
coherent personality which was not preconfigured in the software.

> >> If thoughts are generated by biochemical reactions in the brain
>
> > They aren't. No more than this conversation is generated by electronic
> > reactions in our computers.
>
> >> and
> >> only biochemical changes in the brain then there is a deterministic
> >> chain of biochemical events from one brain state to another. That is,
> >> we can say,
>
> > Even that is doubtful. The biochemical changes in the brain, whether
> > or not the exclusive cause of thought (which they aren't), are
> > probably just like baseball games. Contingent upon unknowable outcomes
> > of contests and motives on a molecular and cellular level that we
> > can't understand and even they can't predict. Groups of living
> > organisms are not just a machine, they are also a community. They
> > collectively make decisions, as in photosynthesis and quorum sensing
> > in bacteria.
>
> But a baseball game would be predictable if we had all initial
> conditions, unless there were a truly random element involved, as in
> radioactive decay. But even then we might be able to produce a very
> accurate probabilistic model. The difficulty in practice is (a) having
> the initial conditions, (b) having a good mathematical model, and (c)
> having sufficient computational power.

No, a baseball game is not predictable from initial conditions. There
are infinite conditions - an earthquake on the other side of the world
in the fifth inning could cancel the game. That billiard ball model of
the universe is good for some kinds of engineering, but otherwise I
think it should stay in the 19th century where it belongs.

>
> >> (a) Release of dopamine from Neuron A triggers an action potential in
> >> Neuron B which causes Muscle C to contract which causes Hand D to
> >> rise,
>
> > What caused Neuron A to release the dopamine in the first place?
> > Nothing in the brain - it was caused by an event in the mind, or, more
> > accurately an experience of the Self, which constellates as many
> > overlapping events on different levels of sensation, emotion, and
> > cognition. That's the reason the neuron fires, because something is
> > happening to us personally. The neuron has no reason to fire or not
> > fire on its own. It doesn't care, it just wants to eat glucose and
> > participate in the society of the other neurons.
>
> Neuron A was triggered to fire by the other neurons or sense organs to
> which it is connected. There are also some neurons which fire
> spontaneously (e.g. http://www.jneurosci.org/content/20/24/9004.full.pdf). But
> even the
> spontaneously firing neurons do so because that is the way their
> biochemistry makes them behave. They don't suddenly start doing
> bizarre and magical things. A table will move across the room because
> it's pushed or it may move across the room by itself if there is an
> earthquake, but it won't just move across the room by itself, with no
> external force.
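As an aside, the kind of lawful, non-magical threshold firing described above is conventionally captured by a leaky integrate-and-fire model. Here is a minimal sketch; the leak rate, threshold, and inputs are all made up for illustration, not fitted to any real neuron:

```python
# Minimal leaky integrate-and-fire sketch. All parameters (leak,
# threshold, input sizes) are invented for illustration; this is the
# textbook toy model, not a description of any particular neuron.

def simulate_lif(inputs, leak=0.9, threshold=1.0, v_rest=0.0):
    """Return the time steps at which the unit 'fires'."""
    v = v_rest
    spikes = []
    for t, drive in enumerate(inputs):
        # The potential decays toward rest, then integrates the new input.
        v = v_rest + leak * (v - v_rest) + drive
        if v >= threshold:
            spikes.append(t)  # action potential
            v = v_rest        # reset after firing
    return spikes

# A steady sub-threshold drive still accumulates to a spike:
print(simulate_lif([0.3] * 10))  # spikes at t=3 and t=7
```

Nothing in the model is bizarre or magical; the question under dispute is whether such a description is the whole story.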

Of course they don't need to do anything bizarre or magical. No more
than the internet needs to do anything bizarre to host new and
undreamed of content forever. Just because the alphabet is 26 letters
does not limit in any way what can be expressed with them. Human
consciousness is the equivalent of the collected works of Shakespeare
every second, and that's just the novelty. Every minute, hour, day, and
week produce their own irreducible meta experiences which optimize and
deprive different neurological trends in the brain, dictating what is
pruned and what is beefed up. You can't simulate that any more than
you can make a foot by filling a shoe with plaster.
>
> >> or,
>
> >> (b) Release of dopamine from Neuron A generates a desire to lift one's
> >> hand up, the dopamine then triggers an action potential in Neuron B
> >> which is experienced as the intention of lifting one's hand up, and
> >> Neuron B stimulates Muscle C to contract which is experienced as one's
> >> hand actually rising.
>
> > Great. So we are dopamine puppets from a neuron puppet master. It's
> > not a legitimate possibility. If it were there would be no reason for
> > anything like a 'desire' to be generated. It's completely superfluous.
> > If Neuron A can trigger Neuron B without our help, then it surely
> > would. It's like saying that maybe your thermostat has a DVD player in
> > it that plays excerpts from the Wizard of Oz and then it turns on the
> > furnace and then the house is warmed up which makes the DVD player
> > choose a different scene of the movie.
>
> "Neuron A can trigger Neuron B without our help" - what does that
> mean? Do you think that we exist separately from our neurons, deciding
> whether this one or that one will trigger? The self is just the
> collection of neurons, acting together.

We don't exist separately from our neurons, but we only know that
because we are alive. If you look at a brain, there is nothing about a
neuron's behavior that necessitates the existence of some human entity
making decisions and living a 'life'. You take that to mean that there
is no entity, whereas I see that as not an option, and that in fact,
the self is not a collection of neurons but a collection of neuron
feelings. We are what the brain feels of the body and the body feels
of the world. It correlates to the actions of the neurons, but only to
some extent. Maybe 40%. Most of what goes on at the neuron level has
nothing to do with 'us' and most of what goes on in our lives has
nothing to do with biochemistry. We remain ourselves even when we
change our diet radically, walk through powerful magnetic fields, get
electrocuted, etc.

>
> > I'm only continuing with this for the benefit of you or anyone else
> > who might be interested in reading it. There is nothing in your
> > arguments that I have not considered many times in many many long
> > discussions. It's all very old news to me. It does help me communicate
> > my view more clearly though so I don't mind, just don't get frustrated
> > that I'm not going to ever go back to my (our) old worldview. I think
> > that I mentioned that I used to hold the same views that you have now
> > only a few years ago? It's almost correct, it's just inside out.
>
> It seems that you have an emotional reaction to the idea that you are
> no more than the biochemical reactions in your body. But not wanting
> something to be true does not make it untrue.

Not at all. I know that you think that's true, because otherwise you
can't make sense of my position, but trust me, I have thought that the
universe was a simulation since I was five years old. It's only been
in the last five or ten years that I've seen the limitations of that
position. We are no more than the biochemical reactions in our bodies,
but you seem to have an emotional reaction to the idea that biology
and chemistry are real things having real experiences rather than
mathematical 'reactions'.

You can't have it both ways. Either cells are alive and have sense or
we are not made of cells. If you insist that what we are is
'collections' of the actions of cells, then you have to explain what
is collecting them and how that collection gives rise to sense from
something that has none whatsoever. Collections and information are
metaphysical ideas. You can't find them on the periodic table, so
where do you find them?

Craig

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.
