On Wed, Jul 13, 2011 at 11:08 PM, Craig Weinberg <whatsons...@gmail.com> wrote:

> >If it is not present physically, then what causes a person to say "I
> >am imagining a blue chair"?
> A sensorimotive circuit. A sensory feeling which is a desire to
> fulfill itself through the motive impulse to communicate that
> statement.

But physical effects must come from physical causes unless your theory
involves some form of dualism.  The imagined image in the mind has some
physical representation, otherwise any communication regarding that imagined
image would be coming from nowhere.

> >Could you please define this term?  I looked it up but the
> >definitions  I found did not seem to fit.
> Nerves are referred to as afferent and efferent also. My idea is that
> all nerve functionality is sense (input) and motive (output). I would
> say motor, but it's confusing because something like changing your
> mind or making a choice is motive but not physically expressed as
> motor activity, but I think that they are the same thing. I am
> generalizing what nerves do to the level of physics, so that our
> nerves are doing the same thing that all matter is doing, just
> hypertrophied to host more meta-elaborated sensorimotive phenomena.
> >There is such a thing as too low a level.  What leads you to believe
> >the neuron is the appropriate level to find qualia, rather than the
> >states of neuron groups or the whole brain?
> I didn't say it was. I was just talking about the more similar you can
> get to imitating a human neuron, the more similar a brain based on
> that imitation will be to having the potential for human
> consciousness.
> >You would have to show that the presence of DNA in part determines the
> >evolution of the brain's neural network.  If not, it is as relevant to
> >you and your mind as the neutrinos passing through you.
> Chromosome mutations cause mutations in the brain's neural network, do
> they not?

Perhaps very rarely they could, but that would be a malfunction rather than
normal behavior.  The question is, what does DNA have to do with the
function of an active brain which is thinking or experiencing?  If the
neurons behaved the same way without it, why should consciousness be any
different?

> btw, I don't interpret neutrinos, photons, or other massless
> chargeless phenomena as literal particles. QM is a misinterpretation.
> Accurate, but misinterpreted.

Whatever you consider them to be, they are physical but not thought to be
important to the general operation of the brain.  My original point is there
is a lot of noise, and perhaps included in that noise is all the
biochemistry itself going on in the background while neurons perform their
function.  And therefore, anything which is noise doesn't need to be
replicated in an artificial production of a brain.

> >> A digital simulation is just a pattern in an abacus.
> >The state of an abacus is just a number, not a process.  I think you
> >may not have a full understanding of the differences between a turing
> >machine and a string of bits.  A Turing machine can mimic any process
> >that is definable and does not take an infinite number of steps.
> >Turing machines are dynamic, self-directed entities.  This
> >distinguishes them from cartoons, YouTube videos and the state of an
> >abacus.
> A pattern is not necessarily static, especially not an abacus, the
> purpose of which is to be able to change the positions to any number.
> Just like a cartoon.

Okay, but with an abacus, or a cartoon, someone else is driving it, and
perhaps randomly.  A cartoon does not draw itself, nor an abacus perform
computations on its own.

> If you are defining Turing machines as self-
> directed entities then you have already defined them as conscious, so
> it's a fallacy to present it as a question.

Ignore the "self" in self-directed; it was intended to mean they are
autonomous, not to define them as conscious.

> Since I think that a
> machine cannot have a self, but is instead the self's perception of
> the self's opposite, I'm not compelled by any arguments which imagine
> that purely quantitative phenomena (if there were such a thing) can be
> made to feel.

"Purely quantitative" suggests that the only values that can be represented
by a machine are pure quantities (numbers, values, magnitudes).  Yet a
Turing machine can represent an infinite number of relations which are not
purely quantitative.  For example, an algorithm might determine whether an
input number is prime or not, and based on the result set a bit as a 1 or a
0.  Now if that bit is an input to another function, that bit no longer
represents the quantity of 1 or 0, but instead now represents the
"qualitative" property of the input number's primality or compositeness.
There may be other qualitative values that with the right processing and
interpretation by the right functions could correspond to qualitative
properties such as colors.  You can't write off Turing machines as only
dealing with numbers.  The possible relations, functions, and processes a
Turing machine can implement result in an infinitely varied, deep, complex,
and rich landscape of possibilities.
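The primality example above can be sketched in a few lines (Python here, purely as an illustration; the function names are my own, not from any prior post):

```python
def is_prime(n):
    """Return 1 if n is prime, 0 otherwise, by trial division."""
    if n < 2:
        return 0
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return 0
    return 1

def describe(bit):
    """A downstream function for which the bit no longer means the
    quantity 1 or 0, but the property 'prime' or 'composite'."""
    return "prime" if bit else "composite"

bit = is_prime(17)    # stored as the bare quantity 1...
print(describe(bit))  # ...but interpreted downstream as "prime"
```

The same stored value means different things depending on which function consumes it, which is the sense in which a bit can carry a qualitative rather than purely quantitative relation.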

> >Then, if you deny the logical possibility of zombies, or fading
> >qualia, you must accept such an emulation of a human mind would be
> >equally conscious.
> These ideas are not applicable in my model of consciousness and its
> relation to neurology.

Either zombies are possible within your model or they are not.  Either
fading qualia is possible in your model or it is not.  You can't define them
as irrelevant in your theory to avoid answering the tough questions. :-)

> >The idea behind a computer simulation of a mind is not to make
> >something that looks like a brain but to make something that behaves
> >and works like a brain.
> I think that for it to work exactly like a brain it has to be a brain.

> If you want something that behaves like an intelligent automaton, then
> you can use a machine made of inorganic matter.

Okay I agree with this so far.

> If you want something
> that feels and behaves like a living organism

I am confused, are you saying an inorganic machine can only behave like an
automaton, or can it behave like a living organism?  Do you believe it is
possible for an inorganic machine to exhibit identical external behavior to
a living organism in all situations?  (A YouTube video can't respond to
questions, and therefore would not count)

> then you have to create
> a living organism out of matter that can self replicate and die.

What does self-replication and death have to do with what a mind feels at
any point in time?  Aren't eunuchs conscious?  What about someone who
planned to freeze himself so he wouldn't die?

> >Rejection requires the body knowing there is a difference, which is
> >against the starting assumption.
> If you are already defining something as biologically identical, then
> you are effectively asking 'if something non-biological were
> biological, would it perform biological functions?'

It was not identical: the interfaces, all the points that made contact with
the outside, were identical, but the insides were completely different.

> >I pasted real life counter examples to this.  Artificial cochlea and
> >retinas.
> Those are not replacements for neurons,

Actually the retina prosthesis replaces neurons which perform processing,
and thus those neurons are considered an extension of the brain.

> they are prostheses for a
> nervous system. Big difference.

What is different about neurons in the nervous system vs. neurons in the
brain?  Why is it we can substitute neurons in the nervous system without
problem, but you suggest this fails if we move any deeper into the brain?
To me, the only difference is the complex way in which they are connected.

> I can replace a car engine with
> horses, but I can't replace a horse's brain with a car engine.
> >At what point does the replacement magically stop working?
> At what point does cancer magically stop you from waking up?

Cancer cells don't serve as functional replacements for healthy cells, where
according to the thought experiment, the neural prosthesis would.  The
question of when consciousness suddenly disappears, fades, dances, etc., if
it does at all, during a neuron replacement is an interesting and
illuminating question for any theory of mind, and it is something you should
attempt to answer using your theory.

> >So it can use an artificial retina but not an artificial neuron?
> A neuron can use an artificial neuron but a person can't use an
> artificial neuron except through a living neuron.

Interesting.  So do you think a person could have every part of their brain
substituted with a prosthesis, with the exception of one neuron, and still
be conscious?  Why or why not?


You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.