On Wednesday, October 24, 2012 11:17:43 AM UTC-4, John Clark wrote:
> On Mon, Oct 22, 2012 Craig Weinberg <whats...@gmail.com> wrote:
>>> it's also true that the letter "e" is not Shakespeare's play "Hamlet", 
>>> but it's part of it.
>> By that analogy, you are crediting the letter "e" for authoring Hamlet.
> The letter "e" did not write Hamlet and neither did one neuron inside 
> Shakespeare's bone box, it took 10^11 of them.

Ah, so you are saying that many instances of letters wrote Hamlet.

>>> You can have GABA and acetylcholine without paychecks and days off but 
>>> unless you move to electronics you can't have paychecks and days off 
>>> without GABA and acetylcholine.
>> Even more reason that the dumb neurotransmitters need the high level 
>> teleology to inform them.
> If you had one scrap of experimental evidence that wishing or wanting can 
> alter chemical reactions, in other words if you could show that magical 
> thinking worked, then I would concede that you are right and you will have 
> won the argument, but there is no such evidence because that's not the way 
> things work.

What does voluntary control over your own fingers have to do with magical 
anything? The ability to directly move your fingers is evidence that you 
can voluntarily alter chemical reactions. How else do you suppose that we 
are having this conversation?

>> You can have bricks without having the Taj Mahal, but you can't have the 
>> Taj Mahal without bricks.
> Exactly, you can have neurotransmitters without paychecks and days off but 
> you can't have paychecks and days off without neurotransmitters.
>> That doesn't mean that bricks or even bricklayers are responsible for 
>> the Taj Mahal.
> You also need an architect to supply the information on where to put the 
> bricks, but it remains true that you can't have the Taj Mahal without 
> bricks and bricklayers.

The architect is the high level personal will. You are the one who directly 
decides where the bricks go.

>> My neurons can influence my consciousness
> Yes.
>> but they cannot decide for me to get a better job.
> If you consciously decide to get a better job and if neurons can influence 
> your consciousness as you say then you're wrong, neurons CAN decide for you 
> to get a better job because what they do IS you.

If what they do IS you, then what you do IS them. That is my point this 
whole time. You want to have one without the other. You want to be able to 
say that neurons control you but you do not control neurons. If they are 
the same thing, then of course you control your neurons directly...of 
course your thoughts and intentions change millions of neurons' chemical 
states simultaneously in different regions of the brain.

>> Why would the high level description be different from the low level 
>> description?
> Now Craig, if you calm down and look at what you just said with a 
> dispassionate eye I think you will admit that it was not the brightest 
> question in the world. The short answer is that one is high and the other 
> is low. Saying "the temperature is 79 degrees" and saying "oxygen molecule 
> number 93475626636574074514574 hit your nose 1.0624221 seconds ago from a 
> south south east direction at a speed of 88.621 feet per second" are both 
> accurate descriptions of reality, but they are different. 

That's naive realism. It begs the question by assuming that the universe 
really is just as you, a human primate of a particular size and density, 
deem it to be. Without a perceiver though, these distinctions are arbitrary 
and senseless. There is no difference between the two descriptions except to 
a third party who relates to one more directly than the other. 

>> There is no unexpected gap between the behavior of those two levels of 
>> description. With subjectivity, the gap is infinite.
> The gap is certainly astronomically huge but I'm surprised to hear you say 
> it was literally infinite because you're the one pushing the idea that 
> everything can sense its environment

I'm arguing against the standard dumb particle model here, so I am not 
considering my solution to the problem, I am only pointing out why the 
standard explanation doesn't work. In my model, there is no gap, only a 
perceptual relativity.

> and everything is at least a little bit conscious, so let's view the 
> submicroscopic world with your eyes; I personally find this sort of 
> description awkward and needlessly anthropomorphic however it is 
> functionally equivalent to the most impersonal hard nosed description of 
> any physicist:
> Two helium atoms are moving along until they sense they have made contact 
> and then they find that they don't like each other one bit so they both 
> decide to change the path they were moving in so they can get away from 
> each other.  And a chlorine and a sodium atom are moving along until they 
> sense they have made contact and then they find that they passionately love 
> each other so they decide to remain very close to each other and turn into 
> a salt molecule; however as the temperature gets higher their love gets 
> cooler until it gets so hot they decide to get a divorce and go their 
> separate ways. That's a very odd way to describe what's going on but it 
> doesn't conflict with any experimental test.  

Right. The key, though, is that before they actually contact each other, 
barring other distractions, they will still sense each other, regardless of 
how far apart they are in a vacuum. If one atom heats up, the other one 
eventually will too, because of this capacity for sense detection and motor 
imitation. No sub-atomic particles are literally required. If anything, it 
is space and time which are being generated within the two helium atoms' 
experiences of each other.

> Generally speaking as the accumulation of matter increases, as more atoms 
> are involved, the range of possible behaviors gets larger and more complex, 
> and for some very specific types of structures, like brains or computers, 
> the growth is exponential and the range gets astronomically (but not 
> infinitely) larger.

Sure. That's not a problem though, it only seems daunting because human 
beings are optimized for experiencing quality, not quantity. The universe 
doesn't care how elaborate or complicated it is...the more the better.

>> What you are saying suggests a subjectivity where every time I almost 
>> put my hand on a hot stove a specific memory is called up and decoded 
>> (decoded into what?). I don't think it works that way,
> I don't think it works that way either, jerking your hand off a hot stove 
> is just a simple reflex, but you were talking about social experiences and 
> you have none of them except the ones where the information has been 
> encoded by neurons, in other words the ones you remember.

We still don't experience consciousness as a flood of pragmatic 
consequences of re-remembered events. It is not necessary to consciously 
re-experience a memory, say of learning English, to read these words.

>>>> Let's compare. Does your computer worry about its job?
>>> I don't know for sure, I don't even know if you worry about your job 
>>> because all I can observe is behavior. I do have a theory that extrapolates 
>>> consciousness and emotion from behavior and I think it's a pretty good 
>>> theory but it's not proven and never will be, so I just do the best I can.
>> If you had to bet though. If it really mattered and there was a right 
>> answer and you had to pick yes or no that your computer worries about its 
>> job, could you honestly say that it is likely?
> If I had to guess, and it's only a guess, I'd say that the range of 
> behavior that computers display is still not as rich as the behavior 
> displayed by humans so you are probably more conscious than a computer, but 
> computers are improving so rapidly (and humans are not) that if you asked 
> me again tomorrow my guess might be entirely different.

Improvement is a relative term. I see mainly trivial and cosmetic 
'improvements' in computer development which are really lateral in the 
scheme of things. What can you do with your computer that you couldn't do 
five years ago?


> John K Clark

You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.