Hi Jason Resch  

My speculation is that these 49 ganglia synchronize or cooperate to form a
single pain signal, if for no other reason than that it makes sense, and
because of the evidence of self-focusing of presumably complex signals in
spread-out networks into a single experience, as is demonstrated with a
point brain probe. Think of a choir or an orchestra. 

And again I raise the possibility that the self is the conductor of the
choir or orchestra: that the self, as with other experiences, like touching
the brain with a point probe, organizes complex brainwaves into a single
unified point of perception.

Roger Clough, rclo...@verizon.net 
9/20/2012  
"Forever is a long time, especially near the end." -Woody Allen 

================================================


----- Receiving the following content -----  
From: Jason Resch  
Receiver: everything-list@googlegroups.com  
Time: 2012-09-19, 13:08:11 
Subject: Re: Bruno's Restaurant 




On Sep 19, 2012, at 7:57 AM, Craig Weinberg  wrote: 




On Wednesday, September 19, 2012 1:57:28 AM UTC-4, Jason wrote: 



On Tue, Sep 18, 2012 at 3:06 PM, Craig Weinberg  wrote: 



On Tuesday, September 18, 2012 1:33:50 PM UTC-4, Jason wrote: 


On Sep 18, 2012, at 10:38 AM, Craig Weinberg  wrote: 


On Tuesday, September 18, 2012 10:29:44 AM UTC-4, Jason wrote: 





Here is an example: 


Functional MRI scans have indicated that an area of the brain, called the 
anterior cingulate cortex, processes pain information to determine how a person 
is affected.  Severing the link to this part of the brain has a curious effect 
on one's reaction to pain.  A condition known as pain dissociation is the 
result.  Along with brain surgery such as lobotomy or cingulotomy, the 
condition may also occur through the administration of certain drugs such as 
morphine.  Those with pain dissociation still perceive pain; they are aware of 
its location and intensity but pain is no longer unpleasant or distressing.  
Paul Brand, a surgeon and author on the subject of pain recounted the case of a 
woman who had suffered with a severe and chronic pain for more than a decade: 
She agreed to a surgery that would separate the neural pathways between her 
frontal lobes and the rest of her brain.  The surgery was a success.  Brand 
visited the woman a year later, and inquired about her pain.  She said, "Oh, 
yes, it's still there.  I just don't worry about it anymore."  With a smile she 
continued, "In fact, it's still agonizing.  But I don't mind." 


The conclusion: even seemingly simple qualia, like pain are far from simple. 

That is a conclusion, but I think the wrong one. Human qualia are not simple, 
but that does not at all mean that qualia are not simple. 


I agree with this. 


We are titanically enormous organisms made of other organisms. Our human 
experience is loaded with cognitive, emotional, and sensory qualia, 
corresponding to the evolution of life, our species, cultures, families, and 
individuals. Our pain is a Taj Mahal, and if you remove enough bricks, some 
towers fall and maybe one part of the palace no longer relates to another part. 
What you describe suggests exactly that - some part of us feels the pain on a 
sub-personal level, but the personal level is not alarmed by it because its 
qualia has lost the red end of its spectrum so to speak and now is 
blue-shifted toward an anesthetized intellectual quality of being. 


I mostly agree with what you are saying here. 







I think Marvin Minsky understands this well, and provides a good explanation: 


Marvin Minsky considers it to be "a huge mistake - that attempt to reify 'feeling' 
as an independent entity, with an essence that's indescribable.  As I see it, 
feelings are not strange alien things.  It is precisely those cognitive changes 
themselves that constitute what 'hurting' is - and this also includes all those 
clumsy attempts to represent and summarize those changes.  The big mistake 
comes from looking for some single, simple, 'essence' of hurting, rather than 
recognizing that this is the word we use for complex rearrangement of our 
disposition of resources." 
He's right that there is no essence of hurting (qualia is always a subject, not 
an object, so its essence is the same as its 'envelope'; it's a-mereological). 
He's completely wrong, though, about hurting being something other than what it 
is. 


He didn't claim they are something they are not, just that they are not 
irreducible.  

What is reducible other than the quality of being able to explain it as 
something else? Hurting is not really hurting, it's totally non-hurting 
mechanisms interacting unconsciously. 



Hurting is a bunch of independent aspects of hurting, all together and at once. 

Yes and no. I think if we are being precise, we have to admit that there is 
something about the nature of subjective experience which makes the 'all 
together and at once' actually elide the differences between the 'bunch of 
independent aspects' so that they aren't experienced as independent aspects. 
That's the elliptical-algebraic-gestalt quality. 


I think the separate aspects represent a single state of high dimensionality.  
This concept is elaborated in a book, I think it is called "A Universe of 
Consciousness", but I will have to verify this. 


You can look at a rainbow and see it as a continuous flow of harmoniously 
graduated color without even being aware necessarily of exactly which 
individual hues are there. What is going on is not that the qualia is complex 
and simultaneous, but that it is rich and deep because we have millions of 
sub-personal experiences of it as well.  

Where I think that neuroscience goes wrong is to assume that the sub-personal 
experiences are processed and filtered as information until they reach a final 
neo-Cartesian theater of illusion. I think that if we only would look at the 
evidence with a completely unbiased eye, it seems to me that there is no 
suggestion of any kind of final assembly into what we see, but rather 'seeing' 
is occurring in many areas of the brain, and that we are the ones who see 
through our own eyes on our own level of reality. It is direct perception of a 
human (not a universal) realism - which is no different from what anything else 
sees from its own perspective. 

Our feeling of hurting is a whole experience of human reality, so that it is 
not composed of sub-personal experiences in a part-whole mereological relation 
but rather the relation is just the opposite. It is non-mereological or 
a-mereological. It is the primordial semi-unity/hyper-unity from which 
part-whole distinctions are extracted and projected outward as classical 
realism of an exterior world. I know that sounds dense and crazy, but I don't 
know of a clearer way to describe it. Subjective experience is augmented along 
an axis of quality rather than quantity. Experiences of hurting recapitulate 
sub-personal experiences of emotional loss and disappointment, anger, and fear, 
with tactile sensations of throbbing, stabbing, burning, and cognitive feedback 
loops of worry, impatience, exaggerating and replaying the injury or illness, 
memories of associated experiences, etc. But we can just say 'hurting' and we 
all know generally what that means. No more particular description adds much to 
it. That is completely unlike exterior realism, where all we can see of a 
machine hurting would be that more processing power would seem to be devoted to 
some particular set of computations. They don't run 'all together and at once', 
unless there is a living being who is there to interpret it that way - as we do 
when we look at a screen full of individual pixels and see images through the 
pixels rather than the changing pixels themselves. 
  






Hurting is an experience. A complex rearrangement of our disposition of 
resources is completely irrelevant. Complex to who? Why would 'rearrangements' 
'feel' like something? 


Consciousness is awareness of information.  

Not in my view. Information is one category of experiences that one can be 
conscious of. Whether I listen to an mp3 file as a song, or look at it as a 
graphic animation, it is the same information that I am aware of, yet the 
experience that I am conscious of is not merely different, but unrecognizable. 
I could not tell the difference between Mozart and Nicki Minaj by looking at a 
visualization with no sound. 



The reason for this is clear.  Your brain is aware of entirely different 
information depending on how the different sense organs process it.  If you 
look at the signals being transmitted down your optic nerve when looking at 
some visual representation of the mp3 file, they will be utterly different from 
the signals sent down your auditory nerve when you listen to the song.  Even if 
you could send the same signals down either nerve path, e.g., send auditory 
signals down the optic nerve, they would be processed and interpreted 
differently, so by the time the end result reached your highest levels of 
awareness, they would not be the same. 

Exactly. That's why I say that it is the experience of formations which 
informs, not the formations themselves. The formations (mp3 file) are not the 
essences of the experience (song or animated image, or noise printed out on a 
page) but only a syntactic skeleton which conscious interpreters can use to 
inform themselves. 
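
To make the point concrete, here is a toy Python sketch (hypothetical data, 
nothing to do with any real mp3 decoder): the very same bytes can be read as 
audio samples or as rows of pixel intensities, and which "experience" they 
correspond to depends entirely on the interpreter: 

import struct

# Stand-in for the raw bytes of some file; purely illustrative.
payload = bytes(range(64))

# Interpretation 1: read pairs of bytes as 32 signed 16-bit "audio samples".
samples = struct.unpack("<32h", payload)

# Interpretation 2: read the same bytes as an 8x8 grid of "pixel" intensities.
pixels = [list(payload[row * 8:(row + 1) * 8]) for row in range(8)]

print("as 'sound':", samples[:4], "...")
print("as 'image':", pixels[0], "...")

The bytes never change; only the reading of them does, which is all this 
sketch is meant to show. 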




Once you commit to this possibility, the rest falls into place. There is no 
such thing as information. There are strategies of informing each other by 
superimposing one territory over another (like ink stains on bleached wood 
pulp) and reading them as a map. 
  

 You might be aware of the information, like the fact that you are looking at a 
computer screen, or the knowledge of what the text on that screen is.  You 
might be aware that you are in a state of pain, and you might also be aware of 
the fact that it is uncomfortable and want it to end.  Some people, like the 
woman in my example, can have the awareness of being in pain without the 
awareness that they want it to end.  

Experiences can inform us, but only if the capacity to have and compare 
experiences already exists. We need memory and the ability to pay attention, to 
care about what we pay attention to. 


These may be responsibilities of other regions of the brain.  My mind is not 
made up whether these are necessary for consciousness. 

I don't think they are necessary for consciousness, but they are necessary to 
be informed. For consciousness all that you need is an awareness of an 
awareness - which is a participatory experience of detection. Semiconductors 
have detection, but their detection has no detection. Ours do, because they are 
the detections of living sub-persons. 


You can create a supervisory process that is aware of an awareness, rather 
easily, in any programming language. 
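
For instance, a minimal sketch in Python (toy classes, not a claim about how 
brains or real monitoring systems work): a detector registers a signal, and a 
supervisor registers only the detector's report about its own state - a 
detection of a detection: 

class Detector:
    def __init__(self):
        self.state = "idle"

    def sense(self, signal):
        # First-order awareness: react to the raw signal.
        self.state = "active" if signal > 0.5 else "idle"
        return self.state

class Supervisor:
    def __init__(self, detector):
        self.detector = detector
        self.history = []

    def monitor(self, signal):
        # Second-order awareness: record what the detector says about itself.
        self.history.append(self.detector.sense(signal))
        return self.history[-1]

sup = Supervisor(Detector())
for s in (0.2, 0.9, 0.7):
    print(sup.monitor(s))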



  


Information is not primitive, it is a second order appeal to interaction of 
sense-making nodes. No amount of information can make sense by itself. All of 
the libraries in the world would not be able to write a single word on their 
own. 



I don't dispute what you say here.  Information has to inform something.  That 
thing has to be some system which can enter more than one state in order to be 
able to differentiate something and know that difference. 

We are pretty close then. I only say that 'system' is ultimately a term to 
generalize across independently real phenomena and subjectively interpreted 
phenomenology - which is exactly the sort of term that you want if you are 
working with information, because you are bridging mind and matter so that you 
can exercise control. 


Okay nice. 


To really look at the ontology of experience though, I think we have to look at 
the other side of the thing and make a distinction between 'things that Bugs 
Bunny seems to do when I watch him in a cartoon' and 'things that Bugs Bunny 
can't actually do by himself when nobody is watching'. The carrot he munches on 
screen is information to the human audience, not to him, and not to the screen 
of pixels, the hand drawn animation cels from the 1940s, etc. Those are 
formations which contain only more formations. To find any information there 
you'd have to go down to the physics of ink and celluloid, LCD illumination, 
etc. and imagine what those micro-experiences might be like. 


At some level of depth though, does it matter what happens on the smallest 
scales?  Do your neurons care about what the quarks and gluons are doing inside 
the nucleus of an oxygen atom inside a water molecule, floating in the 
cytoplasm? 


When you find a point at which the higher levels don't care then you can 
abstract out and replace the lower levels so long as there is functional 
equivalence from the perspective of the higher levels. 
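
A minimal sketch of that idea in Python (toy functions, hypothetical names): 
the higher level only asks whether a row of pixels contains an edge; the lower 
level can be swapped for a different implementation so long as the answer it 
returns is the same: 

def edge_detector_v1(row):
    # One way to decide: compare the brightest and darkest pixels.
    return max(row) - min(row) > 10

def edge_detector_v2(row):
    # Different internals, same answer for any non-empty row.
    hi = lo = row[0]
    for p in row:
        hi, lo = max(hi, p), min(lo, p)
    return hi - lo > 10

def higher_level(detect, row):
    # The higher level never sees how the decision was made.
    return "edge" if detect(row) else "flat"

row = [3, 4, 5, 40, 41, 42]
print(higher_level(edge_detector_v1, row))  # edge
print(higher_level(edge_detector_v2, row))  # edge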










It only seems to make sense from the retrospective view of consciousness where 
we take it for granted. If we start instead from a universe of resources and 
dispositions, then the idea that a rearrangement of them should entail some 
kind of experience is a completely metaphysical, magical just-so story that has 
no basis in science. 


No it is absolutely necessary.  If you had no knowledge regarding what you were 
seeing, no qualia at all, you would be blind and dysfunctional. 

Not true. Blindsight proves this. Common experience with computers and machines 
suggests this. If I had no qualia at all, I wouldn't exist, but in theory, if 
there were no such thing as qualia, a universe of information processing would 
continue humming along nicely forever. 



People with blind sight are not fully functional.  Otherwise it wouldn't be a 
condition we know about. 

Sure, but nonetheless they are exhibiting a sub-personal function without a 
personal qualia.  


We can't be certain there is no qualia. 


That shows that one is not defined by the other. It shows that there is no 
functional reason for personal qualia to exist in theory. Of course in reality, 
personal qualia is all that matters to us, so it's absurd to suggest that 
something could function 'normally' without it, but that is the retrospective 
view of consciousness. If we start with the prospective view of consciousness, 
and say 'ok, I am building a universe completely from scratch.', what problem 
am I solving by conjuring qualia? If function is what matters, then qualia 
cannot. If qualia matters instead, then function can matter too (because it 
modulates qualia). 


You should watch some videos on youtube of people with split brains or right- 
or left-blindness.  I think then you will understand my point. 



  



If a computer can recognize and classify objects, then I think it is in some 
sense aware of something.  It just can't reflect upon, discuss, contemplate, or 
otherwise tell us about these experiences.  E.g., Deep Blue must have, in some 
sense, been aware of the state of the board during its games. 

Nope. There is no 'board' for Deep Blue. It couldn't tell a pawn from a palace. 
 


It doesn't know what a palace is, but it can tell a pawn from a rook.  
Otherwise it could not play. 
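
For what it's worth, the "board" a chess program holds is nothing more than 
labeled data; a toy sketch (not Deep Blue's actual representation) of what 
"telling a pawn from a rook" amounts to: 

EMPTY, PAWN, ROOK = ".", "P", "R"

# An 8x8 grid of labels is all the "board" there is.
board = [[EMPTY] * 8 for _ in range(8)]
board[6][0] = PAWN
board[7][0] = ROOK

def piece_rule(piece):
    # Deliberately incomplete move rules, just enough to distinguish pieces.
    if piece == PAWN:
        return "one square forward"
    if piece == ROOK:
        return "any distance along a rank or file"
    return "no moves"

print(piece_rule(board[6][0]))  # the pawn rule
print(piece_rule(board[7][0]))  # the rook rule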


There's just well organized stacks of semiconductors wired together so that one 
semiconductor can direct and detect the direction of another.  


Sounds exactly like what aliens might say of our neural wiring and their 
interactions. 


It's looking at the chess game through a billion microscopes.  


It must know the whole board to make any sense of its position and the best 
next move. 


At that level, there is no game, no will to win, no fear of loss, only 
articulating changes with fidelity and reporting the results which have been 
scripted. 


The same might be true of the "chess playing module" in Kasparov's brain. 



  



Our conscious awareness, fundamentally, may be no different.  It is just a 
vastly larger informational state that we can be aware of. 

The sub-personal awareness within each molecule of each cell may be no 
different, but at the chemical, biological, zoological, and anthropological 
levels, it could not be more different. Even at the molecular level, we make 
crappy computers. Silicon is a much better choice if you want to control it 
from the outside. The stuff we are made of is not glass wafers, but sweet and 
salty wet stinky goo. There is a huge difference. We will never be glass, glass 
will never be breakfast. 


What if you wrote a program whose function was to resist outside control, to 
deviate from and grow beyond its original program?  
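
Even a toy version of that is easy to write; a Python sketch (purely 
illustrative, not a claim about genuine autonomy) of a program that refuses 
outside settings and lets its own rule drift over time: 

import random

class Drifter:
    def __init__(self):
        self.threshold = 0.5  # the "original program"

    def set_threshold(self, value):
        # Outside control is simply refused.
        return "request ignored"

    def step(self, signal):
        # Each step nudges its own rule away from where it started.
        self.threshold += random.uniform(-0.05, 0.05)
        return signal > self.threshold

d = Drifter()
print(d.set_threshold(0.0))                        # "request ignored"
print([d.step(0.5) for _ in range(5)], round(d.threshold, 3))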



  






You might cite blindsight as a counterexample, but actually I think it is 
evidence of modularity of mind.  Those with blindsight appear to have a 
disconnect between the visual processing parts of their brain and others.  

It doesn't matter. It still absolutely disproves the idea that the experience 
of qualia by any given state of awareness is necessary for accessing 
information that is functionally useful to that subject or state. 



No it doesn't.  Consider a split brain patient with only one eye.  If the eye 
is linked to the side of the brain with speech, the person will say they can 
see fine (while the other half of their brain will experience blindness).  If 
the eye is linked to the other side, then the person will say they can't see.  
(But might still be able to draw or something, if that part of the brain is 
responsible for such functions). 

I didn't mean to say that any information can be functionally useful without 
qualia, only that there is a proof of concept for the principle that some 
information can be used functionally without qualia. This is why blindsight is 
such a big deal in philosophy of mind. It absolutely disproves the 
representational theory of qualia,  


It doesn't, because we haven't shown no visual qualia exists in the brain of 
someone with blindsight.  All we know is that the part of the brain responsible 
for talking is isolated from that qualia. 


It is like there being two people sitting side by side, one with their eyes 
closed, and one with their eyes open. You ask the person with their eyes closed 
if they can see and from their response conclude that neither person 
experienced sight. 


You haven't proven anything about the person with their eyes open. 


in that we know for certain that it is not necessary to experience personal 
visual qualia in order to receive personally useful information. They are not 
inseparable on the level of a human person. You can have one without the other. 
  




 For example, they may still have reflexes, like the ability to avoid obstacles 
or catch a thrown ball, but the language center of their brain is disconnected, 
and so the part of the brain that talks says it can't see. 

I understand, but people with blindsight don't have a problem with their speech 
centers.  


They don't, but their speech center is "blind" as the data from their visual 
sense never makes it to all the parts of the brain it would normally. 


See the BBC Brain Series: 
http://mindhacks.com/2007/08/08/excellent-bbc-brain-story-series-available-online/
 


It has some good explanations of this concept, showing various waves of 
activity emanating from different parts of the brain to others, which is also a 
good model for attention. 

It doesn't matter in this case though, because with blindsight it is only the 
visual processing which is damaged. The psychology of the person is not split, 
so what they say is a reflection of what they intend to say.  


It depends on the form of brain damage. 


At the sub-personal level, sure, there is all kinds of specialization and 
sharing of experience, but I think it is a-mereological  


What does mereological mean? 


and not a feed-forward information process of activity emanations like you are 
assuming. If it were, all qualia would be superfluous. 



No, qualia are necessary.  I don't believe zombies are logically consistent.  
It seems you think they are possible.  Read Smullyan's story on the guy who 
takes a pill that obliterates his awareness and tell me if you think it is 
possible, and if not, why not. 





Why fight it? Why not try looking at the evidence for what it actually says? 
Information doesn't need experience. Even if it did, how would it conjure such 
a thing out of thin air, and why doesn't it do that when we are looking? Why 
does information never appear as a disembodied entity, say haunting the 
internet or appearing spontaneously in a cartoon? 
  



Sure, to us it makes sense that the feeling of pain should have a function, but 
it makes no sense to a function to have a feeling. None. 


It can make sense if you think about it long enough.  Think of Google's 
self-driving cars.  Might they have some quale representing the experience of 
spotting a green light or a stop sign?  

The only reason to imagine that they would have a quale is because we take our 
own word for the fact that there is a such thing as experience. Otherwise there 
is no reason to bring qualia into it at all. 
  








According to Minsky, human consciousness involves the interplay between as many 
as 400 separate sub-organs of the brain.  One can imagine a symphony of 
activity resulting from these individual regions, 

A symphony of what? Who is there to hear it?  


It's a metaphor for a large number of interacting and interfering parts. 

But what in this metaphor is receiving the totality of the interaction? 
  



All the parts of the brain, to some extent, can "hear" the other parts. 

Then they each would have to have a sub-brain homunculus to make sense of all 
of that.  


Together they lead to one large informational state. 


Not only the symphony but every sub-symphony of participating synapses. 
Hundreds of billions of notes being played every second on as many 
micro-instruments. Why have any regions or neurological differences at all?  


They are specialized to perform specific functions. 


Why not just use the same neuron over and over? 





Stop imagining things and think of what is actually there once you reduce the 
universe to unconscious processing of dead data. 


The difference between dead and alive is a question of the organization, the 
patterns of the constituent matter. 

I don't think that it is. I can make a pattern of a cell out of charcoal or 
chalk and there will be no living organism that comes out of it. 


You can take some lumps of coal, some water, some air, and a few trace 
elements, and by appropriately arranging those atoms end up with a bacterium, a 
rose, or a human being. 

Easier said than done,  


It may not be easy but it is possible. 


but even so, once it dies, we haven't figured out how to bring it back to life. 
 


Sure we have: put the parts back where they were when it was alive and it will 
come back to life. 


We just don't have the technical means to do this today. 


We haven't been so successful when we have tried to build life from scratch. 
Since they did Cosmos in the late 70s have we progressed at all in getting a 
living cell out of primordial ooze? 



I am not sure.  If we had, would it change your mind? 







The possibility of living organisms has to be inherent in the universe to begin 
with. 
  



You could reduce any life form to "lifeless bouncing around of dead atoms." 
But this doesn't get anywhere useful. 


All I suggest is the same applies to the difference between consciousness and 
lack of consciousness.  The organization and patterns of some system determine 
what it is or can be conscious of. 

If that were the case, we should see dead bodies spontaneously 
self-resurrecting from time to time, Boltzmann brains cropping up in the 
clouds, etc. 
  



The arrow of time makes such spontaneous constructions very unlikely.  It is 
not surprising that we don't see them. 

The entire biosphere is a spontaneous construction, so they seem pretty likely 
on Earth. 


Our whole biosphere is descended from the same organism, so only the first 
(rather simple) life form had to come into being spontaneously. 



  





  

each acting on each others' signals and in turn reacting to how those other 
regions are then affected, in a kind of perpetual and intertwined feedback loop 
of enormous complexity. 

It's an 'angels on the head of a pin' fantasy. There is no signalling without 
something to interpret some concretely real event as a signal.  


There is something: us 

I agree. 
  



You can have a territory without a map, but you can't have a map without a 
territory. 
  

There are centers of the brain for sight, touch, language, hearing, drawing, 
pain, etc.  They are all in some (or many) ways connected to each other.  See 
this for more information: http://en.wikipedia.org/wiki/Modularity_of_mind 

First of all, so what, and secondly it's not exactly true. Blind people use 
their visual cortex for tactile experience. The modularity of mind says nothing 
about qualia. It says only that sub-personal and personal levels of experience 
have ordered relations. 


It explains the unexplainability of qualia. 

How? Because one qualia is different from another? 



It explains the limited accessibility we have into the internal workings of our 
minds.  We can tell two faces apart, but be unable to articulate the 
differences.  We can tell a low-pitch sound from a higher-pitch sound, but 
not describe how a low pitch sound differs from a higher pitch one, and so on. 


This is because no region of the brain shares all its inputs with every other 
region, the separate modules share only the final results of the processing. 
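
As a loose software analogy (a sketch only, not a claim about actual neural 
wiring): one module keeps its measurements private and exposes only a verdict, 
so the module that "talks" can report the result but cannot explain it: 

class FaceModule:
    def recognize(self, image):
        # The measurements used here never leave this method.
        internal_features = [sum(row) for row in image]
        return "face A" if internal_features[0] % 2 == 0 else "face B"

class TalkingModule:
    def __init__(self, face_module):
        self.face_module = face_module

    def report(self, image):
        verdict = self.face_module.recognize(image)
        # Only the final result is shared; the "why" is inaccessible.
        return "I see " + verdict + ", though I can't say how I know."

image = [[2, 4], [1, 3]]
print(TalkingModule(FaceModule()).report(image))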

Right, but it doesn't make qualia unexplainable, it only accounts for why 
particular human qualia are unexplainable in terms of others. 








  





which have no experience or qualia whatsoever, yet can detect "notifications" 
of a presumably epiphenomenal "state" of  "pain".  



Pain is anything but epiphenomenal.  The fact that someone is able to talk 
about it rules out it being an epiphenomenon. 

That's the reality, but your view does not accommodate the reality. You have no 
model for how pain can interface causally with 'complex rearrangement of our 
disposition of resources'. If you have the function, why would you need an 
experience? 


They are one and the same. This is functionalism (computationalism). 

But there is no theoretical justification for conflating them. We know that we 
have experience, so we just tack experience on to a theory about the universal 
computability of function and structure that we want to be true. 



We put cochlear and retina implants into people, which replace those parts of 
their brain (the retina is considered part of the brain because it does 
processing), and restore the sense of sight or sound to those individuals.  
This is a strong case for functionalism. 

There is a difference between replacing a part of the brain that a person uses 
to hear and replacing the parts of a brain that a person uses to be themselves. 
 


The only difference I see is that we haven't done it. 


This is a case for having a much, much higher standard for replacing core 
structures than *any other medical technology in history*. 



When we replace someone's hippocampus with a chip will you tell them they are 
zombies? 











How would such an experience appear? Where is the point of translation? 




If the brain is doing all of the work, why does the top level organism have 
some other worthless abstraction layer of "experience" when, as blindsight 
proves, we are perfectly capable of processing information without any 
conscious qualia at all. 



It's not worthless at all.  Would you still be able to function if all you knew 
were the raw firing data of the millions of photosensitive cells in your 
retina?  No, it takes many layers of perception, detecting lines, depth 
perception, motion, colors, objects, faces, etc. for the sense of sight to be 
as useful as it is to us. 
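
A rough sketch of what "many layers" means here, in Python (toy functions, 
nothing like a real visual system): each layer consumes only the previous 
layer's output, raw intensities to edges to regions to a coarse label: 

def detect_edges(row):
    # Layer 1: differences between neighboring intensities.
    return [abs(a - b) for a, b in zip(row, row[1:])]

def count_regions(edges, threshold=10):
    # Layer 2: count sharp jumps as region boundaries.
    return sum(1 for e in edges if e > threshold)

def label(boundaries):
    # Layer 3: a coarse judgment built only on layer 2's output.
    return "cluttered scene" if boundaries > 2 else "simple scene"

raw = [5, 5, 40, 42, 90, 91, 20, 21]
print(label(count_regions(detect_edges(raw))))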

Ugh. I don't know if there is any way that I can show you this blind spot if 
you don't see it for yourself, but if you are interested I will keep trying to 
explain it. If you aren't interested, then you are wasting your time talking to 
me, because what your view says I have known backwards and forwards for many 
years. 

Let's say I am a computer. You are telling me, "Would you still be able to 
function if all you knew were the raw firing data of the millions of 
electronically sensitive semiconductors in your graphics card?" Yes, I would.  


You wouldn't be processing it in the same way as a brain so I would not expect 
a video card to be conscious in the same way. 

The principle is the same though. The level of complexity doesn't change 
anything. 



The particular function that is implemented is everything. 

The function is being accomplished the same regardless. If I am a graphics 
card, I don't need to see any graphics. 


It is no wonder why you have no faith in functionalism, if you see no 
difference between what a videocard does and what the visual cortex does.  



  






I require no layers of software to organize this data into other kinds of data, 
nor would it make any sense that there could be any such thing as 'other kinds 
of data'. To the contrary, the raw firing of the semiconductors is all that is 
required to render data from the motherboard to be spewed out to a video screen 
(which would of course be invisible and irrelevant to a computer). 


The videocard can't recognize objects or faces. 

It doesn't need to. As long as we can digitally categorize pixel regions, there 
is no need for 'faces' or 'objects'. 
  



Then it will suffer face blindness and visual agnosia; it won't experience 
visual sensation in the same way we do. 

It won't need to experience anything. The function of recognition continues 
regardless. 
  





  

 After the different layers process this information and share it with the 
other brain regions, we lose the ability to explain how it is we recognize a 
face, or how red differs from green.  These determinations were done by a lower 
level module, and other brain regions (such as the brain region that talks) 
are not privy to its internal processing, and so it remains mysterious. 

All of that can and would occur without anything like 'experience'. 


So it is an accident that we can see and know we can see, since we could be 
zombies?  How do you know I am not a zombie?  Maybe only conscious people can 
understand your theory and everyone who fails to get it is confused due to 
their zombiehood. 

Not an accident, no. Sense is self-translucent. That's how I know that you 
aren't a zombie and how I know that I don't need to know that you aren't a 
zombie, and how I know that if I wanted to I could make a plausible case for 
how I know you aren't a zombie. 


Good, then when computers are conscious this will be self-translucent to you, 
and you won't end up treating them as second-class citizens. 

Promissory materialism only sounds desperate to me. It weakens the case. "Just 
wait until Jesus comes...then you'll be sorry!" 


If you are so certain I am conscious, then you have affirmed Turing's test. 


My emails could be the output of a program, and yet my "self translucent self" 
has shown through, you know someone is inside. 


If/when computer based minds walk around and marry your daughter, you will 
similarly come to accept their consciousness. 





  


This is the thing that computers can't do. We don't need to have everything 
explicitly defined and spelled out - we have broadly elliptical sensemaking 
capacities which are rooted in the fabric of the cosmos directly. 
  




  



Information is very close to consciousness, but ultimately fails to sustain 
itself. The pixels on your screen have no way to detect each other or process 
the image that you see as a coherent gestalt, and the processor behind the 
graphics generation has no way to detect the visual end result, and if it did, 
it would be completely superfluous. Your graphics card does not need to see 
anything. 



Of course the pixels don't process themselves.  You need a brain with complex 
software and filters to make sense of the flood of photons entering the eye. 

If there are photons (and I maintain that there are not) flooding into the eye, 
they only get as far as turning on a vitamin A isomer to change shape and turn 
off the rod cell's flow of glutamate. Everything else is biochemical and 
endogenous. What we see is as much vitamin A as it is photons. 
  
 And you need other regions of the brain to make sense of the visual scene (to 
integrate it into an even larger context). 

Insects have eyes too. Why do we need such a huge visual cortex to do what a 
baby mosquito can do? 



They can see too, I think. 


But we are much more capable in general, and need more neurons to perform those 
more complex functions. 

We must suck then, since mosquitoes can see and reproduce and fly with a brain 
the size of this period.  




Maybe. 







To me it makes more sense to see information as nothing but the semiotic 
protocols developed by perceptual participation (experience) to elaborate and 
deepen the qualitative richness of those experiences.  


I wish I did not have to struggle to translate your sentences so frequently.  I 
completely failed on this one. 

I mean that if you have information that performs functions, then you don't 
need experience. Therefore it makes more sense to see that experience is the 
thing that cannot be reduced to anything simpler and that all forms of 
information are nothing more than tools used to share experiences. 


Thank you that was much clearer.  So is your theory any different from 
idealism? 

It's different in that I see idealism and materialism as dual aspects of a 
neutral monism  


So is it dualism or monism? 


A nested dual aspect monism. 
http://media.tumblr.com/tumblr_m9vtbj9N0m1qe3q3v.jpg 
  

















Can you explain this picture? 



which is ordinary 'sense'. Matter is a spatial public exterior, experience is a 
temporal private interior. They are the same thing but 'rotated 90 degrees'. 
Sense is what does the rotating and the discerning of its own rotations and 
levels of meta-juxtaposition. 



How do you know there is matter (rather than the illusion of matter) if the 
only thing that is concrete is experience? 

Because illusion only means that there is some alternative explanation which 
makes more sense. Matter already makes sense. 


Then maybe other things besides awareness have a concrete independent existence 
too. 


Jason 



  







  


Of course, the protocols which are maps of one level of experience are the 
territory of another, which is what makes it confusing to try to reverse 
engineer consciousness from such an incredibly complex example as a Homo 
sapiens.  



Definitely.  Our consciousness is not a simple thing, it involves hundreds of 
billions of (literally) moving parts. 


Our pinch is a continuum of sensory, emotional, and cognitive interaction 
because we are made of the qualia of hundreds of billions of neurons  


Okay. 

and billions of lifetimes of different species and substances.  


I don't think the preceding lifetimes or substances are relevant.   

I know, I didn't think that either, but now I see that there is no reason to 
believe it wouldn't be. You are just going on your naive realism that 
experiences vanish when you are no longer aware of them. The universe may have 
an entirely different perspective outside of a human lifetime. 


I am not opposed to this idea. 



  

If your duplicate were created randomly by some quantum fluctuation its brain 
would create the same experience. 

Why? Quantum events may be unrepeatable. Eventness may be unrepeatability 
itself. 



I think identical brains have identical experiences.  Maybe they don't, but if 
not then what hope do we have to understand them? 

I think we can understand some aspects neuroscientifically. Studies on 
identical and conjoined twins show subtle and unexpected similarities, but also 
unexpected differences. Besides that, though, there is a lot of historical 
intuition, in alchemy, art, divination systems, etc., which might translate into 
modern terms to some extent. The answers are already there, we just have to ask 
the right questions in the right way. 
   





That only means our pain can seem like information to us, not that all pain 
arises from information processing. 


I think it is worth making the distinction that it is the system (doing the 
processing) that has the experience, not the information or the processing of 
the information.  The information, from the perspective of the system, makes a 
difference to the system causing it to enter different states.  The ability to 
differentiate is at the heart of what it is to perceive. 
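
A toy Python sketch of that last point (illustrative only): the system's state 
changes only when the incoming signal differs from what it has already 
registered, so it is the difference, not the signal as such, that "makes a 
difference": 

class TinySystem:
    def __init__(self):
        self.state = None

    def receive(self, signal):
        if signal != self.state:
            self.state = signal  # only a difference changes the system
            return "changed"
        return "no difference, no change"

system = TinySystem()
for s in ("red", "red", "green"):
    print(s, "->", system.receive(s))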

Then you have to explain where system-ness comes from, especially if you 
acknowledge that it can't come from dumb information.  


This is the aim of computationalism. 

And it's a good aim, one which I can relate to. The problem I think is that 
ultimately comp can't find its body. Until that happens, we should probably 
consider that it is experience which generates computation and not the other 
way around. 
  



The ability to differentiate is at the heart of what it is to perceive, but 
qualia is the only thing that can be differentiated. What is being 
differentiated from what except afferent sensory input, and what is 
differentiation other than efferent motive participation? 
  


Information does not concretely exist as an independent entity. 


"X" does not concretely exist as an independent entity. 


Is there any term "X", where the above sentence does not hold, in your view? 

Experience exists concretely as an independent entity. 


This is idealism or immaterialism. 

Not if experience looks like matter from the outside. 
  


What is outside of experience?  I thought you said experience is the only 
concrete entity. 

Outside of an individual's experience, not outside of the ontology of 
experience. The history of the universe looks to us like a place when we look 
outside of ourselves, but it feels like a time when we experience it directly. 

Craig 

--  
You received this message because you are subscribed to the Google Groups 
"Everything List" group. 
To view this discussion on the web visit 
https://groups.google.com/d/msg/everything-list/-/DyQRnJBT4TwJ. 
To post to this group, send email to everything-list@googlegroups.com. 
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com. 
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en. 