On Sep 16, 3:41 am, Jason Resch <[email protected]> wrote:

> On Sep 14, 2011, at 10:36 AM, Craig Weinberg <[email protected]> wrote:
> > The more a creature or plant is like what we are, the more 'conscious' it will appear to be.

> A system of milk bottles could seem just as conscious as you or I, if it controlled a human body and was sped up sufficiently.

'Seem' is the key word. The natural default sense we have can be misled intentionally for some period of time, depending on the observer and the circumstances. We participate in this, so that there would be a placebo effect of attributing consciousness to the unconscious and vice versa if the possibility of doubt were introduced experimentally. I would say that just as we can tap into other perceptual frames (molecular microcosm, geophysical macrocosm, etc.) to augment our sense and motives, we can also project our sense and motives through other perceptual frames, but whether our target (computer program, robot) can make those senses and motives their own (literally to under-stand; to settle within) depends on how close the thing's perceptual frame is to begin with. It may be easier for us to program a computer than to train a dog, but the dog will always understand more of the meaning behind the trick than the computer will, even though the computer can be trained much faster and more directly. The perceptual frame of semiconductors is so primitive - i.e., the mechanisms they rely upon are maximally probable relations rather than harnessing improbability as living organisms do - that there is an infinitesimally low chance of the thing ever noticing or making sense of what we have programmed it to do. Just as we can't see ultraviolet light but it still gives us a sunburn, the instruction set of a computer or robot falls on deaf ears but (slavishly) willing limbs.

> > It's not a binary distinction.

> I agree.

> > Even with people, those that remind us more of ourselves are deemed to be more conscious.

> I don't think it is a matter of being more or less conscious, but rather a question of what one is conscious of.

Possibly, but who would be a counterexample: someone that we think is more conscious than someone else but who is less like us? Here I think we can see how 'consciousness' is like 'quality of being alive', so that we can always find some excuse for our prejudices. We suspect that unfamiliar lifestyles or cultures, even when seemingly 'better' than our own ideal lifestyle, have some hidden flaw, some missing elements that do not make as much sense as our own, and that makes them 'other'. There could be exceptions I suppose, I just haven't really thought of any. The less I can see of myself in someone else, the less I can relate to them and their reality as 'real'.

>> what is an appropriate substitution level to bet on before uploading one's brain, can computers be conscious in the same way as biological brains, etc.

> > That can only be ascertained through experiment,

> Your understanding of consciousness is far from complete if we still need to conduct experiments to answer these questions. Hopefully you see now there is more to consciousness than its simple appearance.

No, just the opposite. My understanding of consciousness is specifically and unequivocally that subjective qualia can only ever be experienced first hand. All non-experimental hypotheses of consciousness are doomed to failure. The simple appearance is the computations which are a part of cognition, but the fullness of human cognition itself is either experienced first hand or non-existent. What's wrong with experiments?
> > and it would undoubtedly be different for different people at different times. What is the appropriate substitution level to bet on for an artificial kidney?

> One that can adequately filter the blood, like any dialysis machine.

Some people do better on dialysis than others, do they not?

> > It just depends. People's bodies react differently.

> > Perception contradicts mechanism directly.

> How?

Because mechanism is based upon the reliability of the probable and perception bets on the significance of the improbable.

> > My view is the universe is that contradiction. The inherent polarization of it is such that it cannot be resolved and that it must be resolved. That is the engine of the cosmos. On the micro and macrocosmic levels (relative to us), the polarity is arithmetic, but on the mesocosmic level (isomorphic to us) the polarization is blurred, ambiguous, and figurative. That's another polarity entirely, but they arise from each other logically.

>>>>> Isn't it obvious that different levels of perception yield different novel possibilities? That a ripe peach does something that a piece of charcoal doesn't? That yellow is different from just a bluer kind of red?

>>>> I believe that the sensations you describe are equivalent to certain computations.

>>> What is equivalent? Is an apple equivalent to an orange? It's a matter of pattern recognition. If you recognize a common pattern, you can project equivalence, but objectively, there is no equivalent to yellow. You either see it or it does not exist for you. No computation can substitute for that experience. It has no equivalent. It can be created in people who can see yellow by exposure to certain optical conditions, but also by maybe pushing on your eyeball or falling asleep. Yellow is associated with various computations, but it is not itself a computation. It is a sensorimotive subjective presence.

>> Perhaps your "sensorimotive subject" supervenes on these computations.

> > If it did, then why have yellow at all? Why not just have the computations?

> If we could not tell yellow from other colors we would be color blind. If we could not see at all we would be blind. Awareness is not dispensable.

There aren't any colors objectively. It's just a uniform spectrum of wavelength/frequency within a certain range. It's not a matter of being able to tell one color from another, it's a matter of seeing colors where there are none. Some animals are blind, so visual sense is definitely not indispensable to animal life, and fungi, bacteria, plants, etc. don't have eyes. I agree that some form of awareness is always present, but not because it's necessary but because it is ontologically primitive. Whether the particular elaborations of awareness are present or absent in one organism or another, or one mineral or celestial body or another, is a matter of context and accumulated significance.

>>>> Thus consciousness, and computation, are higher-level phenomena, and accordingly can be equivalently realized by different physical media, or even as functions which exist platonically in number theory.

>>> Human consciousness is a higher level phenomenon of neurological awareness, which is a higher level phenomenon of biology, genetics, chemistry, and physics.
>> I think you are on to something with this.

> > Cool. If you do end up getting what I'm talking about, it's possible that you'll find it pretty interesting. All of this bickering over AGI and zombies is really not at all what I'm here to talk about.

> Okay. That is understandable, but they are important tools in thought experiments.

I would say that their importance is exaggerated because the model that they come out of is looking at the wrong basic units.

> > Speculating on the consciousness of non-human subjects is really the least valuable implication of my hypothesis. What my idea lets you do is look out of your own eyes and see what you actually see (meaning, image, feeling) without compulsively translating it intellectually into the opposite of what it is (generic, arithmetic mechanism). Then you can get a firm handle on what the difference is, why it's important, and how they can coexist without one disqualifying the other.

> I am not attempting to disqualify first person experience, only understand it.

By 'understand it', though, you mean understand it in third person terms. Isolate the mechanism. That's the opposite of how it works.

> > My hope is that there is a threshold where it is possible for someone to reach a supersaturated tipping point and crystallize an understanding of what I'm talking about, like those "When You See It..." memes (http://static.black-frames.net/images/when-you-see-it_____________.jpg). Once you realize that what we perceive is both fact and fiction

> Do you think something can be both true and false?

Of course. True or false: Rain is beneficial. It's ok to eat peanuts. Zero is a number. Y is a vowel. I am old. I am freaking out. I am Iron Man. True and false are only appropriate in artificially constrained literal contexts (which are important and powerful foundations of science and reason, of course) but the real universe is also made of 'maybe', 'it depends', and 'nobody knows'.

> > and that both fact and fiction are themselves a matter of perception, then it gives you the freedom to appreciate the cosmos as it is, in all its true demented genius, rather than as a theoretical construct to support the existence of fact at the expense of fiction (or vice versa).

>>> It is also a lower level phenomenon of anthropology, zoology, ecology, geology, and astrophysics-cosmology. Some psychological functions can be realized by different physical media, some physical functions, like producing epinephrine, can be realized by different psychological means (a movie or a book, memory, conversation, etc).

>>>>>>> How do you get 'pieces' to 'interact' and obey 'rules'? The rules have to make sense in the particular context, and there has to be a motive for that interaction, i.e. sensorimotive experience.

>>>>>> If there were no hard rules, life could not evolve.

>>>>> 'Hard rules' can only arise if the phenomena they govern have a way of being concretely influenced by them. Otherwise they are metaphysical abstractions. The idea of 'rules' or 'information' is a human intellectual analysis. The actual thing that it is would be sensorimotive experience.

>>>> Are you advocating subjective idealism or phenomenalism now?

>>> I'm advocating a sense monism encapsulation of existential-essential pseudo-dualism.
>> Could you please restate this using words with a conventional meaning?

> > I'm advocating a universe based entirely on sense, sense being the unresolvable tension between, yet unity among, subjective experiences and objective existence.

> This is a Buddhist idea.
>
> It is also quite similar to Bruno's explanation of the appearance of the physical world.

Yes, but to anchor it in sense specifically, so that physics and consciousness can be reconciled through the relation between interior pattern recognition and exterior substance interaction, I think takes the idea in a new direction.

>> No, the mind which supervenes on the computation of the milk bottles will experience red.

> > A mind arises from a collection of milk bottles?

> If they perform the right computation.

Have you considered that might not be true? If you are being evaluated in a psychiatric hospital, do you think it would be a good idea to make that assertion?

> > Automatically? Does it think about anything other than the one momentary experience of red that occurs somehow from bottles knocking each other down in some particular configuration?

> It depends on what the computation is.

>>> Will I see red if I look at the milk bottles?

>> No.

>>> How can you seriously entertain that as a reality?

>> You won't see red when you look at a neuron involved in the processing of that sensory data, nor will the individual neurons which serve as the basis for that processing know the experience of red.

> > I agree. So what is it exactly that does know the experience of red?

> The mind.

Which is where? In the milk bottles? Does it radiate physically around them like a field? Where does it get the red from?

>> Entertaining the idea of milk bottles having a private experience is no more a leap than entertaining the idea that the cells in your brain can do the same.

> > On one level that's true, since we have no direct access to what other things experience, but it doesn't mean that it's very likely that the experience it has could ever be comparable to that of our brain cells.

> I thought you believed that we are a higher level process than our brain cells. And that our experience is not that of our brain cells.

The higher level process of our brain cells is the process of the brain as a whole. We are what is experienced through the whole brain processes - composed *not* of the lower level processes of the brain cells, but of the lower level *experiences* of the brain cell processes. Big difference. Huge. Because what brain cells do is constrained by literal existence - it has to be discrete. matter. in a public. space. What can be experienced *through* those brain cells is a totally different (opposite) ontology. It's a continuous. energy. in a private. time. It's sensorimotive, not electromagnetic, so it does things that literally have to be seen to be believed, have to be felt to be understood. That's not metaphor, that is its actual architecture. The matter and space which host this energy-time work different kinds of wonders - computations, mass productions, infinite methodical patience, etc. The sensorimotive has a different skill set. It does imagination, feeling, storytelling. It presents to us a kind of profound omnipotence within the context of our own fictive subjective process that is almost completely cancelled out by the corresponding objective process - almost but not completely.
The underlap is extruded through time as significance accumulation and across space as entropy. It's really pretty simple I think, you just have to really grasp that the interior of everything is almost exactly the opposite of the way it seems, except for a degree of overlap which is... sense. Reality. Mundane truth.

> > If it were, there would be no reason to have brain cells at all.

> We need something to perform the right computations.

Why not milk bottles or dust?

> > We could just be a giant amoeba or pile of sand and have any experience possible - human or otherwise.

> There is no reason to expect a pile of sand will perform the right computations.

I agree. Why do you expect a pile of silicon will be able to perform the right computations though, just because it's in a particular shape?

> > Instead of needing eyes we could just drill a hole in our skull. Something makes humans different from non-humans, I think that it's related to the experiences of organisms over time as well as the consequences of the physical conditions local to their bodies.

> So if you stepped into a Star Trek style transporter would you no longer be human because the copy lost its history as an evolved organism?

It may not be that simple, that who we are can be captured literally as a static configuration. We are be-ings expressed through specific material configurations. There is no guarantee that the material configuration will reconstruct the same being. You may die and what is transported is your identical twin, just like a genetically identical twin - a truly different person that other people will notice is different, and whose personality and decisions will immediately diverge from yours as soon as teleportation is accomplished. We may be able to be 'walked off' from one brain to another, or from our brain to a computer if the computer is sufficiently similar to our own, but that may require actual live experience. We may have to literally grow into the new environment over years. Immortality could be fun.

>>>> This computation could be performed by any kind of matter that can be arranged into a functional Turing machine. This computation also exists in mathematics already.

>>> I'm confident that no computation generated by a Turing machine is equivalent to seeing red.

>> We should have an answer in a few decades, when you can ask those with digital brains what color a ripe strawberry has.

> > Promises promises.

>>>>>>> but physical properties can be multiply realized in psychological properties as well. Listening to the same song will show up differently in the brain of different people, and different even in the same person over time, but the song itself has an essential coherence and invariance that makes it a recognizable pattern to all who can hear it. The song has some concrete properties which do not supervene meaningfully upon physical media.

>>>>>> Different physical properties can be experienced differently, but that's not what supervenience is about. Rather it says that two physically identical brains experiencing the same song will have identical experiences.

>>>>> Identical is not possible, but the more similar one thing is physically to another, the more likely that their experiences will also be more similar.
That's not the only relevant issue though.

>>>>> It depends what the thing is. A cube of sugar compared to another cube of sugar is different than comparing twins or triplets of human beings. The human beings are elaborated to a much more unpredictable degree. It's not purely a matter of complexity and probability, there is more sensorimotive development which figures into the difference. We have more of a choice. Maybe not as much as we think, and maybe it's more of a feeling that we have more choice, but nevertheless, the feeling that smashing a person's head is different from smashing a coconut

>>>> I hope you don't speak from experience. ;-)

>>> If the universe was only arithmetic, what would be the difference?

>> The difference between a primitively physical universe or the difference between a coconut and a human's head?

> > The difference between committing murder and making a Piña Colada.

> I don't see how that question is any different than: this universe is only physics, so what is the difference between smashing a skull and smashing a coconut?

Right, that's the question. Why do we, as physical parts of the physical universe, 'care' more about one than the other? What physically cares, and why does it seem impossible in inanimate objects but vitally important to living things? If there is no physical difference between living beings and inorganic phenomena, why and how could it ever seem like there was?

> > What would that be though? What is similar to red but not a color?

> The experience of red has very little to do with photons of a certain frequency.

I agree. The experience of red has very little to do with anything except the ability to experience red. That's why I say it is not a computation. The experience is conducted through and modulated by computation, but those computations do not automatically produce the experience of red. They have to occur in something that can possibly see red to begin with (i.e. not a collection of milk bottles or a silicon chip).

>>>>> The suggestion of a mind is purely imaginary, based upon a particular interpretation of scientific observations.

>>>> When we build minds out of computers it will be hard to argue that that interpretation was correct.

>>> Ah yes. Promissory Materialism. Science will provide. I'm confident that the horizon on AGI will continue to recede indefinitely like a mirage, as it has thus far. I could be wrong, but there is no reason to think so at this point.

>> If you told any AI researcher in the 70s of the accomplishments from the links I provided they would break out the champagne bottles. The horizon is not receding, rather you are in the slowly warming pot not noticing it is about to boil.

> > I do think there is a lot of great science and technology coming out of it, but I think we are no closer to true artificial general intelligence than we were in 1975.

> We are creeping up the evolutionary tree. We are about at insects now, and coming up on mice. From fruit flies to mice is the same jump as from mice to cats, and from mice to cats is the same jump as from cats to humans.

We are making insect puppets, not insects. I like what Brent said about Big Dogs needing to find their own energy source and to produce Little Dogs.
That's a good criterion, although I still don't know that they will feel or understand what it is to be alive if the experiences of their components are not also the experiences of living organisms. Instead of just looking to move up the evolutionary tree, we should also focus on making a really good nanotech stem cell. That's if we want to create true AGI. I don't think we want that at all. We want servants. True AGI is not going to be our servant.

> > We just understand more about emulating certain functions of intelligence. When we approach it from a 1-p:3-p sense based model rather than a 3-p computation model, I think we will have the real progress which has eluded us thus far.

>>>> I think your analogy is in error. You cannot compare the strip of metal to the trillion cell organism. The strip of metal is like a red-sensing cone in your retina. It is merely a sensor which can relay some information. How that information is interpreted then determines the experience.

>>> Aren't you just reiterating what I wrote? "because a strip of metal is so different from a trillion cell living being"

>> What I mean is that the metal strip is not the mind, and should not be equated with one. It is more like a temperature sensitive nerve-ending. A thermostat with the appropriate additional computational functions could feel, sense, be aware, think, be conscious, care, etc.

> > or it could just compute and report a-signifying data.

> A modern day argument against the souls of those that are different from us.

Nah. There was never a reason to believe that their 'souls' were in any meaningful way similar to ours. They don't scream when you damage them. They don't grow or change or reproduce or evolve. You really think that the TV could be watching shows with you? Do envelopes read the mail they hold? Seriously, this line of thinking is sophomoric sophistry to me.

> > We know that it makes some difference, because diseases which change the flexibility of those tubes or permittivity of those filaments make differences in what we as a whole are capable of feeling. Why wouldn't it? Why would a machine executed in semiconductor glass be any more effective at reproducing the anguish of a suffering animal than a pile of finely chopped scallions would be at running a spreadsheet application? Why doesn't matter matter?

> Because function is what matters, not how many protons are stuck together in the pieces that make something.

No, it's much more the fact that they are protons and not neutrons that makes them something. Seventy-nine protons make gold. What do 79 neutrons make? What do 79 ping pong balls make? What does the equation 70+8+1 make? Nothing significant.

>>> "There cannot be a Microsoft Windows difference without an Intel chip difference". To say that Windows determines what the chip does you would say that Intel and AMD chips both supervene upon Windows. It seems backwards at first but it sort of makes sense, sort of a synonym for 'rely upon'. It's still kind of an odious and pretentious way to say something pretty straightforward, so I try to just say what I mean in simpler terms.

>> I see, it is defined confusingly.
>> I can also see it interpreted as follows: The state of the Microsoft Word program cannot change without a change in the state of the underlying computer hardware. But not all changes in the computer hardware correspond to changes in the state of the program.

> > Right, I can see that interpretation too. That's why I hate reading philosophy, haha.

>>>>>>>>> and reduces our cognition to an unconscious chemical reaction.

>>>>>>>> If I say all of reality is just a thing, have I really reduced it?

>>>>>>> It depends what you mean by a 'thing'.

>>>>>> Does it?

>>>>> Of course. If I say that an apple is a fruit, I have not reduced it as much as if I say that it's matter.

>>>> How you choose to describe it doesn't change the fact that it is an apple.

>>> I think the exact opposite. There is no such fact. It's only an apple to us. It's many things to many other kinds of perceivers on different scales. An apple is a fictional description of an intangible, unknowable concordance of facts.

>>>> Likewise, saying the brain is a certain type of chemical reaction does not devalue it. Not all chemical reactions are equivalent, nor are all arrangements of matter equivalent. With this fact, I can say the brain is a chemical reaction, or a collection of atoms. Neither of those statements is incorrect.

>>> I don't have a problem with that. You could also say the brain is a certain type of hallucination.

>>>>>>>> Explaining something in no way reduces anything unless what you really value is the mystery.

>>>>>>> I'm doing the explaining. You're the one saying that an explanation is not necessary.

>>>>>> Your explanation is that there is no explanation.

>>>>> Not really.

>>>> An explanation, if it doesn't make new predictions, should at least make the picture more clear, providing a more intuitive understanding of the facts.

>>> I think that mine absolutely does that.

>>>>>>>> Also, I don't think it is incorrect to call it an "unconscious chemical reaction". It definitely is a "conscious chemical reaction". This is like calling a person a "lifeless chemical reaction".

>>>>>>> Then you are agreeing with me. If you admit that chemical reactions themselves are conscious,

>>>>>> Some reactions can be.

>>>>>>> then you are admitting that awareness is a molecular sensorimotive property and not a metaphysical illusion produced by the brain.

>>>>>> Human awareness has nothing to do with whatever molecules may be feeling, if they feel anything at all.

>>>>> Then you are positing a metaphysical agent which supervenes upon molecules to accomplish feeling. (which is maybe why you keep accusing me of doing that).

>>>> Yes, the mind is a computation which does the feeling and it supervenes on the brain.

>>> Why does the computation need to do any feeling?

>> When a process is aware of information it must have awareness.

> > I can be aware of Chinese subtitles, but I have no awareness of Chinese.

> Okay.

> > A CD player can play a sad song for us, but that doesn't mean that it makes the CD player sad.
> I wouldn't expect it to.

So why expect a computer to be sad just because it's acting sad for us?

> > Every physical thing has some kind of 'awareness' or sensorimotive content, however primitive, but computation itself does not necessarily have its own existence. It's just a text in the context of our awareness. A cartoon character doesn't have any feelings.

> Do you really see no difference between a computer and a cartoon?

Sure I do, but I'm trying to show you that you don't see the similarities. It's a reductio ad absurdum to expose why the principle is flawed. The difference is just a degree of complexity that glamorizes the computation. If you make a really great simulation of a person - say that you import the human genome into The Game of Life, and in the course of your beta testing you end up with some human carnivores eating some human herbivores - should those carnivores be prosecuted for murder and cannibalism? Should you be arrested?
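As an aside, it is worth keeping in view how small the entire 'physics' of the Game of Life is. Here is a minimal Python sketch (my own illustration, not anything from the thread) of the update rule that any 'person' imported into Life would ultimately bottom out in:

from collections import Counter

def life_step(live_cells):
    """Map a set of live (x, y) cells to the next Life generation."""
    # Count how many live neighbors each candidate cell has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has exactly 3 live neighbors,
    # or 2 live neighbors and was already alive (Conway's B3/S23 rule).
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A 'glider': it appears to crawl across the grid, though nothing is
# ever happening but the counting rule above, applied over and over.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))  # the same glider shape, shifted one cell diagonally

Whatever one decides the simulated carnivores feel, this loop is the whole of what the simulator itself is doing.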
> > It can be seen to respond to its cartoon environment but it's not the awareness of the cartoon you are watching, it's the awareness of the cartoonist, the producer, the writer, the animator that you are watching.

>>> Why have we not seen a single information processing system indicate any awareness beyond that which it was designed to simulate?

>> Watson was aware of the Jeopardy clue being asked, was it not?

> > No. Watson is just a massive array of semiconductors eating power and crapping out zillions of hierarchically distilled results.

> Sounds a lot like a brain.

It is a lot like a brain, but it's nothing like a person. It's a model of the (external, public, generic, electromagnetic) material behaviors of the brain, not the (internal, private, proprietary, sensorimotive) experience through the brain's energy. It's a glass brain - all form and no content.

> > It's an intelliformed organization, not an intelligent organism. It doesn't care if it's right or wrong or how well it understands the clue,

> How well it understood it determined if it attempted an answer.

That's the consequence being mistaken for the cause. It doesn't understand anything, it just reports the findings of its search, like Google. We can project onto it through HADD/prognosia that there is some difference in how it feels when a query is more or less successful - we can imagine that a question is 'hard' or 'easy' for it, but that's just ventriloquism. All questions are easy for it. Some take longer, some take forever so it is programmed to give up. Either way it doesn't care, it will keep answering questions for no reason like an idiot until we unplug it.

> > it's just going to run its meaningless algorithms on the meaningless data it's being fed.

> What makes the data in its memory meaningless but the data in your brain meaningful?

What's in my brain isn't data, it's living neurochemistry. What's in my memory is signified residue of perceptual experience. What's in a computer is semiconductor electronics. What's in a computer memory is Boolean algebra. 'Data' is a semantic concept of logic that can be extracted from any of those forms, but the forms are not fully described by that arithmetic extraction. Each phenomenon is a specific text of sensorimotive-electromagnetism (energized matter) in a context of perceptual relativity (localized time). The context isn't a thing really, it's a channel of sense - an inertial frame which arises as a result of ingroup sensemaking on different ranges of scale which clump together, like colors in the visible spectrum appear to us. It's the de facto glue between things that make sense together despite their separation by space (matter) and time (energy, events, experiences). Each channel of sense overlaps to some extent with every other one, so that every inertial niche has a categorical range of stereotypes to present the otherness of other sense channels. We think machines are cold and soulless because the sense they seem to make is very different from the way we make sense, although that impression may be amplified beyond the objective difference, and it varies subjectively from person to person. We may not be able to prove that they *are* different from us, but we feel that they would be different because our sense channel bifurcates from silicon and steel at such a primitive level on the tree. Snakes freak some people out. Spiders for some. Prosthetic limbs. Different ways that what we are is reflected in what makes us uncomfortable. Some people embrace reptiles or insects. Some people prefer calculus problems over dancing.

> > No different from a doll that cries when you pick it up. There may be a mercury switch that detects being picked up, and there may be a chip that detects the mercury switch and plays the audio sample of crying, but there is no sense making going on between the two things. The doll as a whole doesn't know anything.

> The switch knows whether or not the circuit is completed by the mercury.

Yes! There is probably an experience of being 'on'. The presence of power. Although it's not clear if, while the circuit is switched on, it knows that there was ever any other state to be in. While it is off it probably has no anticipation of being turned on. Your computer isn't waiting for you to use it. Something like DNA is crazily more complex in what states it can be in - primary, secondary, tertiary properties, etc.
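To be concrete about how little is going on in the doll, here is a toy Python sketch (my own illustration; the hardware function names are hypothetical stand-ins) of its entire behavioral repertoire - a stateless mapping from switch to speaker, with nothing in between that could hold a model of 'being picked up':

# Toy sketch of the crying doll's logic. read_mercury_switch and
# play_audio are hypothetical stand-ins for the physical hardware.
def read_mercury_switch() -> bool:
    """Pretend hardware read: True when the doll is tilted/picked up."""
    return True  # stubbed for illustration

def play_audio(sample: str) -> None:
    print(f"playing: {sample}")  # stand-in for driving a speaker

def doll_loop_once() -> None:
    # The doll's whole 'mind': one conditional. No state, no history,
    # no model of what 'picked up' means - a closed circuit leads to a sample.
    if read_mercury_switch():
        play_audio("crying.wav")

doll_loop_once()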
>> The Herbivores in the simulation I posted yesterday are aware of nearby predators and their color.

> > They are designed to simulate something, so they do. How does that constitute indicating an awareness beyond their design?

> They weren't programmed to hide or flee from danger, but they do.

In a sense they were, just not explicitly. If they suddenly sang a song or burned a hole into the screen then that would indicate an awareness beyond their design.

>>> What kind of awareness does a book have without a reader? Information is something I used to assume could exist on its own, but now it's like a glaring red Emperor's New Clothes to me. A brick is nothing but 'information' and information is really the brick. Um, yeah. I understand the appeal, but it's a figment of a 21st century Occidental imagination.

>>>>> How does it come to affect physical things?

>>>> Because the aware systems we are familiar with are supervening on physical objects.

>>> So because awareness needs physical objects, that means objects are affected by awareness? But then somehow that doesn't mean that human awareness affects our neurological behaviors?

>> Changes in states of the mind are reflected by physical changes.

> > That's what I've been saying, but you insist that it's only changes in the mind which are reflections of physical changes and not the other way around. You say that if the mind's changes affect the physical processes then it has to be magic.

> No, if the mind does something against what the underlying laws suggest would happen, that would be a miracle. This is not to say the mind does not have physical effects. That's been my position all along.

So how do you describe how physical effects of the mind work? How does thinking about gambling affect the amygdala?

>>>>>>>>> If that were the case then you could never have a computer emulate it without exactly duplicating that biochemistry. My view makes it possible to at least transmit and receive psychological texts through materials as communication and sensation but your view allows the psyche no existence whatsoever. It's a complete rejection of awareness into metaphysical realms of 'illusion'.

>>>>>>>> I think you may be mistaken that computationalism says awareness is an illusion. There are some eliminative materialists who say this, but I think they are in the minority of current philosophers of mind.

>>>>>>> How would you characterize the computationalist view of awareness?

>>>>>> A process to which certain information is meaningful. Information is meaningful to a process when the information alters the states or behaviors of said process.

>>>>> What makes something a process?

>>>> Rules, change, self-reference.

>>> What makes something a rule,

>> Some invariant relation in some context.

> > So then invariance, relation, and context are more primitive than rules. Which is the same conclusion I reach. Those are actually synonyms for my three sense elements: invariance = essence, relation = existence (sense²), context = Sense (sense³). Invariance = sensorimotive coherence. Relation = sensorimotive-electromagnetic variance of coherence and incoherence. Context = Relativity of perception and perception of relativity. Inertial frames. The key difference though is that I see the primitive unit of sense as an *experience* from which the concept of invariance is derived. The experience has no name, it's just isness. Self. A non-computable vector of orientation.

>>> or a change,

>> When one thing varies with another.

>>> or a self, or a reference?

>> Self-reference is when one thing's definition refers to itself, recursively or iteratively.

> > Those describe the meaning of the terms, but not the physics of the phenomenon. How does a 'self' come to 'refer' to something?

> E.g.,
>
> F(n) = F(n-1) + F(n-2)
> F(0) = 1
> F(1) = 1

That's an equation to help us imagine a kind of quantitative self-reference, but how in the real world does F(n) come to equal anything other than F(n)? What is "=" in reality? I think that we can only conclude that = is in the eye of the beholder. It's a matter of subjective pattern recognition and not an objective mechanism. There is no function that causes one physical thing to 'refer to' another physical thing. It is perception that 'infers' the reference.
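As an aside, the recurrence quoted above runs as written - here is a minimal Python sketch (my illustration, not Jason's code). Note where the 'self-reference' is actually enacted: the interpreter resolves the name F at call time; nothing in the values refers to anything.

# Jason's recurrence, written out. F 'refers to itself' only in the
# sense that the evaluating system resolves the name F when called;
# the reference lives in the interpreter, not in the numbers.
def F(n: int) -> int:
    if n == 0 or n == 1:   # F(0) = 1, F(1) = 1
        return 1
    return F(n - 1) + F(n - 2)

print([F(n) for n in range(8)])  # [1, 1, 2, 3, 5, 8, 13, 21]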
>>>>> Are all processes equally meaningful?

>>>> No.

>>>>>>> What makes the difference between something that is aware and something that is not?

>>>>>> Minimally, if that thing possesses or receives information and is changed by it. Although there may be more required.

>>>>> We are changed by inputs and outputs all the time that we are not aware of.

>>>> There may be other conscious parts within us which are disconnected from the conscious part of us which does the talking and typing. For example, your cerebellum performs many unconscious calculations affecting motor control, but is it really unconscious? Perhaps its information patterns and processing are merely not connected to the part of the brain which performs speech. Similarly, a bisected brain becomes two minds by virtue of their disconnection from each other.

>>> I agree, but it doesn't explain why the inputs and outputs we are aware of are different from those we are not aware of.

>> For those we are not aware of, there is no integration into the computational state of high dimensionality which includes most of the functions and processes of the cortex.

> > Right, but what determines what gets integrated and what doesn't?

> Whether it is communicated, whether it is ignored, and if not, how the information is used and shared.

I think that's just rewording the question. Communication, use, and sharing is what integration is. I ask what determines what is integrated and what isn't, and you answer that what is integrated is integrated and what is ignored is not. It still leaves a big hole right where we live.

>>> Ok, but the Taj Mahal is just made of mainly stone. Either way the dynamics of either one won't ever get you closer to predicting the shape of the Taj Mahal than anything else.

>> The stone model doesn't describe those that designed or built it, while the atomic model would.

> > I don't follow. The atomic model predicts India?

> Yes, India is a possible outcome of the atomic model.

Oh I see, because India is made only of atoms and not only of stones. Still, I would hardly say that any model of atoms predicts anything other than basic chemical relations. I don't think even biology is a possible outcome of an atomic model. The event of life emerging redefines what is possible for molecules retroactively. Think of it like a hand. No model of the physiology of your hand is going to have a physical outcome of, say, communicating sign language. You can't anticipate that four fingers and an opposable thumb is going to wind up being useful as a communication method to creatures whose ears don't work well. It's a wild overconfidence in the power of theory, I think, to imagine that kind of reach in a concrete model of a particular phenomenon. Most every microcosmic physical phenomenon relates to atoms, but atomic phenomena don't describe every other phenomenon in the universe.

>>>>>>> Human consciousness is a specific Taj Mahal of sensorimotive-electromagnetic construction. The principles of its construction are simple, but that simplicity includes both pattern and pattern recognition.

>>>>>> Pattern and pattern recognition, information and information processing. Are they so different?

>>>>> Very similar yes, but to me information implies a-signifying

>>>> Could you define "a-signifying" for me?
>>> Meaning that the information has no meaning to the system processing it. A pattern of pits on a CD is a-signifying to the listener and the music being played is a-signifying to the stereo. In each case, fidelity of the text is retained, but the content of the text is irrelevant outside of the context of its appropriate system. A TV set isn't watching TV, it's just scanning lines. That's information. Handling data generically without any relevant experience.

>> This is the difference between a recording (or information being sent over a wire) compared to information being processed (in which it has particular meaning by virtue of the context and difference it makes in the processing).

> > You're still hallucinating 'information' into wires. There's no objective information there to the wire other than atomic collisions.

> Information is a physical concept, as Claude Shannon showed. In fact I think it is a more fundamental concept than physics.

Information has physical requirements, but isn't itself physical. I would say that what information refers to is as fundamental as physics, but not more. Sense is more fundamental than either physics or information. Information is essentially secondhand experience. Sensorimotive qualities which are intentionally treated as quantities.
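As an aside, here is what Shannon's measure actually quantifies - a minimal Python sketch (my own illustration) of entropy in bits per symbol. It measures statistical surprise in a distribution of symbols, and is indifferent to whether the symbols mean anything to anybody, which is arguably the point at issue:

import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    counts = Counter(message)
    total = len(message)
    return -sum(
        (c / total) * math.log2(c / total)
        for c in counts.values()
    )

msg = "the cat sat on the mat"
print(shannon_entropy(msg))
# The same symbols sorted into gibberish have identical entropy:
# Shannon's measure preserves no notion of 'meaning' at all.
print(shannon_entropy("".join(sorted(msg))))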
> > Information is just a way of saying external assistance to sense-making. Whether the text has meaning in a particular context or not depends on the relation between the two. A machine can't make sense of feelings, it can only make sense of its intended measurements in terms of objective measurements. There is no private subjectivity going on. It's all accessible publicly.

>> The self driving Google car's cameras which transmit the raw input data possess no meaning, but the software that determines that it sees a car, or a stop sign, generates meaning from this information. "Stop sign *means* we need to decelerate."

> > No, the software doesn't know what a car or a stop sign is,

> You don't know what a stop sign is.

Sure I do. It's in my native perceptual niche. It's designed expressly so that I will know what it is. My eyeballs don't know what it is. My foot doesn't know what it is, but I myself know exactly what it is supposed to be.

> > it just presents an instruction set to a microprocessor

> Your neurons just present a neurotransmitter to a neuron

A neurotransmitter doesn't know what a stop sign is either. The instruction set is the only definition of the stop sign that the computer has. They are opposites. We see the stop sign through the inside of our synapses, but a microprocessor has an arithmetic pattern imposed upon it externally which doesn't resemble any of its subjective references.

> > switches the circuit on that leads to the actuator

> Which causes a nerve to fire that leads to a muscle in your foot

No, it causes you to become aware of the necessity of moving your foot. You have to comply with that sensory input with a motive output of your own choosing (conditioned and reflexive as it may become, you can still voluntarily override conditioning and run the stop sign. The Google car can't do that).

> > that happens to lead to the accelerator (it could lead to a toaster or a nuclear missile). Optical patterns which satisfy the software's description of stop signs cause a circuit to close. There is no meaning or choice involved. Turning on a water faucet doesn't mean anything to the plumbing. There are consequences on a physical level, but not one that leads on its own to psychology.

> Now who is denying "the other side of the coin"? Not that I ever denied a first person perspective. Perhaps you think I am because you confuse my belief that computers can be conscious to imply we are as soulless as you believe computers to be.

I don't think that you believe people to be soulless, but I think that you would have to if you followed substance monism to its logical conclusion. I'm not denying that there is sensorimotive content to matter - I think that there has to be. I just think that there are channels of perception and that our naive apprehensions of foreign channels have some validity. It may be exaggerated, it may be stereotype, but the fact that it exists may be extremely important in understanding how all of this works. Why do we not care at all what a water faucet is thinking?

>>> A choice is being made from the 3-p view, but that isn't the one that matters. The computer has no knowledge of its choices. It's just executing an instruction set.

>> It does have knowledge. What you ascribe to having no knowledge of the decision is the underlying basis of the computation. Similarly, your neurons (individually) have no idea of what stock you are purchasing or selling at the time you do. Only you (the higher level process) does. It is the same with a computer-supported mind.

> > The difference is that our higher level processes arise autopoietically from our neurology. A computer has our higher level processes imposed on semiconductors which have no capacity to develop their own higher level processes - which is precisely why these kinds of materials are used. Making a computer out of living hamsters in a maze is not going to be very reliable. Hamsters have more of their own agenda. Their behavior is less predictable. Humans even more so.

> All processes need a reliable foundation, be they the physical laws or a chip's instruction set.

Any processes executed by a hamster do not, apparently, need as reliable a foundation as do Boolean logic computers. This difference must be addressed. If everything is founded on reliability, why and how does anything become unreliable?

Craig

