(I'm sorry that I made some unclear statements on semantics/meaning; I'll probably get to a description of this perspective later on the blog (or maybe it'll become obsolete before that), but it's a long story, and writing it up on the spot isn't an option.)
On Sat, Nov 15, 2008 at 2:18 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Taking the position that consciousness is an epiphenomenon and is therefore
> meaningless has difficulties.

Rather, p-zombieness in an atom-by-atom identical environment is an epiphenomenon.

> By saying that it is an epiphenomenon, you actually do not answer the
> questions about intrinsic qualities and how they relate to other things in
> the universe. The key point is that we do have other examples of
> epiphenomena (e.g. smoke from a steam train),

What do you mean by smoke being epiphenomenal?

> but their ontological status
> is very clear: they are things in the world. We do not know of other
> things with such puzzling ontology (like consciousness), that we can use as
> a clear analogy, to explain what consciousness is.
>
> Also, it raises the question of *why* there should be an epiphenomenon.
> Calling it an E does not tell us why such a thing should happen. And it
> leaves us in the dark about whether or not to believe that other systems
> that are not atom-for-atom identical with us, should also have this
> epiphenomenon.

I don't know how to parse the word "epiphenomenon" in this context. I use it to describe reference-free, meaningless concepts, so you can't say that some epiphenomenon is present here or there; that would be meaningless.

>> Jumping into a molecular framework as describing human cognition is
>> unwarranted. It could be a description of an AGI design, or it could be a
>> theoretical description of more general epistemology, but as presented
>> it's not general enough to automatically correspond to the brain.
>> Also, the semantics of atoms is tricky business; for all I know, it keeps
>> shifting with the focus of attention, often dramatically. Saying that
>> "self is a cluster of atoms" doesn't cut it.
>
> I'm not sure of what you are saying, exactly.
> The framework is general in this sense: its components have *clear*
> counterparts in all models of cognition, both human and machine. So, for
> example, if you look at a system that uses logical reasoning and bare
> symbols, that formalism will differentiate between the symbols that are
> currently active, and playing a role in the system's analysis of the world,
> and those that are not active. That is the distinction between foreground
> and background.

Without a working, functional theory of cognition, this high-level descriptive picture has little explanatory power. It might be a step towards developing a useful theory, but it doesn't explain anything. There is a set of states of mind that correlates with the experience of apples, etc. So what? You can't build a detailed edifice on general principles and claim that far-reaching conclusions apply to the actual brain. They might, but you need a semantic link from the theory to the described functionality.

> As for the self symbol, there was no time to go into detail. But there
> clearly is an atom that represents the self.

*shrug* That only stands as a definition: there is no "self"-neuron, or anything else easily identifiable as "self"; it's a complex thing. I'm not sure I even understand what "self" refers to subjectively. I don't feel any clear focus of self-perception; my experience is filled with thoughts on many things, some of them involving management of the thought process, some of them external concepts, but there's no unified center to speak of...

>> Bottoming out the explanation of experience is a good answer, but you
>> don't need to point to specific moving parts of a specific cognitive
>> architecture to give it (I don't see how it helps with the argument).
>> If you have a belief (generally, a state of mind), it may indicate
>> that the world has a certain property, that the world having that property
>> caused you to have this belief, or it can indicate that you have a
>> certain cognitive quirk that caused this belief, a loophole in
>> cognition. There is always a cause; the trick is in correctly
>> dereferencing the belief.
>> http://www.overcomingbias.com/2008/03/righting-a-wron.html
>
> Not so fast. There are many different types of "mistaken beliefs". Most of
> these are so shallow that they could not possibly explain the
> characteristics of consciousness that need to be explained.
>
> And, as I point out in the second part, it is not at all clear that this
> particular issue can be given the status of "mistaken" or "failure". It
> simply does not fit with all the other known examples of "failures" of the
> cognitive system, such as hallucinations, etc.
>
> I think it would be intellectually dishonest to try to sweep it under the rug
> with those other things, because those are clearly breakdowns that, with a
> little care, could all be avoided. But this issue is utterly different: by
> making the argument that I did, I think I showed that it was a kind of
> "failure" that is intrinsic to the design of the system, and not avoidable.
>
> Part 2 of the paper is, I agree, much more subtle. But I think it is
> important.

The point is that, in general, any experience at all can be reduced to its causal history, and can be given the semantics of a model of that history. This applies to correct beliefs, to shallow errors, and to the deepest mysteries of subjective experience. It's a blanket explanation: it doesn't go into the details of said causal histories, or of the models of different kinds of experience, but it's an important point to keep in mind, to avoid saying that some kinds of experience are inherently mysterious or unexplainable, or beyond the reach of science.
The argument answers this particular limitation, and isn't intended to explain away specific characterizations of different kinds of experience.

>> Subjective phenomena might be unreachable for meta-introspection, but
>> that doesn't place them on a different level, making them "unanalyzable";
>> you can in principle inspect them from outside, using tools other than
>> one's mind itself. You yourself just presented a model of what's
>> happening.
>
> No, I don't think so. Most philosophers would ask you what you meant by
> "inspecting them from outside", and then when you gave an answer they would
> say that you had changed the subject to a Non-Hard aspect of consciousness.

Maybe I have, and maybe they didn't have a meaningful explanation of what the hard problem is, or that it exists at all.

> Now, what I did was not to inspect them from the outside, but to
> *circumscribe* them. I did not breach the wall of subjectivity, did I? I
> do not think anyone can.

I think the trick is that meaning can't escape from the frame of reference of a mind, but you can describe any phenomenon, including subjective ones, from a physical level, making it effectively objective, even if in principle you can't ground objectivity completely (but that's a problem on the level of obtaining absolute certainty in something, not really a practical issue). You can start from a subjective frame of reference, present your subjective experience in it, then present the semantics of the physical world in the same basis, and convert the meaning of experience to the semantics of the physical world. It's counterintuitive, since the descriptions are so different, but they are descriptions of the same event, by construction. So, you can't break out of subjectivity, but in the same sense everything is subjective, including objectivity. Objectivity provides a different basis, from which again nothing can break out, including subjectivity.
>> Meaning/information is relative: it can be represented within a basis,
>> for example within a mind, and communicated to another mind. Like
>> speed, it has no absolute, but the laws of relativity, of conversion
>> between frames of reference, between minds, are precise and not
>> arbitrary. Possible-worlds semantics is one way to establish a basis,
>> allowing concepts to be communicated, but maybe not a very good one.
>> Grounding in a common cognitive architecture is probably a good move,
>> but it doesn't have fundamental significance.
>
> This is a deeper issue than we can probably address here. But the point
> that an Extreme Cognitive Semanticist would make is that the System Is The
> Semantics.
>
> That is very different from claiming that some other semantics exists,
> except as a weak approximation. Possible-worlds semantics is incredibly
> weak: it cannot work for most of the concepts that we use in our daily
> lives, and that is why there are whole books on Cognitive Semantics, such as
> the one I referenced.

Stopping the recursive buck is important here, but when you are describing a model of semantics, you can't escape from presenting information as ultimately interpreted by you. Most of the internal semantics, of the details present in the model, can be closed within the described mind, stopping the regress, but understanding the mind would require linking at least some of the semantics of what's going on inside it, at the level of physical processes maybe, to what you understand when you describe it.

>> The "predictions" are not described carefully enough to appear to
>> follow from your theory. They use some terminology, but at a level
>> that allows literal translation into the language of perceptual wiring,
>> with a correspondence between qualia and the areas implementing
>> modalities/receiving perceptual input.
>
> I agree that they could be better worded, but do you not think the intention
> is clear?
> The intention is that, in the future, we look for the analysis
> mechanisms, and then we look for the boundaries beyond which they cannot go.
> At that point we conduct our test.

No, I don't see that. To me, it sounds like suggesting to test general relativity by throwing apples and measuring their trajectories with a clepsydra.

>> You didn't argue about the general case of AGI, so how does it follow
>> that any AGI is bound to be conscious?
>
> But I did, because I argued that there will always be an "analysis
> mechanism" that allows the system to unpack its own concepts. Even though I
> gave a visualization of how it works in my own AGI design, that was just for
> convenience, because exactly the same *type* of mechanism must exist in any
> AGI that is powerful enough to do extremely flexible things with its
> thoughts.
>
> Basically, if a system can "reflect" on the meanings of its own concepts, it
> will be aware of its consciousness.
>
> I will take that argument further in another paper, because we need to
> understand animal minds, for example.

It's hard and iffy business trying to recast a different architecture in a language that involves these bottomless concepts and qualia. How do you apply your argument to AIXI? It doesn't map even onto my own design notes, where the architecture looks much more like yours, with elements of description flying around and composing a scene or a plan (in one of the high-level perspectives). In my case, the problem is that the semantics of the elements of description is too fleeting and context-dependent, and that the description is not hierarchical, so that when you get to the bottom, you find yourself at the top, in a description of the same scene now seen from a different aspect. Inference goes across the events in the environment+mind system considered in time, so there is no intuitive counterpart to unpacking; it all comes down to inference over events: what connects to what, what can be inferred from what, what indicates what.
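P.S. On possible-worlds semantics being weak: the point can be made concrete in a few lines. This is my own toy illustration (not from Richard's paper): identify a proposition with the set of worlds where it holds, and entailment with the subset relation. The construction works, but it also shows the standard objection, since any two propositions true in exactly the same worlds become the *same* object.

```python
# A minimal sketch of possible-worlds semantics: a proposition is the set
# of worlds where it holds, and entailment is the subset relation.
from itertools import product

# Worlds: every assignment of truth values to two atomic facts.
atoms = ("it_rains", "street_wet")
worlds = [dict(zip(atoms, values)) for values in product([False, True], repeat=2)]

def proposition(predicate):
    """The set of world-indices where the predicate holds."""
    return frozenset(i for i, w in enumerate(worlds) if predicate(w))

rains = proposition(lambda w: w["it_rains"])
wet = proposition(lambda w: w["street_wet"])

def entails(p, q):
    # p entails q iff every p-world is a q-world.
    return p <= q

print(entails(rains & wet, wet))  # True: a conjunction entails its conjunct
print(entails(wet, rains))        # False: a wet street needn't mean rain

# The weakness: propositions true in the same worlds are identical, so all
# necessary truths collapse into one object, losing exactly the distinctions
# between concepts that a cognitive semantics would need to keep apart.
tautology1 = proposition(lambda w: w["it_rains"] or not w["it_rains"])
tautology2 = proposition(lambda w: w["street_wet"] or not w["street_wet"])
print(tautology1 == tautology2)   # True, though the sentences differ
```

So as a basis for communicating concepts it captures entailment cheaply, but it can't distinguish coextensive concepts, which I take to be part of Richard's objection.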
--
Vladimir Nesov [EMAIL PROTECTED]
http://causalityrelay.wordpress.com/
