Russell, this is my personal way of thinking, a realization of the continual epistemic enrichment that earlier authors missed. I do not vouch for the correctness of my ideas; they are a level in an advancement I found followable in view of the latest epistemic additions in a continuously changing world(view).

Self-awareness is definitely at the level of human complexity. I like to realize that lower levels of complexity (maybe starting down from apes?) can have 'awareness' and consciousness as well. When I arrived at my generalized formula of "response to information", it also involved a conscious acknowledgement of something outside the 'self', without a postulate of 'self'. E.g. a figment of a physical entity can 'react' to a relation affecting its 'relations' (an ion vs. an electrode) without being "humanly aware" of Ohm's laws. So it IS (physically?) conscious, and the response calls for its consciousness.

If we measure our own complexity (in the mental aspect) by the number of neurons participating in the thought-process (say >10^11 per brain), then we may question our words if a brain has less than that - if ANY. "Life processes" (a name for which I would like to get a good handle) involve 'aware' observations (e.g. 'light' upon microorganisms, etc.) and respond to them without having neurons. This is in disagreement with the restriction of consciousness by Dennett and others to a human-only identification (human self-aggrandizement), and I go for a 'general' term on the development of ideas AFTER the level assigned by Dennett et al.
As for the ominous authentic-autonomous-(automatic?) "FREE WILL"? Whatever we 'decide' FREELY(?) is a consequence of circumstances, with two factors applied from within:

1. the way our 'mind' (thinking, mentality, you name it) works by the genetic build of the TOOL (the neuronal brain) we use for this domain, and
2. the sum total of our experience (memory?) acquired before such a decision is made, accumulated over our lifetime. (If you like: add inherited memes to it.)

We can disregard #1 and #2 and *'for other reasons'* decide differently, but that would not be FREE: it would be forced upon us by those *other reasons* (upon our FREE? - better? - choice). #1 works many times (sub)consciously; #2 many times provides unformulated changes to naturally occurring (free?) choices even without our conscious involvement. The 'robot-like' free will is not likely in a 'non-robot-like' thinking person with memory etc.

I submit these ideas without a claim to defend them. They are not theories (which, btw, are applicable only as long as the further epistemic evolvement does not deem them obsolete/false).

John Mikes

On Tue, May 3, 2011 at 11:29 AM, Stephen Paul King <stephe...@charter.net> wrote:
> Hi Russell,
>
> *From:* Russell Standish <li...@hpcoders.com.au>
> *Sent:* Monday, May 02, 2011 7:25 PM
> *To:* email@example.com
> *Subject:* Re: Max Substitution level = Min Observer Moment?
>
> Stephen King wrote:
>
> >> PS, to Russell: I think that you are conflating consciousness
> >> with self-awareness in section 9.5 of your book. The two are not
> >> the same thing. Consciousness is purely passive. Self-awareness
> >> is active in that it involves continuous modeling (passive
> >> consciousness) together with the continuous act of choosing
> >> between alternatives (free will).
>
> I missed this comment earlier.
> It surprised me, as I do not conflate the two (self-awareness requires
> consciousness, but consciousness without self-awareness is at least
> conceivable).
>
> Section 9.5 is about evolutionary explanations of self-awareness and
> free will, so it is not about the more general phenomenon of
> consciousness at all. However, maybe you thought I was conflating the
> two in the very last sentence: "I can conclude in agreement with
> Dennett that consciousness is an extremely rare property in the animal
> kingdom". If this were based only on section 9.5, then it would be
> overreaching conclusions from the mirror test and Machiavellian
> theory. But I also base the comment on the anthropic "ants are not
> conscious" argument (section 5.4), which is about the more general
> concept of consciousness, not just self-awareness; in addition, the
> Occam catastrophe argument (page 84) leads to a (tentative) conclusion
> that self-awareness is actually required for consciousness after all.
> It is one of the more contentious conclusions of the book, so I'm
> happy for that to be pulled apart and debated.
>
> --
> ----------------------------------------------------------------------------
> Prof Russell Standish                  Phone 0425 253119 (mobile)
> Principal, High Performance Coders
> Visiting Professor of Mathematics      hpco...@hpcoders.com.au
> University of New South Wales          http://www.hpcoders.com.au
> ----------------------------------------------------------------------------
>
> Umm, no. It was not just the last sentence. I would like to expand on
> your thought process that led to that conclusion. AFAIK, my idea of
> bare consciousness does not involve any form of selection (other than
> the 1p appearance of collapse of superpositions) and should apply even
> to a quark!
> I am taking an idea from Chalmers and pushing it to see where and if
> it breaks down: "that consciousness is a fundamental property
> ontologically autonomous of any known (or even possible) physical
> properties, and that there may be lawlike rules, which he terms
> 'psychophysical laws', that determine which physical systems are
> associated with which types of qualia."
>
> As I see it, any entity that has continuation in time has
> consciousness. I know that I run the risk of crack-pottery, but let's
> go back over the "Ants are not Conscious" argument. Your argument
> follows the form of the Doomsday argument, which I have serious doubts
> about per QTI, but setting that aside, why does the complexity of the
> organism not factor into the consideration of the presence or absence
> of qualia in ants? AFAIK, Bruno has argued effectively that amoebae
> are conscious... The factor that I think the argument ignores (as does
> the Doomsday argument) is the ability to communicate and the level of
> complexity that can be communicated.
>
> When we consider the question "what is our expected body mass if we
> are randomly sampled from the reference class of conscious beings?",
> are we covertly selecting only that subsample of entities that we
> would consider conscious, say per a Turing test, like ourselves? What
> would be an appropriate Turing test for an ant? How does body mass
> inform us of the possession of qualia? Surely very small body mass
> limits the number of brain states that we can claim supervene
> mentality, but what is that considering? Are we missing something,
> perhaps, in thinking that only a certain size of brain or number of
> interconnected neurons supervenes consciousness? I think we might be
> making the inverse of Searle's mistake of the Chinese room!
>
> I think that we need to nail down exactly what we mean by
> "consciousness". The definition of "possesses qualia" is only the
> first step.
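[The self-sampling step behind the "ants are not conscious" argument can be sketched numerically. This is a toy illustration only; the population and body-mass figures below are rough order-of-magnitude assumptions of mine, not taken from the thread or from Russell's book.]

```python
# Toy self-sampling calculation: expected body mass of a randomly
# sampled conscious being, with and without ants in the reference class.
# All figures are rough order-of-magnitude assumptions for illustration.

ANTS = 1e15        # assumed number of living ants
HUMANS = 1e10      # assumed number of living humans
ANT_MASS = 3e-6    # kg, assumed typical ant
HUMAN_MASS = 60.0  # kg, assumed typical human

def expected_mass_and_p_human(ants_conscious: bool):
    """Return (expected body mass of a random conscious being,
    probability that the sampled being is human)."""
    pop = [(HUMANS, HUMAN_MASS)]
    if ants_conscious:
        pop.append((ANTS, ANT_MASS))
    total = sum(n for n, _ in pop)
    e_mass = sum(n * m for n, m in pop) / total
    return e_mass, HUMANS / total

# If ants count as conscious, a randomly sampled conscious observer is
# almost certainly an ant, so finding ourselves human-sized is surprising;
# excluding ants removes the surprise. That is the shape of the argument.
e_w, p_w = expected_mass_and_p_human(True)
e_wo, p_wo = expected_mass_and_p_human(False)
print(f"ants conscious: E[mass]={e_w:.2e} kg, P(human)={p_w:.1e}")
print(f"ants excluded:  E[mass]={e_wo:.2e} kg, P(human)={p_wo:.1f}")
```

Of course, this sketch already presupposes the naive counting measure over observers that Stephen is questioning above.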
> I need to read Nagel's papers some more ...
>
> Onward!
>
> Stephen
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To post to this group, send email to firstname.lastname@example.org.
> To unsubscribe from this group, send email to
> everything-list+unsubscr...@googlegroups.com.
> For more options, visit this group at
> http://groups.google.com/group/everything-list?hl=en.