Re: Consciousness is information?

2009-05-17 Thread Brent Meeker

Kelly Harmon wrote:
 I think you're discussing the functional aspects of consciousness.  AKA,
 the easy problems of consciousness.  The question of how human
 behavior is produced.
 
 My question was: what is the source of phenomenal consciousness?
 What is the absolute minimum requirement which must be met in order
 for conscious experience to exist?  So my question isn't HOW human
 behavior is produced; instead I'm asking why the mechanistic
 processes that produce human behavior are accompanied by subjective
 first-person conscious experience.  The hard problem.  Qualia.
 
 I wasn't asking how it is that we do the things we do, or how this
 came about, but instead: given that we do these things, why is there
 a subjective experience associated with doing them?

Do you suppose that something could behave just as humans do yet not be 
conscious, i.e. could there be a philosophical zombie?

 
 So none of the things you reference are relevant to the question of
 whether a computer simulation of a human mind would be conscious in
 the same way as a real human mind.  If a simulation would be, then
 what are the properties that those two very dissimilar physical
 systems have in common that would explain this mutual experience of
 consciousness?

The information processing?

Brent


 
 
 
 On Sat, May 16, 2009 at 3:22 AM, Alberto G.Corona agocor...@gmail.com wrote:
 No. Consciousness is not information. It is an additional process that
 handles its own generated information. If you don't recognize the
 driving mechanism towards order in the universe, you will be running
 on empty. This driving mechanism is natural selection. Things get
 selected, replicated, and selected again.

 In the case of humans, some time ago evolutionary psychologists and
 philosophers (Dennett etc.) discovered the evolutionary nature of
 consciousness, which is twofold: for social animals, consciousness keeps
 an updated image of how the others see us. This ability is
 very important in order to plan future actions with or towards other
 members. A memory of past actions, favors and offenses is kept in
 memory for consciousness processing.  This is a part of our moral
 sense, that is, our navigation device in the social environment.
 Additionally, by reflection on ourselves, the consciousness module can
 discover the motivations of others.

 The evolutionary steps for the emergence of consciousness are: 1) in
 order to optimize the outcome of collaboration, a social animal starts
 to look at the others as unique individuals, and memorizes its own
 record of their actions. 2) Because the others do 1, the animal develops
 a sense of itself and records how each one of the others sees him
 (this is adaptive because of 1). 3) The primitive conscious module
 evolved in 2 starts to inspect, and later even take control of,
 some actions with a deep social load. 4) The conscious module
 attributes to an individual moral self every action triggered by the
 brain, even if it is driven by low instincts, just because that is the
 way the others see him as an individual. That's why we feel ourselves
 to be unique individuals with an indivisible Cartesian mind.

 The consciousness ability is fairly recent in evolutionary terms. This
 explains its inefficient and sequential nature. This, together with 3,
 explains why we feel anxiety in some social situations: the cognitive
 load is too much for the conscious module when it tries to take control
 of the situation while self-image is at stake. This also explains why,
 when we travel, we feel a kind of liberation: the conscious module is
 made irrelevant outside our social circle, so our more efficient lower-
 level modules take care of our actions.
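
A minimal sketch of steps 1 and 2 above as a toy reputation model, in
Python. The agent structure and update rules here are illustrative
assumptions, not a model anyone in this thread proposed:

  # Toy model of steps 1 and 2: each agent keeps its own record of the
  # others' actions (step 1) and a running estimate of how the others
  # see it (step 2). All update rules are illustrative assumptions.
  import random

  class Agent:
      def __init__(self, name):
          self.name = name
          self.record = {}       # step 1: remembered favors/offenses per individual
          self.self_image = 0.0  # step 2: estimate of how the others see me

      def observe(self, other_name, favor):
          self.record[other_name] = self.record.get(other_name, 0) + (1 if favor else -1)

  def interact(actor, target, rng):
      # The actor favors agents whose remembered record is non-negative,
      # with a little defection noise so offenses actually occur.
      favor = actor.record.get(target.name, 0) >= 0 and rng.random() > 0.2
      target.observe(actor.name, favor)            # step 1: the target remembers the action
      target.self_image += 0.1 if favor else -0.1  # step 2: how I am treated shapes my self-image

  rng = random.Random(1)
  agents = [Agent("a%d" % i) for i in range(4)]
  for _ in range(50):
      actor, target = rng.sample(agents, 2)
      interact(actor, target, rng)
  for a in agents:
      print(a.name, round(a.self_image, 1), a.record)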


 
  
 





Re: Consciousness is information?

2009-05-17 Thread Alberto G.Corona

The hard problem may be unsolvable, but I think it would be even more
intractable if we don't first fix the easy problem, wouldn't it? With a
clear idea of the easy problem it is possible to infer something about
the hard problem:

For example, the latter is a product of the former, because we
perceive things that have (or had) relevance in evolutionary terms.
Second, the unitary nature of perception matches well with the
evolutionary explanation: my inner self is a private reconstruction,
for fitness purposes, of how others see me, as a unit of perception
and purpose, not as a set of processors, motors and sensors, although,
analytically, we are so. Third, the machinery of this constructed
inner self sometimes takes control (i.e. we feel ourselves capable of
free will) whenever our acts would impact the image that others may
have of us.

If these conclusions are all at the easy level, I think that we have
solved a few of the moral and perceptual problems that have puzzled
philosophers and scientists for centuries. Relabeling them as easy
problems the instant after an evolutionary explanation of them has
been aired is preposterous.

Therefore I think I have answered your question: it's not only
information; it's about a certain kind of information and its own
processor. The exact nature of this processor that permits qualia is
not known; that's true, and it's good from my point of view because,
on one side, the unknown is stimulating and, on the other,
reductionist explanations for everything, like mine above, are a
bit frustrating.



Re: Consciousness is information?

2009-05-17 Thread John Mikes
Let me please insert my remarks into this remarkable chain of thoughts below
(my inserts in bold)
John M

On Sun, May 17, 2009 at 2:03 AM, Brent Meeker meeke...@dslextreme.com wrote:


 Kelly Harmon wrote:
  I think you're discussing the functional aspects of consciousness.  AKA,
  the easy problems of consciousness.  The question of how human
  behavior is produced.


*I believe it is a 'forced artifact' to separate any aspect of a complex
image from the entire 'unit' we like to call 'conscious behavior'. In our
(analytical) view we regard the 'activity' as separate from the initiation
and the process resulting from it through decision(?) AND the assumed
maintaining of the function. *


 
  My question was: what is the source of phenomenal consciousness?
  What is the absolute minimum requirement which must be met in order
  for conscious experience to exist?  So my question isn't HOW human
  behavior is produced; instead I'm asking why the mechanistic
  processes that produce human behavior are accompanied by subjective
  first-person conscious experience.  The hard problem.  Qualia.


*We are 'human' concentrated and slanted in our views. *
*Extending it not only to other 'conscious' animals, but to phenomena in the
so (mis)called 'inanimate' - and reversing our logical habit (see below to
Brent) brings up different questions so far not much discussed. The 'hard
problem' is a separation in the totality of the phenomenon  -*
*[from its physical/physiological observation within our so far
outlined  figment of viewing the 'physical world' separately and its
reduced, conventional ('scientific')  explanations] - *
* into assuming (some) undisclosed other aspects of the same complex. From
'quantized' into some 'qualia'. *

 
  I wasn't asking how it is that we do the things we do, or how this
  came about, but instead: given that we do these things, why is there
  a subjective experience associated with doing them?


*And we should ask exactly what you weren't asking.*


Brent Meeker:
 Do you suppose that something could behave just as humans do yet not be
 conscious, i.e. could there be a philosophical zombie?


*Once we consider the totality of the phenomenon and do not separate aspects
of the complexity, the zombie becomes a meaningless artifact of the
primitive ways our thinking evolved.*


 Kelly:
 
  So none of the things you reference are relevant to the question of
  whether a computer simulation of a human mind would be conscious in
  the same way as a real human mind.  If a simulation would be, then
  what are the properties that those two very dissimilar physical
  systems have in common that would explain this mutual experience of
  consciousness?


*A fitting computer simulation would include ALL aspects involved - call it
mind AND body, 'physically' observable 'activity' and 'consciousness as
cause' -- but alas, no such thing so far. Our embryonic machine with its
binary algorithms, driven by a switched on (electrically induced) primitive
mechanism can do just that much, within the known segments designed 'in'. *
*What we may call 'qualia' is waiting for some analogue comp, working
simultaneously on all aspects of the phenomena involved (IMO not practical,
since there cannot be a limit drawn in the interrelated totality, beyond
which relations may be irrelevant). *


 Brent:
 The information processing?


*Does that mean a homunculus, that 'processes' the (again separated) aspect
of 'information' into a format that fits our image of the aspectwise
formulated items? *
*What I question is the 'initiation' and 'maintenance' of what we call the
occurrence of phenomena. We do imagine a 'functioning' world where
everything just does occur, observed by itself and in no connection to the
rest of the world. *
*I am looking for 'relations' that 'influence' each other into aspects we
consider as 'different' (from what?) and call such relational
interconnectedness the world. *
*We are far from knowing it all, even further from any 'true' understanding,
so we fabricated, in our epistemic enrichment over the millennia, a stepwise
approach to 'explain' the miracles. *
*Learning of acknowledged(?) relational aspects (call it decision-making?)
and realization of ramifications upon such (call it process, function,
activity) is the basis of our (now still reductionistic) physical
worldview.  *
*Please excuse my hasty writing on premature ideas I could not detail out or
even justify, using inadequate old words that should be replaced by a fitting
vocabulary. ((Alberto (below) even mentions 'memory' - that could as well be
a re-visiting of relations in the a-temporal totality view we coordinate as
a time - space physics.)) *



 Brent

*John M*

 

Re: Victor Korotkikh

2009-05-17 Thread John Mikes
I read in this exchange: "I have a problem with infinite time" (or
something of that meaning).

Since IMO time is an auxiliary coordinate to 'order the view' from the inside
of this (our) universe, and in view of the partial knowledge we have so far
obtained about it, it is (our?) choice HOW we construct our concept of that
'time'.
Reminds me of my son, who - at 5 - did not dare to fall asleep because of
the 'sorcerers' he learned about in Kindergarten, afraid that they would
come up in dreamland. So I said: "You little stupid kid, why don't you
choose a dreamland in which there are NO sorcerers?" He looked at me, said
OK, and sweetly went to sleep.

We can change our idea of time into a format in which there is no problem
with its infinity. (Maybe not so easy, but who said 'everything' is easy?) In
my 'narrative' about the world I have problems with how to handle the
timeless (a-temporal) world and its concepts. I cannot 'change' the no-time
into another one.
G

John M

(PS: also waiting for a 'readable' new version of UDA).  JM



On Sat, May 16, 2009 at 7:44 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 Hi Ronald,


 On 15 May 2009, at 14:25, ronaldheld wrote:

 
  Bruno:
   I will wait for your most recent UDA to be posted here.

 All right.



 
   I have problems with infinite time and resources for your
  computations, if done in this physical Universe.

 Sure. Note that I use unbounded physical resources only in step
 seven, to make the argument smoother, but step 8 eliminates the
 need for that assumption. All you have to believe is that a
 mathematical Turing machine either stops or does not stop.


 Best,

 Bruno



 
 
 
  On May 14, 12:22 pm, Bruno Marchal marc...@ulb.ac.be wrote:
  Ronald,
 
  On 14 May 2009, at 13:19, Ronald (ronaldheld) wrote:
 
  Can you explain your Physics statement in more detail, in a way I can
  understand?
 
  UDA *is* the detailed explanation of that physics statement. So it
  would be simpler if you could tell me at which step you have a
  problem
  of understanding, or an objection, or something. You can search UDA
  in
  the archives for older or more recent versions,  or read my SANE2004
  paper:
 
  http://iridia.ulb.ac.be/~marchal/publications/
  SANE2004MARCHALAbstract...
 
  In a nutshell, the idea is the following. If we are machines we are
  duplicable. If we distinguish first persons by their personal
  memories, sufficiently introspective machines can deduce that they
  cannot predict with certainty their personal future in either self-
  duplication experiences, or in many-identical-states preparations like
  a concrete universal dovetailer would generate all the time.
  So, if you are concretely in front of a concrete universal dovetailer,
  with the guarantee it will never stop (in some steady universe à-la
  Hoyle for example), you are in a high state of first person
  indeterminacy, given that the universal dovetailer will execute all
  the computations going through your actual state. Sometimes I have to
  recall step 5 to help the understanding here. In that state,
  from a first person perspective you don't know to which computational
  history you belong, but you can believe (as far as you are willing to
  believe in comp) that there are infinitely many of them. If you agree
  to identify a history by its infinite steps, or if you accept the Y =
  || principle (that if a story bifurcates, Y, you multiply its
  similar comp-past, so Y gives ||), then you can understand that the
  cardinal (number) of your histories going through your actual state is
  2^aleph_zero. It is a continuum. Of course you can first-person
  distinguish only an enumerable quotient of it, and even just a finite
  part of that enumeration.  Stable consciousness needs deep stories
  (very long yet redundant stories; deep in Bennett's sense) and a
  notion of linear multiplication of independent stories.
  Now the laws of arithmetic provide exactly this, and so you can, with
  OCCAM, just jump to AUDA, but you have to study one or two books of
  mathematical logic and computer science before. (The best are Epstein
  & Carnielli, or Boolos, Burgess and Jeffrey.)
  Or, much easier, but not so easy, meditate on the eighth step of UDA,
  which shows that from their first-person point of view universal machines
  cannot distinguish real from virtual, but they cannot distinguish
  real from arithmetical either, so that the arithmetical realm
  defines the intrinsic first person indeterminacy of any universal
  machine. Actually the eighth step shows that comp falsifies the usual
  mind/physical-machine identity thesis, but it does not falsify a
  weaker mind/many-mathematical-machines thesis.
 
  If interested, I suggest you study UDA in SANE2004 and ask any
  questions, or find a flaw, etc.
  (or wait for a more recent version I have yet to put on my page)
 
  Thanks for the reference to Kent's paper (it illustrates very well
  the
  knotty problems you get into when you keep Everett, materialism 
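
For reference, a universal dovetailer is just an interleaving: start
program 0, then in each phase start the next program and give one more
step to every program already started, so every program, halting or
not, receives unboundedly many steps. A minimal Python sketch, with toy
generators standing in for a real enumeration of all programs (an
illustrative assumption):

  # Minimal dovetailer sketch: in phase n, start program n and advance
  # every already-started program by one step, so no non-halting
  # program can starve the others.
  from itertools import count

  def program(i):
      # Toy i-th program: runs forever if i is even, halts after i steps if odd.
      steps = count() if i % 2 == 0 else iter(range(i))
      for s in steps:
          yield (i, s)

  def dovetail(phases):
      running = []
      for n in range(phases):
          running.append(program(n))
          for p in list(running):
              try:
                  print("program %d at step %d" % next(p))
              except StopIteration:
                  running.remove(p)   # this program halted

  dovetail(5)   # cutoff for demonstration; the real dovetailer never stops

Run without the cutoff, every computation, including every one passing
through a given machine state, eventually gets executed arbitrarily
far, which is the property the argument above relies on.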

Re: Consciousness is information?

2009-05-17 Thread Kelly Harmon

On Sun, May 17, 2009 at 2:03 AM, Brent Meeker meeke...@dslextreme.com wrote:

 Do you suppose that something could behave just as humans do yet not be
 conscious, i.e. could there be a philosophical zombie?

I think that somewhere there would have to be a conscious experience
associated with the production of the behavior, THOUGH the conscious
experience might not supervene onto the system producing the behavior
in an obvious way.

Generally I don't think that what we experience is necessarily caused
by physical systems.  I think that sometimes physical systems assume
configurations that shadow, or represent, our conscious experience.
But they don't CAUSE our conscious experience.

So a computer simulation of a human brain that thinks it's at the
beach would be an example.  The computer running the simulation
assumes a sequence of configurations that could be interpreted as
representing the mental processes of a person enjoying a day at the
beach.  But I can't see any reason why a bunch of electrons moving
through copper and silicon in a particular way would cause that
subjective experience of surf and sand.

And for similar reasons I don't see why a human brain would either,
even if it was actually at the beach, given that it is also just
electrons and protons and neutrons, moving in specific ways.

It doesn't seem plausible to me that it is the act of being
represented in some way by a physical system that produces conscious
experience.

Though it DOES seem plausible/obvious to me that a physical system
going through a sequence of these representations is what produces
human behavior.


 The information processing?


Well, I would say "information processing", but it seems to me that many
different processes could produce the same information.  And I would
not expect a change in process or algorithm to produce a different
subjective experience if the information being
processed/output remained the same.

So for this reason I go with "consciousness is information", not
"consciousness is information processing".

Processes just describe ways that different information states CAN be
connected, or related, or transformed.  But I don't think that
consciousness resides in those processes.
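
A trivial sketch of the distinction being drawn, in Python: two
different processes that end in the identical information state
(reading "information" as the final data rather than the execution
trace, which is an interpretive assumption):

  # Two different processes, one resulting information state. On the
  # "consciousness is information" view the choice between them should
  # make no subjective difference; on a processing view it might.
  def sum_iterative(n):
      total = 0
      for k in range(1, n + 1):   # process A: n successive additions
          total += k
      return total

  def sum_closed_form(n):
      return n * (n + 1) // 2     # process B: one multiply, one divide

  assert sum_iterative(1000) == sum_closed_form(1000)   # same information either way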




Re: Consciousness is information?

2009-05-17 Thread Kelly Harmon

On Sun, May 17, 2009 at 8:07 AM, John Mikes jami...@gmail.com wrote:

 A fitting computer simulation would include ALL aspects involved - call it
 mind AND body, 'physically' observable 'activity' and 'consciousness as
 cause' -- but alas, no such thing so far. Our embryonic machine with its
 binary algorithms, driven by a switched on (electrically induced) primitive
 mechanism can do just that much, within the known segments designed 'in'.
 What we may call 'qualia' is waiting for some analogue comp, working
 simultaneously on all aspects of the phenomena involved (IMO not practical,
 since there cannot be a limit drawn in the interrelated totality, beyond
 which relations may be irrelevant).


So you're saying that it's not possible, even in principle, to
simulate a human brain on a digital computer?  But that it would be
possible on a massively parallel analog computer?  What extra
something do you think an analog computer provides that isn't
available from a digital computer?  Why would it be necessary to run
all of the calculations in parallel?


 'consciousness as cause'

You are saying that consciousness has a causal role, that is
additional to the causal structure found in non-conscious physical
systems?  What leads you to this conclusion?




Re: Consciousness is information?

2009-05-17 Thread Kelly Harmon

On Fri, May 15, 2009 at 12:32 AM, Jesse Mazer laserma...@hotmail.com wrote:

 I don't have a problem with the idea that a giant lookup table is just a
 sort of zombie, since after all the way you'd create a lookup table for a
 given algorithmic mind would be to run a huge series of actual simulations
 of that mind with all possible inputs, creating a huge archive of
 recordings so that later if anyone supplies the lookup table with a given
 input, the table just looks up the recording of the occasion in which the
 original simulated mind was supplied with that exact input in the past, and
 plays it back. Why should merely replaying a recording of something that
 happened to a simulated observer in the past contribute to the measure of
 that observer-moment? I don't believe that playing a videotape of me being
 happy or sad in the past will increase the measure of happy or sad
 observer-moments involving me, after all. And Olympia seems to be somewhat
 similar to a lookup table in that the only way to construct her would be
 to have already run the regular Turing machine program that she is supposed
 to emulate, so that you know in advance the order that the Turing machine's
 read/write head visits different cells, and then you can rearrange the
 positions of those cells so Olympia will visit them in the correct order
 just by going from one cell to the next in line over and over again.
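
Jesse's record-and-replay construction, sketched in Python. The two-bit
input space and the stand-in mind function are illustrative assumptions
standing in for "all possible inputs" and a genuine simulation:

  # Build the giant lookup table by running the genuine simulation once
  # on every possible input; afterwards, answer every query by replay alone.
  from itertools import product

  def simulate_mind(inputs):
      # Stand-in for an actual simulation of the mind on these inputs.
      return sum(inputs) % 2

  table = {i: simulate_mind(i) for i in product((0, 1), repeat=2)}

  def lookup_mind(inputs):
      return table[inputs]   # pure replay: nothing is recomputed

  assert lookup_mind((1, 0)) == simulate_mind((1, 0))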


What if you used a lookup table for only a single neuron in a computer
simulation of a brain?  So actual calculations for the rest of the
brain's neurons are performed, but this single neuron just does
lookups into a table of pre-calculated outputs.  Would consciousness
still be produced in this case?

What if you then re-ran the simulation with 10 neurons doing lookups,
but calculations still being executed for the rest of the simulated
brain?  Is consciousness still produced?

What if 10% of the neurons are implemented using lookup tables?  50%?
90%?  What if all but one neuron is implemented via lookup tables,
but that one neuron's outputs are still calculated from inputs?

At what point does the simulation become a zombie?
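
The gradation above, as a Python sketch: a hybrid simulation in which a
fraction p of the neurons replay precomputed outputs while the rest are
computed live. The threshold-neuron model and the parameter p are
illustrative assumptions:

  # Hybrid brain sketch: a fraction p of neurons answer from a
  # precomputed lookup table; the rest compute live. Behavior is
  # identical at every p, which is what makes the zombie question pointed.
  import random
  from itertools import product

  def live_neuron(inputs):
      return 1 if sum(inputs) >= 2 else 0   # computed on every call

  TABLE = {i: live_neuron(i) for i in product((0, 1), repeat=3)}

  def lookup_neuron(inputs):
      return TABLE[inputs]                  # replayed, never recomputed

  def hybrid_brain(stimulus, p, n_neurons=100, seed=0):
      rng = random.Random(seed)             # fixed assignment of neuron kinds
      return [(lookup_neuron if rng.random() < p else live_neuron)(stimulus)
              for _ in range(n_neurons)]

  # Identical outputs whether 0%, 50%, or 100% of the neurons are lookups.
  assert (hybrid_brain((1, 1, 0), 0.0) == hybrid_brain((1, 1, 0), 0.5)
          == hybrid_brain((1, 1, 0), 1.0))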




Re: Consciousness is information?

2009-05-17 Thread Kelly Harmon

On Sun, May 17, 2009 at 9:13 PM, Brent Meeker meeke...@dslextreme.com wrote:

 Generally I don't think that what we experience is necessarily caused
 by physical systems.  I think that sometimes physical systems assume
 configurations that shadow, or represent, our conscious experience.
 But they don't CAUSE our conscious experience.


 So if we could track the functions of the brain at a fine enough scale,
 we'd see physical events that didn't have physical causes (ones that
 were caused by mental events?).


No, no, no.  I'm not saying that at all.  Ultimately I'm saying that
if there is a physical world, it's irrelevant to consciousness.
Consciousness is information.  Physical systems can be interpreted as
representing, or storing, information, but that act of storage
isn't what gives rise to conscious experience.


 You're aware of course that the same things were said about the
 physio/chemical bases of life.


You mentioned that point before, as I recall.  Dennett made a similar
argument against Chalmers, to which Chalmers had what I thought was an
effective response:

---
http://consc.net/papers/moving.html

Perhaps the most common strategy for a type-A materialist is to
deflate the hard problem by using analogies to other domains, where
talk of such a problem would be misguided. Thus Dennett imagines a
vitalist arguing about the hard problem of life, or a neuroscientist
arguing about the hard problem of perception. Similarly, Paul
Churchland (1996) imagines a nineteenth century philosopher worrying
about the hard problem of light, and Patricia Churchland brings up
an analogy involving heat. In all these cases, we are to suppose,
someone might once have thought that more needed explaining than
structure and function; but in each case, science has proved them
wrong. So perhaps the argument about consciousness is no better.

This sort of argument cannot bear much weight, however. Pointing out
that analogous arguments do not work in other domains is no news: the
whole point of anti-reductionist arguments about consciousness is that
there is a disanalogy between the problem of consciousness and
problems in other domains. As for the claim that analogous arguments
in such domains might once have been plausible, this strikes me as
something of a convenient myth: in the other domains, it is more or
less obvious that structure and function are what need explaining, at
least once any experiential aspects are left aside, and one would be
hard pressed to find a substantial body of people who ever argued
otherwise.

When it comes to the problem of life, for example, it is just obvious
that what needs explaining is structure and function: How does a
living system self-organize? How does it adapt to its environment? How
does it reproduce? Even the vitalists recognized this central point:
 their driving question was always "How could a mere physical system
 perform these complex functions?", not "Why are these functions
 accompanied by life?" It is no accident that Dennett's version of a
vitalist is imaginary. There is no distinct hard problem of life,
and there never was one, even for vitalists.

 In general, when faced with the challenge "explain X", we need to ask:
what are the phenomena in the vicinity of X that need explaining, and
how might we explain them? In the case of life, what cries out for
explanation are such phenomena as reproduction, adaptation,
metabolism, self-sustenance, and so on: all complex functions. There
is not even a plausible candidate for a further sort of property of
life that needs explaining (leaving aside consciousness itself), and
indeed there never was. In the case of consciousness, on the other
hand, the manifest phenomena that need explaining are such things as
discrimination, reportability, integration (the functions), and
experience. So this analogy does not even get off the ground.

--

 Though it DOES seem plausible/obvious to me that a physical system
 going through a sequence of these representations is what produces
 human behavior.

 So you're saying that a sequence of physical representations is enough
 to produce behavior.

Right, observed behavior.  What I'm saying here is that it seems
obvious to me that mechanistic computation is sufficient to explain
observed human behavior.  If that were the only thing that needed
explaining, we'd be done.  Mission accomplished.

BUT... there's subjective experience that also needs to be explained, and
this is actually the first question that needs to be answered.  All other
answers are suspect until subjective experience has been explained.


 And there must be conscious experience associated
 with behavior.

Well, here's where it gets tricky.  Conscious experience is associated
with information.  But how information is tied to physical systems is
a different question.  Any physical system can be interpreted as
representing all sorts of things (again, back to Putnam and Searle,
one-time pads, Maudlin's Olympia example, Bruno's movie graph

Re: Consciousness is information?

2009-05-17 Thread Brent Meeker

Kelly Harmon wrote:
 [...]
 In general, when faced with the challenge "explain X", we need to ask:
 what are the phenomena in the vicinity of X that need explaining, and
 how might we explain them? In the case of life, what cries out for
 explanation are such phenomena as reproduction, adaptation,
 metabolism, self-sustenance, and so on: all complex functions. There
 is not even a plausible candidate for a further sort of property of
 life that needs explaining (leaving aside consciousness itself), and
 indeed there never was. In the case of consciousness, on the other
 hand, the manifest phenomena that need explaining are such things as
 discrimination, reportability, integration (the functions), and
 experience. So this analogy does not even get off the ground.

 --
   

On the contrary, I think it does.  First, I think Chalmers' idea that 
vitalists recognized that all that needed explaining was structure and 
function is revisionist history.  They were looking for the animating 
spirit.  It is in hindsight, having found the function and structure, 
that we've realized that was all the explanation available.  And I 
expect the same thing will happen with consciousness.  We will eventually 
be able to make robots that behave as humans do and we will infer, from 
their behavior, that they are conscious.  And we, being their designers, 
will be able to analyze them and say, "Here's what makes R2D2 have 
conscious experiences of visual perception, and here's what makes C-3PO 
have self-awareness relative to humans."  We will find that there are 
many different kinds of consciousness and we will be able to invent new 
ones.  We will never solve Chalmers' hard problem; we'll just realize 
it's a non-question.

   