On 19 May 2009, at 11:37, Alberto G.Corona wrote:
> That is also my case. I wonder how far the materialist hypothesis has
> advanced toward a plausible explanation of consciousness, and I think
> that this is the right path, and I follow it. But at the deep level,
> my subjective experience tells me that I must remain dualist.
I am glad you are not an eliminative materialist. But you tell us that
you remain a weak-materialist? You believe there is a primary physical
or material world, and that physics is the fundamental science. OK?
The point of most of my posts here is to explain that this does not
work, at least once we accept the computationalist hypothesis in the
cognitive science.
> I think however that, for evolutionary purposes, consciousness,
> being designed by natural selection to keep an accurate picture of
> how others see us, must naturally reject a materialist explanation,
> because this is not an accurate picture.
You mean "must reject eliminative materialism". I agree with you. All
sentient beings do that naturally.
> The other people do not see
> us as a piece of evolved mechanisms, but as moral beings.
As persons, yes.
> An adaptive
> self must be, and is, fiercely dualist, with a strong notion of self-
> autonomy and unity of purpose. So all of us feel that way when not
> thinking about it.
Why dualist? Well, I do agree that even animals feel themselves
implicitly dualist: they believe they are hungry and that food exists.
They do not reflect much on the difference between the appearance of
substantial food and their first-person hungriness.
But comp forces us to abandon weak materialism, as I think most of
the Greek and Indian philosophers already intuited. The appearance
of matter is the appearance of something else. The days I believe in
comp, I feel myself fiercely monist: I believe, those days, that
matter is a construction of the mind. Not the human mind, but the
universal machine's mind. UDA is an argument showing that the current
paradigmatic chain MATTER => CONSCIOUSNESS => NUMBER is reversed: with
comp I can explain to you in detail (it is long) that the chain
should be NUMBER => CONSCIOUSNESS => MATTER. Some agree already that
it could be NUMBER => MATTER => CONSCIOUSNESS, and this is indeed more
locally obvious, yet I argue that comp eventually forces the complete
reversal.
Here I agree with Kelly, and probably some others: idealism, or
spiritual/mental/informational/number-theoretical monism is where we
go, and have to go, once we bet we can survive with a digital brain. I
don't pretend this is obvious, but I have an argument, called UDA. It
is a constructive argument, it shows how to explicitly derive the
physical laws from a theory of mind (computer science), so that we can
test comp empirically, by comparing the physics from comp and the
physics from usual observation of our neighborhood. Were the world
still Newtonian, I would never dare to suggest that comp is
possible. Thanks to QM, the possibility of comp remains. (And QM's MWI
prevents comp from leading to solipsism, in case you worried.)
> Thus, maybe a robot made to simulate our behavior must
> incorporate an inner rejection of materialist explanations about the
> nature of its higher-level circuits, and a vivid notion of subjective
> experience.
Note that even physicalist explanations are more and more
mathematical, and never really refer to metaphysical materialism.
But such beliefs live in the background, and when defended they often
lead to eliminativism (of the person), or to dualism, which are rarely
intelligible, or to epiphenomenalism, where consciousness loses its
causal role.
> That is not difficult at a certain level of technology: to
> create a central “self” module that receives the filtered, relevant
> information, plus information about the commands and actions of other
> decision modules. This self module must be capable of "inventing" (and
> that's the tricky thing) a self-centered, socially plausible, moral
> history that links together such perceptions and such actions.
In our case, we can bet we belong to deep computational histories,
which give serious hints.
> when someone asks him "do you have subjective experience, qualia and so
> on", the robot will answer, “of course, yes, I have a very strong
> sensation of unity of mind and perception, and I'm a moral subject
> capable of self-determination”. Otherwise, he will be inconsistent or
> nonfunctional as a human simulation.
This is a bit tautological, but OK.
> By the way, the role of the self process as a creator of self centered
> histories that are credible for the rest of us, that tend to show a
> favorable moral image of the self has been checked in different
> experiments, especially with split-brain patients (who invent two
> different histories of the same perception-action in each hemisphere).
> It also explains many mental disorders: compulsive liars and crazy
> overhyped egos made of fantastic histories (reincarnations of
> Napoleon), for example. It also explains many effects in the social
> life of sane people. How hard is it to achieve objectivity, for example?
Right. I agree here. And objectivity exists only as far as we can
doubt it, by being clear on sharable hypotheses and questions, so that
all persons can confirm the theories locally or refute them globally. OK.
I think Kelly's point is a defense of monism, but idealist monism, not
materialist monism. (Kelly: correct me if I am wrong.) I think that
if we accept the computationalist hypothesis, we must indeed abandon
materialism, even weak materialism: the metaphysical doctrine of the
material ontological commitment. (Some people are "religious" about
matter.)
Substantial matter can subsist, but it loses all explanatory power,
because it appears to be a reification of a projection of infinities
of computations or number relations (or combinator relations, or
relations on any finite concepts as rich as numbers that you want to
use) as "seen", observed, betted on, etc., by the machines/numbers/finite
minds.
> On May 18, 4:50 am, Kelly Harmon <harmon...@gmail.com> wrote:
>> On Sun, May 17, 2009 at 9:13 PM, Brent Meeker
>> <meeke...@dslextreme.com> wrote:
>>>> Generally I don't think that what we experience is necessarily caused
>>>> by physical systems. I think that sometimes physical systems take on
>>>> configurations that "shadow", or represent, our conscious experiences.
>>>> But they don't CAUSE our conscious experience.
>>> So if we could track the functions of the brain at a fine enough level,
>>> we'd see physical events that didn't have physical causes (ones that
>>> were caused by mental events?).
>> No, no, no. I'm not saying that at all. Ultimately I'm saying that
>> if there is a physical world, it's irrelevant to consciousness.
>> Consciousness is information. Physical systems can be interpreted as
>> representing, or "storing", information, but that act of "storage"
>> isn't what gives rise to conscious experience.
>>> You're aware of course that the same things were said about the
>>> physio/chemical bases of life.
>> You mentioned that point before, as I recall. Dennett made a similar
>> argument against Chalmers, to which Chalmers had what I thought was an
>> effective response:
>> Perhaps the most common strategy for a type-A materialist is to
>> deflate the "hard problem" by using analogies to other domains, where
>> talk of such a problem would be misguided. Thus Dennett imagines a
>> vitalist arguing about the hard problem of "life", or a neuroscientist
>> arguing about the hard problem of "perception". Similarly, Paul
>> Churchland (1996) imagines a nineteenth century philosopher worrying
>> about the hard problem of "light", and Patricia Churchland brings up
>> an analogy involving "heat". In all these cases, we are to suppose,
>> someone might once have thought that more needed explaining than
>> structure and function; but in each case, science has proved them
>> wrong. So perhaps the argument about consciousness is no better.
>> This sort of argument cannot bear much weight, however. Pointing out
>> that analogous arguments do not work in other domains is no news: the
>> whole point of anti-reductionist arguments about consciousness is that
>> there is a disanalogy between the problem of consciousness and
>> problems in other domains. As for the claim that analogous arguments
>> in such domains might once have been plausible, this strikes me as
>> something of a convenient myth: in the other domains, it is more or
>> less obvious that structure and function are what need explaining, at
>> least once any experiential aspects are left aside, and one would be
>> hard pressed to find a substantial body of people who ever argued
>> otherwise.
>> When it comes to the problem of life, for example, it is just obvious
>> that what needs explaining is structure and function: How does a
>> living system self-organize? How does it adapt to its environment? How
>> does it reproduce? Even the vitalists recognized this central point:
>> their driving question was always "How could a mere physical system
>> perform these complex functions?", not "Why are these functions
>> accompanied by life?" It is no accident that Dennett's version of a
>> vitalist is "imaginary". There is no distinct "hard problem" of life,
>> and there never was one, even for vitalists.
>> In general, when faced with the challenge "explain X", we need to ask:
>> what are the phenomena in the vicinity of X that need explaining, and
>> how might we explain them? In the case of life, what cries out for
>> explanation are such phenomena as reproduction, adaptation,
>> metabolism, self-sustenance, and so on: all complex functions. There
>> is not even a plausible candidate for a further sort of property of
>> life that needs explaining (leaving aside consciousness itself), and
>> indeed there never was. In the case of consciousness, on the other
>> hand, the manifest phenomena that need explaining are such things as
>> discrimination, reportability, integration (the functions), and
>> experience. So this analogy does not even get off the ground.
>>>> Though it DOES seem plausible/obvious to me that a physical system
>>>> going through a sequence of these representations is what produces
>>>> human behavior.
>>> So you're saying that a sequence of physical representations is enough
>>> to produce behavior.
>> Right, observed behavior. What I'm saying here is that it seems
>> obvious to me that mechanistic computation is sufficient to explain
>> observed human behavior. If that was the only thing that needed
>> explaining, we'd be done. Mission accomplished.
>> BUT...there's subjective experience that also needs to be explained,
>> and this is actually the first question that needs to be answered. All
>> other
>> answers are suspect until subjective experience has been explained.
>>> And there must be conscious experience associated
>>> with behavior.
>> Well, here's where it gets tricky. Conscious experience is associated
>> with information. But how information is tied to physical systems is
>> a different question. Any physical systems can be interpreted as
>> representing all sorts of things (again, back to Putnam and Searle,
>> one-time pads, Maudlin's Olympia example, Bruno's movie graph
>> argument, rocks implementing every FSA, Stathis's birds and trees,
>> triviality attacks on functionalism).
>>> That seems to me to imply that physical representations
>>> are enough to produce consciousness.
>> The problem is that physical "representations" are everywhere. The
>> problem is coming up with a non-arbitrary way of deciding when a
>> physical system represents something that's conscious and when it
>> doesn't. Physical systems are too representationally promiscuous!
>> Which leads me to abandon physicalism/materialism for idealism.
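[Editor's note: the "representational promiscuity" worry Kelly cites (Putnam's rocks implementing every FSA, Searle's wall) can be made concrete with a toy sketch. This is a minimal illustration added here, not anything from the thread, and every name in it is invented for the example: given any sequence of distinct physical states, one can always construct a post-hoc mapping under which that sequence "implements" a chosen finite-state-automaton run.]

```python
def fsa_run(transition, start, steps):
    """Unfold a finite state automaton: follow `transition` from
    `start` for `steps` steps, returning the visited state sequence."""
    states = [start]
    for _ in range(steps):
        states.append(transition[states[-1]])
    return states

def trivial_interpretation(physical_trajectory, fsa_states):
    """Map each distinct physical state onto the FSA state it happens
    to co-occur with, so the physical system 'represents' the run.
    This only works because the physical states never repeat, which
    is exactly Putnam's assumption about maximal physical states."""
    assert len(physical_trajectory) == len(fsa_states)
    return {p: s for p, s in zip(physical_trajectory, fsa_states)}

# A two-state flip-flop automaton and its four-step run.
flipflop = {"A": "B", "B": "A"}
run = fsa_run(flipflop, "A", 3)          # -> ["A", "B", "A", "B"]

# An arbitrary "physical" trajectory: four distinct states of, say,
# a cooling rock (labels are of course made up).
rock = ["r0", "r1", "r2", "r3"]
mapping = trivial_interpretation(rock, run)

# Under this post-hoc mapping, the rock's state sequence realizes
# the flip-flop's run.
assert [mapping[p] for p in rock] == run
```

The point of the sketch is that the mapping was chosen after the fact to make the implementation claim true, which is why merely "storing" or "representing" states looks like too weak a criterion for consciousness.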
You received this message because you are subscribed to the Google Groups
"Everything List" group.