Re: computationalism and supervenience

2006-09-20 Thread Bruno Marchal


On 15 Sept 2006, at 13:53, Stathis Papaioannou wrote:

> Yes, that's just what I would say. The only purpose served by the rock 
> is to provide the real world
> dynamism part of the computation, even if it does this simply by 
> mapping lines of code to the otherwise
> idle passage of time. The rock would be completely irrelevant but for 
> this, and in fact Bruno's idea is that the
> rock (or whatever) *is* irrelevant, and the computation is implemented 
> by virtue of its status as a Platonic
> object. It would then perhaps be more accurate to say that physical 
> reality maps onto the computation, rather
> than the computation maps onto physical reality. I think this is more 
> elegant than having useless chunks of
> matter implementing every computation, but I can't quite see a way to 
> eliminate all matter, since the only
> empirical starting point we have is that *some* matter appears to 
> implement some computations.


I agree with the idea that the only empirical starting point we have is 
that some matter *appears* to implement some computations [note the 
difference of emphasis, though].

Indeed we can only survive, with a reasonably high relative 
probability, in (2^aleph_0) computations implementing consistent 
"histories". So we can predict that if we look at a sample of "local 
observable matter" closely enough, it must, in some sense, be 
relatively "implemented" by an infinity of very similar computations 
(similar to the one sustaining us).

The feeling that our subjective mind is a product of our objective 
brain, and not the reverse, is somehow due to the fact that "we" 
have to be embedded in a relatively stable reality (whatever that is).

Bruno

http://iridia.ulb.ac.be/~marchal/





Re: computationalism and supervenience

2006-09-16 Thread Brent Meeker

Colin Geoffrey Hales wrote:
...
> COLIN:
> Hi a bunch of points...
> 
> 1) Re paper.. it is undergoing review and growing..
> The point of the paper is to squash the solipsism argument ...in
> particular the specific flavour of it that deals with 'other minds',
> which has (albeit tacitly) defined science's attitude to what is/is not
> scientific evidence. As such I am only concerned with scientific
> behaviour. The mere existence of a capacity to handle exquisite novelty
> demands the existence of the functionality of phenomenal consciousness
> within the scientist. Novel technology exists, ergo science is possible,
> ergo phenomenal consciousness exists. Phenomenal consciousness is proven
> by the existence of novel technology. More than 1 scientist has produced
> novel technology. Ergo there is more than 1 'mind' (= collection of
> phenomenal fields), ergo other minds do exist. Ergo solipsism is false. The
> problem is that along the way you have also proved that there is an
> external 'reality'...which is a bit of a bonus. So all the philosophical
> arguments about 'existence' that have wasted so much of our time are
> actually just that...a waste of time.
> 
> 2) Turing test. I think the Turing test is a completely misguided idea.

Curiously, Turing's test was to see whether a computer could succeed at 
pretending to 
be a woman as well as a man could.

> It's based on the assumption that abstract (as-if) computation can fully
> replicate (has access to all the same information as) the computation
> performed by the natural world. This assumption can be made obvious as
> follows:
> Q. What is it like to be a human? It is like being a mind. There is
> information delivered into the mind by the action of brain material which
> bestows on the human intrinsic knowledge about the natural world outside
> the human ... in the form of phenomenal consciousness. 

But the brain is made of physical components implementing physical processes 
(neurons, proteins, ions, ...) - why can't they be replaced by functionally 
identical components (artificial neurons, etc.) and still deliver this 
consciousness?

Brent Meeker




Re: computationalism and supervenience

2006-09-16 Thread 1Z


Colin Geoffrey Hales wrote:
> >
> >
> > Colin Geoffrey Hales wrote:
> >> >
> >
> >> Q. What is it like to be a human? It is like being a mind. There is
> >> information delivered into the mind by the action of brain material
> >> which
> >> bestows on the human intrinsic knowledge about the natural world outside
> >> the human ... in the form of phenomenal consciousness. This knowledge is
> >> not a model/abstraction, but a literal mapping of what's there (no
> >> matter
> >> how mysterious its generation may seem).
> >
> > What is the difference between a "model" and a "literal mapping" ?
>
> Real physics C(.) does something (goes through certain states).
> Real physics f(C(.)) does something (directly tracks the states of C(.)).
>
> A state machine S that abstracts the states of C(.) is a model.

That's no help at all. What is the difference between tracking the
states and abstracting the states?

> f(.) is a literal mapping.
>
> Humans do f(.), computers do S.
>
> >
> >> The zombie does not have this.
> >
> > Why not ?
>
> Because the physics of f(.) above is not there.

Zombies have the same physics as people, by definition.

> >
> >> Nor does the Turing machine.
> >
> >
> >> No matter how good the a-priori abstraction given by the human, the UM
> >> will do science on its sensory feeds until it can no longer distinguish
> >> any effect because the senses cannot discriminate it
> >
> > Don't humans have sensory limits?
>
> Yes, but the sensory fields are NOT what is used in intelligence.

What -- not at all ?

> The
> sensory fields are used to generate the phenomenal fields.

So they are involved indirectly.

> The phenomenal
> fields are used to be intelligent. Human phenomenal fields do not include
> a representation of neutrino flux.

Because human sensory fields don't.

> A zombie could never know of neutrinos!

It is pretty hard for anyone to.

> ...because they are incapable of observation of their causal descendants
> (no phenomenal fields). Our sensory data did not deliver evidence of
> neutrinos...our phenomenal fields did!

Hmmm.

Well, at least I was able to come to a conclusion on the basis of what
you said...

> In terms of the symbols above:
>
> The zombie can construct an S from sensory fields predictive of the impact
> of C(.) on its own sensory data. But the relationship of this S to the
> outside world C(.)? It can never know. C(.) could put 5 billion other
> states in between all the states detected by the zombie sensory data and
> the zombie would have no clue. Zombie science is the science of zombie
> sensory data, not science of the natural world outside the zombie.
>
> Of course you can mentally increase the amount of data and the
> computational intellect of the zombie to arbitrary levels, but all you are
> doing is moving the abstractions around. The zombie still has no internal
> life, no awareness that there is a natural world at all.
> 
> cheers
> colin hales





Re: computationalism and supervenience

2006-09-16 Thread Colin Geoffrey Hales

>
>
> Colin Geoffrey Hales wrote:
>> >
>
>> Q. What is it like to be a human? It is like being a mind. There is
>> information delivered into the mind by the action of brain material
>> which
>> bestows on the human intrinsic knowledge about the natural world outside
>> the human ... in the form of phenomenal consciousness. This knowledge is
>> not a model/abstraction, but a literal mapping of what's there (no
>> matter
>> how mysterious its generation may seem).
>
> What is the difference between a "model" and a "literal mapping" ?

Real physics C(.) does something (goes through certain states).
Real physics f(C(.)) does something (directly tracks the states of C(.)).

A state machine S that abstracts the states of C(.) is a model.
f(.) is a literal mapping.

Humans do f(.), computers do S.
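
A minimal sketch of that distinction, assuming C(.) can be idealised as a
stream of states (Python, with all names invented purely for illustration,
not taken from the post):

    import itertools

    # "Real physics" C(.): an endless stream of states. A logistic map
    # stands in for whatever dynamics the natural world is running.
    def physics_C():
        x = 0.123
        while True:
            x = 4 * x * (1 - x)
            yield x

    # f(.): a "literal mapping" -- one output per input state, nothing dropped.
    def literal_map_f(states):
        for s in states:
            yield s  # directly tracks every state of C(.)

    # S: a state machine that *abstracts* C(.) -- it collapses many distinct
    # physical states into a few model states, discarding information.
    def state_machine_S(states):
        for s in states:
            yield "HIGH" if s > 0.5 else "LOW"

    print(list(itertools.islice(literal_map_f(physics_C()), 3)))   # full detail
    print(list(itertools.islice(state_machine_S(physics_C()), 3))) # coarse model

On this reading, S can never recover states its abstraction has already
collapsed, which is the point made below about the states the zombie
"would have no clue" about.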

>
>> The zombie does not have this.
>
> Why not ?

Because the physics of f(.) above is not there.

>
>> Nor does the Turing machine.
>
>
>> No matter how good the a-priori abstraction given by the human, the UM
>> will do science on its sensory feeds until it can no longer distinguish
>> any effect because the senses cannot discriminate it
>
> Don't humans have sensory limits?

Yes, but the sensory fields are NOT what is used in intelligence. The
sensory fields are used to generate the phenomenal fields. The phenomenal
fields are used to be intelligent. Human phenomenal fields do not include
a representation of neutrino flux. A zombie could never know of neutrinos!
...because they are incapable of observing their causal descendants
(no phenomenal fields). Our sensory data did not deliver evidence of
neutrinos...our phenomenal fields did!

In terms of the symbols above:

The zombie can construct an S from sensory fields predictive of the impact
of C(.) on its own sensory data. But the relationship of this S to the
outside world C(.)? It can never know. C(.) could put 5 billion other
states in between all the states detected by the zombie sensory data and
the zombie would have no clue. Zombie science is the science of zombie
sensory data, not science of the natural world outside the zombie.

Of course you can mentally increase the amount of data and the
computational intellect of the zombie to arbitrary levels, but all you are
doing is moving the abstractions around. The zombie still has no internal
life, no awareness that there is a natural world at all.

cheers
colin hales









Re: computationalism and supervenience

2006-09-16 Thread 1Z


Colin Geoffrey Hales wrote:
> >

> Q. What is it like to be a human? It is like being a mind. There is
> information delivered into the mind by the action of brain material which
> bestows on the human intrinsic knowledge about the natural world outside
> the human ... in the form of phenomenal consciousness. This knowledge is
> not a model/abstraction, but a literal mapping of what's there (no matter
> how mysterious its generation may seem).

What is the difference between a "model" and a "literal mapping" ?

> The zombie does not have this.

Why not ?

> Nor does the Turing machine.


> No matter how good the a-priori abstraction given by the human the UM will
> do science on its sensory feeds until it can no longer distinguish any
> effect because the senses cannot discriminate it

Don't humans have sensory limits?





Re: computationalism and supervenience

2006-09-16 Thread Bruno Marchal


On 16 Sept 2006, at 10:10, Colin Geoffrey Hales wrote:


> 5) Re a fatal test for the Turing machine? Give it exquisite novelty by
> asking it to do science on an unknown area of the natural world. Proper
> science. It will fail because it does not know there is an outside 
> world.


And you *know* that?

We can *bet* on an independent reality, that's all. Justifiably so 
assuming comp, but I think you don't assume that.

Self-referentially correct machines can *only* bet on their 
self-referential and referential correctness.


Bruno


http://iridia.ulb.ac.be/~marchal/





RE: computationalism and supervenience

2006-09-16 Thread Stathis Papaioannou

Colin Hales writes:

> > I've had another think about this after reading the paper you sent me.
> > It seems that you are making two separate claims. The first is that a
> > zombie would not be able to behave like a conscious being in every
> > situation: specifically, when called upon to be scientifically
> > creative. If this is correct it would be a corollary of the Turing
> > test, i.e., if it behaves as if it is conscious under every situation,
> > then it's conscious. However, you are being quite specific in
> > describing what types of behaviour could only occur in the setting of
> > phenomenal consciousness. Could you perhaps be even more specific and
> > give an example of the simplest possible behaviour or scientific
> > theory which an unconscious machine would be unable to mimic?
> >
> > The second claim is that a computer could only ever be a zombie, and
> > therefore could never be scientifically creative. However, it is
> > possible to agree with the first claim and reject this one. Perhaps if
> > a computer were complex enough to truly mimic the behaviour of a
> > conscious being, including being scientifically creative, then it
> > would indeed be conscious. Perhaps our present computers are either
> > unconscious because they are too primitive or they are indeed
> > conscious, but at the very low end of a consciousness continuum, like
> > single-celled organisms or organisms with relatively simple nervous
> > systems like planaria.
> >
> > Stathis Papaioannou
> 
> COLIN:
> Hi a bunch of points...
> 
> 1) Re paper.. it is undergoing review and growing..
> The point of the paper is to squash the solipsism argument ...in
> particular the specific flavour of it that deals with 'other minds',
> which has (albeit tacitly) defined science's attitude to what is/is not
> scientific evidence. As such I am only concerned with scientific
> behaviour. The mere existence of a capacity to handle exquisite novelty
> demands the existence of the functionality of phenomenal consciousness
> within the scientist. Novel technology exists, ergo science is possible,
> ergo phenomenal consciousness exists. Phenomenal consciousness is proven
> by the existence of novel technology. More than 1 scientist has produced
> novel technology. Ergo there is more than 1 'mind' (= collection of
> phenomenal fields), ergo other minds do exist. Ergo solipsism is false.
> The problem is that along the way you have also proved that there is an
> external 'reality'...which is a bit of a bonus. So all the philosophical
> arguments about 'existence' that have wasted so much of our time are
> actually just that...a waste of time.

That's a very bold and ambitious claim, if I may say so.

> 2) Turing test. I think the Turing test is a completely misguided idea.
> It's based on the assumption that abstract (as-if) computation can fully
> replicate (has access to all the same information as) the computation
> performed by the natural world. This assumption can be made obvious as
> follows:
> Q. What is it like to be a human? It is like being a mind. There is
> information delivered into the mind by the action of brain material which
> bestows on the human intrinsic knowledge about the natural world outside
> the human ... in the form of phenomenal consciousness. This knowledge is
> not a model/abstraction, but a literal mapping of what's there (no matter
> how mysterious its generation may seem). The zombie does not have this.
> Nor does the Turing machine. A Turing machine is a zombie. No matter what
> the program, it's always 'like a tape and tape reader' to be a Turing
> machine. The knowledge provided by phenomenal consciousness is not an
> abstraction (programmed model)...it is a direct mapping.

From memory the original Turing test did not specify that the test
subject was a "Turing machine" but rather just a hidden subject who
answers questions, so that the testers have to guess whether it is
conscious on the basis of the answers to the questions alone. So are you
saying that you are confident that a mere Turing machine would fail the
test (you can ask it to come up with new scientific theories or whatever
you like), or are you saying that you would not accept that a Turing
machine was conscious even if you did your worst to fail it and it still
passed? If the latter, it would seem to go against your central thesis
that you can prove scientists are not zombies and can't be aped by
zombie machines.

> 3) RE:
> > and give an example of the simplest possible behaviour or scientific
> > theory which an unconscious machine (UM) would be unable to mimic?
>
> I think this is a meaningless quest. It depends on a) the
> sensory/actuation facilities and b) the a-priori knowledge bestowed upon
> the UM by its human progenitor.
>
> No matter how good the a-priori abstraction given by the human, the UM
> will do science on its sensory feeds until it can no longer distinguish
> any effect because the senses cannot discriminate it

RE: computationalism and supervenience

2006-09-16 Thread Stathis Papaioannou

Peter Jones writes (quoting SP):

> > OK, but then you have the situation whereby a very complex, and to our
> > mind disorganised, conscious computer might be designed and built by
> > aliens, then discovered by us after the aliens have become extinct and
> > their design blueprints, programming manuals and so on have all been
> > lost. We plug in the computer (all we can figure out about it is the
> > voltage and current it needs to run) and it starts whirring and
> > flashing. Although we have no idea what it's up to when it does this,
> > had we been the aliens, we would have been able to determine from
> > observation that it was doing philosophy or proving mathematical
> > theorems. The point is, would we now say that it is *not* doing
> > philosophy or proving mathematical theorems because there are no
> > aliens to observe it and interpret it?
> 
> Yes, and we would be correct, because the interpretation by the aliens
> is a part of the process.
> The "computer" we recover is only one component, a subroutine.
> 
> If you only recover part of an artifact, it is only natural that you
> cannot necessarily figure out the function of the whole.
> 
> > You might say, the interpretation has still occurred in the initial
> > design, even though the designers are no more. But what if exactly the
> > same physical computer had come about by incredible accident, as a
> > result of a storm bringing together the appropriate metal,
> > semiconductors, insulators etc.: if the purposely built computer were
> > conscious, wouldn't its accidental twin also be conscious?
> 
> Interpretation is an activity. If the total system of
> computer+interpretation is conscious, that *would* be true of an
> accidental system, if the interpretational subsystem were accidentally
> formed as well. Otherwise, not.
> 
> > Finally, reverse the last step: a "computer" is as a matter of fact
> > thrown together randomly from various components, but it is like no
> > computer ever designed, and just seems to whir and flash randomly.
> > Given that there are no universal laws of computer design that
> > everyone has to follow, isn't it possible that some bizarre alien
> > engineer *could* have put this strange machine together, so that its
> > seemingly random activity to that alien engineer would have been
> > purposely designed to implement conscious computation?
> 
> "To the alien engineer" means "interpreted by the alien engineer".
> Interpretation is an activity, so it means additional computation. All
> your examples are of subsystems that *could* be conscious if they were
> plugged into a specific larger system.
> 
> > And if so, is it any more reasonable to deny that this computer is
> > conscious because its designer has not yet been born than it is to
> > deny that the first computer was conscious because its designer has
> > died, or because it was made accidentally rather than purposely built
> > in a factory?
> 
> Interpretation is an activity. Possible designers and dictionaries
> don't lead to actual consciousness.

Perhaps you could suggest an experiment that would demonstrate the point
you are making? That is, a putatively conscious computer under various
situations, so it could be tested to see under what circumstances it is
conscious and under what circumstances its consciousness disappears.

Stathis Papaioannou





RE: computationalism and supervenience

2006-09-16 Thread Colin Geoffrey Hales

>
> Colin Hales writes:
>
>> Please consider the plight of the zombie scientist with a huge set of
>> sensory feeds and similar set of effectors. All carry similar signal
>> encoding and all, in themselves, bestow no experiential qualities on the
>> zombie.
>>
>> Add a capacity to detect regularity in the sensory feeds.
>> Add a scientific goal-seeking behaviour.
>>
>> Note that this zombie...
>> a) has the internal life of a dreamless sleep
>> b) has no concept or percept of body or periphery
>> c) has no concept that it is embedded in a universe.
>>
>> I put it to you that science (the extraction of regularity) is the
>> science of zombie sensory fields, not the science of the natural world
>> outside the zombie scientist. No amount of creativity (except maybe
>> random choices) would ever lead to any abstraction of the outside world
>> that gave it the ability to handle novelty in the natural world outside
>> the zombie scientist.
>>
>> No matter how sophisticated the sensory feeds and any guesswork as to a
>> model (abstraction) of the universe, the zombie would eventually find
>> novelty invisible because the sensory feeds fail to depict the novelty,
>> i.e. same sensory feeds for different behaviour of the natural world.
>>
>> Technology built by a zombie scientist would replicate zombie sensory
>> feeds, not deliver an independently operating novel chunk of hardware
>> with a defined function (if the idea of function even has meaning in
>> this instance).
>>
>> The purpose of consciousness is, IMO, to endow the cognitive agent with
>> at least a repeatable (not accurate!) simile of the universe outside the
>> cognitive agent so that novelty can be handled. Only then can the zombie
>> scientist detect arbitrary levels of novelty and do open ended science
>> (or survive in the wild world of novel environmental circumstance).
>>
>> In the absence of the functionality of phenomenal consciousness and with
>> finite sensory feeds you cannot construct any world-model (abstraction)
>> in the form of an innate (a-priori) belief system that will deliver an
>> endless ability to discriminate novelty. In a very Gödelian way
>> eventually a limit would be reached where the abstracted model could not
>> make any prediction that can be detected. The zombie is, in a very real
>> way, faced with 'truths' that exist but can't be accessed/perceived. As
>> such its behaviour will be fundamentally fragile in the face of novelty
>> (just like all computer programs are).
>> ---
>> Just to make the zombie a little more real... consider the industrial
>> control system computer. I have designed, installed hundreds and wired
>> up tens (hundreds?) of thousands of sensors and an unthinkable number of
>> kilometers of cables. (NEVER again!) In all cases I put it to you that
>> the phenomenal content of sensory connections may, at best, be
>> characterised as whatever it is like to have electrons crash through
>> wires, for that is what is actually going on. As far as the internal
>> life of the CPU is concerned... whatever it is like to be an
>> electrically noisy hot rock, regardless of the program... although the
>> character of the noise may alter with different programs!
>>
>> I am a zombie expert! No that didn't come out right... erm
>> perhaps... "I think I might be a world expert in zombies" yes, that's
>> better.
>> :-)
>> Colin Hales
>
> I've had another think about this after reading the paper you sent me.
> It seems that you are making two separate claims. The first is that a
> zombie would not be able to behave like a conscious being in every
> situation: specifically, when called upon to be scientifically creative.
> If this is correct it would be a corollary of the Turing test, i.e., if
> it behaves as if it is conscious under every situation, then it's
> conscious. However, you are being quite specific in describing what
> types of behaviour could only occur in the setting of phenomenal
> consciousness. Could you perhaps be even more specific and give an
> example of the simplest possible behaviour or scientific theory which an
> unconscious machine would be unable to mimic?
>
> The second claim is that a computer could only ever be a zombie, and
> therefore could never be scientifically creative. However, it is
> possible to agree with the first claim and reject this one. Perhaps if a
> computer were complex enough to truly mimic the behaviour of a conscious
> being, including being scientifically creative, then it would indeed be
> conscious. Perhaps our present computers are either unconscious because
> they are too primitive or they are indeed conscious, but at the very low
> end of a consciousness continuum, like single-celled organisms or
> organisms with relatively simple nervous systems like planaria.
>
> Stathis Papaioannou

COLIN:
Hi a bunch of points...

1) Re paper.. it is undergoing review and growing..
The point of the paper is to squash the solipsism argument ...in
particular the specific flavour of it that deals with 'other minds'

RE: computationalism and supervenience

2006-09-15 Thread Stathis Papaioannou


Peter Jones writes:

> > > > > > That is what I mean
> > > > > > when I say that any computation can map onto any physical system. 
> > > > > > The physical structure and activity
> > > > > > of computer A implementing program a may be completely different to 
> > > > > > that of computer B implementing
> > > > > > program b, but program b may be an emulation of program a, which 
> > > > > > should make the two machines
> > > > > > functionally equivalent and, under computationalism, equivalently 
> > > > > > conscious.
> > > > >
> > > > > So? If the functional equivalence doesn't depend on a
> > > > > baroque-reinterpretation, where is the problem?
> > > >
> > > > Who interprets the meaning of "baroque"?
> > >
> > > There are objective ways of deciding that kind of issue, e.g.
> > > algorithmic information theory.
> >
> > Aren't you getting into the realm of the Platonic forms here?
> 
> No. I am getting into the realms of abstraction. Platonists think
> abstracta exist platonically. Extreme nominalists reject abstracta
> completely. All points in between accept abstracta, but not as having
> Platonic existence.
> 
> > Flowcharts are a representation of an algorithm, not
> > the algorithm itself, even if we are talking about the simplest possible 
> > flowchart. Three marks on a piece of paper,
> > or three objects, might be the simplest possible representation of the 
> > number "3" but that is not the same as the number
> > "3".
> 
> Yes. I only said that what a computation really is, is something *like*
> a flowchart. The point is that what a computation really is doesn't
> require interpretation. It is just *that* particular construction
> of loops and branches, in the same way that a square is a
> four-sided figure.

Well, do computations require interpretation or don't they? Perhaps I haven't 
been quite clear on this either. I agree with you 
that "what a computation really is" doesn't require any interpretation. If it's 
the physical activity in a piece of matter then the 
same physical activity will obviously occur whether anyone observes it or not. 
If it's an abstraction like the number "3" then 
that abstraction will also be valid whether there are any people around to have 
the idea or not (which by the way is all I mean 
by saying that the number "3" exists in Platonia). If it's a useful 
computation, such as that occurring in an industrial robot 
shovelling coal, then it will only be useful if there is coal to be shovelled 
and perhaps a reason to shovel it. If it's an interesting 
computation such as a computer game or a spreadsheet it will only be interesting 
if there is someone around to appreciate it, although 
it will still be the same computation if it occurs without anyone to interpret 
it. But in the special case of a *conscious* computation, 
provided that its consciousness is not in some way contingent on interaction 
with others (eg. the computer automatically turns itself 
off or commits suicide if there is no-one around to talk to), then it is 
conscious regardless of what else happens in the universe, 
regardless of whether its designers are all dead, and regardless of who made it 
in the first place or why.

> > However, this does raise an important point about measure when every 
> > possible computation is implemented, eg.
> > as discussed in Russell Standish' book, some recent posts by Hal Finney, 
> > giving a rationale for why we are living in an
> > orderly universe described by a relatively simple set of physical laws, and 
> > why our conscious experience seems to derive
> > from brains rather than rocks.
> 
> Why should I worry about what happens when every computation is
> implemented, when there is no evidence for it?

There's no direct evidence, as there is no direct evidence of MWI, but as we 
have been debating, I think it is a consequence of 
the idea that consciousness is a computation, and the same computation 
implemented on a different substrate should lead to the 
same consciousness.

Stathis Papaioannou



Re: computationalism and supervenience

2006-09-15 Thread Brent Meeker

Stathis Papaioannou wrote:
> Colin Hales writes: 
> 
> 
>>Please consider the plight of the zombie scientist with a huge set of
>>sensory feeds and similar set of effectors. All carry similar signal
>>encoding and all, in themselves, bestow no experiential qualities on the
>>zombie.
>>
>>Add a capacity to detect regularity in the sensory feeds.
>>Add a scientific goal-seeking behaviour.
>>
>>Note that this zombie...
>>a) has the internal life of a dreamless sleep
>>b) has no concept or percept of body or periphery
>>c) has no concept that it is embedded in a universe.
>>
>>I put it to you that science (the extraction of regularity) is the science
>>of zombie sensory fields, not the science of the natural world outside the
>>zombie scientist. No amount of creativity (except maybe random choices)
>>would ever lead to any abstraction of the outside world that gave it the
>>ability to handle novelty in the natural world outside the zombie scientist.
>>
>>No matter how sophisticated the sensory feeds and any guesswork as to a
>>model (abstraction) of the universe, the zombie would eventually find
>>novelty invisible because the sensory feeds fail to depict the novelty, i.e.
>>same sensory feeds for different behaviour of the natural world.
>>
>>Technology built by a zombie scientist would replicate zombie sensory feeds,
>>not deliver an independently operating novel chunk of hardware with a
>>defined function (if the idea of function even has meaning in this instance).
>>
>>The purpose of consciousness is, IMO, to endow the cognitive agent with at
>>least a repeatable (not accurate!) simile of the universe outside the
>>cognitive agent so that novelty can be handled. Only then can the zombie
>>scientist detect arbitrary levels of novelty and do open ended science (or
>>survive in the wild world of novel environmental circumstance).
>>
>>In the absence of the functionality of phenomenal consciousness and with
>>finite sensory feeds you cannot construct any world-model (abstraction) in
>>the form of an innate (a-priori) belief system that will deliver an endless
>>ability to discriminate novelty. In a very Gödelian way eventually a limit
>>would be reached where the abstracted model could not make any prediction that
>>can be detected. The zombie is, in a very real way, faced with 'truths' that
>>exist but can't be accessed/perceived. As such its behaviour will be
>>fundamentally fragile in the face of novelty (just like all computer
>>programs are).
>>---
>>Just to make the zombie a little more real... consider the industrial
>>control system computer. I have designed, installed hundreds and wired up
>>tens (hundreds?) of thousands of sensors and an unthinkable number of
>>kilometers of cables. (NEVER again!) In all cases I put it to you that the
>>phenomenal content of sensory connections may, at best, be characterised as
>>whatever it is like to have electrons crash through wires, for that is what
>>is actually going on. As far as the internal life of the CPU is concerned...
>>whatever it is like to be an electrically noisy hot rock, regardless of the
>>program... although the character of the noise may alter with different
>>programs!
>>
>>I am a zombie expert! No that didn't come out right...erm
>>perhaps... "I think I might be a world expert in zombies" yes, that's
>>better.
>>:-)
>>Colin Hales
> 
> 
> I've had another think about this after reading the paper you sent me. It 
> seems that 
> you are making two separate claims. The first is that a zombie would not be 
> able to 
> behave like a conscious being in every situation: specifically, when called 
> upon to be 
> scientifically creative. If this is correct it would be a corollary of the 
> Turing test, i.e., 
> if it behaves as if it is conscious under every situation, then it's 
> conscious. However, 
> you are being quite specific in describing what types of behaviour could only 
> occur 
> in the setting of phenomenal consciousness. Could you perhaps be even more 
> specific 
> and give an example of the simplest possible behaviour or scientific theory 
> which an 
> unconscious machine would be unable to mimic?
> 
> The second claim is that a computer could only ever be a zombie, and 
> therefore could 
> never be scientifically creative. However, it is possible to agree with the 
> first claim and 
> reject this one. Perhaps if a computer were complex enough to truly mimic the 
> behaviour 
> of a conscious being, including being scientifically creative, then it would 
> indeed be 
> conscious. Perhaps our present computers are either unconscious because they 
> are too 
> primitive or they are indeed conscious, but at the very low end of a 
> consciousness 
> continuum, like single-celled organisms or organisms with relatively simple 
> nervous systems 
> like planaria.
> 
> Stathis Papaioannou

I even know some spirit dualists who allow that spirit might attach to 
sufficiently 
complex computers and hence
Re: computationalism and supervenience

2006-09-15 Thread 1Z


Stathis Papaioannou wrote:
> Peter Jones writes:
>
> > Stathis Papaioannou wrote:
> > > Peter Jones writes (quoting SP):
> > >
> > > > > > > I'm not sure how the multiverse comes into the discussion, but 
> > > > > > > you have
> > > > > > > made the point several times that a computation depends on an 
> > > > > > > observer
> > > > > >
> > > > > >
> > > > > > No, I haven't! I have tried ot follow through the consequences of
> > > > > > assuming it must.
> > > > > > It seems to me that some sort of absurdity or contradiction ensues.
> > > > >
> > > > > OK. This has been a long and complicated thread.
> > > > >
> > > > > > > for its meaning. I agree, but *if* computations can be conscious 
> > > > > > > (remember,
> > > > > > > this is an assumption) then in that special case an external 
> > > > > > > observer is not
> > > > > > > needed.
> > > > > >
> > > > > > Why not ? (Well, I would be quite happy that a conscious
> > > > > > computation would have some inherent structural property --
> > > > > > I want to foind out why *you* would think it doesn't).
> > > > >
> > > > > I think it goes against standard computationalism if you say that a 
> > > > > conscious
> > > > > computation has some inherent structural property.
> > >
> > > I should have said, that the *hardware* has some special structural 
> > > property goes
> > > against computationalism. It is difficult to pin down the "structure" of 
> > > a computation
> > > without reference to a programming language or hardware.
> >
> > It is far from impossible. If it keeps returning to the same state,
> > it is in a loop, for instance. I am sure that you are itching to
> > point out that loops can be made to appear or vanish by
> > re-interpretation. My point is that it is RE-interpretation. There is
> > a baseline set by what is true of a system under minimal
> > interpretation.
> >
> >  The idea is that the
> > > same computation can look completely different on different computers,
> >
> > Not *completely* different. There will be a mapping, and it will
> > be a lot simpler than one of your fanciful ones.
> >
> > > the corollary
> > > of which is that any computer (or physical process) may be implementing 
> > > any
> > > computation, we just might not know about it.
> >
> > That doesn't follow. The computational structure that a physical
> > system is "really" implementing is the computational structure that
> > can be reverse-engineered under a minimally complex interpretation.
> >
> > You *can* introduce more complex mappings, but you don't *have* to. It
> > is
> > an artificial problem.
>
> You may be able to show that a particular interpretation is the simplest one, 
> but it
> certainly doesn't have to be the only interpretation. Practical computers and 
> operating
> systems are deliberately designed to be more complex than they absolutely 
> need to be
> so that they can be backward compatible with older software, or so that it is 
> easier for
> humans to program them, troubleshoot etc.

Of course. That is all part of their functionality. All that means is
that if you reverse-engineer it, you conclude that "this is a programme
with debug code", or "this is an application with self-diagnostic
abilities". And you wouldn't be saying anything wrong.
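
The quoted point about loops is the kind of structural fact that really
can be read off under a minimal interpretation. A small sketch, assuming
only that the system's successive states can be recorded as hashable
values (Python, names invented here for illustration):

    # Minimal-interpretation loop detection: no dictionary assigning
    # meanings to states, just the bare observation that a state recurs.
    def first_loop(states):
        """Return (entry_index, period) of the first repeated state, else None."""
        seen = {}  # state -> index at which it first appeared
        for i, s in enumerate(states):
            if s in seen:
                return seen[s], i - seen[s]
            seen[s] = i
        return None

    # A trace that settles into a 3-cycle after two transient states:
    print(first_loop([7, 4, 1, 5, 9, 1, 5, 9, 1]))  # (2, 3)

No re-interpretation can make that recurrence go away; it can only layer
a different description on top of it.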

>  A COBOL program will do the same computation
> as the equivalent C program, on whatever machine it is run on.


Of course. The programme is "really" the algorithm, not the
code.
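
To make that concrete: below is one algorithm (Euclid's, chosen here only
as an example) written as two syntactically unrelated programmes. What
they share -- the structure of loops, branches and remainders -- is the
computation; the code is only notation.

    def gcd_iterative(a, b):
        # repeatedly replace (a, b) by (b, a mod b) until b is exhausted
        while b:
            a, b = b, a % b
        return a

    def gcd_recursive(a, b):
        # the same reduction, expressed recursively
        return a if b == 0 else gcd_recursive(b, a % b)

    assert gcd_iterative(1071, 462) == gcd_recursive(1071, 462) == 21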

> And I'm sure the physical
> activity that goes on in the human brain in order to add two numbers would 
> make the most
> psychotic designer of electronic computers seem simple and orderly by 
> comparison.

Of course, because humans add numbers together consciously. The
consciousness is part of the functionality. If it went missing during
the reverse-engineering process, *that* would be a problem.

> Stathis Papaioannou



Re: computationalism and supervenience

2006-09-15 Thread 1Z


Stathis Papaioannou wrote:
> Peter Jones writes:
>
> > Stathis Papaioannou wrote:
> > > Peter Jones writes:
> > >
> > > > > That's what I'm saying, but I certainly don't think everyone agrees 
> > > > > with me on the list, and
> > > > > I'm not completely decided as to which of the three is more absurd: 
> > > > > every physical system
> > > > > implements every conscious computation, no physical system implements 
> > > > > any conscious
> > > > > computation (they are all implemented non-physically in Platonia), or 
> > > > > the idea that a
> > > > > computation can be conscious in the first place.
> > > >
> > > >
> > > > You haven't made it clear why you don't accept that every physical
> > > > system
> > > > implements one computation, whether it is a
> > > > conscious computation or not. I don't see what
> > > > contradicts it.
> > >
> > > Every physical system does implement every computation, in a trivial 
> > > sense, as every rock
> > > is a hammer and a doorstop and contains a bust of Albert Einstein inside 
> > > it.
> >
> > The rock-hammer and the bust of Einstein are mere possibilities. You
> > don't have an argument to the effect that every physical system
> > implements every computation. Every physical system
> > could implement any computation under suitable re-interpretation,
> > but that is a mere possibility unless someone does the re-interpreting --
> > in which case it is in fact the system+interpreter combination that is
> > doing the re-interpreting.
>
> OK, but then you have the situation whereby a very complex, and to our mind 
> disorganised, conscious
> computer might be designed and built by aliens, then discovered by us after 
> the aliens have become
> extinct and their design blueprints, programming manuals and so on have all 
> been lost. We plug in the
> computer (all we can figure out about it is the voltage and current it needs 
> to run) and it starts whirring
> and flashing.  Although we have no idea what it's up to when it does this, 
> had we been the aliens, we
> would have been able to determine from observation that it was doing 
> philosophy or proving mathematical
> theorems. The point is, would we now say that it is *not* doing philosophy or 
> proving mathematical theorems
> because there are no aliens to observe it and interpret it?

Yes, and we would be correct, because the interpretation by the aliens
is a part of the process.
The "computer" we recover is only one component, a subroutine.

If you only recover part of an artifact, it is only natural that you
cannot necessarily figure out the function of the whole.

> You might say, the interpretation has still occurred in the initial design, 
> even though the designers are no
> more. But what if exactly the same physical computer had come about by 
> incredible accident, as a result of
> a storm bringing together the appropriate metal, semiconductors, insulators 
> etc.: if the purposely built computer
> were conscious, wouldn't its accidental twin also be conscious?

Interpretation is an activity. If the total system of
computer+interpretation is conscious, that *would* be true of an
accidental system, if the interpretational subsystem were accidentally
formed as well. Otherwise, not.

> Finally, reverse the last step: a "computer" is as a matter of fact thrown 
> together randomly from various
> components, but it is like no computer ever designed, and just seems to whir 
> and flash randomly. Given that there
> are no universal laws of computer design that everyone has to follow, isn't 
> it possible that some bizarre alien
> engineer *could* have put this strange machine together, so that its 
> seemingly random activity to that alien
> engineer would have been purposely designed to implement conscious 
> computation?

"To the alien engineer" means "interpreted by the alien
engineer". Interpretation is an activity, so it means additional
computaiton. All your
examples are of subsytems that *could* be conscious
if they were plugged into a specific larger system.

> And if so, is it any more
> reasonable to deny that this computer is conscious because its designer has 
> not yet been born than it is to deny
> that the first computer was conscious because its designer has died, or 
> because it was made accidentally rather
> than purposely built in a factory?


Interpretation is an activity. Possible designers and dictionaries
don't lead to actual
consciousness.

> Stathis Papaioannou

Re: computationalism and supervenience

2006-09-15 Thread 1Z


Stathis Papaioannou wrote:
> 
> > From: [EMAIL PROTECTED]
> > To: everything-list@googlegroups.com
> > Subject: Re: computationalism and supervenience
> > Date: Wed, 13 Sep 2006 04:43:54 -0700
> >
> >
> >
> > Stathis Papaioannou wrote:
> > > Peter Jones writes:
> > >
> > > > Stathis Papaioannou wrote:
> > > > > Brent meeker writes:
> > > > >
> > > > > > >>>I think it goes against standard computationalism if you say 
> > > > > > >>>that a conscious
> > > > > > >>>computation has some inherent structural property. Opponents of 
> > > > > > >>>computationalism
> > > > > > >>>have used the absurdity of the conclusion that anything 
> > > > > > >>>implements any conscious
> > > > > > >>>computation as evidence that there is something special and 
> > > > > > >>>non-computational
> > > > > > >>>about the brain. Maybe they're right.
> > > > > > >>>
> > > > > > >>>Stathis Papaioannou
> > > > > > >>
> > > > > > >>Why not reject the idea that any computation implements every 
> > > > > > >>possible computation
> > > > > > >>(which seems absurd to me)?  Then allow that only computations 
> > > > > > >>with some special
> > > > > > >>structure are conscious.
> > > > > > >
> > > > > > >
> > > > > > > It's possible, but once you start in that direction you can say 
> > > > > > > that only computations
> > > > > > > implemented on this machine rather than that machine can be 
> > > > > > > conscious. You need the
> > > > > > > hardware in order to specify structure, unless you can think of a 
> > > > > > > God-given programming
> > > > > > > language against which candidate computations can be measured.
> > > > > >
> > > > > > I regard that as a feature - not a bug. :-)
> > > > > >
> > > > > > Disembodied computation doesn't quite seem absurd - but our 
> > > > > > empirical sample argues
> > > > > > for embodiment.
> > > > > >
> > > > > > Brent Meeker
> > > > >
> > > > > I don't have a clear idea in my mind of disembodied computation 
> > > > > except in rather simple cases,
> > > > > like numbers and arithmetic. The number 5 exists as a Platonic ideal, 
> > > > > and it can also be implemented
> > > > > so we can interact with it, as when there is a collection of 5 
> > > > > oranges, or 3 oranges and 2 apples,
> > > > > or 3 pairs of oranges and 2 triplets of apples, and so on, in 
> > > > > infinite variety. The difficulty is that if we
> > > > > say that "3+2=5" as exemplified by 3 oranges and 2 apples is 
> > > > > conscious, then should we also say
> > > > > that the pairs+triplets of fruit are also conscious?
> > > >
> > > > No, they are only subroutines.
> > >
> > > But a computation is just a lot of subroutines; or equivalently, a 
> > > computation is just a subroutine in a larger
> > > computation or subroutine.
> >
> > The point is that the subroutine does not have the functionality of the
> > programme.
> >
> >
> > > > >  If so, where do we draw the line?
> > > >
> > > > At specific structures
> > >
> > > By "structures" do you mean hardware or software?
> >
> > Functional/algorithmic.
> >
> > Whatever software does is also done by hardware. Software is an
> > abstraction from hardware, not something additional.
> >
> > > I don't think it's possible to pin down software structures
> > > without reference to a particular machine and operating system. There is 
> > > no natural or God-given language.
> >
> > That isn't the point. I am not thinking of a programme as a sequence
> > of symbols. I am thinking of it as an abstract structure of branches
> > and loops, the sort of thing that is represented by a flowchart.
RE: computationalism and supervenience

2006-09-15 Thread Stathis Papaioannou

Brent Meeker writes:

> >>>We would understand it in a third person sense but not in a first person 
> >>>sense, except by analogy with our 
> >>>own first person experience. Consciousness is the difference between what 
> >>>can be known by observing an 
> >>>entity and what can be known by being the entity, or something like the 
> >>>entity, yourself. 
> >>>
> >>>Stathis Papaioannou
> >>
> >>But you are simply positing that there is such a difference.  That's easy 
> >>to do 
> >>because we know so little about how brains work.  But consider the engine 
> >>in your 
> >>car.  Do you know what it's like to be the engine in your car?  You know a 
> >>lot about 
> >>it, but how do you know that you know all of it?  Does that mean your car 
> >>engine is 
> >>conscious?  I'd say yes it is (at a very low level) and you *can* know what 
> >>it's like.
> > 
> > 
> > No, I don't know what it's like to be the engine in my car. I would guess 
> > it isn't like anything, but I might be wrong. 
> > If I am wrong, then my car engine may indeed be conscious, but in a 
> > completely alien way, which I cannot 
> > understand no matter how much I learn about car mechanics, because I am not 
> > myself a car engine. 
> 
> Then doesn't the same apply to your hypothetical conscious, but alien 
> computer whose
> interpretative manuals are all lost?

Certainly: it might be conscious, but I couldn't even guess at this without 
some understanding of how it worked 
or some ability to interact with it. However, that's my problem, not 
necessarily the computer's, which might be 
happily dreaming or philosophising.

> >I think 
> > the same would happen if we encountered an alien civilization. We would 
> > probably assume that they were 
> > conscious because we would observe that they exhibit intelligent behaviour, 
> > but only if by coincidence they 
> > had sensations, emotions etc. which reminded us of our own would we be able 
> > to guess what their conscious 
> > experience was actually like, and even then we would not be sure.
> 
> How could their inner experiences - sensations, emotions, etc - remind us of
> anything?  We don't have access to them.  It would have to be their 
> interactions with
> the world and us that would cause us to infer their inner experiences; just 
> as I
> infer when my dog is happy or fearful.

That's how you infer that your dog has feelings, but your dog's actually having 
the feelings is not contingent on 
your inferences, except insofar as its feelings change according to your 
reaction to them. However, you did write 
"happy" and "fearful" because they are feelings you understand. There may be a 
special emotion that dogs alone
experience when they are burying a bone and simultaneously hear a 30 KHz tone, 
for example, and we could never 
hope to understand what that is like no matter how much we study it 
scientifically.

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-15 Thread Stathis Papaioannou

Peter Jones writes:

> > What if the computer is built according to some ridiculously complex plan, 
> > plugged in, then all the engineers, manuals,
> > etc. disappear. If it was conscious to begin with, does it suddenly cease 
> > being conscious because no-one is able to
> > understand it?
> If it was conscious, it was conscious as the result
> of whatever computation it is performing, and that is *not* the
> computation resulting from a complex process of re-interpretation.
> As I have shown, such a process is a separate computation in
> which the "computer" figures as a subroutine -- possibly
> even an unimportant one.
> 
> It is also possible that the "computer" isn't conscious, but
> the total system of computer+reinterpretation is conscious.
> In that case, if the apparatus that implements the mapping is
> dismantled, the consciousness disappears.
> 
> The thing to remember is that just because one physical
> system is designated a computer, and another isn't, that doesn't
> mean the first system is in fact doing all the computing. The
> computing is taking place where the activity and the complexity
> are taking place.

You could perhaps consistently claim this, but it would be difficult to defend. 
The "interpretation" need not be a dynamic 
activity, like talking to your programmer or interacting with the environment 
via sensors and effectors. It could just be a 
potential interaction: I have the manual on my desk, and if I wanted to I could 
study it, take the case off my computer, 
attach an oscilloscope probe, and figure out what it is up to while it is 
plugged in. At what point does the computer 
gain consciousness: when I read the manual? When I have understood what I have 
read? When I attach the probe? Or 
was it conscious all along because I merely had the potential to understand it? 
What if someone sneaks in while I am on a 
break and maliciously alters the manual? What if I have a stroke and lose the 
ability to read, or the whole population of the 
Earth is wiped out by a supernova?

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-15 Thread Stathis Papaioannou

Peter Jones writes:
 
> Brent Meeker wrote:
> > Stathis Papaioannou wrote:
> > > Peter Jones writes:
> > >
> > >
> > >>>That's what I'm saying, but I certainly don't think everyone agrees with 
> > >>>me on the list, and
> > >>>I'm not completely decided as to which of the three is more absurd: 
> > >>>every physical system
> > >>>implements every conscious computation, no physical system implements 
> > >>>any conscious
> > >>>computation (they are all implemented non-physically in Platonia), or 
> > >>>the idea that a
> > >>>computation can be conscious in the first place.
> > >>
> > >>
> > >>You haven't made it clear why you don't accept that every physical
> > >>system
> > >>implements one computation, whether it is a
> > >>conscious computation or not. I don't see what
> > >>contradicts it.
> > >
> > >
> > > Every physical system does implement every computation, in a trivial 
> > > sense, as every rock
> > > is a hammer and a doorstop and contains a bust of Albert Einstein inside 
> > > it. Those three aspects
> > > of rocks are not of any consequence unless there is someone around to 
> > > appreciate them.
> > > Similarly, if the vibration of atoms in a rock under some complex mapping 
> > > are calculating pi
> > > that is not of any consequence unless someone goes to the trouble of 
> > > determining that mapping,
> > > and even then it wouldn't be of any use as a general purpose computer 
> > > unless you built another
> > > general purpose computer to dynamically interpret the vibrations (which 
> > > does not mean the rock
> > > isn't doing the calculation without this extra computer).
> >
> > I think there are some constraints on what the rock must be doing in order 
> > that it
> > can be said to be calculating pi instead of the interpreting computer.  For 
> > example
> > if the rock states were just 1,0,1,0,1,0... then there are several 
> > arguments based on
> > for example information theory that would rule out that being a computation 
> > of pi.
> 
> Stathis would no doubt say you just need a dictionary that says:
> 
> Let the first 1 be 3
> let the first 0 be 1
> let the second 1 be 4
> let the second 0 be 1
> let the third 1 be 5
> let the third 0 be 9
> ...
> 
> But there are good AIT reasons for saying that all the complexity is
> in the dictionary.

Yes, that's just what I would say. The only purpose served by the rock is to 
provide the real world 
dynamism part of the computation, even if it does this simply by mapping lines 
of code to the otherwise 
idle passage of time. The rock would be completely irrelevant but for this, and 
in fact Bruno's idea is that the 
rock (or whatever) *is* irrelevant, and the computation is implemented by 
virtue of its status as a Platonic 
object. It would then perhaps be more accurate to say that physical reality 
maps onto the computation, rather 
than the computation maps onto physical reality. I think this is more elegant 
than having useless chunks of 
matter implementing every computation, but I can't quite see a way to eliminate 
all matter, since the only 
empirical starting point we have is that *some* matter appears to implement 
some computations. 

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-15 Thread Stathis Papaioannou

Peter Jones writes:

> Stathis Papaioannou wrote:
> > Peter Jones writes:
> >
> > > > That's what I'm saying, but I certainly don't think everyone agrees 
> > > > with me on the list, and
> > > > I'm not completely decided as to which of the three is more absurd: 
> > > > every physical system
> > > > implements every conscious computation, no physical system implements 
> > > > any conscious
> > > > computation (they are all implemented non-physically in Platonia), or 
> > > > the idea that a
> > > > computation can be conscious in the first place.
> > >
> > >
> > > You haven't made it clear why you don't accept that every physical
> > > system
> > > implements one computation, whether it is a
> > > conscious computation or not. I don't see what
> > > contradicts it.
> >
> > Every physical system does implement every computation, in a trivial sense, 
> > as every rock
> > is a hammer and a doorstop and contains a bust of Albert Einstein inside it.
> 
> The rock-hammer and the bust of Einstein are mere possibilities. You
> don't
> have an argument to the effect that every physical system
> implements every computation. Every physical system
> could implement any computation under suitable re-interpretation,
> but that is a mere possibility unless someone does the re-interpreting,
> --
> in which case it is in fact the system+interpreter combination that is
> doing
> the re-interpreting.

OK, but then you have the situation whereby a very complex, and to our mind 
disorganised, conscious 
computer might be designed and built by aliens, then discovered by us after the 
aliens have become 
extinct and their design blueprints, programming manuals and so on have all 
been lost. We plug in the 
computer (all we can figure out about it is the voltage and current it needs to 
run) and it starts whirring 
and flashing.  Although we have no idea what it's up to when it does this, had 
we been the aliens, we 
would have been able to determine from observation that it was doing philosophy 
or proving mathematical 
theorems. The point is, would we now say that it is *not* doing philosophy or 
proving mathematical theorems 
because there are no aliens to observe it and interpret it? 

You might say, the interpretation has still occurred in the initial design, 
even though the designers are no 
more. But what if exactly the same physical computer had come about by 
incredible accident, as a result of 
a storm bringing together the appropriate metal, semiconductors, insulators 
etc.: if the purposely built computer 
were conscious, wouldn't its accidental twin also be conscious?

Finally, reverse the last step: a "computer" is as a matter of fact thrown 
together randomly from various 
components, but it is like no computer ever designed, and just seems to whir 
and flash randomly. Given that there 
are no universal laws of computer design that everyone has to follow, isn't it 
possible that some bizarre alien 
engineer *could* have put this strange machine together, so that its seemingly 
random activity to that alien 
engineer would have been purposely designed to implement conscious computation? 
And if so, is it any more 
reasonable to deny that this computer is conscious because its designer has not 
yet been born than it is to deny 
that the first computer was conscious because its designer has died, or because 
it was made accidentally rather 
than purposely built in a factory?

Stathis Papaioannou




RE: computationalism and supervenience

2006-09-15 Thread Stathis Papaioannou

Peter Jones writes:

> Stathis Papaioannou wrote:
> > Peter Jones writes (quoting SP):
> >
> > > > > > I'm not sure how the multiverse comes into the discussion, but you 
> > > > > > have
> > > > > > made the point several times that a computation depends on an 
> > > > > > observer
> > > > >
> > > > >
> > > > > No, I haven't! I have tried to follow through the consequences of
> > > > > assuming it must.
> > > > > It seems to me that some sort of absurdity or contradiction ensues.
> > > >
> > > > OK. This has been a long and complicated thread.
> > > >
> > > > > > for its meaning. I agree, but *if* computations can be conscious 
> > > > > > (remember,
> > > > > > this is an assumption) then in that special case an external 
> > > > > > observer is not
> > > > > > needed.
> > > > >
> > > > > Why not ? (Well, I would be quite happy that a conscious
> > > > > computation would have some inherent structural property --
> > > > I want to find out why *you* would think it doesn't).
> > > >
> > > > I think it goes against standard computationalism if you say that a 
> > > > conscious
> > > > computation has some inherent structural property.
> >
> > I should have said: that the *hardware* has some special structural 
> > property is what goes
> > against computationalism. It is difficult to pin down the "structure" of a 
> > computation
> > without reference to a programming language or hardware.
> 
> It is far from impossible. If it keeps returning to the same state,
> it is in a loop, for instance. I am sure that you are itching to point
> out
> that loops can be made to appear or vanish by re-interpretation.
> My point is that it is *re*-interpretation. There is a baseline
> set by what is true of a system under minimal interpretation.
> 
> > The idea is that the
> > same computation can look completely different on different computers,
> 
> Not *completely* different. There will be a mapping, and it will
> be a lot simpler than one of your fanciful ones.
> 
> > the corollary
> > of which is that any computer (or physical process) may be implementing any
> > computation, we just might not know about it.
> 
> That doesn't follow. The computational structure that a physical
> system is "really" implementing is the computational structure that
> can
> be reverse-engineered under a minimally complex interpretation.
> 
> You *can* introduce more complex mappings, but you don't *have* to. It
> is
> an artificial problem.

You may be able to show that a particular interpretation is the simplest one, 
but it 
certainly doesn't have to be the only interpretation. Practical computers and 
operating 
systems are deliberately designed to be more complex than they absolutely need 
to be 
so that they can be backward compatible with older software, or so that it is 
easier for 
humans to program them, troubleshoot etc. A COBOL program will do the same 
computation 
as the equivalent C program, on whatever machine it is run on. And I'm sure the 
physical 
activity that goes on in the human brain in order to add two numbers would make 
the most 
psychotic designer of electronic computers seem simple and orderly by 
comparison. 
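
To make that concrete, here is a toy sketch (Python, purely illustrative -- the 
code and names are mine, not anything anyone in this thread has written) of two 
procedures whose internal activity is utterly different but which implement the 
same computation: 

    def add_by_counting(a, b):
        # increment one step at a time, like sliding beads on an abacus
        total = a
        for _ in range(b):
            total += 1
        return total

    def add_bitwise(a, b):
        # XOR/carry arithmetic: a very different sequence of internal states
        while b:
            carry = a & b
            a = a ^ b
            b = carry << 1
        return a

    assert add_by_counting(2, 3) == add_bitwise(2, 3) == 5

Nothing in the raw state traces of the two procedures looks alike; they are 
"the same computation" only at the level of the input-output mapping. 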

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-15 Thread Stathis Papaioannou

> From: [EMAIL PROTECTED]
> To: everything-list@googlegroups.com
> Subject: Re: computationalism and supervenience
> Date: Wed, 13 Sep 2006 04:43:54 -0700
> 
> 
> 
> Stathis Papaioannou wrote:
> > Peter Jones writes:
> >
> > > Stathis Papaioannou wrote:
> > > > Brent meeker writes:
> > > >
> > > > > >>>I think it goes against standard computationalism if you say that 
> > > > > >>>a conscious
> > > > > >>>computation has some inherent structural property. Opponents of 
> > > > > >>>computationalism
> > > > > >>>have used the absurdity of the conclusion that anything implements 
> > > > > >>>any conscious
> > > > > >>>computation as evidence that there is something special and 
> > > > > >>>non-computational
> > > > > >>>about the brain. Maybe they're right.
> > > > > >>>
> > > > > >>>Stathis Papaioannou
> > > > > >>
> > > > > >>Why not reject the idea that any computation implements every 
> > > > > >>possible computation
> > > > > >>(which seems absurd to me)?  Then allow that only computations with 
> > > > > >>some special
> > > > > >>structure are conscious.
> > > > > >
> > > > > >
> > > > > > It's possible, but once you start in that direction you can say 
> > > > > > that only computations
> > > > > > implemented on this machine rather than that machine can be 
> > > > > > conscious. You need the
> > > > > > hardware in order to specify structure, unless you can think of a 
> > > > > > God-given programming
> > > > > > language against which candidate computations can be measured.
> > > > >
> > > > > I regard that as a feature - not a bug. :-)
> > > > >
> > > > > Disembodied computation doesn't quite seem absurd - but our empirical 
> > > > > sample argues
> > > > > for embodiment.
> > > > >
> > > > > Brent Meeker
> > > >
> > > > I don't have a clear idea in my mind of disembodied computation except 
> > > > in rather simple cases,
> > > > like numbers and arithmetic. The number 5 exists as a Platonic ideal, 
> > > > and it can also be implemented
> > > > so we can interact with it, as when there is a collection of 5 oranges, 
> > > > or 3 oranges and 2 apples,
> > > > or 3 pairs of oranges and 2 triplets of apples, and so on, in infinite 
> > > > variety. The difficulty is that if we
> > > > say that "3+2=5" as exemplified by 3 oranges and 2 apples is conscious, 
> > > > then should we also say
> > > > that the pairs+triplets of fruit are also conscious?
> > >
> > > No, they are only subroutines.
> >
> > But a computation is just a lot of subroutines; or equivalently, a 
> > computation is just a subroutine in a larger
> > computation or subroutine.
> 
> The point is that the subroutine does not have the functionality of the
> programme.
> 
> 
> > > >  If so, where do we draw the line?
> > >
> > > At specific structures
> >
> > By "structures" do you mean hardware or software?
> 
> Functional/algorithmic.
> 
> Whatever software does is also done by hardware. Software is an
> abstraction
> from hardware, not something additional.
> 
> > I don't think it's possible to pin down software structures
> > without reference to a particular machine and operating system. There is no 
> > natural or God-given language.
> 
> That isn't the point. I am not thinking of a programme as a
> sequence
> of symbols. I am thinking of it as an abstract structure of branches
> and loops,
> the sort of thing that is represented by a flowchart.
>
> > > > That is what I mean
> > > > when I say that any computation can map onto any physical system. The 
> > > > physical structure and activity
> > > > of computer A implementing program a may be completely different to 
> > > > that of computer B implementing
> > > > program b, but program b may be an emulation of program a, which should 
> &

RE: computationalism and supervenience

2006-09-15 Thread Stathis Papaioannou

Colin Hales writes: 

> Please consider the plight of the zombie scientist with a huge set of
> sensory feeds and similar set of effectors. All carry similar signal
> encoding and all, in themselves, bestow no experiential qualities on the
> zombie.
> 
> Add a capacity to detect regularity in the sensory feeds.
> Add a scientific goal-seeking behaviour.
> 
> Note that this zombie...
> a) has the internal life of a dreamless sleep
> b) has no concept or percept of body or periphery
> c) has no concept that it is embedded in a universe.
> 
> I put it to you that science (the extraction of regularity) is the science
> of zombie sensory fields, not the science of the natural world outside the
> zombie scientist. No amount of creativity (except maybe random choices)
> would ever lead to any abstraction of the outside world that gave it the
> ability to handle novelty in the natural world outside the zombie scientist.
> 
> No matter how sophisticated the sensory feeds and any guesswork as to a
> model (abstraction) of the universe, the zombie would eventually find
> novelty invisible because the sensory feeds fail to depict the novelty, i.e.
> same sensory feeds for different behaviour of the natural world.
> 
> Technology built by a zombie scientist would replicate zombie sensory feeds,
> not deliver an independently operating novel chunk of hardware with a
> defined function (if the idea of function even has meaning in this instance).
> 
> The purpose of consciousness is, IMO, to endow the cognitive agent with at
> least a repeatable (not accurate!) simile of the universe outside the
> cognitive agent so that novelty can be handled. Only then can the zombie
> scientist detect arbitrary levels of novelty and do open ended science (or
> survive in the wild world of novel environmental circumstance).
> 
> In the absence of the functionality of phenomenal consciousness and with
> finite sensory feeds you cannot construct any world-model (abstraction) in
> the form of an innate (a-priori) belief system that will deliver an endless
> ability to discriminate novelty. In a very Godellian way eventually a limit
> would be reach where the abstracted model could not make any prediction that
> can be detected. The zombie is, in a very real way, faced with 'truths' that
> exist but can't be accessed/perceived. As such its behaviour will be
> fundamentally fragile in the face of novelty (just like all computer
> programs are).
> ---
> Just to make the zombie a little more real... consider the industrial
> control system computer. I have designed and installed hundreds, and wired up
> tens (hundreds?) of thousands of sensors and an unthinkable number of
> kilometers of cables. (NEVER again!) In all cases I put it to you that the
> phenomenal content of sensory connections may, at best, be characterised as
> whatever it is like to have electrons crash through wires, for that is what
> is actually going on. As far as the internal life of the CPU is concerned...
> whatever it is like to be an electrically noisy hot rock, regardless of the
> program... although the character of the noise may alter with different
> programs!
> 
> I am a zombie expert! No that didn't come out right...erm
> perhaps... "I think I might be a world expert in zombies" yes, that's
> better.
> :-)
> Colin Hales

I've had another think about this after reading the paper you sent me. It seems 
that 
you are making two separate claims. The first is that a zombie would not be 
able to 
behave like a conscious being in every situation: specifically, when called 
upon to be 
scientifically creative. If this is correct it would be a corollary of the 
Turing test, i.e., 
if it behaves as if it is conscious under every situation, then it's conscious. 
However, 
you are being quite specific in describing what types of behaviour could only 
occur 
in the setting of phenomenal consciousness. Could you perhaps be even more 
specific 
and give an example of the simplest possible behaviour or scientific theory 
which an 
unconscious machine would be unable to mimic?

The second claim is that a computer could only ever be a zombie, and therefore 
could 
never be scientifically creative. However, it is possible to agree with the 
first claim and 
reject this one. Perhaps if a computer were complex enough to truly mimic the 
behaviour 
of a conscious being, including being scientifically creative, then it would 
indeed be 
conscious. Perhaps our present computers are either unconscious because they 
are too 
primitive or they are indeed conscious, but at the very low end of a 
consciousness 
continuum, like single-celled organisms or organisms with relatively simple 
nervous systems 
like planaria.

Stathis Papaioannou

Re: computationalism and supervenience

2006-09-14 Thread Brent Meeker

Stathis Papaioannou wrote:
> Brent meeker writes:
> 
> 
>>>We would understand it in a third person sense but not in a first person 
>>>sense, except by analogy with our 
>>>own first person experience. Consciousness is the difference between what 
>>>can be known by observing an 
>>>entity and what can be known by being the entity, or something like the 
>>>entity, yourself. 
>>>
>>>Stathis Papaioannou
>>
>>But you are simply positing that there is such a difference.  That's easy to 
>>do 
>>because we know so little about how brains work.  But consider the engine in 
>>your 
>>car.  Do you know what it's like to be the engine in your car?  You know a 
>>lot about 
>>it, but how do you know that you know all of it?  Does that mean your car 
>>engine is 
>>conscious?  I'd say yes it is (at a very low level) and you *can* know what 
>>it's like.
> 
> 
> No, I don't know what it's like to be the engine in my car. I would guess it 
> isn't like anything, but I might be wrong. 
> If I am wrong, then my car engine may indeed be conscious, but in a 
> completely alien way, which I cannot 
> understand no matter how much I learn about car mechanics, because I am not 
> myself a car engine. 

Then doesn't the same apply to your hypothetical conscious, but alien computer 
whose
interpretative manuals are all lost?

>I think 
> the same would happen if we encountered an alien civilization. We would 
> probably assume that they were 
> conscious because we would observe that they exhibit intelligent behaviour, 
> but only if by coincidence they 
> had sensations, emotions etc. which reminded us of our own would we be able 
> to guess what their conscious 
> experience was actually like, and even then we would not be sure.

How could their inner experiences - sensations, emotions, etc - remind us of
anything?  We don't have access to them.  It would have to be their 
interactions with
the world and us that would cause us to infer their inner experiences; just as I
infer when my dog is happy or fearful.

Brent Meeker





Re: computationalism and supervenience

2006-09-14 Thread Brent Meeker

Stathis Papaioannou wrote:
> Brent Meeker writes:
> 
> 
>>>I don't have a clear idea in my mind of disembodied computation except in 
>>>rather simple cases, 
>>>like numbers and arithmetic. The number 5 exists as a Platonic ideal, and it 
>>>can also be implemented 
>>>so we can interact with it, as when there is a collection of 5 oranges, or 3 
>>>oranges and 2 apples, 
>>>or 3 pairs of oranges and 2 triplets of apples, and so on, in infinite 
>>>variety. The difficulty is that if we 
>>>say that "3+2=5" as exemplified by 3 oranges and 2 apples is conscious, then 
>>>should we also say 
>>>that the pairs+triplets of fruit are also conscious? If so, where do we draw 
>>>the line? 
>>
>>I'm not sure I understand your example.  Are you saying that by simply 
>>existing, two 
>>apples and 3 oranges compute 2+3=5?  If so I would disagree.  I would say it 
>>is our 
>>comprehending them as individual objects and also as a set that is the 
>>computation. 
>>Just hanging there on the trees they may be "computing" apple hanging on a 
>>tree, 
>>apple hanging on a tree,... but they're not computing 2+3=5.
> 
> 
> What about my example in an earlier post of beads on an abacus? You can slide 
> 2 beads to the left, then another 
> 3 beads to the left, and count a total of 5 beads; or 2 pairs of beads and 3 
> pairs of beads and count a total of 5 
> pairs of beads, or any other variation. Perhaps it seems a silly example when 
> discussing consciousness, but the most 
> elaborate (and putatively conscious) computation can be reduced to a complex 
> bead-sliding exercise. And if sliding 
> beads computes 2+3=5, why not if 2 birds and then 3 birds happen to land on a 
> tree, or a flock of birds of which 2 
> are red lands on one tree and another flock of birds of which 3 are red lands 
> on an adjacent tree? It is true that these 
> birds and beads are not of much consequence computationally unless someone is 
> there to observe them and interpret 
> them, but what about the computer that is conscious chug-chugging away all on 
> its own? 

No it's not a silly example; it's just that it seems that you are hypothesizing 
that 
I am providing the computation by seeing the apples as a pair, by seeing the 
beads as 
a triple and a pair and then as a quintuple.  Above, this exchange began with 
you 
posing this as an example of a disembodied computation - but then the examples 
seem 
to depend on some (embodied) person witnessing them in order that they *be* 
computations.  I guess I'm not convinced that it makes sense to say that 
anything can 
be a computation; other than in the trivial sense that it's a "simulation" of 
itself. 
  I agree that there is a mapping to a computation - but in most cases the 
mapping is 
such that it seems more reasonable to say the computation is in the application 
of 
the mapping.  And I don't mean that the mapping is complex - a mapping from my 
brain 
states to yours would no doubt be very complex.  I think the characteristic 
that 
would allow us to say the thinking was not in the mapping is something like 
whether 
it was static (like a look-up table) and not too large in some sense.

> 
>>>That is what I mean 
>>>when I say that any computation can map onto any physical system. 
>>
>>But as you've noted before the computation is almost all in the mapping.  And 
>>not 
>>just in the map, but in the application of the map - which is something we 
>>do.  That 
>>action can't be abstracted away.  You can't just say there's a physical 
>>system and 
>>there's a manual that would map it into some computation and stop there as 
>>though the 
>>computation has been done.  The mapping, an action, still needs to be 
>>performed.
> 
> 
> What if the computer is built according to some ridiculously complex plan, 
> plugged in, then all the engineers, manuals, 
> etc. disappear. If it was conscious to begin with, does it suddenly cease 
> being conscious because no-one is able to 
> understand it? It could have been designed according to the radioactive decay 
> patterns of a sacred stone, in which 
> case without the documentation, its internal states might appear completely 
> random. With the documentation, it may be 
> possible to understand what it is doing or even interact with it, and you 
> have said previously that it is the potential for 
> interaction that allows it to be conscious, but does that mean it gradually 
> becomes less conscious as pages of the manual 
> are ripped out one by one and destroyed, even though the computer itself does 
> not change its activity as a result?
> 
> 
>>>The physical structure and activity 
>>>of computer A implementing program a may be completely different to that of 
>>>computer B implementing 
>>>program b, but program b may be an emulation of program a, which should make 
>>>the two machines 
>>>functionally equivalent and, under computationalism, equivalently conscious. 
>>
>>I don't see any problem with supposing that A and B are equally conscious (or
>>unconscious).

Re: computationalism and supervenience

2006-09-14 Thread 1Z


Stathis Papaioannou wrote:
> Brent meeker writes:
>

> > I don't recall anything about all computations implementing consciousness?
> >
> > Brent Meeker
>
> OK, this is the basis of our disagreement. I understood computationalism as 
> the idea that it is the
> actual computation that gives rise to consciousness. For example, if you have 
> a conscious robot
> shovelling coal, you could take the computations going on in the robot's 
> processor and run it on
> another similar computer with sham inputs and the same conscious experience 
> would result. And
> if the program runs on one computer, it can run on another computer with the 
> appropriate emulation
> software (the most general case of which is the UTM), which should also 
> result in the same conscious
> experience. I suppose it is possible that *actually shovelling the coal* is 
> essential for the coal-shovelling
> experience, and an emulation of that activity just wouldn't do it. However, 
> how can the robot tell the
> difference between the coal and the simulated coal, and how can it know if it 
> is running on Windows XP
> or Mac OS emulating Windows XP?


That has nothing to do with all computations implementing consciousness.





Re: computationalism and supervenience

2006-09-14 Thread 1Z


Stathis Papaioannou wrote:
> Brent Meeker writes:
>
> > > I don't have a clear idea in my mind of disembodied computation except in 
> > > rather simple cases,
> > > like numbers and arithmetic. The number 5 exists as a Platonic ideal, and 
> > > it can also be implemented
> > > so we can interact with it, as when there is a collection of 5 oranges, 
> > > or 3 oranges and 2 apples,
> > > or 3 pairs of oranges and 2 triplets of apples, and so on, in infinite 
> > > variety. The difficulty is that if we
> > > say that "3+2=5" as exemplified by 3 oranges and 2 apples is conscious, 
> > > then should we also say
> > > that the pairs+triplets of fruit are also conscious? If so, where do we 
> > > draw the line?
> >
> > I'm not sure I understand your example.  Are you saying that by simply 
> > existing, two
> > apples and 3 oranges compute 2+3=5?  If so I would disagree.  I would say 
> > it is our
> > comprehending them as individual objects and also as a set that is the 
> > computation.
> > Just hanging there on the trees they may be "computing" apple hanging on a 
> > tree,
> > apple hanging on a tree,... but they're not computing 2+3=5.
>
> What about my example in an earlier post of beads on an abacus? You can slide 
> 2 beads to the left, then another
> 3 beads to the left, and count a total of 5 beads; or 2 pairs of beads and 3 
> pairs of beads and count a total of 5
> pairs of beads, or any other variation. Perhaps it seems a silly example when 
> discussing consciousness, but the most
> elaborate (and putatively conscious) computation can be reduced to a complex 
> bead-sliding exercise. And if sliding
> beads computes 2+3=5, why not if 2 birds and then 3 birds happen to land on a 
> tree, or a flock of birds of which 2
> are red lands on one tree and another flock of birds of which 3 are red lands 
> on an adjacent tree? It is true that these
> birds and beads are not of much consequence computationally unless someone is 
> there to observe them and interpret
> them, but what about the computer that is conscious chug-chugging away all on 
> its own?
>
> > >That is what I mean
> > > when I say that any computation can map onto any physical system.
> >
> > But as you've noted before the computation is almost all in the mapping.  
> > And not
> > just in the map, but in the application of the map - which is something we 
> > do.  That
> > action can't be abstracted away.  You can't just say there's a physical 
> > system and
> > there's a manual that would map it into some computation and stop there as 
> > though the
> > computation has been done.  The mapping, an action, still needs to be 
> > performed.
>
> What if the computer is built according to some ridiculously complex plan, 
> plugged in, then all the engineers, manuals,
> etc. disappear. If it was conscious to begin with, does it suddenly cease 
> being conscious because no-one is able to
> understand it?
If it was conscious, it was conscious as the result
of whatever computation it is performing, and that is *not* the
computation resulting from a complex process of re-interpretation.
As I have shown, such a process is a separate computation in
which the "computer" figures as a subroutine -- possibly
even an unimportant one.

It is also possible that the "computer" isn't conscious, but
the total system of computer+reinterpretation is conscious.
In that case, if the apparatus that implements the mapping is
dismantled,
the consciousness disappears.

The thing to remember is that just because one physical
system is designated a computer, and another isn't, that doesn't
mean the first system is in fact doing all the computing. The
computing
is taking place where the activity and the complexity are taking place.

> It could have been designed according to the radioactive decay patterns of a 
> sacred stone, in which
> case without the documentation, its internal states might appear completely 
> random. With the documentation, it may be
> possible to understand what it is doing or even interact with it, and you 
> have said previously that it is the potential for
> interaction that allows it to be conscious, but does that mean it gradually 
> becomes less conscious as pages of the manual
> are ripped out one by one and destroyed, even though the computer itself does 
> not change its activity as a result?
>
> > >The physical structure and activity
> > > of computer A implementing program a may be completely different to that 
> > > of computer B implementing
> > > program b, but program b may be an emulation of program a, which should 
> > > make the two machines
> > > functionally equivalent and, under computationalism, equivalently 
> > > conscious.
> >
> > I don't see any problem with supposing that A and B are equally conscious 
> > (or
> > unconscious).
>
> But there is a mapping under which any machine B is emulating a machine A. 
> Figuring out this mapping does not change the
> physical activity of either A or B.

But without a physical a

RE: computationalism and supervenience

2006-09-14 Thread Stathis Papaioannou

Brent meeker writes:

> >>>I think it goes against standard computationalism if you say that a 
> >>>conscious
> >>>computation has some inherent structural property.
> > 
> > 
> > I should have said: that the *hardware* has some special structural 
> > property is what goes 
> > against computationalism. It is difficult to pin down the "structure" of a 
> > computation 
> > without reference to a programming language or hardware. The idea is that 
> > the 
> > same computation can look completely different on different computers, the 
> > corollary 
> > of which is that any computer (or physical process) may be implementing any 
> > computation, we just might not know about it. It is legitimate to say that 
> > only 
> > particular computers (eg. brains, or PC's) using particular languages are 
> > actually 
> > implementing conscious computations, but that is not standard 
> > computationalism.
> > 
> > Stathis Papaioannou
> 
> I thought standard computationalism was just the modest position that if the 
> hardware 
> of your brain were replaced piecemeal by units with the same input-output at 
> some 
> microscopic level usually assumed to be neurons, you'd still be you and you'd 
> still 
> be conscious.
> 
> I don't recall anything about all computations implementing consciousness?
> 
> Brent Meeker

OK, this is the basis of our disagreement. I understood computationalism as the 
idea that it is the 
actual computation that gives rise to consciousness. For example, if you have a 
conscious robot 
shovelling coal, you could take the computations going on in the robot's 
processor and run it on 
another similar computer with sham inputs and the same conscious experience 
would result. And 
if the program runs on one computer, it can run on another computer with the 
appropriate emulation 
software (the most general case of which is the UTM), which should also result 
in the same conscious 
experience. I suppose it is possible that *actually shovelling the coal* is 
essential for the coal-shovelling 
experience, and an emulation of that activity just wouldn't do it. However, how 
can the robot tell the 
difference between the coal and the simulated coal, and how can it know if it 
is running on Windows XP 
or Mac OS emulating Windows XP?
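
As a minimal sketch of what I mean by emulation (Python; the toy instruction 
set here is invented purely for illustration, not any real machine's): 

    def run(program, x):
        # a tiny stack machine: PUSH n, ADD, MUL
        stack = [x]
        for op, *args in program:
            if op == "PUSH":
                stack.append(args[0])
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
        return stack.pop()

    def emulate(program, x):
        # the same opcodes dispatched through an extra level of
        # indirection: different physical activity, same computation
        handlers = {
            "PUSH": lambda st, n: st.append(n),
            "ADD": lambda st: st.append(st.pop() + st.pop()),
            "MUL": lambda st: st.append(st.pop() * st.pop()),
        }
        stack = [x]
        for op, *args in program:
            handlers[op](stack, *args)
        return stack.pop()

    prog = [("PUSH", 3), ("ADD",), ("PUSH", 2), ("MUL",)]  # (x + 3) * 2
    assert run(prog, 4) == emulate(prog, 4) == 14

From the program's point of view there is no test that distinguishes the 
native run from the emulated one. 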

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-14 Thread Stathis Papaioannou

Brent meeker writes:

> > We would understand it in a third person sense but not in a first person 
> > sense, except by analogy with our 
> > own first person experience. Consciousness is the difference between what 
> > can be known by observing an 
> > entity and what can be known by being the entity, or something like the 
> > entity, yourself. 
> > 
> > Stathis Papaioannou
> 
> But you are simply positing that there is such a difference.  That's easy to 
> do 
> because we know so little about how brains work.  But consider the engine in 
> your 
> car.  Do you know what it's like to be the engine in your car?  You know a 
> lot about 
> it, but how do you know that you know all of it?  Does that mean your car 
> engine is 
> conscious?  I'd say yes it is (at a very low level) and you *can* know what 
> it's like.

No, I don't know what it's like to be the engine in my car. I would guess it 
isn't like anything, but I might be wrong. 
If I am wrong, then my car engine may indeed be conscious, but in a completely 
alien way, which I cannot 
understand no matter how much I learn about car mechanics, because I am not 
myself a car engine. I think 
the same would happen if we encountered an alien civilization. We would 
probably assume that they were 
conscious because we would observe that they exhibit intelligent behaviour, but 
only if by coincidence they 
had sensations, emotions etc. which reminded us of our own would we be able to 
guess what their conscious 
experience was actually like, and even then we would not be sure.

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-14 Thread Stathis Papaioannou

Brent Meeker writes:

> > I don't have a clear idea in my mind of disembodied computation except in 
> > rather simple cases, 
> > like numbers and arithmetic. The number 5 exists as a Platonic ideal, and 
> > it can also be implemented 
> > so we can interact with it, as when there is a collection of 5 oranges, or 
> > 3 oranges and 2 apples, 
> > or 3 pairs of oranges and 2 triplets of apples, and so on, in infinite 
> > variety. The difficulty is that if we 
> > say that "3+2=5" as exemplified by 3 oranges and 2 apples is conscious, 
> > then should we also say 
> > that the pairs+triplets of fruit are also conscious? If so, where do we 
> > draw the line? 
> 
> I'm not sure I understand your example.  Are you saying that by simply 
> existing, two 
> apples and 3 oranges compute 2+3=5?  If so I would disagree.  I would say it 
> is our 
> comprehending them as individual objects and also as a set that is the 
> computation. 
> Just hanging there on the trees they may be "computing" apple hanging on a 
> tree, 
> apple hanging on a tree,... but they're not computing 2+3=5.

What about my example in an earlier post of beads on an abacus? You can slide 2 
beads to the left, then another 
3 beads to the left, and count a total of 5 beads; or 2 pairs of beads and 3 
pairs of beads and count a total of 5 
pairs of beads, or any other variation. Perhaps it seems a silly example when 
discussing consciousness, but the most 
elaborate (and putatively conscious) computation can be reduced to a complex 
bead-sliding exercise. And if sliding 
beads computes 2+3=5, why not if 2 birds and then 3 birds happen to land on a 
tree, or a flock of birds of which 2 
are red lands on one tree and another flock of birds of which 3 are red lands 
on an adjacent tree? It is true that these 
birds and beads are not of much consequence computationally unless someone is 
there to observe them and interpret 
them, but what about the computer that is conscious chug-chugging away all on 
its own? 
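
A small sketch (Python, again purely illustrative; the grouping functions are 
arbitrary) of how much of the arithmetic lives in the grouping we impose on 
the events: 

    def interpret(events, grouping):
        # impose a grouping on undifferentiated events and read off an
        # addition; much of the "computation" lives in this step
        parts = [len(g) for g in grouping(events)]
        return parts, sum(parts)

    events = list(range(5))  # five bead-slides, or five birds

    print(interpret(events, lambda e: (e[:2], e[2:])))  # ([2, 3], 5)
    print(interpret(events, lambda e: (e[:1], e[1:])))  # ([1, 4], 5)

The same five events are "2+3=5" under one grouping and "1+4=5" under another, 
which is exactly the worry about where the computation resides. 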
 
> >That is what I mean 
> > when I say that any computation can map onto any physical system. 
> 
> But as you've noted before the computation is almost all in the mapping.  And 
> not 
> just in the map, but in the application of the map - which is something we 
> do.  That 
> action can't be abstracted away.  You can't just say there's a physical 
> system and 
> there's a manual that would map it into some computation and stop there as 
> though the 
> computation has been done.  The mapping, an action, still needs to be 
> performed.

What if the computer is built according to some ridiculously complex plan, 
plugged in, then all the engineers, manuals, 
etc. disappear. If it was conscious to begin with, does it suddenly cease being 
conscious because no-one is able to 
understand it? It could have been designed according to the radioactive decay 
patterns of a sacred stone, in which 
case without the documentation, its internal states might appear completely 
random. With the documentation, it may be 
possible to understand what it is doing or even interact with it, and you have 
said previously that it is the potential for 
interaction that allows it to be conscious, but does that mean it gradually 
becomes less conscious as pages of the manual 
are ripped out one by one and destroyed, even though the computer itself does 
not change its activity as a result?

> >The physical structure and activity 
> > of computer A implementing program a may be completely different to that of 
> > computer B implementing 
> > program b, but program b may be an emulation of program a, which should 
> > make the two machines 
> > functionally equivalent and, under computationalism, equivalently 
> > conscious. 
> 
> I don't see any problem with supposing that A and B are equally conscious (or 
> unconscious).

But there is a mapping under which any machine B is emulating a machine A. 
Figuring out this mapping does not change the 
physical activity of either A or B. You can argue that therefore the physical 
activity of A or B is irrelevant and consciousness 
is implemented non-corporeally by virtue of its existence as a Platonic object; 
or you can argue that this is clearly nonsense and 
consciousness is implemented as a result of some special physical property of a 
particular machine.

Stathis Papaioannou



Re: computationalism and supervenience

2006-09-13 Thread 1Z


Brent Meeker wrote:

> Didn't what?...decide we had acted freely?...noticed?

if we noticed our decisions at the same time as we made them.





Re: computationalism and supervenience

2006-09-13 Thread Brent Meeker

1Z wrote:
> 
> Brent Meeker wrote:
> 
>>Stathis Papaioannou wrote:
>>
>>>Peter Jones writes:
>>>
>>>
>>>
>That's what I'm saying, but I certainly don't think everyone agrees with 
>me on the list, and
>I'm not completely decided as to which of the three is more absurd: every 
>physical system
>implements every conscious computation, no physical system implements any 
>conscious
>computation (they are all implemented non-physically in Platonia), or the 
>idea that a
>computation can be conscious in the first place.


You haven't made it clear why you don't accept that every physical
system
implements one computation, whether it is a
conscious computation or not. I don't see what
contradicts it.
>>>
>>>
>>>Every physical system does implement every computation, in a trivial sense, 
>>>as every rock
>>>is a hammer and a doorstop and contains a bust of Albert Einstein inside it. 
>>>Those three aspects
>>>of rocks are not of any consequence unless there is someone around to 
>>>appreciate them.
>>>Similarly, if the vibration of atoms in a rock under some complex mapping 
>>>are calculating pi
>>>that is not of any consequence unless someone goes to the trouble of 
>>>determining that mapping,
>>>and even then it wouldn't be of any use as a general purpose computer unless 
>>>you built another
>>>general purpose computer to dynamically interpret the vibrations (which does 
>>>not mean the rock
>>>isn't doing the calculation without this extra computer).
>>
>>I think there are some constraints on what the rock must be doing in order 
>>that it
>>can be said to be calculating pi instead of the interpreting computer.  For 
>>example
>>if the rock states were just 1,0,1,0,1,0... then there are several arguments 
>>based on
>>for example information theory that would rule out that being a computation 
>>of pi.
> 
> 
> Stathis would no doubt say you just need a dictionary that says:
> 
> Let the first 1 be 3
> let the first 0 be 1
> let the second 1 be 4
> let the second 0 be 1
> let the third 1 be 5
> let the third 0 be 9
> ...

I don't think he would because he acceded to my point about isomorphism - 
although 
what's "iso" between two programs executing the same algorithm is a little hard 
to 
pin down.

Brent Meeker





Re: computationalism and supervenience

2006-09-13 Thread Brent Meeker

1Z wrote:
> 
> Stathis Papaioannou wrote:
> 
>>Thanks for the quotes from Dennett's "Freedom Evolves". The physiological 
>>experiments are interesting,
>>but the fact is, even if they can be shown to be flawed in some way, it would 
>>still be entirely consistent
>>with our behaviour and our subjective experience of free will if we acted 
>>first, then noticed what we had
>>done and decided we had acted freely.
> 
> 
> And it would be consistent with our behaviour and experience if we
> didn't.

Didn't what?...decide we had acted freely?...noticed?

Brent Meeker




Re: computationalism and supervenience

2006-09-13 Thread 1Z


Brent Meeker wrote:
> Stathis Papaioannou wrote:
> > Peter Jones writes:
> >
> >
> >>>That's what I'm saying, but I certainly don't think everyone agrees with 
> >>>me on the list, and
> >>>I'm not completely decided as to which of the three is more absurd: every 
> >>>physical system
> >>>implements every conscious computation, no physical system implements any 
> >>>conscious
> >>>computation (they are all implemented non-physically in Platonia), or the 
> >>>idea that a
> >>>computation can be conscious in the first place.
> >>
> >>
> >>You haven't made it clear why you don't accept that every physical
> >>system
> >>implements one computation, whether it is a
> >>conscious computation or not. I don't see what
> >>contradicts it.
> >
> >
> > Every physical system does implement every computation, in a trivial sense, 
> > as every rock
> > is a hammer and a doorstop and contains a bust of Albert Einstein inside 
> > it. Those three aspects
> > of rocks are not of any consequence unless there is someone around to 
> > appreciate them.
> > Similarly, if the vibration of atoms in a rock under some complex mapping 
> > are calculating pi
> > that is not of any consequence unless someone goes to the trouble of 
> > determining that mapping,
> > and even then it wouldn't be of any use as a general purpose computer 
> > unless you built another
> > general purpose computer to dynamically interpret the vibrations (which 
> > does not mean the rock
> > isn't doing the calculation without this extra computer).
>
> I think there are some constraints on what the rock must be doing in order 
> that it
> can be said to be calculating pi instead of the interpreting computer.  For 
> example
> if the rock states were just 1,0,1,0,1,0... then there are several arguments 
> based on
> for example information theory that would rule out that being a computation 
> of pi.

Stathis would no doubt say you just need a dictionary that says:

Let the first 1 be 3
let the first 0 be 1
let the second 1 be 4
let the second 0 be 1
let the third 1 be 5
let the third 0 be 9
...

But there are good AIT reasons for saying that all the complexity is
in the dictionary.
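
A crude sketch of that AIT point (Python; compressed size is only a rough 
stand-in for algorithmic complexity, and the example is mine): 

    import zlib

    rock = "10" * 1000    # the rock's alternating states
    pi40 = "3141592653589793238462643383279502884197"

    # the "dictionary" pairing each rock state with a digit of pi
    dictionary = list(zip(rock, pi40))

    def size(x):
        # compressed length as a rough complexity measure
        return len(zlib.compress(str(x).encode()))

    print(size(rock))         # small: the rock's states are regular
    print(size(pi40))         # nearly incompressible for its length
    print(size(dictionary))   # the digits of pi survive in the map

However simple the rock's states, the dictionary still has to carry the 
digits of pi. 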





Re: computationalism and supervenience

2006-09-13 Thread 1Z


Stathis Papaioannou wrote:
> Peter Jones writes:
>
> > > That's what I'm saying, but I certainly don't think everyone agrees with 
> > > me on the list, and
> > > I'm not completely decided as to which of the three is more absurd: every 
> > > physical system
> > > implements every conscious computation, no physical system implements any 
> > > conscious
> > > computation (they are all implemented non-physically in Platonia), or the 
> > > idea that a
> > > computation can be conscious in the first place.
> >
> >
> > You haven't made it clear why you don't accept that every physical
> > system
> > implements one computation, whether it is a
> > conscious computation or not. I don't see what
> > contradicts it.
>
> Every physical system does implement every computation, in a trivial sense, 
> as every rock
> is a hammer and a doorstop and contains a bust of Albert Einstein inside it.

The rock-hammer and the bust of Einstein are mere possibilities. You
don't
have an argument to the effect that every physical system
implements every computation. Every physical system
could implement any computation under suitable re-interpretation,
but that is a mere possibility unless someone does the re-interpreting,
--
in which case it is in fact the system+interpreter combination that is
doing
the re-interpreting.


>  Those three aspects
> of rocks are not of any consequence unless there is someone around to 
> appreciate them.
> Similarly, if the vibration of atoms in a rock under some complex mapping are 
> calculating pi
> that is not of any consequence unless someone goes to the trouble of 
> determining that mapping,
> and even then it wouldn't be of any use as a general purpose computer unless 
> you built another
> general purpose computer to dynamically interpret the vibrations (which does 
> not mean the rock
> isn't doing the calculation without this extra computer). However, if busts 
> of Einstein were conscious
> regardless of the excess rock around them, or calculations of pi were 
> conscious regardless of the
> absence of anyone being able to appreciate them, then the existence of the 
> rock in an otherwise
> empty universe would necessitate the existence of at least those two 
> conscious processes.

That's a big "if".

> Computationalism says that some computations are conscious.

They mean, of course, *actual* computations.

>  It is also a general principle of
> computer science that equivalent computations can be implemented on very 
> different hardware
> and software platforms;

That doesn't mean that one system is implementing multiple
computations simultaneously.

> by extension, the vibration of atoms in a rock can be seen as implementing
> any computation under the right interpretation.

i.e. the vibration+interpreter system could implement any computation.

> Normally, it is of no consequence that a rock
> implements all these computations.

It doesn't, in itself. It needs an interpreter.

> But if some of these computations are conscious (a consequence
> of computationalism)

Some of them *would* be conscious *if* there were an interpreter
available. Otherwise they are mere possibilities. And if the
physical process is simple, and the interpreter complex, it
is reasonable to suppose the interpreter is doing most of the work
and therefore has most of the consciousness.

> and if some of the conscious computations are conscious in the absence of
> environmental input, then every rock is constantly implementing all these 
> conscious computations.

No. That's a "would be", not an "is".

> To get around this you would have to deny that computations can be conscious, 
> or at least restrict
> the conscious computations to specific hardware platforms and programming 
> languages.

No I don't. All I have to do is point out that if something is
interpreter-dependent,
it doesn't exist in the absence of an interpreter.

>  This destroys
> computationalism, although it can still allow a form of functionalism.
> The other way to go is to reject
> the supervenience thesis and keep computationalism, which would mean that 
> every computation
> (includidng the conscious ones) is implemented necessarily in the absence of 
> any physical process.



> Stathis Papaioannou



Re: computationalism and supervenience

2006-09-13 Thread 1Z


Stathis Papaioannou wrote:
> Thanks for the quotes from Dennett's "Freedom Evolves". The physiological 
> experiments are interesting,
> but the fact is, even if they can be shown to be flawed in some way, it would 
> still be entirely consistent
> with our behaviour and our subjective experience of free will if we acted 
> first, then noticed what we had
> done and decided we had acted freely.

And it would be consistent with our behaviour and experience if we
didn't.





Re: computationalism and supervenience

2006-09-13 Thread 1Z


Stathis Papaioannou wrote:
> Peter Jones writes (quoting SP):
>
> > > > > I'm not sure how the multiverse comes into the discussion, but you 
> > > > > have
> > > > > made the point several times that a computation depends on an observer
> > > >
> > > >
> > > > No, I haven't! I have tried to follow through the consequences of
> > > > assuming it must.
> > > > It seems to me that some sort of absurdity or contradiction ensues.
> > >
> > > OK. This has been a long and complicated thread.
> > >
> > > > > for its meaning. I agree, but *if* computations can be conscious 
> > > > > (remember,
> > > > > this is an assumption) then in that special case an external observer 
> > > > > is not
> > > > > needed.
> > > >
> > > > Why not ? (Well, I would be quite happy that a conscious
> > > > computation would have some inherent structural property --
> > > > I want to find out why *you* would think it doesn't).
> > >
> > > I think it goes against standard computationalism if you say that a 
> > > conscious
> > > computation has some inherent structural property.
>
> I should have said, that the *hardware* has some special structural property 
> goes
> against computationalism. It is difficult to pin down the "structure" of a 
> computation
> without reference to a programming language or hardware.

It is far from impossible. If it keeps returning to the same state,
it is in a loop, for instance. I am sure that you are itching to point
out
that loops can be made to appear or vanish by re-interpretation.
My point is that it is RE-interpretation. There is a baseline
set by what is true of a system under minimal interpretation.
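
A minimal sketch of such a baseline test (Python; the names are
illustrative, not from any post in this thread): whether a trajectory
of states revisits a state is a fact about the trajectory under the
identity mapping, before any re-interpretation is brought in.

    def find_cycle(states):
        # Record the first index at which each state appears;
        # a repeat means the trajectory has entered a loop.
        seen = {}
        for i, s in enumerate(states):
            if s in seen:
                return seen[s], i - seen[s]   # (first visit, period)
            seen[s] = i
        return None                           # no state revisited

    # An oscillating system is looping under the minimal (identity) reading:
    print(find_cycle(["A", "B", "C", "B", "C"]))   # -> (1, 2)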

> The idea is that the
> same computation can look completely different on different computers,

Not *completely* different. There will be a mapping, and it will
be a lot simpler than one of your fanciful ones.

> the corollary
> of which is that any computer (or physical process) may be implementing any
> computation, we just might not know about it.

That doesn't follow. The computational structure that a physical
system is "really" implementing is the computational structure that
can be reverse-engineered under a minimally complex interpretation.

You *can* introduce more complex mappings, but you don't *have* to. It
is an artificial problem.

>  It is legitimate to say that only
> particular computers (eg. brains, or PC's) using particular languages are 
> actually
> implementing conscious computations, but that is not standard 
> computationalism.
>
> Stathis Papaioannou





Re: computationalism and supervenience

2006-09-13 Thread 1Z


Stathis Papaioannou wrote:
> Peter Jones writes:
>
> > Stathis Papaioannou wrote:
> > > Brent meeker writes:
> > >
> > > > >>>I think it goes against standard computationalism if you say that a 
> > > > >>>conscious
> > > > >>>computation has some inherent structural property. Opponents of 
> > > > >>>computationalism
> > > > >>>have used the absurdity of the conclusion that anything implements 
> > > > >>>any conscious
> > > > >>>computation as evidence that there is something special and 
> > > > >>>non-computational
> > > > >>>about the brain. Maybe they're right.
> > > > >>>
> > > > >>>Stathis Papaioannou
> > > > >>
> > > > >>Why not reject the idea that any computation implements every 
> > > > >>possible computation
> > > > >>(which seems absurd to me)?  Then allow that only computations with 
> > > > >>some special
> > > > >>structure are conscious.
> > > > >
> > > > >
> > > > > It's possible, but once you start in that direction you can say that 
> > > > > only computations
> > > > > implemented on this machine rather than that machine can be 
> > > > > conscious. You need the
> > > > > hardware in order to specify structure, unless you can think of a 
> > > > > God-given programming
> > > > > language against which candidate computations can be measured.
> > > >
> > > > I regard that as a feature - not a bug. :-)
> > > >
> > > > Disembodied computation doesn't quite seem absurd - but our empirical 
> > > > sample argues
> > > > for embodiment.
> > > >
> > > > Brent Meeker
> > >
> > > I don't have a clear idea in my mind of disembodied computation except in 
> > > rather simple cases,
> > > like numbers and arithmetic. The number 5 exists as a Platonic ideal, and 
> > > it can also be implemented
> > > so we can interact with it, as when there is a collection of 5 oranges, 
> > > or 3 oranges and 2 apples,
> > > or 3 pairs of oranges and 2 triplets of apples, and so on, in infinite 
> > > variety. The difficulty is that if we
> > > say that "3+2=5" as exemplified by 3 oranges and 2 apples is conscious, 
> > > then should we also say
> > > that the pairs+triplets of fruit are also conscious?
> >
> > No, they are only subroutines.
>
> But a computation is just a lot of subroutines; or equivalently, a 
> computation is just a subroutine in a larger
> computation or subroutine.

The point is that the subroutine does not have the functionality of the
programme.


> > >  If so, where do we draw the line?
> >
> > At specific structures
>
> By "structures" do you mean hardware or software?

Functional/algorithmic.

Whatever software does is also done by hardware. Software is an
abstraction
from hardware, not something additional.

> I don't think it's possible to pin down software structures
> without reference to a particular machine and operating system. There is no 
> natural or God-given language.

That isn't the point. I am not thinking of a programme as a
sequence
of symbols. I am thinking of it as an abstract structure of branches
and loops,
the sort of thing that is represented by a flowchart.
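
To make that concrete, a sketch (Python; the graph and names are
purely illustrative) of a countdown loop as a flowchart: nodes are
abstract steps, edges are control transfers, and nothing in it
mentions a language or a machine.

    flowchart = {
        "init": [("test", "n := 5")],
        "test": [("body", "if n > 0"), ("halt", "if n == 0")],
        "body": [("test", "n := n - 1")],   # the back-edge that makes it a loop
        "halt": [],
    }

    def reachable(graph, start):
        # Standard depth-first reachability over the flowchart.
        stack, seen = [start], set()
        while stack:
            for succ, _ in graph[stack.pop()]:
                if succ not in seen:
                    seen.add(succ)
                    stack.append(succ)
        return seen

    # A node that can reach itself lies on a loop:
    print([n for n in flowchart if n in reachable(flowchart, n)])   # ['test', 'body']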

> > > That is what I mean
> > > when I say that any computation can map onto any physical system. The 
> > > physical structure and activity
> > > of computer A implementing program a may be completely different to that 
> > > of computer B implementing
> > > program b, but program b may be an emulation of program a, which should 
> > > make the two machines
> > > functionally equivalent and, under computationalism, equivalently 
> > > conscious.
> >
> > So ? If the functional equivalence doesn't depend on a
> > baroque-reinterpretation,
> > where is the problem ?
>
> Who interprets the meaning of "baroque"?

There are objective ways of deciding that kind of issue, e.g.
algorithmic information
theory.
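
One crude but concrete version of that test (a Python sketch;
compressed length is only a stand-in for Kolmogorov complexity, which
is uncomputable): tabulate each candidate interpretation of the same
256 device states and compare description costs.

    import random, zlib

    def description_cost(mapping):
        # Compressed size of the lookup table, as a rough proxy for the
        # algorithmic complexity of the interpretation.
        return len(zlib.compress(repr(sorted(mapping.items())).encode(), 9))

    simple = {s: s for s in range(256)}        # read each state as itself
    values = list(range(256))
    random.Random(0).shuffle(values)
    baroque = dict(zip(range(256), values))    # an arbitrary re-interpretation

    print(description_cost(simple) < description_cost(baroque))   # True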

> > > Maybe this is wrong, eg.
> > > there is something special about the insulation in the wires of machine 
> > > A, so that only A can be conscious.
> > > But that is no longer computationalism.
> >
> > No. But what would force that conclusion on us ? Why can't
> > consciousness
> > attach to features more general than hardware, but less general than one
> > of your re-interpretations ?
>
> Because there is no natural or God-given computer architecture or language.

Which is precisely why computationalists should regard
consciousness as supervening on a functional structure that could
be implemented on a variety of hardware platforms and in a variety
of languages. After all, we can talk about a "quicksort" without
specifying whether it is a PC quicksort or a Mac quicksort, and
without specifying
whether it is a Pascal quicksort or a Java quicksort.

The "right" level of abstraction is very much a part of standard
computer science.
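
For instance (a Python rendering, one of arbitrarily many equivalent
ones):

    def quicksort(xs):
        # Partition around a pivot and recurse: the structure that makes
        # this "a quicksort" does not depend on the machine or the language.
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        return (quicksort([x for x in rest if x < pivot])
                + [pivot]
                + quicksort([x for x in rest if x >= pivot]))

    print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]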

>  You could say that consciousness
> does follow a natural architecture: that of the brain. But that could mean 
> you would have a zombie if you tried
> to copy brain function with a digital computer, or with a digital computer 
> not running Mr. Gates' operating system.

I

Re: computationalism and supervenience

2006-09-13 Thread 1Z


Stathis Papaioannou wrote:
> Peter Jones writes:
>
> > If consciousness supervenes on inherent non-interpretation-dependent
> > features,
> > it can supervene on features which are binary, either present or
> > absent.
> >
> > For instance, whether a programme examines or modifies its own code is
> > surely
> > such a feature.
> >
> >
> > >Even if computationalism were false and only those machines
> > > specially blessed by God were conscious there would have to be a 
> > > continuum, across
> > > different species and within the lifespan of an individual from birth to 
> > > death. The possibility
> > > that consciousness comes on like a light at some point in your life, or 
> > > at some point in the
> > > evolution of a species, seems unlikely to me.
> >
> > Surely it comes on like a light whenever you wake up.
>
> Being alive/dead or conscious/unconscious would seem to be a binary property, 
> but it's
> hard to believe (though not impossible) that there would be one circuit, 
> neuron or line of
> code that makes the difference between conscious and unconscious.

It's easy to believe there is one line of code that makes the
difference between 
 a spreadsheet and a BSOD.
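
A toy version of the point (Python; the "spreadsheet" is
hypothetical): one guard line separates a program that computes from
one that crashes.

    def cell(grid, row, col):
        # Delete the next two lines and out-of-range lookups raise an
        # exception instead of returning an empty cell.
        if row >= len(grid) or col >= len(grid[row]):
            return 0.0
        return grid[row][col]

    sheet = [[1.0, 2.0], [3.0, 4.0]]
    print(cell(sheet, 5, 5))   # 0.0 with the guard; IndexError without it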





RE: computationalism and supervenience

2006-09-13 Thread Stathis Papaioannou

Thanks for the quotes from Dennett's "Freedom Evolves". The physiological 
experiments are interesting, 
but the fact is, even if they can be shown to be flawed in some way, it would 
still be entirely consistent 
with our behaviour and our subjective experience of free will if we acted 
first, then noticed what we had 
done and decided we had acted freely. 

As for FW being a combination of random and determined, it has to be, doesn't 
it? What else is there? 
Roulette wheels also have to be a combination of random and determined, and if 
you can pin down the 
"determined" part, you could be rich.

Stathis Papaioannou




> From: [EMAIL PROTECTED]
> To: everything-list@googlegroups.com
> Subject: Re: computationalism and supervenience
> Date: Tue, 12 Sep 2006 09:15:12 -0700
> 
> 
> 
> Stathis Papaioannou wrote:
> > Brent Meeker writes:
> >
> > > >>I could make a robot that, having suitable thermocouples, would quickly 
> > > >>withdraw its
> > > >>hand from a fire; but not be conscious of it.  Even if I provide the 
> > > >>robot with
> > > >>"feelings", i.e. judgements about good/bad/pain/pleasure I'm not sure 
> > > >>it would be
> > > >>conscious.  But if I provide it with "attention" and memory, so that it 
> > > >>noted the
> > > >>painful event as important and necessary to remember because of its 
> > > >>strong negative
> > > >>affect; then I think it would be conscious.
> > > >
> > > >
> > > > It's interesting that people actually withdraw their hand from the fire 
> > > > *before* they experience
> > > > the pain. The withdrawl is a reflex, presumably evolved in organisms 
> > > > with the most primitive
> > > > central nervour systems, while the pain seems to be there as an 
> > > > afterthought to teach us a
> > > > lesson so we won't do it again. Thus, from consideration of 
> > > > evolutionary utility consciousness
> > > > does indeed seem to be a side-effect of memory and learning.
> > >
> > > Even more curious, volitional action also occurs before one is aware of 
> > > it. Are you
> > > familiar with the experiments of Benjamin Libet and Grey Walter?
> >
> > These experiments showed that in apparently voluntarily initiated motion, 
> > motor cortex activity
> > actually preceded the subject's awareness of his intention by a substantial 
> > fraction of a second.
> > In other words, we act first, then "decide" to act.
> 
> Does Benjamin Libet's Research Empirically Disprove Free Will?
> Scientifically informed sceptics about FW often quote a famous
> experiment by Benjamin Libet, which supposedly shows that a kind of
> signal called a "Readiness Potential", detectable by electrodes,
> precedes conscious decisions, and is a reliable indicator of the
> decision, and thus -- so the claim goes -- indicates that our decisions
> are not ours but made for us by unconscious processes.
> 
> In fact, Libet himself doesn't draw a sweepingly sceptical conclusion
> from his own results. For one thing, Readiness Potentials are not
> always followed by actions. He believes it is possible for
> consciousness to intervene with a "veto" on the action:
> 
> "The initiation of the freely voluntary act appears to begin in the
> brain unconsciously, well before the person consciously knows he wants
> to act! Is there, then, any role for conscious will in the performing
> of a voluntary act?...To answer this it must be recognised that
> conscious will (W) does appear about 150milliseconds before the muscsle
> is activated, even though it follows the onset ofthe RP. An interval of
> 150msec would allow enough time in which the conscious function might
> affec the final outcome of the volitional process."
> 
> (Libet, quoted in "Freedom Evolves" by Daniel Dennett, p. 230 )
> 
> "This suggests our conscious minds may not have free will but
> rather free won't!"
> 
> (V.S Ramachandran, quoted in "Freedom Evolves" by Daniel Dennett, p.
> 231 )
> 
> However, it is quite possible that the Libertarian doesn't need to
> appeal to "free won't" to avoid the conclusion that free will doesn't
> exist.
> 
> Libet tells when the RP occurs using electrodes. But how does Libet
> tell when conscious decision-making occurs? He relies on the subject
> reporting the position

RE: computationalism and supervenience

2006-09-12 Thread Colin Hales

Brent Meeker:
> 
> Colin Hales wrote:
> ...
> 
> >> As far as the internal life of the CPU is
> >>concerned...
> >>>whatever it is like to be an electrically noisy hot rock, regardless of
> >>the
> >>>program... although the character of the noise may alter with different
> >>>programs!
> >
> >>That's like saying whatever it is like to be you, it is at best some waves
> of
> >>chemical
> >>potential.  You don't *know* that the control system is not conscious -
> >>unless you
> >>know what structure or function makes a system conscious.
> >>
> >
> >
> > There is nothing there except wires and electrically noisy hot rocks,
> > plastic and other materials = .
> 
> Just like me.  Nothing but proteins and osmotic potentials and ATP and ADP
> = .

Well... not quite... The  you talk about is behaving ly. All
except the neurons and astrocytes. They are behaving as 2 things... there is
virtual matter being generated. But this is just my (albeit well-founded,
IMO) prejudice, so if you don't want to believe it then it's all behaving
ly and only ly.

> 
> >Whatever its consciousness is... it
> > is the consciousness of the . The function
> 
> Which function?

WORD, EXCEL, IE, etc. If I run WORD (in contrast to EXCEL) I think the
noise in the chip might be different... although the ratio of WORD noise to
WINDOWS noise (it is a time slice/event driven operating system, after all)
is hard to know.

> 
> > is an epiphenomenon at the
> > scale of a human user
> 
> Who's the user of my brain?

An implicit/inherent user in the situation of you as a collection of
electromagnetic phenomena extruded from the space you inhabit.

> 
> Brent Meeker
> 
> >that has nothing to do with the experiential qualities
> > of being the computer.
> 
> What are the experiential qualities of being a computer?

At the moment all I can say is a likelihood... that it is not like anything,
ever, because there is no virtual matter being generated. There may be some
associated with the capacitances in the electronics, but until I analyse it
properly I won't know.

> and how can we
> know them?

By sorting out virtual matter.
(see my post on this of a couple of weeks back)
See above.
> 
> Brent Meeker
> 

You do love these odd questions, don't you? :-)

Colin






RE: computationalism and supervenience

2006-09-12 Thread Stathis Papaioannou

Peter Jones writes:
 
> Stathis Papaioannou wrote:
> > Brent meeker writes:
> >
> > > >>>I think it goes against standard computationalism if you say that a 
> > > >>>conscious
> > > >>>computation has some inherent structural property. Opponents of 
> > > >>>computationalism
> > > >>>have used the absurdity of the conclusion that anything implements any 
> > > >>>conscious
> > > >>>computation as evidence that there is something special and 
> > > >>>non-computational
> > > >>>about the brain. Maybe they're right.
> > > >>>
> > > >>>Stathis Papaioannou
> > > >>
> > > >>Why not reject the idea that any computation implements every possible 
> > > >>computation
> > > >>(which seems absurd to me)?  Then allow that only computations with 
> > > >>some special
> > > >>structure are conscious.
> > > >
> > > >
> > > > It's possible, but once you start in that direction you can say that 
> > > > only computations
> > > > implemented on this machine rather than that machine can be conscious. 
> > > > You need the
> > > > hardware in order to specify structure, unless you can think of a 
> > > > God-given programming
> > > > language against which candidate computations can be measured.
> > >
> > > I regard that as a feature - not a bug. :-)
> > >
> > > Disembodied computation doesn't quite seem absurd - but our empirical 
> > > sample argues
> > > for embodiment.
> > >
> > > Brent Meeker
> >
> > I don't have a clear idea in my mind of disembodied computation except in 
> > rather simple cases,
> > like numbers and arithmetic. The number 5 exists as a Platonic ideal, and 
> > it can also be implemented
> > so we can interact with it, as when there is a collection of 5 oranges, or 
> > 3 oranges and 2 apples,
> > or 3 pairs of oranges and 2 triplets of apples, and so on, in infinite 
> > variety. The difficulty is that if we
> > say that "3+2=5" as exemplified by 3 oranges and 2 apples is conscious, 
> > then should we also say
> > that the pairs+triplets of fruit are also conscious?
> 
> No, they are only subroutines.

But a computation is just a lot of subroutines; or equivalently, a computation 
is just a subroutine in a larger 
computation or subroutine.
 
> >  If so, where do we draw the line?
> 
> At specific structures

By "structures" do you mean hardware or software? I don't think it's possible 
to pin down software structures 
without reference to a particular machine and operating system. There is no 
natural or God-given language.
 
> > That is what I mean
> > when I say that any computation can map onto any physical system. The 
> > physical structure and activity
> > of computer A implementing program a may be completely different to that of 
> > computer B implementing
> > program b, but program b may be an emulation of program a, which should 
> > make the two machines
> > functionally equivalent and, under computationalism, equivalently conscious.
> 
> So ? If the functional equivalence doesn't depend on a
> baroque-reinterpretation,
> where is the problem ?

Who interprets the meaning of "baroque"?
 
> > Maybe this is wrong, eg.
> > there is something special about the insulation in the wires of machine A, 
> > so that only A can be conscious.
> > But that is no longer computationalism.
> 
> No. But what would force that conclusion on us ? Why can't
> consciousness
> attach to features more general than hardware, but less general than one
> of your re-interpretations ?

Because there is no natural or God-given computer architecture or language. You 
could say that consciousness 
does follow a natural architecture: that of the brain. But that could mean you 
would have a zombie if you tried 
to copy brain function with a digital computer, or with a digital computer not 
running Mr. Gates' operating system.

Stathis Papaioannou



Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

Colin Hales wrote:
...

>> As far as the internal life of the CPU is
>>
>>concerned...
>>
>>>whatever it is like to be an electrically noisy hot rock, regardless of
>>
>>the
>>
>>>program... although the character of the noise may alter with different
>>>programs!
>>
> 
>>That's like saying whatever it is like to be you, it is at best some waves of
>>chemical
>>potential.  You don't *know* that the control system is not conscious -
>>unless you
>>know what structure or function makes a system conscious.
>>
> 
> 
> There is nothing there except wires and electrically noisy hot rocks,
> plastic and other materials = . 

Just like me.  Nothing but proteins and osmotic potentials and ATP and ADP = 
.

>Whatever its consciousness is... it
> is the consciousness of the . The function

Which function?

> is an epiphenomenon at the
> scale of a human user 

Who's the user of my brain?

Brent Meeker

>that has nothing to do with the experiential qualities
> of being the computer.

What are the experiential qualities of being a computer? and how can we know 
them?

Brent Meeker




RE: computationalism and supervenience

2006-09-12 Thread Stathis Papaioannou


Peter Jones writes:

> If consciousness supervenes on inherent non-interpretation-dependent
> features,
> it can supervene on features which are binary, either present or
> absent.
> 
> For instance, whether a programme examines or modifies its own code is
> surely
> such a feature.
> 
> 
> >Even if computationalism were false and only those machines
> > specially blessed by God were conscious there would have to be a continuum, 
> > across
> > different species and within the lifespan of an individual from birth to 
> > death. The possibility
> > that consciousness comes on like a light at some point in your life, or at 
> > some point in the
> > evolution of a species, seems unlikely to me.
> 
> Surely it comes on like a light whenever you wake up.

Being alive/dead or conscious/unconscious would seem to be a binary property, 
but it's 
hard to believe (though not impossible) that there would be one circuit, neuron 
or line of 
code that makes the difference between conscious and unconscious.
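
Peter's candidate binary feature above -- a program that examines its
own code -- is at least easy to exhibit (a Python sketch; it needs to
be run from a file so the source lookup works):

    import hashlib
    import inspect

    def self_audit():
        # The program reads its own source text: a feature it either
        # has or lacks, whatever any outside interpreter says.
        source = inspect.getsource(self_audit)
        return hashlib.sha256(source.encode()).hexdigest()

    if __name__ == "__main__":
        print("my own code hashes to", self_audit()[:16])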

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-12 Thread Colin Hales

Brent Meeker:
> 
> Colin Hales wrote:
> >
> > Stathis Papaioannou
> > 
> >
> >>Maybe this is a copout, but I just don't think it is even logically
> >>possible to explain what consciousness
> >>*is* unless you have it. It's like the problem of explaining vision to a
> >>blind man: he might be the world's
> >>greatest scientific expert on it but still have zero idea of what it is
> >>like to see - and that's even though
> >>he shares most of the rest of his cognitive structure with other humans,
> >>and can understand analogies
> >>using other sensations. Knowing what sort of program a conscious
> computer
> >>would have to run to be
> >>conscious, what the purpose of consciousness is, and so on, does not
> help
> >>me to understand what the
> >>computer would be experiencing, except by analogy with what I myself
> >>experience.
> >>
> >>Stathis Papaioannou
> >>
> >
> >
> > Please consider the plight of the zombie scientist with a huge set of
> > sensory feeds and similar set of effectors. All carry similar signal
> > encoding and all, in themselves, bestow no experiential qualities on the
> > zombie.
> >
> > Add a capacity to detect regularity in the sensory feeds.
> > Add a scientific goal-seeking behaviour.
> >
> > Note that this zombie...
> > a) has the internal life of a dreamless sleep
> > b) has no concept or percept of body or periphery
> > c) has no concept that it is embedded in a universe.
> >
> > I put it to you that science (the extraction of regularity) is the
> science
> > of zombie sensory fields, not the science of the natural world outside
> the
> > zombie scientist. No amount of creativity (except maybe random choices)
> > would ever lead to any abstraction of the outside world that gave it the
> > ability to handle novelty in the natural world outside the zombie
> scientist.
> >
> > No matter how sophisticated the sensory feeds and any guesswork as to a
> > model (abstraction) of the universe, the zombie would eventually find
> > novelty invisible because the sensory feeds fail to depict the novelty
> i.e.
> > same sensory feeds for different behaviour of the natural world.
> >
> > Technology built by a zombie scientist would replicate zombie sensory
> feeds,
> > not deliver an independently operating novel chunk of hardware with a
> > defined function (if the idea of function even has meaning in this
> instance).
> >
> > The purpose of consciousness is, IMO, to endow the cognitive agent with
> at
> > least a repeatable (not accurate!) simile of the universe outside the
> > cognitive agent so that novelty can be handled. Only then can the zombie
> > scientist detect arbitrary levels of novelty and do open ended science
> (or
> > survive in the wild world of novel environmental circumstance).
> 

> Almost all organisms have become extinct.  Handling *arbitrary* levels of
> novelty is probably too much to ask of any species; and it's certainly
> more than is necessary to survive for millenia.

I am talking purely about scientific behaviour, not general behaviour. A
creature with limited learning capacity and phenomenal scenes could quite
happily live in an ecological niche until the niche changed. I am not asking
any creature other than a scientist to be able to appreciate arbitrary
levels of novelty.

> 
> >
> > In the absence of the functionality of phenomenal consciousness and with
> > finite sensory feeds you cannot construct any world-model (abstraction)
> in
> > the form of an innate (a-priori) belief system that will deliver an
> endless
> > ability to discriminate novelty. In a very Gödelian way eventually a
> limit
> > would be reached where the abstracted model could not make any prediction
> that
> > can be detected.
> 
> So that's how we got string theory!
> 
> >The zombie is, in a very real way, faced with 'truths' that
> > exist but can't be accessed/perceived. As such its behaviour will be
> > fundamentally fragile in the face of novelty (just like all computer
> > programs are).
> 

> How do you know we are so robust.  Planck said, "A new idea prevails, not
> by the
> conversion of adherents, but by the retirement and demise of opponents."
> In other
> words only the young have the flexibility to adopt new ideas.  Ironically
> Planck
> never really believed quantum mechanics was more than a calculational
> trick.

The robustness is probably in that science is actually, at the level of
critical argument (like this, now), a super-organism.

In retrospect I think QM will be regarded as a side effect of the desperate
attempt to mathematically abstract appearances rather than deal with the
structure that is behaving quantum-mechanically. After the event they'll all
be going... "what were we thinking!" It won't be wrong... just not useful
in the sense that any of its considerations are not about underlying
structure.


> 
> > ---
> > Just to make the zombie a little more real... consider the industrial
> > control system computer. I have designed,

Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

Stathis Papaioannou wrote:
> Peter Jones writes:
> 
> 
>>>That's what I'm saying, but I certainly don't think everyone agrees with me 
>>>on the list, and
>>>I'm not completely decided as to which of the three is more absurd: every 
>>>physical system
>>>implements every conscious computation, no physical system implements any 
>>>conscious
>>>computation (they are all implemented non-physically in Platonia), or the 
>>>idea that a
>>>computation can be conscious in the first place.
>>
>>
>>You haven't made it clear why you don't accept that every physical
>>system
>>implements one computation, whether it is a
>>conscious computation or not. I don't see what
>>contradicts it.
> 
> 
> Every physical system does implement every computation, in a trivial sense, 
> as every rock 
> is a hammer and a doorstop and contains a bust of Albert Einstein inside it. 
> Those three aspects 
> of rocks are not of any consequence unless there is someone around to 
> appreciate them. 
> Similarly, if the vibration of atoms in a rock under some complex mapping are 
> calculating pi 
> that is not of any consequence unless someone goes to the trouble of 
> determining that mapping, 
> and even then it wouldn't be of any use as a general purpose computer unless 
> you built another 
> general purpose computer to dynamically interpret the vibrations (which does 
> not mean the rock 
> isn't doing the calculation without this extra computer). 

I think there are some constraints on what the rock must be doing in order
that it, rather than the interpreting computer, can be said to be calculating
pi. For example, if the rock states were just 1,0,1,0,1,0... then there are
several arguments, based for example on information theory, that would rule
out that being a computation of pi.
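
That argument can be run numerically (a Python sketch; the digit
generator is Gibbons' unbounded spigot, and compressed length stands
in for information content): a periodic state sequence carries far too
little information to be the digits of pi.

    import zlib
    from itertools import islice

    def pi_digits():
        # Gibbons' unbounded spigot algorithm for the decimal digits of pi.
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4*q + r - t < n*t:
                yield n
                q, r, n = 10*q, 10*(r - n*t), (10*(3*q + r))//t - 10*n
            else:
                q, r, t, k, n, l = (q*k, (2*q + r)*l, t*l, k + 1,
                                    (q*(7*k + 2) + r*l)//(t*l), l + 2)

    periodic = "10" * 500                                          # the rock's 1,0,1,0,...
    pi_1000 = "".join(str(d) for d in islice(pi_digits(), 1000))   # first 1000 digits

    print(len(zlib.compress(periodic.encode())))   # a dozen or so bytes
    print(len(zlib.compress(pi_1000.encode())))    # hundreds of bytes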

>However, if busts of Einstein were conscious 
> regardless of the excess rock around them, or calculations of pi were 
> conscious regardless of the 
> absence of anyone being able to appreciate them, then the existence of the 
> rock in an otherwise 
> empty universe would necessitate the existence of at least those two 
> conscious processes. 
> 
> Computationalism says that some computations are conscious. It is also a 
> general principle of 
> computer science that equivalent computations can be implemented on very 
> different hardware 
> and software platforms; by extension, the vibration of atoms in a rock can be 
> seen as implementing 
> any computation under the right interpretation. Normally, it is of no 
> consequence that a rock 
> implements all these computations. But if some of these computations are 
> conscious (a consequence 
> of computationalism) 

It's not a consequence of my more modest idea of computationalism.

>and if some of the conscious computations are conscious in the absence of 
> environmental input, then every rock is constantly implementing all these 
> conscious computations. 
> To get around this you would have to deny that computations can be conscious, 
> or at least restrict 
> the conscious computations to specific hardware platforms and programming 
> languages. 

Why not some more complex and subtle criterion based on the computation?  Why 
just 
hardware or language - both of which seem easy to rule out as definitive of 
consciousness or even computation?

Brent Meeker




Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

Stathis Papaioannou wrote:
> 
> Peter Jones writes:
>  
> 
>>Stathis Papaioannou wrote:
>>
>>>Peter Jones writes:
>>>
>>>
Stathis Papaioannou wrote:


>>>Now, suppose some more complex variant of 3+2=5 implemented on your 
>>>abacus has consciousness associated with it, which is just one of the 
>>>tenets of computationalism. Some time later, you are walking in the 
>>>Amazon rain forest and notice that
>>>under a certain mapping
>>
>>
>>>of birds to beads and trees to wires, the forest is implementing the 
>>>same computation as your abacus was. So if your abacus was conscious, 
>>>and computationalism is true, the tree-bird sytem should also be 
>>>conscious.
>>
>>Not necessarily, because the mapping is required too. Why should
>>it still be conscious if no-one is around to make the mapping?
>
>Are you claiming that a conscious machine stops being conscious if its 
>designers die
>and all the information about how it works is lost?

You are, if anyone is. I don't agree that computations *must* be
interpreted,
although they *can* be re-interpreted.
>>>
>>>What I claim is this:
>>>
>>>A computation does not *need* to be interpreted, it just is. However, a 
>>>computation
>>>does need to be interpreted, or interact with its environment in some way, 
>>>if it is to be
>>>interesting or meaningful.
>>
>>A computation other than the one you are running needs to be
>>interpreted by you
>>to be meaningful to you. The computation you are running is useful
>>to you because it keeps you alive.
>>
>>
>>>By analogy, a string of characters is a string of characters
>>>whether or not anyone interprets it, but it is not interesting or meaningful 
>>>unless it is
>>>interpreted. But if a computation, or for that matter a string of 
>>>characters, is conscious,
>>>then it is interesting and meaningful in at least one sense in the absence 
>>>of an external
>>>observer: it is interesting and meaningful to itself. If it were not, then 
>>>it wouldn't be
>>>conscious. The conscious things in the world have an internal life, a first 
>>>person
>>>phenomenal experience, a certain ineffable something, whatever you want to 
>>>call it,
>>>while the unconscious things do not. That is the difference between them.
>>
>>Which they manage to be aware of without the existence of an external
>>observer,
>>so one of your premises must be wrong.
> 
> 
> No, that's exactly what I was saying all along. An observer is needed for 
> meaningfulness, 
> but consciousness provides its own observer. A conscious entity may interact 
> with its 
> environment, and in fact that would have to be the reason consciousness 
> evolved (nature 
> is not self-indulgent), but the interaction is not logically necessary for 
> consciousness.

But it may be nomologically necessary.  "Not logically necessary" is the 
weakest 
standard of non-necessity that is still coherent; the only things less 
necessary are 
incoherent.

Brent Meeker




Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

Stathis Papaioannou wrote:
> Peter Jones writes (quoting SP):
> 
> 
>I'm not sure how the multiverse comes into the discussion, but you have
>made the point several times that a computation depends on an observer


No, I haven't! I have tried to follow through the consequences of
assuming it must.
It seems to me that some sort of absurdity or contradiction ensues.
>>>
>>>OK. This has been a long and complicated thread.
>>>
>>>
>for its meaning. I agree, but *if* computations can be conscious (remember,
>this is an assumption) then in that special case an external observer is 
>not
>needed.

Why not ? (Well, I would be quite happy that a conscious
computation would have some inherent structural property --
I want to find out why *you* would think it doesn't).
>>>
>>>I think it goes against standard computationalism if you say that a conscious
>>>computation has some inherent structural property.
> 
> 
> I should have said: that the *hardware* has some special structural property 
> goes 
> against computationalism. It is difficult to pin down the "structure" of a 
> computation 
> without reference to a programming language or hardware. The idea is that the 
> same computation can look completely different on different computers, the 
> corollary 
> of which is that any computer (or physical process) may be implementing any 
> computation, we just might not know about it. It is legitimate to say that 
> only 
> particular computers (eg. brains, or PC's) using particular languages are 
> actually 
> implementing conscious computations, but that is not standard 
> computationalism.
> 
> Stathis Papaioannou

I thought standard computationalism was just the modest position that if the 
hardware 
of your brain were replaced piecemeal by units with the same input-output at 
some 
microscopic level usually assumed to be neurons, you'd still be you and you'd 
still 
be conscious.

I don't recall anything about all computations implementing consciousness?

Brent Meeker




RE: computationalism and supervenience

2006-09-12 Thread Stathis Papaioannou

Peter Jones writes:

> > That's what I'm saying, but I certainly don't think everyone agrees with me 
> > on the list, and
> > I'm not completely decided as to which of the three is more absurd: every 
> > physical system
> > implements every conscious computation, no physical system implements any 
> > conscious
> > computation (they are all implemented non-physically in Platonia), or the 
> > idea that a
> > computation can be conscious in the first place.
> 
> 
> You haven't made it clear why you don't accept that every physical
> system
> implements one computation, whether it is a
> conscious computation or not. I don't see what
> contradicts it.

Every physical system does implement every computation, in a trivial sense, as 
every rock 
is a hammer and a doorstop and contains a bust of Albert Einstein inside it. 
Those three aspects 
of rocks are not of any consequence unless there is someone around to 
appreciate them. 
Similarly, if the vibration of atoms in a rock under some complex mapping are 
calculating pi 
that is not of any consequence unless someone goes to the trouble of 
determining that mapping, 
and even then it wouldn't be of any use as a general purpose computer unless 
you built another 
general purpose computer to dynamically interpret the vibrations (which does 
not mean the rock 
isn't doing the calculation without this extra computer). However, if busts of 
Einstein were conscious 
regardless of the excess rock around them, or calculations of pi were conscious 
regardless of the 
absence of anyone being able to appreciate them, then the existence of the rock 
in an otherwise 
empty universe would necessitate the existence of at least those two conscious 
processes. 

Computationalism says that some computations are conscious. It is also a 
general principle of 
computer science that equivalent computations can be implemented on very 
different hardware 
and software platforms; by extension, the vibration of atoms in a rock can be 
seen as implementing 
any computation under the right interpretation. Normally, it is of no 
consequence that a rock 
implements all these computations. But if some of these computations are 
conscious (a consequence 
of computationalism) and if some of the conscious computations are conscious in 
the absence of 
environmental input, then every rock is constantly implementing all these 
conscious computations. 
To get around this you would have to deny that computations can be conscious, 
or at least restrict 
the conscious computations to specific hardware platforms and programming 
languages. This destroys 
computationalism, although it can still allow a form of functionalism. The 
other way to go is to reject 
the supervenience thesis and keep computationalism, which would mean that every 
computation 
(including the conscious ones) is implemented necessarily in the absence of 
any physical process.
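
The "right interpretation" in the rock case can be spelled out
explicitly (a Python sketch with invented state names): given any run
of distinct physical states, a lookup table makes them "implement"
whatever computation you like, which is exactly why the table, not the
rock, is doing the work.

    rock_states = ["vib_017", "vib_402", "vib_233", "vib_090"]   # hypothetical distinct states
    target_trace = ["fetch", "decode", "execute", "halt"]        # steps of any chosen computation

    # The mapping is just the tabulated correspondence:
    interpretation = dict(zip(rock_states, target_trace))

    # Reading the rock through the table reproduces the target, trivially:
    assert [interpretation[s] for s in rock_states] == target_trace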

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-12 Thread Stathis Papaioannou


Peter Jones writes:
 
> Stathis Papaioannou wrote:
> > Peter Jones writes:
> >
> > > Stathis Papaioannou wrote:
> > >
> > > > > > Now, suppose some more complex variant of 3+2=5 implemented on your 
> > > > > > abacus has consciousness associated with it, which is just one of 
> > > > > > the tenets of computationalism. Some time later, you are walking in 
> > > > > > the Amazon rain forest and notice that
> > > > > > under a certain mapping
> > > > >
> > > > >
> > > > > > of birds to beads and trees to wires, the forest is implementing 
> > > > > > the same computation as your abacus was. So if your abacus was 
> > > > > > conscious, and computationalism is true, the tree-bird system should 
> > > > > > also be conscious.
> > > > >
> > > > > Not necessarily, because the mapping is required too. Why should
> > > > > it still be conscious if no-one is around to make the mapping.
> > > >
> > > > Are you claiming that a conscious machine stops being conscious if its 
> > > > designers die
> > > > and all the information about how it works is lost?
> > >
> > > You are, if anyone is. I don't agree that computations *must* be
> > > interpreted,
> > > although they *can* be re-interpreted.
> >
> > What I claim is this:
> >
> > A computation does not *need* to be interpreted, it just is. However, a 
> > computation
> > does need to be interpreted, or interact with its environment in some way, 
> > if it is to be
> > interesting or meaningful.
> 
> A computation other than the one you are running needs to be
> interpreted by you
> to be meaningful to you. The computation you are running is useful
> to you because it keeps you alive.
> 
> > By analogy, a string of characters is a string of characters
> > whether or not anyone interprets it, but it is not interesting or 
> > meaningful unless it is
> > interpreted. But if a computation, or for that matter a string of 
> > characters, is conscious,
> > then it is interesting and meaningful in at least one sense in the absence 
> > of an external
> > observer: it is interesting and meaningful to itself. If it were not, then 
> > it wouldn't be
> > conscious. The conscious things in the world have an internal life, a first 
> > person
> > phenomenal experience, a certain ineffable something, whatever you want to 
> > call it,
> > while the unconscious things do not. That is the difference between them.
> 
> Which they manage to be aware of without the existence of an external
> observer,
> so one of your premises must be wrong.

No, that's exactly what I was saying all along. An observer is needed for 
meaningfulness, 
but consciousness provides its own observer. A conscious entity may interact 
with its 
environment, and in fact that would have to be the reason consciousness evolved 
(nature 
is not self-indulgent), but the interaction is not logically necessary for 
consciousness.

Stathis Papaioannou




RE: computationalism and supervenience

2006-09-12 Thread Stathis Papaioannou

Peter Jones writes (quoting SP):

> > > > I'm not sure how the multiverse comes into the discussion, but you have
> > > > made the point several times that a computation depends on an observer
> > >
> > >
> > > No, I haven't! I have tried to follow through the consequences of
> > > assuming it must.
> > > It seems to me that some sort of absurdity or contradiction ensues.
> >
> > OK. This has been a long and complicated thread.
> >
> > > > for its meaning. I agree, but *if* computations can be conscious 
> > > > (remember,
> > > > this is an assumption) then in that special case an external observer 
> > > > is not
> > > > needed.
> > >
> > > Why not ? (Well, I would be quite happy that a conscious
> > > computation would have some inherent structural property --
> > > I want to find out why *you* would think it doesn't).
> >
> > I think it goes against standard computationalism if you say that a 
> > conscious
> > computation has some inherent structural property.

I should have said: that the *hardware* has some special structural property 
goes 
against computationalism. It is difficult to pin down the "structure" of a 
computation 
without reference to a programming language or hardware. The idea is that the 
same computation can look completely different on different computers, the 
corollary 
of which is that any computer (or physical process) may be implementing any 
computation, we just might not know about it. It is legitimate to say that only 
particular computers (eg. brains, or PC's) using particular languages are 
actually 
implementing conscious computations, but that is not standard computationalism.

Stathis Papaioannou



Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

1Z wrote:
...
> Dennett's idea of "stored" conscious volition is quite in line with our
> theory. Indeed, we would like to extend it in a way that Dennett does
> not. We would like to extend it to stored indeterminism. Any decision
> we make in exigent situations where we do not have the luxury of
> considered thought must be more-or-less deterministic -- must be
> more-or-less determined by our state of mind at the time -- if they are
> to be of any use at all to us. Otherwise we might as well toss a coin.
> But our state of mind at the time can be formed by rumination, training
> and so on over a long period, perhaps over a lifetime. As such it can
> contain elements of indeterminism in the positive sense -- of
> imagination and creativity, not mere caprice.

Right.  Even if it's determined, it's determined by who we are.

> 
> This extension of Dennett's criticism of Libet (or rather the way
> Libet's results are used by free-will sceptics) gives us a way of
> answering Dennett's own criticisms of Robert Kane, a prominent defender
> of naturalistic Free Will.

I didn't refer to Libet and Grey Walter as refuting free will - I was well 
aware of 
Dennett's writings (and Stathis probably is too). But I think they show that the 
conscious feeling of making a decision and actually making the decision are 
different 
things; that most of a decision making is unconscious.  Which is exactly what 
you 
would expect based on a model of a computer logging it's own decisions.  I 
actually 
found Grey Walter's experiments more convincing that Libet's.  It's too bad 
they 
aren't likely to be repeated.

Brent Meeker




Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

Stathis Papaioannou wrote:
> Brent meeker writes:
> 
> 
>>Stathis Papaioannou wrote:
>>
>>>Peter Jones writes:
>>>
>>>
>>>
Stathis Papaioannou wrote:



>Like Bruno, I am not claiming that this is definitely the case, just that 
>it is the case if
>computationalism is true. Several philosophers (eg. Searle) have used the 
>self-evident
>absurdity of the idea as an argument demonstrating that computationalism 
>is false -
>that there is something non-computational about brains and consciousness. 
>I have not
>yet heard an argument that rejects this idea and saves computationalism.

[ rolls up sleeves ]

The idea is easily refuted if it can be shown that computation doesn't
require
interpretation at all. It can also be refuted more circuitously by
showing that
computation is not entirely a matter of interpretation. In everythingism,
everything
is equal. If some computations (the ones that don't depend on
interpretation) are
"more equal than others", the way is still open for the Somethingist
to object
that interpretation-independent computations are really real, and the
others are
mere possibilities.

The claim has been made that computation is "not much use" without an
interpretation.
Well, if you define a computer as something that is used by a human,
that is true.
It is also very problematic to the computationalist claim that the
human mind is a computer.
Is the human mind of use to a human ? Well, yes, it helps us stay alive
in various ways.
But that is more to do with reacting to a real-time environment, than
performing abstract symbolic manipulations or elaborate
re-interpretations. (Computationalists need to be careful about how
they define "computer". Under
some perfectly reasonable definitions -- for instance, defining a
computer as
a human invention -- computationalism is trivially false).
>>>
>>>
>>>I don't mean anything controversial (I think) when I refer to interpretation 
>>>of 
>>>computation. Take a mercury thermometer: it would still do its thing if all 
>>>sentient life in the universe died out, or even if there were no sentient 
>>>life to 
>>>build it in the first place and by amazing luck mercury and glass had come 
>>>together 
>>>in just the right configuration. But if there were someone around to observe 
>>>it and 
>>>understand it, or if it were attached to a thermostat and heater, the 
>>>thermometer 
>>>would have extra meaning - the same thermometer, doing the same thermometer 
>>>stuff. Now, if thermometers were conscious, then part of their "thermometer 
>>>stuff" might include "knowing" what the temperature was - all by themselves, 
>>>without 
>>>benefit of external observer. 
>>
>>We should ask ourselves how do we know the thermometer isn't conscious of the 
>>temperature?  It seems that the answer has been that its state or activity 
>>*could* 
>>be interpreted in many ways other than indicating the temperature; therefore 
>>it must 
>>be said to unconscious of the temperature or we must allow that it implements 
>>all 
>>conscious thought (or at least all for which there is a possible 
>>interpretative 
>>mapping).  But I see its state and activity as relative to our shared 
>>environment; 
>>and this greatly constrains what it can be said to "compute", e.g. the 
>>temperature, 
>>the expansion coefficient of Hg...   With this constraint, then I think there 
>>is no 
>>problem in saying the thermometer is conscious at the extremely low level of 
>>being 
>>aware of the temperature or the expansion coefficient of Hg or whatever else 
>>is 
>>within the constraint.
> 
> 
> I would basically agree with that. Consciousness would probably have to be a 
> continuum 
> if computationalism is true. Even if computationalism were false and only 
> those machines 
> specially blessed by God were conscious there would have to be a continuum, 
> across
> different species and within the lifespan of an individual from birth to 
> death. The possibility 
> that consciousness comes on like a light at some point in your life, or at 
> some point in the 
> evolution of a species, seems unlikely to me.
> 
> 
>>>Furthermore, if thermometers were conscious, they 
>>>might be dreaming of temperatures, or contemplating the meaning of 
>>>consciousness, 
>>>again in the absence of external observers, and this time in the absence of 
>>>interaction 
>>>with the real world. 
>>>
>>>This, then, is the difference between a computation and a conscious 
>>>computation. If 
>>>a computation is unconscious, it can only have meaning/use/interpretation in 
>>>the eyes 
>>>of a beholder or in its interaction with the environment. 
>>
>>But this is a useless definition of the difference.  To apply it we have to know 
>>whether 
>>some putative conscious computation has meaning to itself; which we can only 
>>know by 
>>knowing whet

Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

1Z wrote:
> 
> Stathis Papaioannou wrote:
> 
>>Brent meeker writes:
>>
>>
>>>Stathis Papaioannou wrote:
>>>
Peter Jones writes:
> 
> 
>>>We should ask ourselves how do we know the thermometer isn't conscious of the
>>>temperature?  It seems that the answer has been that it's state or activity 
>>>*could*
>>>be intepreted in many ways other than indicating the temperature; therefore 
>>>it must
>>>be said to unconscious of the temperature or we must allow that it 
>>>implements all
>>>conscious thought (or at least all for which there is a possible 
>>>interpretative
>>>mapping).  But I see its state and activity as relative to our shared 
>>>environment;
>>>and this greatly constrains what it can be said to "compute", e.g. the 
>>>temperature,
>>>the expansion coefficient of Hg...   With this constraint, then I think 
>>>there is no
>>>problem in saying the thermometer is conscious at the extremely low level of 
>>>being
>>>aware of the temperature or the expansion coefficient of Hg or whatever else 
>>>is
>>>within the constraint.
>>
>>I would basically agree with that. Consciousness would probably have to be a 
>>continuum
>>if computationalism is true.
> 
> 
> I don't think that follows remotely. It is true that it is vastly
> better to interpret a column of mercury as a temperature-sensor than
> a pressure-sensor or a radiation-sensor. That doesn't mean the
> thermometer
> knows that in itself.
> 
> Computationalism does not claim that every computation is conscious.
> 
> If consciousness supervenes on inherent non-interpretation-dependent
> features,
> it can supervene on features which are binary, either present or
> absent.

It could, depending on what it is.  But that's why we need some independent 
operational definition of consciousness before we can say what has it and what 
doesn't.  It's pretty clear that there are degrees of consciousness.  My dog is 
aware 
of where he is and who he is relative to the family etc.  But I don't think he 
passes 
the mirror test.  So whether a thermometer is conscious or not is likely to be 
a 
matter of how we define and quantify consciousness.

> 
> For instance, whether a programme examines or modifies its own code is
> surely
> such a feature.
> 
> 
> 
>>Even if computationalism were false and only those machines
>>specially blessed by God were conscious there would have to be a continuum, 
>>across
>>different species and within the lifespan of an individual from birth to 
>>death. The possibility
>>that consciousness comes on like a light at some point in your life, or at 
>>some point in the
>>evolution of a species, seems unlikely to me.
> 
> 
> Surely it comes on like a light whenever you wake up.

Not at all.  If someone whispers your name while you're asleep, you will wake 
up - 
showing you were conscious of sounds and their meaning.

On the other hand, it does come on like a light (or a slow sunrise) when you 
come out 
of anesthesia.

Brent Meeker





Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

Stathis Papaioannou wrote:
> 
> Brent meeker writes (quoting SP):
> 
> 
>>>Maybe this is a copout, but I just don't think it is even logically possible 
>>>to explain what consciousness 
>>>*is* unless you have it. 
>>
>>Not being *logically* possible means entailing a contradiction - I doubt 
>>that.  But 
>>anyway you do have it and you think I do because of the way we interact.  So 
>>if you 
>>interacted the same way with a computer and you further found out that the 
>>computer 
>>was a neural network that had learned through interaction with people over a 
>>period 
>>of years, you'd probably infer that the computer was conscious - at least you 
>>wouldn't be sure it wasn't.
> 
> 
> True, but I could still only imagine that it experiences what I experience 
> because I already know what I 
> experience. I don't know what my current computer experiences, if anything, 
> because I'm not very much 
> like it.
>  
> 
>>>It's like the problem of explaining vision to a blind man: he might be the 
>>>world's 
>>>greatest scientific expert on it but still have zero idea of what it is like 
>>>to see - and that's even though 
>>>he shares most of the rest of his cognitive structure with other humans, and 
>>>can understand analogies 
>>>using other sensations. Knowing what sort of program a conscious computer 
>>>would have to run to be 
>>>conscious, what the purpose of consciousness is, and so on, does not help me 
>>>to understand what the 
>>>computer would be experiencing, except by analogy with what I myself 
>>>experience. 
>>
>>But that's true of everything.  Suppose we knew a lot more about brains and 
>>we 
>>created an intelligent computer using brain-like functional architecture and 
>>it acted 
>>like a conscious human being, then I'd say we understood its consciousness 
>>better 
>>than we understand quantum field theory or global economics.
> 
> 
> We would understand it in a third person sense but not in a first person 
> sense, except by analogy with our 
> own first person experience. Consciousness is the difference between what can 
> be known by observing an 
> entity and what can be known by being the entity, or something like the 
> entity, yourself. 
> 
> Stathis Papaioannou

But you are simply positing that there is such a difference.  That's easy to do 
because we know so little about how brains work.  But consider the engine in 
your 
car.  Do you know what it's like to be the engine in your car?  You know a lot 
about 
it, but how do you know that you know all of it?  Does that mean your car 
engine is 
conscious?  I'd say yes it is (at a very low level) and you *can* know what 
it's like.

This is just an extreme example of that kind of special pleading you hear in 
politics - 
nobody can represent Black interests except a Black, no man can understand 
Feminism. 
  Can only children be pediatricians?

Brent Meeker




Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

Stathis Papaioannou wrote:
> Colin Hales writes:
> 
> 
>>Please consider the plight of the zombie scientist with a huge set of
>>sensory feeds and similar set of effectors. All carry similar signal
>>encoding and all, in themselves, bestow no experiential qualities on the
>>zombie.
>>
>>Add a capacity to detect regularity in the sensory feeds.
>>Add a scientific goal-seeking behaviour.
>>
>>Note that this zombie...
>>a) has the internal life of a dreamless sleep
>>b) has no concept or percept of body or periphery
>>c) has no concept that it is embedded in a universe.
>>
>>I put it to you that science (the extraction of regularity) is the science
>>of zombie sensory fields, not the science of the natural world outside the
>>zombie scientist. No amount of creativity (except maybe random choices)
>>would ever lead to any abstraction of the outside world that gave it the
>>ability to handle novelty in the natural world outside the zombie scientist.
>>
>>No matter how sophisticated the sensory feeds and any guesswork as to a
>>model (abstraction) of the universe, the zombie would eventually find
>>novelty invisible because the sensory feeds fail to depict the novelty, i.e.
>>same sensory feeds for different behaviour of the natural world.
>>
>>Technology built by a zombie scientist would replicate zombie sensory feeds,
>>not deliver an independently operating novel chunk of hardware with a
>>defined function (if the idea of function even has meaning in this instance).
>>
>>The purpose of consciousness is, IMO, to endow the cognitive agent with at
>>least a repeatable (not accurate!) simile of the universe outside the
>>cognitive agent so that novelty can be handled. Only then can the zombie
>>scientist detect arbitrary levels of novelty and do open ended science (or
>>survive in the wild world of novel environmental circumstance).
>>
>>In the absence of the functionality of phenomenal consciousness and with
>>finite sensory feeds you cannot construct any world-model (abstraction) in
>>the form of an innate (a-priori) belief system that will deliver an endless
>>ability to discriminate novelty. In a very Gödelian way eventually a limit
>>would be reached where the abstracted model could not make any prediction that
>>can be detected. The zombie is, in a very real way, faced with 'truths' that
>>exist but can't be accessed/perceived. As such its behaviour will be
>>fundamentally fragile in the face of novelty (just like all computer
>>programs are).
>>---
>>Just to make the zombie a little more real... consider the industrial
>>control system computer. I have designed, installed hundreds and wired up
>>tens (hundreds?) of thousands of sensors and an unthinkable number of
>>kilometers of cables. (NEVER again!) In all cases I put it to you that the
>>phenomenal content of sensory connections may, at best, be characterised as
>>whatever it is like to have electrons crash through wires, for that is what
>>is actually going on. As far as the internal life of the CPU is concerned...
>>whatever it is like to be an electrically noisy hot rock, regardless of the
>>program... although the character of the noise may alter with different
>>programs!
>>
>>I am a zombie expert! No that didn't come out right...erm
>>perhaps... "I think I might be a world expert in zombies" yes, that's
>>better.
>>:-)
>>Colin Hales
> 
> 
> I'm not sure I understand why the zombie would be unable to respond to any 
> situation it was likely to encounter. Doing science and philosophy is just a 
> happy 
> side-effect of a brain designed to help its owner survive and reproduce. Do 
> you 
> think it would be impossible to program a computer to behave like an insect, 
> or a 
> newborn infant, for example? You could add a random number generator to make 
> its behaviour less predictable (so predators can't catch it and parents don't 
> get 
> complacent) or to help it decide what to do in a truly novel situation. 
> 
> Stathis Papaioannou

And after you had given it all these capabilities how would you know it was not 
conscious?

Brent Meeker




Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

Stathis Papaioannou wrote:
> 
> Brent meeker writes:
> 
> 
>>>>>I think it goes against standard computationalism if you say that a 
>>>>>conscious 
>>>>>computation has some inherent structural property. Opponents of 
>>>>>computationalism 
>>>>>have used the absurdity of the conclusion that anything implements any 
>>>>>conscious 
>>>>>computation as evidence that there is something special and 
>>>>>non-computational 
>>>>>about the brain. Maybe they're right.
>>>>>
>>>>>Stathis Papaioannou
>>>>
>>>>Why not reject the idea that any computation implements every possible 
>>>>computation 
>>>>(which seems absurd to me)?  Then allow that only computations with some 
>>>>special 
>>>>structure are conscious.
>>>
>>>
>>>It's possible, but once you start in that direction you can say that only 
>>>computations 
>>>implemented on this machine rather than that machine can be conscious. You 
>>>need the 
>>>hardware in order to specify structure, unless you can think of a God-given 
>>>programming 
>>>language against which candidate computations can be measured.
>>
>>I regard that as a feature - not a bug. :-)
>>
>>Disembodied computation doesn't quite seem absurd - but our empirical sample 
>>argues 
>>for embodiment.
>>
>>Brent Meeker
> 
> 
> I don't have a clear idea in my mind of disembodied computation except in 
> rather simple cases, 
> like numbers and arithmetic. The number 5 exists as a Platonic ideal, and it 
> can also be implemented 
> so we can interact with it, as when there is a collection of 5 oranges, or 3 
> oranges and 2 apples, 
> or 3 pairs of oranges and 2 triplets of apples, and so on, in infinite 
> variety. The difficulty is that if we 
> say that "3+2=5" as exemplified by 3 oranges and 2 apples is conscious, then 
> should we also say 
> that the pairs+triplets of fruit are also conscious? If so, where do we draw 
> the line? 

I'm not sure I understand your example.  Are you saying that by simply 
existing, two 
apples and three oranges compute 2+3=5?  If so I would disagree.  I would say it is 
our 
comprehending them as individual objects and also as a set that is the 
computation. 
Just hanging there on the trees they may be "computing" apple hanging on a 
tree, 
apple hanging on a tree,... but they're not computing 2+3=5.

>That is what I mean 
> when I say that any computation can map onto any physical system. 

But as you've noted before the computation is almost all in the mapping.  And 
not 
just in the map, but in the application of the map - which is something we do.  
That 
action can't be abstracted away.  You can't just say there's a physical system 
and 
there's a manual that would map it into some computation and stop there as 
though the 
computation has been done.  The mapping, an action, still needs to be performed.
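
A minimal sketch of this point (my own construction, in Python; the rock names and the mapping are invented for illustration): the physical states by themselves fix no arithmetic, and no addition occurs until someone actually applies the interpretation map, which is itself an action.

    # Snapshot of a passive "physical system": three rocks, doing nothing.
    physical_states = ["rock_a", "rock_b", "rock_c"]

    # An external interpretation mapping physical states onto numbers.
    interpretation = {"rock_a": 2, "rock_b": 3, "rock_c": 5}

    # Until this line is actually executed, no addition has been performed;
    # applying the map is work done by us, not by the rocks.
    trace = [interpretation[state] for state in physical_states]
    assert trace[0] + trace[1] == trace[2]   # 2 + 3 = 5, computed in the applying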

>The physical structure and activity 
> of computer A implementing program a may be completely different to that of 
> computer B implementing 
> program b, but program b may be an emulation of program a, which should make 
> the two machines 
> functionally equivalent and, under computationalism, equivalently conscious. 

I don't see any problem with supposing that A and B are equally conscious (or 
unconscious).

Brent Meeker





Re: computationalism and supervenience

2006-09-12 Thread 1Z


Stathis Papaioannou wrote:
> Brent Meeker writes:
>
> > >>I could make a robot that, having suitable thermocouples, would quickly 
> > >>withdraw its
> > >>hand from a fire; but not be conscious of it.  Even if I provide the 
> > >>robot with
> > >>"feelings", i.e. judgements about good/bad/pain/pleasure I'm not sure it 
> > >>would be
> > >>conscious.  But if I provide it with "attention" and memory, so that it 
> > >>noted the
> > >>painful event as important and necessary to remember because of it's 
> > >>strong negative
> > >>affect; then I think it would be conscious.
> > >
> > >
> > > It's interesting that people actually withdraw their hand from the fire 
> > > *before* they experience
> > > the pain. The withdrawal is a reflex, presumably evolved in organisms with 
> > > the most primitive
> > > central nervous systems, while the pain seems to be there as an 
> > > afterthought to teach us a
> > > lesson so we won't do it again. Thus, from consideration of evolutionary 
> > > utility consciousness
> > > does indeed seem to be a side-effect of memory and learning.
> >
> > Even more curious, volitional action also occurs before one is aware of it. 
> > Are you
> > familiar with the experiments of Benjamin Libet and Grey Walter?
>
> These experiments showed that in apparently voluntarily initiated motion, 
> motor cortex activity
> actually preceded the subject's awareness of his intention by a substantial 
> fraction of a second.
> In other words, we act first, then "decide" to act.

Does Benjamin Libet's Research Empirically Disprove Free Will?
Scientifically informed sceptics about FW often quote a famous
experiment by Benjamin Libet, which supposedly shows that a kind of
signal called a "Readiness Potential", detectable by electrodes,
precedes conscious decisions, and is a reliable indicator of the
decision, and thus -- so the claim goes -- indicates that our decisions
are not ours but made for us by unconscious processes.

In fact, Libet himself doesn't draw a sweepingly sceptical conclusion
from his own results. For one thing, Readiness Potentials are not
always followed by actions. He believes it is possible for
consciousness to intervene with a "veto" to the action:

"The initiation of the freely voluntary act appears to begin in the
brain unconsciously, well before the person consciously knows he wants
to act! Is there, then, any role for conscious will in the performing
of a voluntary act?...To answer this it must be recognised that
conscious will (W) does appear about 150 milliseconds before the muscle
is activated, even though it follows the onset of the RP. An interval of
150 msec would allow enough time in which the conscious function might
affect the final outcome of the volitional process."

(Libet, quoted in "Freedom Evolves" by Daniel Dennett, p. 230)

"This suggests our conscious minds may not have free will but
rather free won't!"

(V.S. Ramachandran, quoted in "Freedom Evolves" by Daniel Dennett, p. 231)

However, it is quite possible that the Libertarian doesn't need to
appeal to "free won't" to avoid the conclusion that free won't doesn't
exist.

Libet tells when the RP occurs using electrodes. But how does Libet tell
when conscious decision-making occurs? He relies on the subject
reporting the position of the hand of a clock. But, as Dennett points
out, this is only a report of where it seems to the subject that
various things come together, not of the objective time at which they
occur.

Suppose Libet knows that your readiness potential peaked at millisecond
6,810 of the experimental trial, and the clock dot was straight down
(which is what you reported you saw) at millisecond 7,005. How many
milliseconds should he have to add to this number to get the time you
were conscious of it? The light gets from your clock face to your
eyeball almost instantaneously, but the path of the signals from retina
through lateral geniculate nucleus to striate cortex takes 5 to 10
milliseconds -- a paltry fraction of the 300 milliseconds offset, but
how much longer does it take them to get to you? (Or are you located in
the striate cortex?) The visual signals have to be processed before
they arrive at wherever they need to arrive for you to make a conscious
decision of simultaneity. Libet's method presupposes, in short, that
we can locate the intersection of two trajectories:
- the rising-to-consciousness of signals representing the decision to flick
- the rising-to-consciousness of signals representing successive
clock-face orientations
so that these events occur side by side, as it were, in a place where
their simultaneity can be noted.

("Freedom Evolves" by Daniel Dennett, p. 231 )

Dennett refers to an experiment in which Churchland showed that just
pressing a button when asked to signal when you see a flash of light
takes a normal subject about 350 milliseconds.

Does that mean that all actions taking longer than that are unconscious?

The brain processes stimuli over time, and t

Re: computationalism and supervenience

2006-09-12 Thread 1Z


Stathis Papaioannou wrote:
> Brent meeker writes:
>
> > >>>I think it goes against standard computationalism if you say that a 
> > >>>conscious
> > >>>computation has some inherent structural property. Opponents of 
> > >>>computationalism
> > >>>have used the absurdity of the conclusion that anything implements any 
> > >>>conscious
> > >>>computation as evidence that there is something special and 
> > >>>non-computational
> > >>>about the brain. Maybe they're right.
> > >>>
> > >>>Stathis Papaioannou
> > >>
> > >>Why not reject the idea that any computation implements every possible 
> > >>computation
> > >>(which seems absurd to me)?  Then allow that only computations with some 
> > >>special
> > >>structure are conscious.
> > >
> > >
> > > It's possible, but once you start in that direction you can say that only 
> > > computations
> > > implemented on this machine rather than that machine can be conscious. 
> > > You need the
> > > hardware in order to specify structure, unless you can think of a 
> > > God-given programming
> > > language against which candidate computations can be measured.
> >
> > I regard that as a feature - not a bug. :-)
> >
> > Disembodied computation doesn't quite seem absurd - but our empirical 
> > sample argues
> > for embodiment.
> >
> > Brent Meeker
>
> I don't have a clear idea in my mind of disembodied computation except in 
> rather simple cases,
> like numbers and arithmetic. The number 5 exists as a Platonic ideal, and it 
> can also be implemented
> so we can interact with it, as when there is a collection of 5 oranges, or 3 
> oranges and 2 apples,
> or 3 pairs of oranges and 2 triplets of apples, and so on, in infinite 
> variety. The difficulty is that if we
> say that "3+2=5" as exemplified by 3 oranges and 2 apples is conscious, then 
> should we also say
> that the pairs+triplets of fruit are also conscious?

No, they are only subroutines.

>  If so, where do we draw the line?

At specific structures.

> That is what I mean
> when I say that any computation can map onto any physical system. The 
> physical structure and activity
> of computer A implementing program a may be completely different to that of 
> computer B implementing
> program b, but program b may be an emulation of program a, which should make 
> the two machines
> functionally equivalent and, under computationalism, equivalently conscious.

So? If the functional equivalence doesn't depend on a
baroque re-interpretation,
where is the problem?

> Maybe this is wrong, e.g.
> there is something special about the insulation in the wires of machine A, so 
> that only A can be conscious.
> But that is no longer computationalism.

No. But what would force that conclusion on us? Why can't
consciousness
attach to features more general than hardware, but less general than one
of your re-interpretations?

> Stathis Papaioannou





Re: computationalism and supervenience

2006-09-12 Thread 1Z


Stathis Papaioannou wrote:
> Brent Meeker writes:
>
> > > I think it goes against standard computationalism if you say that a 
> > > conscious
> > > computation has some inherent structural property. Opponents of 
> > > computationalism
> > > have used the absurdity of the conclusion that anything implements any 
> > > conscious
> > > computation as evidence that there is something special and 
> > > non-computational
> > > about the brain. Maybe they're right.
> > >
> > > Stathis Papaioannou
> >
> > Why not reject the idea that any computation implements every possible 
> > computation
> > (which seems absurd to me)?  Then allow that only computations with some 
> > special
> > structure are conscious.
>
> It's possible, but once you start in that direction you can say that only 
> computations
> implemented on this machine rather than that machine can be conscious.

Yes, you can, but you don't have to. Consciousness might supervene on
computational structure, or it might supervene on hardware structure.

What is the problem with computationalism being a contingent truth?

> You need the
> hardware in order to specify structure, unless you can think of a God-given 
> programming
> language against which candidate computations can be measured.


There is a level of description -- a level at which a computation can
be said
to contain so many loops, branches and recursions -- which is higher
than the hardware level, but not as high as a specific programming
language 
like C or Fortran. Think of a flowchart.
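
To make that concrete, a sketch of my own (Python; both functions are invented examples): two programs with different source text but the same flowchart -- one loop containing one branch -- which is the sort of structural level meant here.

    def countdown_verbose(n):
        # Flowchart: a loop node...
        while n > 0:
            # ...containing a branch node.
            if n % 2 == 0:
                print(n, "is even")
            else:
                print(n, "is odd")
            n = n - 1

    def countdown_terse(k):
        # Different source text, same flowchart: one loop, one branch.
        while k > 0:
            print(k, "is even" if k % 2 == 0 else "is odd")
            k -= 1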





Re: computationalism and supervenience

2006-09-12 Thread 1Z


Stathis Papaioannou wrote:
> Brent meeker writes:
>
> > Stathis Papaioannou wrote:
> > > Peter Jones writes:

> > We should ask ourselves how do we know the thermometer isn't conscious of 
> > the
> > temperature?  It seems that the answer has been that its state or activity 
> > *could*
> > be interpreted in many ways other than indicating the temperature; therefore 
> > it must
> > be said to be unconscious of the temperature or we must allow that it 
> > implements all
> > conscious thought (or at least all for which there is a possible 
> > interpretative
> > mapping).  But I see its state and activity as relative to our shared 
> > environment;
> > and this greatly constrains what it can be said to "compute", e.g. the 
> > temperature,
> > the expansion coefficient of Hg...   With this constraint, then I think 
> > there is no
> > problem in saying the thermometer is conscious at the extremely low level 
> > of being
> > aware of the temperature or the expansion coefficient of Hg or whatever 
> > else is
> > within the constraint.
>
> I would basically agree with that. Consciousness would probably have to be a 
> continuum
> if computationalism is true.

I don't think that follows remotely. It is true that it is vastly
better to interpret a column of mercury as a temperature-sensor than
a pressure-sensor or a radiation-sensor. That doesn't mean the
thermometer
knows that in itself.

Computationalism does not claim that every computation is conscious.

If consciousness supervenes on inherent non-interpretation-dependent
features,
it can supervene on features which are binary, either present or
absent.

For instance, whether a programme examines or modifies its own code is
surely
such a feature.
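
For what it's worth, a minimal sketch of a program with that feature (mine, in Python; nothing here is from the post): whether it examines its own source is a plain structural fact about it, not a matter of outside interpretation.

    import inspect
    import sys

    def examine_self():
        # Return this very module's source code as a string.
        return inspect.getsource(sys.modules[__name__])

    if __name__ == "__main__":
        source = examine_self()
        print("My source is", len(source), "characters long.")
        print("Do I contain a loop?", ("for " in source) or ("while " in source))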


>Even if computationalism were false and only those machines
> specially blessed by God were conscious there would have to be a continuum, 
> across
> different species and within the lifespan of an individual from birth to 
> death. The possibility
> that consciousness comes on like a light at some point in your life, or at 
> some point in the
> evolution of a species, seems unlikely to me.

Surely it comes on like a light whenever you wake up.





Re: computationalism and supervenience

2006-09-12 Thread 1Z


Stathis Papaioannou wrote:

> That's what I'm saying, but I certainly don't think everyone agrees with me 
> on the list, and
> I'm not completely decided as to which of the three is more absurd: every 
> physical system
> implements every conscious computation, no physical system implements any 
> conscious
> computation (they are all implemented non-physically in Platonia), or the 
> idea that a
> computation can be conscious in the first place.


You haven't made it clear why you don't accept that every physical
system
implements one computation, whether it is a
conscious computation or not. I don't see what
contradicts it.





Re: computationalism and supervenience

2006-09-12 Thread 1Z


Stathis Papaioannou wrote:
> Peter Jones writes:
>
> > Stathis Papaioannou wrote:
> >
> > > > > Now, suppose some more complex variant of 3+2=5 implemented on your 
> > > > > abacus has consciousness associated with it, which is just one of the 
> > > > > tenets of computationalism. Some time later, you are walking in the 
> > > > > Amazon rain forest and notice that
> > > > > under a certain mapping
> > > >
> > > >
> > > > > of birds to beads and trees to wires, the forest is implementing the 
> > > > > same computation as your abacus was. So if your abacus was conscious, 
> > > > > and computationalism is true, the tree-bird sytem should also be 
> > > > > conscious.
> > > >
> > > > Not necessarily, because the mapping is required too. Why should
> > > > it still be conscious if no-one is around to make the mapping?
> > >
> > > Are you claiming that a conscious machine stops being conscious if its 
> > > designers die
> > > and all the information about how it works is lost?
> >
> > You are, if anyone is. I don't agree that computations *must* be
> > interpreted,
> > although they *can* be re-interpreted.
>
> What I claim is this:
>
> A computation does not *need* to be interpreted, it just is. However, a 
> computation
> does need to be interpreted, or interact with its environment in some way, if 
> it is to be
> interesting or meaningful.

A computation other than the one you are running needs to be
interpreted by you
to be meaningful to you. The computation you are running is useful
to you because it keeps you alive.

> By analogy, a string of characters is a string of characters
> whether or not anyone interprets it, but it is not interesting or meaningful 
> unless it is
> interpreted. But if a computation, or for that matter a string of characters, 
> is conscious,
> then it is interesting and meaningful in at least one sense in the absence of 
> an external
> observer: it is interesting and meaningful to itself. If it were not, then it 
> wouldn't be
> conscious. The conscious things in the world have an internal life, a first 
> person
> phenomenal experience, a certain ineffable something, whatever you want to 
> call it,
> while the unconscious things do not. That is the difference between them.

Which they manage to be aware of without the existence of an external
observer,
so one of your premises must be wrong.

> Stathis Papaioannou





Re: computationalism and supervenience

2006-09-12 Thread 1Z


Stathis Papaioannou wrote:
> Peter Jones writes:
>
> > > I'm not sure how the multiverse comes into the discussion, but you have
> > > made the point several times that a computation depends on an observer
> >
> >
> > No, I haven't! I have tried to follow through the consequences of
> > assuming it must.
> > It seems to me that some sort of absurdity or contradiction ensues.
>
> OK. This has been a long and complicated thread.
>
> > > for its meaning. I agree, but *if* computations can be conscious 
> > > (remember,
> > > this is an assumption) then in that special case an external observer is 
> > > not
> > > needed.
> >
> > Why not? (Well, I would be quite happy that a conscious
> > computation would have some inherent structural property --
> > I want to find out why *you* would think it doesn't).
>
> I think it goes against standard computationalism if you say that a conscious
> computation has some inherent structural property.

I fail to see why. Computations obviously do have structural
properties.
Why shouldn't consciousness supervene on them?

> Opponents of computationalism
> have used the absurdity of the conclusion that anything implements any 
> conscious
> computation as evidence that there is something special and non-computational
> about the brain.

Yes, but "anything implements any computation" isn't a legitimate
conclusion
of computationalism or anything else. That being the case, there is no
need for special pleading for consciousness.





RE: computationalism and supervenience

2006-09-12 Thread Stathis Papaioannou


Brent meeker writes (quoting SP):

> > Maybe this is a copout, but I just don't think it is even logically 
> > possible to explain what consciousness 
> > *is* unless you have it. 
> 
> Not being *logically* possible means entailing a contradiction - I doubt 
> that.  But 
> anyway you do have it and you think I do because of the way we interact.  So 
> if you 
> interacted the same way with a computer and you further found out that the 
> computer 
> was a neural network that had learned through interaction with people over a 
> period 
> of years, you'd probably infer that the computer was conscious - at least you 
> wouldn't be sure it wasn't.

True, but I could still only imagine that it experiences what I experience 
because I already know what I 
experience. I don't know what my current computer experiences, if anything, 
because I'm not very much 
like it.
 
> >It's like the problem of explaining vision to a blind man: he might be the 
> >world's 
> > greatest scientific expert on it but still have zero idea of what it is 
> > like to see - and that's even though 
> > he shares most of the rest of his cognitive structure with other humans, 
> > and can understand analogies 
> > using other sensations. Knowing what sort of program a conscious computer 
> > would have to run to be 
> > conscious, what the purpose of consciousness is, and so on, does not help 
> > me to understand what the 
> > computer would be experiencing, except by analogy with what I myself 
> > experience. 
> 
> But that's true of everything.  Suppose we knew a lot more about brains and 
> we 
> created an intelligent computer using brain-like functional architecture and 
> it acted 
> like a conscious human being, then I'd say we understood its consciousness 
> better 
> than we understand quantum field theory or global economics.

We would understand it in a third person sense but not in a first person sense, 
except by analogy with our 
own first person experience. Consciousness is the difference between what can 
be known by observing an 
entity and what can be known by being the entity, or something like the entity, 
yourself. 

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-12 Thread Stathis Papaioannou

Colin Hales writes:

> Please consider the plight of the zombie scientist with a huge set of
> sensory feeds and similar set of effectors. All carry similar signal
> encoding and all, in themselves, bestow no experiential qualities on the
> zombie.
> 
> Add a capacity to detect regularity in the sensory feeds.
> Add a scientific goal-seeking behaviour.
> 
> Note that this zombie...
> a) has the internal life of a dreamless sleep
> b) has no concept or percept of body or periphery
> c) has no concept that it is embedded in a universe.
> 
> I put it to you that science (the extraction of regularity) is the science
> of zombie sensory fields, not the science of the natural world outside the
> zombie scientist. No amount of creativity (except maybe random choices)
> would ever lead to any abstraction of the outside world that gave it the
> ability to handle novelty in the natural world outside the zombie scientist.
> 
> No matter how sophisticated the sensory feeds and any guesswork as to a
> model (abstraction) of the universe, the zombie would eventually find
> novelty invisible because the sensory feeds fail to depict the novelty, i.e.
> same sensory feeds for different behaviour of the natural world.
> 
> Technology built by a zombie scientist would replicate zombie sensory feeds,
> not deliver an independently operating novel chunk of hardware with a
> defined function (if the idea of function even has meaning in this instance).
> 
> The purpose of consciousness is, IMO, to endow the cognitive agent with at
> least a repeatable (not accurate!) simile of the universe outside the
> cognitive agent so that novelty can be handled. Only then can the zombie
> scientist detect arbitrary levels of novelty and do open ended science (or
> survive in the wild world of novel environmental circumstance).
> 
> In the absence of the functionality of phenomenal consciousness and with
> finite sensory feeds you cannot construct any world-model (abstraction) in
> the form of an innate (a-priori) belief system that will deliver an endless
> ability to discriminate novelty. In a very Gödelian way eventually a limit
> would be reached where the abstracted model could not make any prediction that
> can be detected. The zombie is, in a very real way, faced with 'truths' that
> exist but can't be accessed/perceived. As such its behaviour will be
> fundamentally fragile in the face of novelty (just like all computer
> programs are).
> ---
> Just to make the zombie a little more real... consider the industrial
> control system computer. I have designed, installed hundreds and wired up
> tens (hundreds?) of thousands of sensors and an unthinkable number of
> kilometers of cables. (NEVER again!) In all cases I put it to you that the
> phenomenal content of sensory connections may, at best, be characterised as
> whatever it is like to have electrons crash through wires, for that is what
> is actually going on. As far as the internal life of the CPU is concerned...
> whatever it is like to be an electrically noisy hot rock, regardless of the
> program... although the character of the noise may alter with different
> programs!
> 
> I am a zombie expert! No that didn't come out right...erm
> perhaps... "I think I might be a world expert in zombies" yes, that's
> better.
> :-)
> Colin Hales

I'm not sure I understand why the zombie would be unable to respond to any 
situation it was likely to encounter. Doing science and philosophy is just a 
happy 
side-effect of a brain designed to help its owner survive and reproduce. Do you 
think it would be impossible to program a computer to behave like an insect, or 
a 
newborn infant, for example? You could add a random number generator to make 
its behaviour less predictable (so predators can't catch it and parents don't 
get 
complacent) or to help it decide what to do in a truly novel situation. 
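
(A toy sketch of the sort of thing I mean, in Python -- my own invention, not a serious insect model: a fixed stimulus-response table for known situations, with the random number generator deciding in truly novel ones.)

    import random

    REFLEXES = {
        "light": "move toward it",
        "shadow": "freeze",
        "touch": "fly away",
    }

    def act(stimulus):
        # Known situation: deterministic reflex.
        if stimulus in REFLEXES:
            return REFLEXES[stimulus]
        # Truly novel situation: fall back on a random choice, which also
        # keeps the behaviour unpredictable to predators (and parents).
        return random.choice(list(REFLEXES.values()))

    print(act("light"))    # always "move toward it"
    print(act("magnet"))   # novel stimulus -> a random reflex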

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-12 Thread Stathis Papaioannou


Brent meeker writes:

> >>>I think it goes against standard computationalism if you say that a 
> >>>conscious 
> >>>computation has some inherent structural property. Opponents of 
> >>>computationalism 
> >>>have used the absurdity of the conclusion that anything implements any 
> >>>conscious 
> >>>computation as evidence that there is something special and 
> >>>non-computational 
> >>>about the brain. Maybe they're right.
> >>>
> >>>Stathis Papaioannou
> >>
> >>Why not reject the idea that any computation implements every possible 
> >>computation 
> >>(which seems absurd to me)?  Then allow that only computations with some 
> >>special 
> >>structure are conscious.
> > 
> > 
> > It's possible, but once you start in that direction you can say that only 
> > computations 
> > implemented on this machine rather than that machine can be conscious. You 
> > need the 
> > hardware in order to specify structure, unless you can think of a God-given 
> > programming 
> > language against which candidate computations can be measured.
> 
> I regard that as a feature - not a bug. :-)
> 
> Disembodied computation doesn't quite seem absurd - but our empirical sample 
> argues 
> for embodiment.
> 
> Brent Meeker

I don't have a clear idea in my mind of disembodied computation except in 
rather simple cases, 
like numbers and arithmetic. The number 5 exists as a Platonic ideal, and it 
can also be implemented 
so we can interact with it, as when there is a collection of 5 oranges, or 3 
oranges and 2 apples, 
or 3 pairs of oranges and 2 triplets of apples, and so on, in infinite variety. 
The difficulty is that if we 
say that "3+2=5" as exemplified by 3 oranges and 2 apples is conscious, then 
should we also say 
that the pairs+triplets of fruit are also conscious? If so, where do we draw 
the line? That is what I mean 
when I say that any computation can map onto any physical system. The physical 
structure and activity 
of computer A implementing program a may be completely different to that of 
computer B implementing 
program b, but program b may be an emulation of program a, which should make 
the two machines 
functionally equivalent and, under computationalism, equivalently conscious. 
Maybe this is wrong, e.g. 
there is something special about the insulation in the wires of machine A, so 
that only A can be conscious. 
But that is no longer computationalism.
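
A small sketch of the emulation point (my own, in Python; "machine A" and its instruction set are invented): program b is an emulator for an imaginary machine A, so a physically quite different machine B running b reproduces program a step for step.

    # Program "a": instructions for an imaginary stack machine A.
    PROGRAM_A = [("push", 3), ("push", 2), ("add", None)]   # computes 3 + 2

    def program_b(program):
        # Program "b": an emulator for machine A, running on machine B
        # (the Python runtime), whose physical activity is nothing like A's.
        stack = []
        for op, arg in program:
            if op == "push":
                stack.append(arg)
            elif op == "add":
                stack.append(stack.pop() + stack.pop())
        return stack[-1]

    print(program_b(PROGRAM_A))   # -> 5: functionally equivalent to machine A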

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-11 Thread Stathis Papaioannou







> Date: Mon, 11 Sep 2006 13:10:52 -0700
> From: [EMAIL PROTECTED]
> To: everything-list@googlegroups.com
> Subject: Re: computationalism and supervenience
> 
> 
> Stathis Papaioannou wrote:
> > Brent Meeker writes:
> > 
> > 
> >>I think we need to say what it means for a computation to be 
> >>self-interpreting.  Many 
> >>control programs are written with self-monitoring functions and logging 
> >>functions. 
> >>Why would we not attribute consciousness to them?
> > 
> > 
> > Well, why not? Some people don't even think higher mammals are conscious, 
> > and perhaps 
> > some there are true solipsists who could convince themselves that other 
> > people are not really 
> > conscious as rationalisation for antisocial behaviour. 
> 
> Autistic people don't empathize with others' feelings - perhaps because they 
> don't 
> have them.  But their behavior, and I would expect the behavior of a real 
> solipsist, 
> would be simply asocial.
> 
> >On the other hand, maybe flies experience 
> > pain and fear when confronted with insecticide that is orders of magnitude 
> > greater than that 
> > of any mere human experience of torture, and maybe when I press the letter 
> > "y" on my 
> > keyboard I am subjecting my computer to the torments of hell. 
> 
> And maybe every physical process implements all possible computations - but I 
> see no 
> reason to believe so.
> 
> >I don't buy the argument that 
> > only complex brains or computations can experience pain either: when I was 
> > a child I wasn't 
> > as smart as I am now, but I recall that it hurt a lot more and I was much 
> > more likely to cry when 
> > I cut myself. 
> > 
> > Stathis Papaioannou
> 
> You write as though we know nothing about the physical basis of pain and 
> fear.  There 
> is a lot of empirical evidence about what prevents pain in humans, you can 
> even get a 
>   degree in anesthesiology.  Fear can be induced by psychotropic drugs and 
> relieved by 
> whisky.
> 
> Brent Meeker

But can you comment on the difference between your own subjective experience of 
fear or 
pain compared to that of a rabbit, a fish, or something even more alien? I know 
we can say that 
when you prod a fish with stimulus A it responds by releasing hormones B, C and 
D and swishing its 
tail about in pattern E, F or G according to the time of day and the phases of 
the moon, or whatever, 
and furthermore that these hormones and behaviours are similar to those in 
human responses to 
similar stimuli - but what is the fish feeling?

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-11 Thread Stathis Papaioannou

Brent meeker writes:

> Stathis Papaioannou wrote:
> > Peter Jones writes:
> > 
> > 
> >>Stathis Papaioannou wrote:
> >>
> >>
> >>>Like Bruno, I am not claiming that this is definitely the case, just that 
> >>>it is the case if
> >>>computationalism is true. Several philosophers (eg. Searle) have used the 
> >>>self-evident
> >>>absurdity of the idea as an argument demonstrating that computationalism 
> >>>is false -
> >>>that there is something non-computational about brains and consciousness. 
> >>>I have not
> >>>yet heard an argument that rejects this idea and saves computationalism.
> >>
> >>[ rolls up sleaves ]
> >>
> >>The idea is easily refuted if it can be shown that computation doesn't
> >>require
> >>interpretation at all. It can also be refuted more circuitously by
> >>showing that
> >>computation is not entirely a matter of interpretation. In everythingism,
> >>everything
> >>is equal. If some computations (the ones that don't depend on
> >>interpretation) are
> >>"more equal than others", the way is still open for the Somethingist
> >>to object
> >>that interpretation-independent computations are really real, and the
> >>others are
> >>mere possibilities.
> >>
> >>The claim has been made that computation is "not much use" without an
> >>interpretation.
> >>Well, if you define a computer as something that is used by a human,
> >>that is true.
> >>It is also very problematic to the computationalist claim that the
> >>human mind is a computer.
> >>Is the human mind of use to a human? Well, yes, it helps us stay alive
> >>in various ways.
> >>But that is more to do with reacting to a real-time environment, than
> >>performing abstract symbolic manipulations or elaborate
> >>re-interpretations. (Computationalists need to be careful about how
> >>they define "computer". Under
> >>some perfectly reasonable definitions -- for instance, defining a
> >>computer as
> >>a human invention -- computationalism is trivially false).
> > 
> > 
> > I don't mean anything controversial (I think) when I refer to 
> > interpretation of 
> > computation. Take a mercury thermometer: it would still do its thing if all 
> > sentient life in the universe died out, or even if there were no sentient 
> > life to 
> > build it in the first place and by amazing luck mercury and glass had come 
> > together 
> > in just the right configuration. But if there were someone around to 
> > observe it and 
> > understand it, or if it were attached to a thermostat and heater, the 
> > thermometer 
> > would have extra meaning - the same thermometer, doing the same thermometer 
> > stuff. Now, if thermometers were conscious, then part of their "thermometer 
> > stuff" might include "knowing" what the temperature was - all by 
> > themselves, without 
> > benefit of external observer. 
> 
> We should ask ourselves how do we know the thermometer isn't conscious of the 
> temperature?  It seems that the answer has been that its state or activity 
> *could* 
> be interpreted in many ways other than indicating the temperature; therefore 
> it must 
> be said to be unconscious of the temperature or we must allow that it implements 
> all 
> conscious thought (or at least all for which there is a possible 
> interpretative 
> mapping).  But I see its state and activity as relative to our shared 
> environment; 
> and this greatly constrains what it can be said to "compute", e.g. the 
> temperature, 
> the expansion coefficient of Hg...   With this constraint, then I think there 
> is no 
> problem in saying the thermometer is conscious at the extremely low level of 
> being 
> aware of the temperature or the expansion coefficient of Hg or whatever else 
> is 
> within the constraint.

I would basically agree with that. Consciousness would probably have to be a 
continuum 
if computationalism is true. Even if computationalism were false and only those 
machines 
specially blessed by God were conscious there would have to be a continuum, 
across
different species and within the lifespan of an individual from birth to death. 
The possibility 
that consciousness comes on like a light at some point in your life, or at some 
point in the 
evolution of a species, seems unlikely to me.

> >Furthermore, if thermometers were conscious, they 
> > might be dreaming of temperatures, or contemplating the meaning of 
> > consciousness, 
> > again in the absence of external observers, and this time in the absence of 
> > interaction 
> > with the real world. 
> > 
> > This, then, is the difference between a computation and a conscious 
> > computation. If 
> > a computation is unconscious, it can only have meaning/use/interpretation 
> > in the eyes 
> > of a beholder or in its interaction with the environment. 
> 
> But this is a useless definition of the difference.  To apply it we have to know 
> whether 
> some putative conscious computation has meaning to itself; which we can only 
> know by 
> knowing whether it is conscious or not.  It makes consciou

Re: computationalism and supervenience

2006-09-11 Thread Brent Meeker

Colin Hales wrote:
> 
> Stathis Papaioannou
> 
> 
>>Maybe this is a copout, but I just don't think it is even logically
>>possible to explain what consciousness
>>*is* unless you have it. It's like the problem of explaining vision to a
>>blind man: he might be the world's
>>greatest scientific expert on it but still have zero idea of what it is
>>like to see - and that's even though
>>he shares most of the rest of his cognitive structure with other humans,
>>and can understand analogies
>>using other sensations. Knowing what sort of program a conscious computer
>>would have to run to be
>>conscious, what the purpose of consciousness is, and so on, does not help
>>me to understand what the
>>computer would be experiencing, except by analogy with what I myself
>>experience.
>>
>>Stathis Papaioannou
>>
> 
> 
> Please consider the plight of the zombie scientist with a huge set of
> sensory feeds and similar set of effectors. All carry similar signal
> encoding and all, in themselves, bestow no experiential qualities on the
> zombie.
> 
> Add a capacity to detect regularity in the sensory feeds.
> Add a scientific goal-seeking behaviour.
> 
> Note that this zombie...
> a) has the internal life of a dreamless sleep
> b) has no concept or percept of body or periphery
> c) has no concept that it is embedded in a universe.
> 
> I put it to you that science (the extraction of regularity) is the science
> of zombie sensory fields, not the science of the natural world outside the
> zombie scientist. No amount of creativity (except maybe random choices)
> would ever lead to any abstraction of the outside world that gave it the
> ability to handle novelty in the natural world outside the zombie scientist.
> 
> No matter how sophisticated the sensory feeds and any guesswork as to a
> model (abstraction) of the universe, the zombie would eventually find
> novelty invisible because the sensory feeds fail to depict the novelty, i.e.
> same sensory feeds for different behaviour of the natural world.
> 
> Technology built by a zombie scientist would replicate zombie sensory feeds,
> not deliver an independently operating novel chunk of hardware with a
> defined function (if the idea of function even has meaning in this instance).
> 
> The purpose of consciousness is, IMO, to endow the cognitive agent with at
> least a repeatable (not accurate!) simile of the universe outside the
> cognitive agent so that novelty can be handled. Only then can the zombie
> scientist detect arbitrary levels of novelty and do open ended science (or
> survive in the wild world of novel environmental circumstance).

Almost all organisms have become extinct.  Handling *arbitrary* levels of 
novelty is 
probably too much to ask of any species; and it's certainly more than is 
necessary to 
survive for millennia.

> 
> In the absence of the functionality of phenomenal consciousness and with
> finite sensory feeds you cannot construct any world-model (abstraction) in
> the form of an innate (a-priori) belief system that will deliver an endless
> ability to discriminate novelty. In a very Gödelian way eventually a limit
> would be reached where the abstracted model could not make any prediction that
> can be detected. 

So that's how we got string theory!

>The zombie is, in a very real way, faced with 'truths' that
> exist but can't be accessed/perceived. As such its behaviour will be
> fundamentally fragile in the face of novelty (just like all computer
> programs are).

How do you know we are so robust?  Planck said, "A new idea prevails, not by 
the 
conversion of adherents, but by the retirement and demise of opponents."  In 
other 
words only the young have the flexibility to adopt new ideas.  Ironically 
Planck 
never really believed quantum mechanics was more than a calculational trick.

> ---
> Just to make the zombie a little more real... consider the industrial
> control system computer. I have designed, installed hundreds and wired up
> tens (hundreds?) of thousands of sensors and an unthinkable number of
> kilometers of cables. (NEVER again!) In all cases I put it to you that the
> phenomenal content of sensory connections may, at best, be characterised as
> whatever it is like to have electrons crash through wires, for that is what
> is actually going on. As far as the internal life of the CPU is concerned...
> whatever it is like to be an electrically noisy hot rock, regardless of the
> program... although the character of the noise may alter with different
> programs!

That's like saying whatever it is like to be you, it is at best some waves of 
chemical 
potential.  You don't *know* that the control system is not conscious - unless 
you 
know what structure or function makes a system conscious.

Brent Meeker


Re: computationalism and supervenience

2006-09-11 Thread Brent Meeker

Stathis Papaioannou wrote:
> 
> 
> Brent meeker writes:
> 
> 
>>I could make a robot that, having suitable thermocouples, would quickly 
>>withdraw its 
>>hand from a fire; but not be conscious of it.  Even if I provide the 
>>robot with 
>>"feelings", i.e. judgements about good/bad/pain/pleasure I'm not sure it 
>>would be 
>>conscious.  But if I provide it with "attention" and memory, so that it 
>>noted the 
>>painful event as important and necessary to remember because of it's 
>>strong negative 
>>affect; then I think it would be conscious.
>
>
>It's interesting that people actually withdraw their hand from the fire 
>*before* they experience 
>the pain. The withdrawal is a reflex, presumably evolved in organisms with 
>the most primitive 
>central nervous systems, while the pain seems to be there as an 
>afterthought to teach us a 
>lesson so we won't do it again. Thus, from consideration of evolutionary 
>utility consciousness 
>does indeed seem to be a side-effect of memory and learning. 

Even more curious, volitional action also occurs before one is aware of it. 
Are you 
familiar with the experiments of Benjamin Libet and Grey Walter?
>>>
>>>
>>>These experiments showed that in apparently voluntarily initiated motion, 
>>>motor cortex activity 
>>>actually preceded the subject's awareness of his intention by a substantial 
>>>fraction of a second. 
>>>In other words, we act first, then "decide" to act. These studies did not 
>>>examine pre-planned 
>>>action (presumably that would be far more technically difficult) but it is 
>>>easy to imagine the analogous 
>>>situation whereby the action is unconsciously "planned" before we become 
>>>aware of our decision. In 
>>>other words, free will is just a feeling which occurs after the fact. This 
>>>is consistent with the logical 
>>>impossibility of something that is neither random nor determined, which is 
>>>what I feel my free will to be.
>>>
>>>
>>>
>I also think that this is an argument against zombies. If it were possible 
>for an organism to 
>behave just like a conscious being, but actually be unconscious, then why 
>would consciousness 
>have evolved? 

An interesting point - but hard to give any answer before pinning down what 
we mean 
by consciousness.  For example Bruno, Julian Jaynes, and Daniel Dennett 
have 
explanations; but they explain somewhat different consciousnesses, or at 
least 
different aspects.
>>>
>>>
>>>Consciousness is the hardest thing to explain but the easiest thing to 
>>>understand, if it's your own 
>>>consciousness at issue. I think we can go a long way discussing it assuming 
>>>that we do know what 
>>>we are talking about even though we can't explain it. The question I ask is, 
>>>why did people evolve 
>>>with this consciousness thing, whatever it is? The answer must be, I think, 
>>>that it is a necessary 
>>>side-effect of the sort of neural complexity that underpins our behaviour. 
>>>If it were not, and it 
>>>were possible that beings could behave exactly like humans and not be 
>>>conscious, then it would 
>>>have been wasteful of nature to have provided us with consciousness. 
>>
>>This is not necessarily so.  First, evolution is constrained by what goes 
>>before. 
>>Its engineering solutions often seem Rube Goldberg-like, e.g. the backward retina in 
>>mammals. 
> 
> 
> Sure, but vision itself would not have evolved unnecessarily.
> 
> 
>>  Second, there is selection against some evolved feature only to the extent 
>> it has a 
>>(net) cost.  For example, Jaynes's explanation of consciousness conforms to 
>>these two 
>>criteria.  I think that any species that evolves intelligence comparable to 
>>ours will 
>>be conscious for reasons somewhat like Jaynes's theory.  They will be social 
>>and this 
>>combined with intelligence will make language a good evolutionary move.  Once 
>>they 
>>have language, remembering what has happened, in order to communicate and 
>>plan, in 
>>symbolic terms will be an easy and natural development.  Whether that leads to 
>>hearing 
>>your own narrative in your head, as Jaynes supposes, is questionable; but it 
>>would be 
>>consistent with evolution. It takes advantage of existing structure and 
>>functions to 
>>realize a useful new function.
> 
> 
> Agreed. So consciousness is either there for a reason or it's a necessary 
> side-effect of the sort 
> of brains we have and the way we have evolved. It's still theoretically 
> possible that if the latter 
> is the case, we might have been unconscious if we had evolved completely 
> different kinds of 
> brains, but similar behaviour - although I think it unlikely.
>  
> 
>>>This does not necessarily 
>>>mean that computers can be conscious: maybe if we had evolved with 
>>>electronic circuits in our 
>>>heads rather than neurons consciousness would not have been a necessary 
>>

RE: computationalism and supervenience

2006-09-11 Thread Colin Hales


Stathis Papaioannou

> Maybe this is a copout, but I just don't think it is even logically
> possible to explain what consciousness
> *is* unless you have it. It's like the problem of explaining vision to a
> blind man: he might be the world's
> greatest scientific expert on it but still have zero idea of what it is
> like to see - and that's even though
> he shares most of the rest of his cognitive structure with other humans,
> and can understand analogies
> using other sensations. Knowing what sort of program a conscious computer
> would have to run to be
> conscious, what the purpose of consciousness is, and so on, does not help
> me to understand what the
> computer would be experiencing, except by analogy with what I myself
> experience.
> 
> Stathis Papaioannou
> 

Please consider the plight of the zombie scientist with a huge set of
sensory feeds and a similar set of effectors. All carry similar signal
encoding and all, in themselves, bestow no experiential qualities on the
zombie.

Add a capacity to detect regularity in the sensory feeds.
Add a scientific goal-seeking behaviour.

Note that this zombie...
a) has the internal life of a dreamless sleep
b) has no concept or percept of body or periphery
c) has no concept that it is embedded in a universe.

I put it to you that science (the extraction of regularity) is the science
of zombie sensory fields, not the science of the natural world outside the
zombie scientist. No amount of creativity (except maybe random choices)
would ever lead to any abstraction of the outside world that gave it the
ability to handle novelty in the natural world outside the zombie scientist.

No matter how sophisticated the sensory feeds and any guesswork as to a
model (abstraction) of the universe, the zombie would eventually find
novelty invisible, because the sensory feeds fail to depict the novelty, i.e.
same sensory feeds for different behaviour of the natural world.

Technology built by a zombie scientist would replicate zombie sensory feeds,
not deliver an independently operating novel chunk of hardware with a
defined function (if the idea of function even has meaning in this instance).

The purpose of consciousness is, IMO, to endow the cognitive agent with at
least a repeatable (not accurate!) simile of the universe outside the
cognitive agent so that novelty can be handled. Only then can the zombie
scientist detect arbitrary levels of novelty and do open ended science (or
survive in the wild world of novel environmental circumstance).

In the absence of the functionality of phenomenal consciousness and with
finite sensory feeds you cannot construct any world-model (abstraction) in
the form of an innate (a-priori) belief system that will deliver an endless
ability to discriminate novelty. In a very Gödelian way, eventually a limit
would be reached where the abstracted model could not make any prediction that
can be detected. The zombie is, in a very real way, faced with 'truths' that
exist but can't be accessed/perceived. As such its behaviour will be
fundamentally fragile in the face of novelty (just like all computer
programs are).
---
Just to make the zombie a little more real... consider the industrial
control system computer. I have designed, installed hundreds and wired up
tens (hundreds?) of thousands of sensors and an unthinkable number of
kilometers of cables. (NEVER again!) In all cases I put it to you that the
phenomenal content of sensory connections may, at best, be characterised as
whatever it is like to have electrons crash through wires, for that is what
is actually going on. As far as the internal life of the CPU is concerned...
whatever it is like to be an electrically noisy hot rock, regardless of the
program... although the character of the noise may alter with different 
programs!

I am a zombie expert! No, that didn't come out right... erm...
perhaps... "I think I might be a world expert in zombies"... yes, that's
better.
:-)
Colin Hales





RE: computationalism and supervenience

2006-09-11 Thread Stathis Papaioannou



Brent Meeker writes:

> I could make a robot that, having suitable thermocouples, would quickly 
> withdraw its 
> hand from a fire; but not be conscious of it.  Even if I provide the 
> robot with 
> "feelings", i.e. judgements about good/bad/pain/pleasure, I'm not sure it 
> would be 
> conscious.  But if I provide it with "attention" and memory, so that it 
> noted the 
> painful event as important and necessary to remember because of its 
> strong negative 
> affect; then I think it would be conscious.
> >>>
> >>>
> >>>It's interesting that people actually withdraw their hand from the fire 
> >>>*before* they experience 
> >>>the pain. The withdrawal is a reflex, presumably evolved in organisms with 
> >>>the most primitive 
> >>>central nervous systems, while the pain seems to be there as an 
> >>>afterthought to teach us a 
> >>>lesson so we won't do it again. Thus, from consideration of evolutionary 
> >>>utility, consciousness 
> >>>does indeed seem to be a side-effect of memory and learning. 
> >>
> >>Even more curious, volitional action also occurs before one is aware of it. 
> >>Are you 
> >>familiar with the experiments of Benjamin Libet and Grey Walter?
> > 
> > 
> > These experiments showed that in apparently voluntarily initiated motion, 
> > motor cortex activity 
> > actually preceded the subject's awareness of his intention by a substantial 
> > fraction of a second. 
> > In other words, we act first, then "decide" to act. These studies did not 
> > examine pre-planned 
> > action (presumably that would be far more technically difficult) but it is 
> > easy to imagine the analogous 
> > situation whereby the action is unconsciously "planned" before we become 
> > aware of our decision. In 
> > other words, free will is just a feeling which occurs after the fact. This 
> > is consistent with the logical 
> > impossibility of something that is neither random nor determined, which is 
> > what I feel my free will to be.
> > 
> > 
> >>>I also think that this is an argument against zombies. If it were possible 
> >>>for an organism to 
> >>>behave just like a conscious being, but actually be unconscious, then why 
> >>>would consciousness 
> >>>have evolved? 
> >>
> >>An interesting point - but hard to give any answer before pinning down what 
> >>we mean 
> >>by consciousness.  For example Bruno, Julian Jaynes, and Daniel Dennett 
> >>have 
> >>explanations; but they explain somewhat different consciousnesses, or at 
> >>least 
> >>different aspects.
> > 
> > 
> > Consciousness is the hardest thing to explain but the easiest thing to 
> > understand, if it's your own 
> > consciousness at issue. I think we can go a long way discussing it assuming 
> > that we do know what 
> > we are talking about even though we can't explain it. The question I ask 
> > is, why did people evolve 
> > with this consciousness thing, whatever it is? The answer must be, I think, 
> > that it is a necessary 
> > side-effect of the sort of neural complexity that underpins our behaviour. 
> > If it were not, and it 
> > were possible that beings could behave exactly like humans and not be 
> > conscious, then it would 
> > have been wasteful of nature to have provided us with consciousness. 
> 
> This is not necessarily so.  First, evolution is constrained by what goes 
> before. 
> Its engineering solutions often seem Rube Goldberg-like, e.g. the backward retina in 
> mammals. 

Sure, but vision itself would not have evolved unnecessarily.

>   Second, there is selection against some evolved feature only to the extent 
> it has a 
> (net) cost.  For example, Jaynes's explanation of consciousness conforms to 
> these two 
> criteria.  I think that any species that evolves intelligence comparable to 
> ours will 
> be conscious for reasons somewhat like Jaynes's theory.  They will be social 
> and this 
> combined with intelligence will make language a good evolutionary move.  Once 
> they 
> have language, remembering what has happened, in order to communicate and 
> plan, in 
> symbolic terms will be an easy and natural development.  Whether that leads to 
> hearing 
> your own narrative in your head, as Jaynes supposes, is questionable; but it 
> would be 
> consistent with evolution. It takes advantage of existing structure and 
> functions to 
> realize a useful new function.

Agreed. So consciousness is either there for a reason or it's a necessary 
side-effect of the sort 
of brains we have and the way we have evolved. It's still theoretically 
possible that if the latter 
is the case, we might have been unconscious if we had evolved completely 
different kinds of 
brains, but similar behaviour - although I think it unlikely.
 
> >This does not necessarily 
> > mean that computers can be conscious: maybe if we had evolved with 
> > electronic circuits in our 
> > heads rather than neurons consciousness would not have been a necessary 
> > side-effect. 
> 
> But my point is that this may come down to what we would mean by a computer 
> being conscious.  Bruno has an answer in terms of what the computer can 
> prove.  Jaynes (and probably John McCarthy) would say a computer is conscious 
> if it creates a narrative of its experience which it can access as memory.

Re: computationalism and supervenience

2006-09-11 Thread Brent Meeker

Stathis Papaioannou wrote:
> Brent Meeker writes:
> 
> 
>>>I think it goes against standard computationalism if you say that a 
>>>conscious 
>>>computation has some inherent structural property. Opponents of 
>>>computationalism 
>>>have used the absurdity of the conclusion that anything implements any 
>>>conscious 
>>>computation as evidence that there is something special and 
>>>non-computational 
>>>about the brain. Maybe they're right.
>>>
>>>Stathis Papaioannou
>>
>>Why not reject the idea that any computation implements every possible 
>>computation 
>>(which seems absurd to me)?  Then allow that only computations with some 
>>special 
>>structure are conscious.
> 
> 
> It's possible, but once you start in that direction you can say that only 
> computations 
> implemented on this machine rather than that machine can be conscious. You 
> need the 
> hardware in order to specify structure, unless you can think of a God-given 
> programming 
> language against which candidate computations can be measured.

I regard that as a feature - not a bug. :-)

Disembodied computation doesn't quite seem absurd - but our empirical sample 
argues 
for embodiment.

Brent Meeker




Re: computationalism and supervenience

2006-09-11 Thread Brent Meeker

Stathis Papaioannou wrote:
> Brent Meeker writes:
> 
> 
Why not?  Can't we map bat conscious-computation to human 
conscious-computation; 
since you suppose we can map any computation to any other.  But, you're 
thinking, 
since there is a practical infinity of maps (even a countable infinity if you 
allow 
one->many) there is no way to know which is the correct map.  There is if 
you and the 
bat share an environment.
>>>
>>>
>>>You're right that the correct mapping is the one in which you and the bat 
>>>share the 
>>>environment. That is what interaction with the environment does: forces us 
>>>to choose 
>>>one mapping out of all the possible ones, whether that involves talking to 
>>>another person 
>>>or using a computer. However, that doesn't mean I know everything about bats 
>>>if I know 
>>>everything about bat-computations. If it did, that would mean there was no 
>>>difference 
>>>between zombie bats and conscious bats, no difference between first person 
>>>knowledge 
>>>and third person or vicarious knowledge.
>>>
>>>Stathis Papaioannou
>>
>>I don't find either of those conclusions absurd.  Computationalism is 
>>generally 
>>thought to entail both of them.  Bruno's theory that identifies knowledge 
>>with 
>>provability is the only form of computationalism that seems to allow the 
>>distinction 
>>in a fundamental way.
> 
> 
> The Turing test would seem to imply that if it behaves like a bat, it has the 
> mental states of a 
> bat, and maybe this is a good practical test, but I think we can keep 
> computationalism/strong AI 
> and allow that it might have different mental states and still behave the 
> same. A person given 
> an opioid drug still experiences pain, although less intensely, and would be 
> easily able to fool the 
> Turing tester into believing that he is experiencing the same pain as in the 
> undrugged state. By 
> extension, it is logically possible, though unlikely, that the subject may 
> have no conscious experiences 
> at all. The usual argument against this is that by the same reasoning we 
> cannot be sure that our 
> fellow humans are conscious. This is strictly true, but we have two reasons 
> for assuming other 
> people are conscious: they behave like we do and their brains are similar to 
> ours. I don't think 
> it would be unreasonable to wonder whether a digital computer that behaves 
> like we do really 
> has the same mental states as a human, while still believing that it is 
> theoretically possible that a 
> close enough analogue of a human brain would have the same mental states.
> 
> Stathis Papaioannou

I agree with that.  It would be hard to say whether a robot whose computation 
was via 
a digital computer implementing something like a production system was 
conscious or 
not, even if its behavior were very close to human.  On the other hand, it would 
also 
be hard to say that another robot, whose computation was by digital simulation 
of a 
neural network modeled on a mammalian brain and whose behavior was very close 
to 
human, was *not* conscious.

Brent Meeker




RE: computationalism and supervenience

2006-09-11 Thread Colin Hales



> -Original Message-
Stathis Papaioannou
> 
> Brent Meeker writes:
> 
> > >>Why not?  Can't we map bat conscious-computation to human conscious-
> computation;
> > >>since you suppose we can map any computation to any other.  But,
> you're thinking,
> > >>since there is a practical infinity of maps (even a countable infinity if
> you allow
> > >>one->many) there is no way to know which is the correct map.  There is
> if you and the
> > >>bat share an environment.
> > >
> > >
> > > You're right that the correct mapping is the one in which you and the
> bat share the
> > > environment. That is what interaction with the environment does:
> forces us to choose
> > > one mapping out of all the possible ones, whether that involves
> talking to another person
> > > or using a computer. However, that doesn't mean I know everything
> about bats if I know
> > > everything about bat-computations. If it did, that would mean there
> was no difference
> > > between zombie bats and conscious bats, no difference between first
> person knowledge
> > > and third person or vicarious knowledge.
> > >
> > > Stathis Papaioannou
> >
> > I don't find either of those conclusions absurd.  Computationalism is
> generally
> > thought to entail both of them.  Bruno's theory that identifies
> knowledge with
> > provability is the only form of computationalism that seems to allow the
> distinction
> > in a fundamental way.
> 
> The Turing test would seem to imply that if it behaves like a bat, it has
> the mental states of a
> bat, and maybe this is a good practical test, but I think we can keep
> computationalism/strong AI
> and allow that it might have different mental states and still behave the
> same. A person given
> an opioid drug still experiences pain, although less intensely, and would
> be easily able to fool the
> Turing tester into believing that he is experiencing the same pain as in
> the undrugged state. By
> extension, it is logically possible, though unlikely, that the subject may
> have no conscious experiences
> at all. The usual argument against this is that by the same reasoning we
> cannot be sure that our
> fellow humans are conscious. This is strictly true, but we have two
> reasons for assuming other
> people are conscious: they behave like we do and their brains are similar
> to ours. I don't think
> it would be unreasonable to wonder whether a digital computer that behaves
> like we do really
> has the same mental states as a human, while still believing that it is
> theoretically possible that a
> close enough analogue of a human brain would have the same mental states.
> 
> Stathis Papaioannou

I am so glad to hear this come onto the list, Stathis. Your argument is
logically equivalent... I took this argument (from the recent thread) over
to the JCS-ONLINE forum and threw it in there to see what would happen. As a
result I wrote a short paper ostensibly to dispose of the solipsism argument
once and for all by demonstrating empirical proof of the existence of
consciousness (if not any particular details within it). In it is some of
the stuff from the thread... and an acknowledgement to the list.

I expect it will be rejected as usual... regardless, it's encouraging to at
least see a little glimmer of hope that some of the old arguments that get
trotted out are getting a little frayed around the edges...

If anyone wants to see it, they are welcome... just email me. Or perhaps I
could put it in the Google forum somewhere... it can do that, can't it?

BTW: The 'what it is like' of a Turing machine = what it is like to be a
tape and tape reader, regardless of what is on the tape. 'tape_reader_ness',
I assume... :-)

Regards,


Colin Hales






RE: computationalism and supervenience

2006-09-11 Thread Stathis Papaioannou

Brent Meeker writes:

> > I think it goes against standard computationalism if you say that a 
> > conscious 
> > computation has some inherent structural property. Opponents of 
> > computationalism 
> > have used the absurdity of the conclusion that anything implements any 
> > conscious 
> > computation as evidence that there is something special and 
> > non-computational 
> > about the brain. Maybe they're right.
> > 
> > Stathis Papaioannou
> 
> Why not reject the idea that any computation implements every possible 
> computation 
> (which seems absurd to me)?  Then allow that only computations with some 
> special 
> structure are conscious.

It's possible, but once you start in that direction you can say that only 
computations 
implemented on this machine rather than that machine can be conscious. You need 
the 
hardware in order to specify structure, unless you can think of a God-given 
programming 
language against which candidate computations can be measured.

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-11 Thread Stathis Papaioannou

Brent Meeker writes:

> >>Why not?  Can't we map bat conscious-computation to human 
> >>conscious-computation; 
> >>since you suppose we can map any computation to any other.  But, you're 
> >>thinking, 
> >>since there is a practical infinity of maps (even a countable infinity if you 
> >>allow 
> >>one->many) there is no way to know which is the correct map.  There is if 
> >>you and the 
> >>bat share an environment.
> > 
> > 
> > You're right that the correct mapping is the one in which you and the bat 
> > share the 
> > environment. That is what interaction with the environment does: forces us 
> > to choose 
> > one mapping out of all the possible ones, whether that involves talking to 
> > another person 
> > or using a computer. However, that doesn't mean I know everything about 
> > bats if I know 
> > everything about bat-computations. If it did, that would mean there was no 
> > difference 
> > between zombie bats and conscious bats, no difference between first person 
> > knowledge 
> > and third person or vicarious knowledge.
> > 
> > Stathis Papaioannou
> 
> I don't find either of those conclusions absurd.  Computationalism is 
> generally 
> thought to entail both of them.  Bruno's theory that identifies knowledge 
> with 
> provability is the only form of computationalism that seems to allow the 
> distinction 
> in a fundamental way.

The Turing test would seem to imply that if it behaves like a bat, it has the 
mental states of a 
bat, and maybe this is a good practical test, but I think we can keep 
computationalism/strong AI 
and allow that it might have different mental states and still behave the same. 
A person given 
an opioid drug still experiences pain, although less intensely, and would be 
easily able to fool the 
Turing tester into believing that he is experiencing the same pain as in the 
undrugged state. By 
extension, it is logically possible, though unlikely, that the subject may have 
no conscious experiences 
at all. The usual argument against this is that by the same reasoning we cannot 
be sure that our 
fellow humans are conscious. This is strictly true, but we have two reasons for 
assuming other 
people are conscious: they behave like we do and their brains are similar to 
ours. I don't think 
it would be unreasonable to wonder whether a digital computer that behaves like 
we do really 
has the same mental states as a human, while still believing that it is 
theoretically possible that a 
close enough analogue of a human brain would have the same mental states.

Stathis Papaioannou



Re: computationalism and supervenience

2006-09-11 Thread Brent Meeker

Stathis Papaioannou wrote:
> Brent Meeker writes:
> 
> 
>>I think we need to say what it means for a computation to be 
>>self-interpreting.  Many 
>>control programs are written with self-monitoring functions and logging 
>>functions. 
>>Why would we not attribute consciousness to them?
> 
> 
> Well, why not? Some people don't even think higher mammals are conscious, and 
> perhaps 
> there are true solipsists who could convince themselves that other 
> people are not really 
> conscious as a rationalisation for antisocial behaviour. 

Autistic people don't empathize with others' feelings - perhaps because they 
don't have them.  But their behavior, and I would expect the behavior of a 
real solipsist, would be simply asocial.

>On the other hand, maybe flies experience 
> pain and fear when confronted with insecticide that is orders of magnitude 
> greater than that 
> of any mere human experience of torture, and maybe when I press the letter 
> "y" on my 
> keyboard I am subjecting my computer to the torments of hell. 

And maybe every physical process implements all possible computations - but I 
see no 
reason to believe so.

>I don't buy the argument that 
> only complex brains or computations can experience pain either: when I was a 
> child I wasn't 
> as smart as I am now, but I recall that it hurt a lot more and I was much 
> more likely to cry when 
> I cut myself. 
> 
> Stathis Papaioannou

You write as though we know nothing about the physical basis of pain and fear.  
There is a lot of empirical evidence about what prevents pain in humans; you 
can even get a degree in anesthesiology.  Fear can be induced by psychotropic 
drugs and relieved by whisky.

Brent Meeker




Re: computationalism and supervenience

2006-09-11 Thread Brent Meeker

Stathis Papaioannou wrote:
> Peter Jones writes:
> 
> 
>>Stathis Papaioannou wrote:
>>
>>
>>>Like Bruno, I am not claiming that this is definitely the case, just that it 
>>>is the case if
>>>computationalism is true. Several philosophers (eg. Searle) have used the 
>>>self-evident
>>>absurdity of the idea as an argument demonstrating that computationalism is 
>>>false -
>>>that there is something non-computational about brains and consciousness. I 
>>>have not
>>>yet heard an argument that rejects this idea and saves computationalism.
>>
>>[ rolls up sleeves ]
>>
>>The idea is easily refuted if it can be shown that computation doesn't
>>require
>>interpretation at all. It can also be refuted more circuitously by
>>showing that
>>computation is not entirely a matter of interpretation. In everythingism,
>>everything
>>is equal. If some computations (the ones that don't depend on
>>interpretation) are
>>"more equal than others", the way is still open for the Somethingist
>>to object
>>that interpretation-independent computations are really real, and the
>>others are
>>mere possibilities.
>>
>>The claim has been made that computation is "not much use" without an
>>interpretation.
>>Well, if you define a computer as something that is used by a human,
>>that is true.
>>It is also very problematic to the computationalist claim that the
>>human mind is a computer.
>>Is the human mind of use to a human ? Well, yes, it helps us stay alive
>>in various ways.
>>But that is more to do with reacting to a real-time environment, than
>>performing abstract symbolic manipulations or elaborate
>>re-interpretations. (Computationalists need to be careful about how
>>they define "computer". Under
>>some perfectly reasonable definitions -- for instance, defining a
>>computer as
>>a human invention -- computationalism is trivially false).
> 
> 
> I don't mean anything controversial (I think) when I refer to interpretation 
> of 
> computation. Take a mercury thermometer: it would still do its thing if all 
> sentient life in the universe died out, or even if there were no sentient 
> life to 
> build it in the first place and by amazing luck mercury and glass had come 
> together 
> in just the right configuration. But if there were someone around to observe 
> it and 
> understand it, or if it were attached to a thermostat and heater, the 
> thermometer 
> would have extra meaning - the same thermometer, doing the same thermometer 
> stuff. Now, if thermometers were conscious, then part of their "thermometer 
> stuff" might include "knowing" what the temperature was - all by themselves, 
> without 
> benefit of external observer. 

We should ask ourselves how we know the thermometer isn't conscious of the 
temperature.  It seems that the answer has been that its state or activity 
*could* be interpreted in many ways other than indicating the temperature; 
therefore it must be said to be unconscious of the temperature, or we must 
allow that it implements all conscious thought (or at least all for which 
there is a possible interpretative mapping).  But I see its state and activity 
as relative to our shared environment; and this greatly constrains what it can 
be said to "compute", e.g. the temperature, the expansion coefficient of Hg...  
With this constraint, I think there is no problem in saying the thermometer is 
conscious at the extremely low level of being aware of the temperature or the 
expansion coefficient of Hg or whatever else is within the constraint.
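
A toy sketch of that environmental constraint (an illustration invented for 
this note, not anything from the thread; all names and numbers are made up): 
among many candidate interpretations of the thermometer's states, only the one 
that co-varies with the independently measured shared environment survives.

    # Toy illustration (invented): a shared environment selects among
    # candidate interpretations of a device's physical states.
    device_states = [10.0, 10.5, 11.0, 11.5]   # e.g. mercury column heights
    environment   = [20.0, 21.0, 22.0, 23.0]   # independently measured temps

    candidate_maps = {
        "temperature": lambda h: 2.0 * h,      # co-varies with the environment
        "inverse":     lambda h: 100.0 - h,    # an arbitrary re-reading
        "constant":    lambda h: 42.0,         # another arbitrary re-reading
    }

    def misfit(m):
        """Mean squared error between interpreted states and environment."""
        return sum((m(s) - e) ** 2
                   for s, e in zip(device_states, environment)) / len(device_states)

    best = min(candidate_maps, key=lambda name: misfit(candidate_maps[name]))
    print(best)   # -> temperature: the shared environment picks the mapping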

>Furthermore, if thermometers were conscious, they 
> might be dreaming of temperatures, or contemplating the meaning of 
> consciousness, 
> again in the absence of external observers, and this time in the absence of 
> interaction 
> with the real world. 
> 
> This, then, is the difference between a computation and a conscious 
> computation. If 
> a computation is unconscious, it can only have meaning/use/interpretation in 
> the eyes 
> of a beholder or in its interaction with the environment. 

But this is a useless definition of the difference.  To apply it, we have to know 
whether 
some putative conscious computation has meaning to itself; which we can only 
know by 
knowing whether it is conscious or not.  It makes consciousness ineffable and 
so 
makes the question of whether computationalism is true an insoluble mystery.

Even worse, it makes it impossible for us to know whether we're talking about 
the same 
thing when we use the word "consciousness".

>If a computation is conscious, 
> it may have meaning/use/interpretation in interacting with its environment, 
> including 
> other conscious beings, and for obvious reasons all the conscious 
> computations we 
> encounter will fall into that category; but a conscious computation can also 
> have meaning 
> all by itself, to itself. 

I think this is implicitly circular.  Consciousness supplies meaning through 
interpretation.  But meaning is defined only as what consciousness supplies.

It is to

RE : computationalism and supervenience

2006-09-11 Thread Bruno Marchal

Brent Meeker wrote (through many posts):


> I won't insist, because you might be right, but I don't think that is  
> proven.  It may
> be that interaction with the environment is essential to continued  
> consciousness.



Assuming comp, I think that this is a red herring. To make this clear I  
use a notion of generalized brain in some longer version of the UDA.  
See perhaps:

http://groups.google.com/group/everything-list/browse_frm/thread/4c995dee307def3b/9f94f4d49cb2b9e6?q=universal+dovetailer&rnum=1#9f94f4d49cb2b9e6

The generalized brain is by definition the portion of whatever you need  
to Turing-emulate to "experience nothing" or to survive in the relative  
way addressed through comp. It can contain any part of the environment.  
Note that in that case, assuming comp, such a part has to be assumed  
Turing-emulable, or comp is just false.

Of course, if the generalized brain is the entire multiverse, the  
thought experiment with the doctor is harder to figure out, certainly.  
But already at the seventh step of the 8-steps-version of the UDA, you  
can understand that in front of the infinitely (even just potentially  
from all actual views) running UD, comp makes all your continuations  
UD-accessed. It would just mean, in that case, that there is a unique  
winning program with respect to building you. I doubt that, but that is  
not the point.

By the same token, it is also not difficult to get the "evolution of  
brain" into the notion of generalized brain, so that "evolution" is  
also a red herring when used as a critique of comp, despite the  
possibility of non-computational aspects of evolution like geographical  
randomization à-la Washington/Moscow.



> I would bet on computationalism too.  But I still think the conclusion  
> that every
> physical process, even the null one, necessarily implements all  
> possible
> consciousness is absurd.


OK, but the point is just that comp implies that physical processes  
do not implement consciousness "per se". They implement  
consciousness only insofar as they make that consciousness able to manifest  
itself relative to its most probable computational history (among a  
continuum).





>>
>> Reductio ad absurdum of what? Comp or  (weak) Materialism?
>>
>> Bruno
>
> Dunno.  A reductio doesn't tell you which premise is wrong.



Nice. So you seem to agree with the UDA+movie-graph argument, we have:

not comp v not physical-supervenience.

This is equivalent to both:

comp -> not physical supervenience, and
physical supervenience -> not comp
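
In propositional form (writing C for comp and S for physical supervenience) 
the three statements above are indeed a single claim; a minimal LaTeX 
rendering, by De Morgan and material implication:

    % C = comp, S = physical supervenience.
    % De Morgan and material implication make the forms interchangeable:
    \neg(C \land S)
      \;\equiv\; \neg C \lor \neg S
      \;\equiv\; (C \rightarrow \neg S)
      \;\equiv\; (S \rightarrow \neg C)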

Now I agree that at this stage (after UDA) it would be natural to  
abandon comp, but computer science, and the translation of the UDA  
into the language of a universal Turing machine (sufficiently rich, or  
Löbian), suggest that such an abandonment could be premature (to say the  
least). Incompleteness should make us skeptical of any intuitive and  
too rapid conclusion.





> That's generally useful; but when we understand little about  
> something, such as
> consciousness, we should be careful about assuming what's  
> "theoretically possible";
> particularly when it seems to lead to absurdities.


Mmh... If we assume "theoretical possibilities" and are then led to  
absurdities, then we have learned something: evidence against the  
theoretical assumptions. If the "absurdities" can be transformed into a  
clear contradiction, perhaps by making the theoretical assumptions  
clearer, then we have proved something: the falsity of the assumptions.
I think you know that, and you were just being quick, weren't you?





>
>> Stathis: In discussing Tim Maudlin's paper, Bruno has concluded
>> that either computationalism is false or the supervenience theory is  
>> false.
>
> As I understand it Bruno would say that physics supervenes on number  
> theory and
> consciousness supervenes on physics.  So physics is eliminable.


Note that Maudlin arrives at the same conclusion as me: NOT comp OR  
NOT physical-supervenience. Maudlin then concludes, assuming  
sup-phys, that comp is problematic (although he realized that not-comp  
is yet still more problematic). I conclude, just because I keep comp at  
this stage, that sup-phys is false, and this makes primary matter  
eliminable. Physics as a field is not eliminated, of course, but it is  
eliminated as a fundamental field. It is not so astonishing given that  
physics does not often seriously address the mind/body puzzle, and when  
it does (cf Bunge) it still uses Aristotelian means to put the problem  
under the rug.




> That interpretation can be reduced to computation is implicit in  
> computationalism.
> The question is what, if anything, is unique about those computations  
> that execute
> interpretation.


Interpretations are done by interpreters, that is, *universal* (Turing)  
machines.
Perhaps we should agree on a definition, at least for the 3-notions: a  
3-interpretation can be encoded through a (in general infinite) trace  
of a computation.
Wi

RE: computationalism and supervenience

2006-09-11 Thread Stathis Papaioannou

Brent Meeker writes:

> I think we need to say what it means for a computation to be 
> self-interpreting.  Many 
> control programs are written with self-monitoring functions and logging 
> functions. 
> Why would we not attribute consciousness to them?

Well, why not? Some people don't even think higher mammals are conscious, and 
perhaps 
there are true solipsists who could convince themselves that other people 
are not really 
conscious as a rationalisation for antisocial behaviour. On the other hand, maybe 
flies experience 
pain and fear when confronted with insecticide that is orders of magnitude 
greater than that 
of any mere human experience of torture, and maybe when I press the letter "y" 
on my 
keyboard I am subjecting my computer to the torments of hell. I don't buy the 
argument that 
only complex brains or computations can experience pain either: when I was a 
child I wasn't 
as smart as I am now, but I recall that it hurt a lot more and I was much more 
likely to cry when 
I cut myself. 
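
The reflex-versus-attention-and-memory contrast Brent draws elsewhere in the 
thread, and his "self-monitoring functions" above, can be sketched in a few 
lines of Python; this is only an illustration of the distinction, and every 
class, threshold and record below is invented for the example.

    # Invented illustration of the reflex vs. attention-and-memory contrast.
    class ReflexRobot:
        def sense(self, temp_c):
            if temp_c > 60:         # thermocouple threshold (arbitrary)
                self.withdraw()     # reflex: act, record nothing

        def withdraw(self):
            print("hand withdrawn")

    class AttentiveRobot(ReflexRobot):
        def __init__(self):
            self.memory = []        # episodic log of salient events

        def sense(self, temp_c):
            super().sense(temp_c)   # the reflex still fires first
            if temp_c > 60:
                # "attention": the event is judged important and remembered,
                # so later behaviour can be adjusted in its light.
                self.memory.append(("pain", temp_c, "avoid"))

    r = AttentiveRobot()
    r.sense(80)                     # withdraws *and* records the episode
    print(r.memory)                 # [('pain', 80, 'avoid')]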

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-11 Thread Stathis Papaioannou

Peter Jones writes:

> Stathis Papaioannou wrote:
> 
> > Like Bruno, I am not claiming that this is definitely the case, just that 
> > it is the case if
> > computationalism is true. Several philosophers (eg. Searle) have used the 
> > self-evident
> > absurdity of the idea as an argument demonstrating that computationalism is 
> > false -
> > that there is something non-computational about brains and consciousness. I 
> > have not
> > yet heard an argument that rejects this idea and saves computationalism.
> 
> [ rolls up sleeves ]
> 
> The idea is easily refuted if it can be shown that computation doesn't
> require
> interpretation at all. It can also be refuted more circuitously by
> showing that
> computation is not entirely a matter of interpretation. In everythingism,
> everything
> is equal. If some computations (the ones that don't depend on
> interpretation) are
> "more equal than others", the way is still open for the Somethingist
> to object
> that interpretation-independent computations are really real, and the
> others are
> mere possibilities.
> 
> The claim has been made that computation is "not much use" without an
> interpretation.
> Well, if you define a computer as something that is used by a human,
> that is true.
> It is also very problematic to the computationalist claim that the
> human mind is a computer.
> Is the human mind of use to a human ? Well, yes, it helps us stay alive
> in various ways.
> But that is more to do with reacting to a real-time environment, than
> performing abstract symbolic manipulations or elaborate
> re-interpretations. (Computationalists need to be careful about how
> they define "computer". Under
> some perfectly reasonable definitions -- for instance, defining a
> computer as
> a human invention -- computationalism is trivially false).

I don't mean anything controversial (I think) when I refer to interpretation of 
computation. Take a mercury thermometer: it would still do its thing if all 
sentient life in the universe died out, or even if there were no sentient life 
to 
build it in the first place and by amazing luck mercury and glass had come 
together 
in just the right configuration. But if there were someone around to observe it 
and 
understand it, or if it were attached to a thermostat and heater, the 
thermometer 
would have extra meaning - the same thermometer, doing the same thermometer 
stuff. Now, if thermometers were conscious, then part of their "thermometer 
stuff" might include "knowing" what the temperature was - all by themselves, 
without 
benefit of external observer. Furthermore, if thermometers were conscious, they 
might be dreaming of temperatures, or contemplating the meaning of 
consciousness, 
again in the absence of external observers, and this time in the absence of 
interaction 
with the real world. 

This, then, is the difference between a computation and a conscious 
computation. If 
a computation is unconscious, it can only have meaning/use/interpretation in 
the eyes 
of a beholder or in its interaction with the environment. If a computation is 
conscious, 
it may have meaning/use/interpretation in interacting with its environment, 
including 
other conscious beings, and for obvious reasons all the conscious computations 
we 
encounter will fall into that category; but a conscious computation can also 
have meaning 
all by itself, to itself. You might argue, as Brent Meeker has, that a 
conscious being would 
quickly lose consciousness if environmental interaction were cut off, but I 
think that is just 
a contingent fact about brains, and in any case, as Bruno Marchal has pointed 
out, you 
only need a nanosecond of consciousness to prove the point.

> It is of course true that the output of a programme intended to do one
> thing
> ("system S", say) could be re-interpeted as something else. But what
> does it *mean* ?
> If computationalism is true whoever or whatever is doing the
> interpreting is another
> computational process. SO the ultimate result is formed by system S in
> connjunction
> with another systen. System S is merely acting as a subroutine. The
> Everythingist's
> intended conclusion is  that every physical system implements every
> computation.

That's what I'm saying, but I certainly don't think everyone agrees with me on 
the list, and 
I'm not completely decided as to which of the three is most absurd: every 
physical system 
implements every conscious computation, no physical system implements any 
conscious 
computation (they are all implemented non-physically in Platonia), or the idea 
that a 
computation can be conscious in the first place. 

> But the evidence -- the re-interpretation scenario -- only supports the
> idea
> that any computational system could become part of a larger system that
> is
> doing something else. System S cannot be said to be simultaneously
> performing
> every possible computation *itself*. The multiple-computation -- i.e.
> multiple-interpretation
> -- scenario is

Re: computationalism and supervenience

2006-09-10 Thread Brent Meeker

Stathis Papaioannou wrote:
> Peter Jones writes:
> 
> 
>>>I'm not sure how the multiverse comes into the discussion, but you have
>>>made the point several times that a computation depends on an observer
>>
>>
>>No, I haven't! I have tried to follow through the consequences of
>>assuming it must.
>>It seems to me that some sort of absurdity or contradiction ensues.
> 
> 
> OK. This has been a long and complicated thread.
>  
> 
>>>for its meaning. I agree, but *if* computations can be conscious (remember,
>>>this is an assumption) then in that special case an external observer is not
>>>needed.
>>
>>Why not ? (Well, I would be quite happy that a conscious
>>computation would have some inherent structural property --
>>I want to find out why *you* would think it doesn't).
> 
> 
> I think it goes against standard computationalism if you say that a conscious 
> computation has some inherent structural property. Opponents of 
> computationalism 
> have used the absurdity of the conclusion that anything implements any 
> conscious 
> computation as evidence that there is something special and non-computational 
> about the brain. Maybe they're right.
> 
> Stathis Papaioannou

Why not reject the idea that any computation implements every possible 
computation 
(which seems absurd to me)?  Then allow that only computations with some 
special 
structure are conscious.

Brent Meeker




Re: computationalism and supervenience

2006-09-10 Thread Brent Meeker

Stathis Papaioannou wrote:
> Brent Meeker writes:
> 
> 
I could make a robot that, having suitable thermocouples, would quickly 
withdraw its 
hand from a fire; but not be conscious of it.  Even if I provide the robot 
with 
"feelings", i.e. judgements about good/bad/pain/pleasure, I'm not sure it 
would be 
conscious.  But if I provide it with "attention" and memory, so that it 
noted the 
painful event as important and necessary to remember because of its strong 
negative 
affect; then I think it would be conscious.
>>>
>>>
>>>It's interesting that people actually withdraw their hand from the fire 
>>>*before* they experience 
>>>the pain. The withdrawal is a reflex, presumably evolved in organisms with 
>>>the most primitive 
>>>central nervous systems, while the pain seems to be there as an afterthought 
>>>to teach us a 
>>>lesson so we won't do it again. Thus, from consideration of evolutionary 
>>>utility, consciousness 
>>>does indeed seem to be a side-effect of memory and learning. 
>>
>>Even more curious, volitional action also occurs before one is aware of it. 
>>Are you 
>>familiar with the experiments of Benjamin Libet and Grey Walter?
> 
> 
> These experiments showed that in apparently voluntarily initiated motion, 
> motor cortex activity 
> actually preceded the subject's awareness of his intention by a substantial 
> fraction of a second. 
> In other words, we act first, then "decide" to act. These studies did not 
> examine pre-planned 
> action (presumably that would be far more technically difficult) but it is 
> easy to imagine the analogous 
> situation whereby the action is unconsciously "planned" before we become 
> aware of our decision. In 
> other words, free will is just a feeling which occurs after the fact. This is 
> consistent with the logical 
> impossibility of something that is neither random nor determined, which is 
> what I feel my free will to be.
> 
> 
>>>I also think that this is an argument against zombies. If it were possible 
>>>for an organism to 
>>>behave just like a conscious being, but actually be unconscious, then why 
>>>would consciousness 
>>>have evolved? 
>>
>>An interesting point - but hard to give any answer before pinning down what 
>>we mean 
>>by consciousness.  For example Bruno, Julian Jaynes, and Daniel Dennett have 
>>explanations; but they explain somewhat different consciousnesses, or at 
>>least 
>>different aspects.
> 
> 
> Consciousness is the hardest thing to explain but the easiest thing to 
> understand, if it's your own 
> consciousness at issue. I think we can go a long way discussing it assuming 
> that we do know what 
> we are talking about even though we can't explain it. The question I ask is, 
> why did people evolve 
> with this consciousness thing, whatever it is? The answer must be, I think, 
> that it is a necessary 
> side-effect of the sort of neural complexity that underpins our behaviour. If 
> it were not, and it 
> were possible that beings could behave exactly like humans and not be 
> conscious, then it would 
> have been wasteful of nature to have provided us with consciousness. 

This is not necessarily so.  First, evolution is constrained by what goes 
before. 
Its engineering solutions often seem Rube Goldberg-like, e.g. the backward retina in 
mammals. 
  Second, there is selection against some evolved feature only to the extent it 
has a 
(net) cost.  For example, Jaynes's explanation of consciousness conforms to these 
two 
criteria.  I think that any species that evolves intelligence comparable to 
ours will 
be conscious for reasons somewhat like Jaynes's theory.  They will be social and 
this 
combined with intelligence will make language a good evolutionary move.  Once 
they 
have language, remembering what has happened, in order to communicate and plan, 
in 
symbolic terms will be an easy and natural development.  Whether that leads to 
hearing 
your own narrative in your head, as Jaynes supposes, is questionable; but it 
would be 
consistent with evolution. It takes advantage of existing structure and 
functions to 
realize a useful new function.

>This does not necessarily 
> mean that computers can be conscious: maybe if we had evolved with electronic 
> circuits in our 
> heads rather than neurons consciousness would not have been a necessary 
> side-effect. 

But my point is that this may come down to what we would mean by a computer 
being 
conscious.  Bruno has an answer in terms of what the computer can prove.  
Jaynes (and 
probably John McCarthy) would say a computer is conscious if it creates a 
narrative 
of its experience which it can access as memory.

Brent Meeker



Re: computationalism and supervenience

2006-09-10 Thread Brent Meeker

Stathis Papaioannou wrote:
> Brent Meeker writes:
> 
> 
>>Stathis Papaioannou wrote:
>>
>>>Peter Jones writes:
>>>
>>>
>>>
>>With physical supervenience, it is possible for the same person to
>>supervene on multiple physical objects. What is disallowed is multiple
>>persons to supervene on the same physical object.
>
>That is what is usually understood, but there is no logical reason why
>the relationship between the physical and the mental cannot be
>one->many, in much the same way as a written message can have
>several meanings depending on its interpretation.

There is a reason: multiple meanings depend on external observers
and interpretations. But who observes the multiverse ?
>>>
>>>
>>>I'm not sure how the multiverse comes into the discussion, but you have 
>>>made the point several times that a computation depends on an observer 
>>>for its meaning. I agree, but *if* computations can be conscious (remember, 
>>>this is an assumption) then in that special case an external observer is not 
>>>needed. In fact, that is as good a definition of consciousness as any: it is 
>>>that aspect of an entity that cannot be captured by an external observer, 
>>>but only experienced by the entity itself. Once we learn every observable 
>>>fact about stars we know all about stars, but if we learn every observable 
>>>fact about bats, we still don't know what it is like to be a bat. 
>>
>>Why not?  Can't we map bat conscious-computation to human 
>>conscious-computation; 
>>since you suppose we can map any computation to any other.  But, you're 
>>thinking, 
>>since there is a practical infinity of maps (even a countable infinity if you 
>>allow 
>>one->many) there is no way to know which is the correct map.  There is if you 
>>and the 
>>bat share an environment.
> 
> 
> You're right that the correct mapping is the one in which you and the bat 
> share the 
> environment. That is what interaction with the environment does: forces us to 
> choose 
> one mapping out of all the possible ones, whether that involves talking to 
> another person 
> or using a computer. However, that doesn't mean I know everything about bats 
> if I know 
> everything about bat-computations. If it did, that would mean there was no 
> difference 
> between zombie bats and conscious bats, no difference between first person 
> knowledge 
> and third person or vicarious knowledge.
> 
> Stathis Papaioannou

I don't find either of those conclusions absurd.  Computationalism is generally 
thought to entail both of them.  Bruno's theory that identifies knowledge with 
provability is the only form of computationalism that seems to allow the 
distinction 
in a fundamental way.

Brent Meeker




RE: computationalism and supervenience

2006-09-10 Thread Stathis Papaioannou

Peter Jones writes:

> Stathis Papaioannou wrote:
> 
> > > > Now, suppose some more complex variant of 3+2=5 implemented on your 
> > > > abacus has consciousness associated with it, which is just one of the 
> > > > tenets of computationalism. Some time later, you are walking in the 
> > > > Amazon rain forest and notice that
> > > > under a certain mapping
> > > > of birds to beads and trees to wires, the forest is implementing the 
> > > > same computation as your abacus was. So if your abacus was conscious, 
> > > > and computationalism is true, the tree-bird system should also be 
> > > > conscious.
> > >
> > > Not necessarily, because the mapping is required too. Why should
> > > it still be conscious if no-one is around to make the mapping.
> >
> > Are you claiming that a conscious machine stops being conscious if its 
> > designers die
> > and all the information about how it works is lost?
> 
> You are, if anyone is. I don't agree that computations *must* be
> interpreted,
> although they *can* be re-interpreted.

What I claim is this:

A computation does not *need* to be interpreted, it just is. However, a 
computation 
does need to be interpreted, or interact with its environment in some way, if 
it is to be 
interesting or meaningful. By analogy, a string of characters is a string of 
characters 
whether or not anyone interprets it, but it is not interesting or meaningful 
unless it is 
interpreted. But if a computation, or for that matter a string of characters, 
is conscious, 
then it is interesting and meaningful in at least one sense in the absence of 
an external 
observer: it is interesting and meaningful to itself. If it were not, then it 
wouldn't be 
conscious. The conscious things in the world have an internal life, a first 
person 
phenomenal experience, a certain ineffable something, whatever you want to call 
it, 
while the unconscious things do not. That is the difference between them.

Stathis Papaioannou
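
Stathis's dictionary point is easy to make concrete. Here is a minimal
Python sketch of it (an illustration with made-up values, not anything
from the thread): the same fixed byte string yields a different "meaning"
under each dictionary, and nothing in the string itself selects one.

    # The same two bytes read under three different dictionaries; which
    # "meaning" you get depends entirely on the interpreter you bring.
    data = bytes([72, 105])

    print(data.decode("ascii"))           # 'Hi'      -- ASCII dictionary
    print(int.from_bytes(data, "big"))    # 18537     -- big-endian integer dictionary
    print(list(data))                     # [72, 105] -- list-of-numbers dictionary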



RE: computationalism and supervenience

2006-09-10 Thread Stathis Papaioannou

Peter Jones writes:

> > I'm not sure how the multiverse comes into the discussion, but you have
> > made the point several times that a computation depends on an observer
> 
> 
> No, I haven't! I have tried to follow through the consequences of
> assuming it must.
> It seems to me that some sort of absurdity or contradiction ensues.

OK. This has been a long and complicated thread.
 
> > for its meaning. I agree, but *if* computations can be conscious (remember,
> > this is an assumption) then in that special case an external observer is not
> > needed.
> 
> Why not ? (Well, I would be quite happy that a conscious
> computation would have some inherent structural property --
> I want to find out why *you* would think it doesn't).

I think it goes against standard computationalism if you say that a conscious 
computation has some inherent structural property. Opponents of 
computationalism 
have used the absurdity of the conclusion that anything implements any 
conscious 
computation as evidence that there is something special and non-computational 
about the brain. Maybe they're right.

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-10 Thread Stathis Papaioannou

Brent Meeker writes:

> >>I could make a robot that, having suitable thermocouples, would quickly 
> >>withdraw its 
> >>hand from a fire; but not be conscious of it.  Even if I provide the robot 
> >>with 
> >>"feelings", i.e. judgements about good/bad/pain/pleasure I'm not sure it 
> >>would be 
> >>conscious.  But if I provide it with "attention" and memory, so that it 
> >>noted the 
> >>painful event as important and necessary to remember because of its strong 
> >>negative 
> >>affect; then I think it would be conscious.
> > 
> > 
> > It's interesting that people actually withdraw their hand from the fire 
> > *before* they experience 
> > the pain. The withdrawal is a reflex, presumably evolved in organisms with 
> > the most primitive 
> > central nervous systems, while the pain seems to be there as an 
> > afterthought to teach us a 
> > lesson so we won't do it again. Thus, from consideration of evolutionary 
> > utility consciousness 
> > does indeed seem to be a side-effect of memory and learning. 
> 
> Even more curious, volitional action also occurs before one is aware of it. 
> Are you 
> familiar with the experiments of Benjamin Libet and Grey Walter?

These experiments showed that in apparently voluntarily initiated motion, motor 
cortex activity 
actually preceded the subject's awareness of his intention by a substantial 
fraction of a second. 
In other words, we act first, then "decide" to act. These studies did not 
examine pre-planned 
action (presumably that would be far more technically difficult) but it is easy 
to imagine the analogous 
situation whereby the action is unconsciously "planned" before we become aware 
of our decision. In 
other words, free will is just a feeling which occurs after the fact. This is 
consistent with the logical 
impossibility of something that is neither random nor determined, which is what 
I feel my free will to be.
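
The reflex-first, pain-later ordering described above is easy to
caricature in code. This toy sketch (mine, with a hypothetical threshold
and made-up names) follows Brent's decomposition: a fast reflex path acts
immediately, and the slower appraisal only tags the event and stores it
in memory for future behaviour.

    # Toy robot: the reflex fires before any "pain" is registered; the
    # appraisal runs afterwards and exists only to update memory.
    class Robot:
        def __init__(self):
            self.memory = []

        def sense(self, temperature):
            if temperature > 60:            # hypothetical pain threshold
                self.withdraw()             # fast path: act first
                self.appraise(temperature)  # slow path: runs after the action

        def withdraw(self):
            print("hand withdrawn (reflex, before any 'pain')")

        def appraise(self, temperature):
            # "attention" + memory: tag the event as strongly negative
            self.memory.append(("fire", temperature, "strong negative affect"))
            print("'pain' registered afterwards; lesson stored:", self.memory[-1])

    Robot().sense(80)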

> > I also think that this is an argument against zombies. If it were possible 
> > for an organism to 
> > behave just like a conscious being, but actually be unconscious, then why 
> > would consciousness 
> > have evolved? 
> 
> An interesting point - but hard to give any answer before pinning down what 
> we mean 
> by consciousness.  For example Bruno, Julian Jaynes, and Daniel Dennett have 
> explanations; but they explain somewhat different consciousnesses, or at 
> least 
> different aspects.

Consciousness is the hardest thing to explain but the easiest thing to 
understand, if it's your own 
consciousness at issue. I think we can go a long way discussing it assuming 
that we do know what 
we are talking about even though we can't explain it. The question I ask is, 
why did people evolve 
with this consciousness thing, whatever it is? The answer must be, I think, 
that it is a necessary 
side-effect of the sort of neural complexity that underpins our behaviour. If 
it were not, and it 
were possible that beings could behave exactly like humans and not be 
conscious, then it would 
have been wasteful of nature to have provided us with consciousness. This does 
not necessarily 
mean that computers can be conscious: maybe if we had evolved with electronic 
circuits in our 
heads rather than neurons consciousness would not have been a necessary 
side-effect. 

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-10 Thread Stathis Papaioannou

Brent Meeker writes:

> Stathis Papaioannou wrote:
> > Peter Jones writes:
> > 
> > 
> With physical supervenience, it is possible for the same person to
> supervene on multiple physical objects. What is disallowed is for multiple
> persons to supervene on the same physical object.
> >>>
> >>>That is what is usually understood, but there is no logical reason why
> >>>the relationship between the physical and the mental cannot be
> >>>one->many, in much the same way as a written message can have
> >>>several meanings depending on its interpretation.
> >>
> >>There is a reason: multiple meanings depend on external observers
> >>and interpretations. But who observes the multiverse ?
> > 
> > 
> > I'm not sure how the multiverse comes into the discussion, but you have 
> > made the point several times that a computation depends on an observer 
> > for its meaning. I agree, but *if* computations can be conscious (remember, 
> > this is an assumption) then in that special case an external observer is 
> > not 
> > needed. In fact, that is as good a definition of consciousness as any: it 
> > is 
> > that aspect of an entity that cannot be captured by an external observer, 
> > but only experienced by the entity itself. Once we learn every observable 
> > fact about stars we know all about stars, but if we learn every observable 
> > fact about bats, we still don't know what it is like to be a bat. 
> 
> Why not?  Can't we map bat conscious-computation to human 
> conscious-computation; 
> since you suppose we can map any computation to any other.  But, you're 
> thinking, 
> since there is a practical infinity of maps (even a countable infinity if you 
> allow 
> one->many) there is no way to know which is the correct map.  There is if you 
> and the 
> bat share an environment.

You're right that the correct mapping is the one in which you and the bat share 
the 
environment. That is what interaction with the environment does: forces us to 
choose 
one mapping out of all the possible ones, whether that involves talking to 
another person 
or using a computer. However, that doesn't mean I know everything about bats if 
I know 
everything about bat-computations. If it did, that would mean there was no 
difference 
between zombie bats and conscious bats, no difference between first person 
knowledge 
and third person or vicarious knowledge.

Stathis Papaioannou



Re: computationalism and supervenience

2006-09-08 Thread 1Z


Brent Meeker wrote:
> 1Z wrote:
> >
> > Brent Meeker wrote:
> >
> >
> >>>That's not very interesting for non-conscious computations, because
> >>>they are only useful or meaningful if they can be observed or interact 
> >>>with their
> >>>environment. However, a conscious computation is interesting all on its 
> >>>own. It
> >>>might have a fuller life if it can interact with other minds, but its 
> >>>meaning is
> >>>not contingent on other minds the way a non-conscious computation's is.
> >>
> >>Empirically, all of the meaning seems to be referred to things outside the
> >>computation.  So if the conscious computation thinks of the word "chair" it 
> >>doesn't
> >>provide any meaning unless there is a chair - outside the computation.
> >
> >
> > What about when a human thinks about a chair ? What about
> > when a human thinks about a unicorn?
>
> He thinks about a white horse with a horn, both of which exist.

But the unicorn per se doesn't. "Unicorn" doesn't have a referent, but
the parts of which it is a composite have referents. That's a step
away from referentiality. And we
can take other steps, talking about quarks and branes.
Eventually our referential theory of meaning will
only be referential in the sense that an empty glass is a
glass that is not at all full.

> What is the meaning
> of "Zeus"...it refers through descriptions that have meaningful elements.

> >What about a computer thinking
> > about a unicorn?
>
> That's what we're puzzling over.  Is it meaningless if the computer isn't
> conscious...but refers to a horse with a horn if the computer is conscious?

It doesn't refer to a horned horse because there aren't any.

Perhaps if we understood how such non-referential meaning works,
it would give us a clue to how consciousness works.





Re: computationalism and supervenience

2006-09-08 Thread 1Z


Bruno Marchal wrote:
> Le 07-sept.-06, à 16:42, 1Z a écrit :
>
> > Rationalists, and hence everythingists, are no better off because they
> still have to appeal to some contingent brute fact, that *only*
> mathematical
> > (or computational) entities exist, even if *all* such entities do.
> > (Platonia
> > is broad but flat). Since no-one can explain why matter is impossible
> (as opposed to merely unnecessary) the non-existence of matter is
> > a contingent fact.
>
>
> I guess here you mean "primary matter" for "matter".
>
> Would you say that a thermodynamician has to appeal to the "contingent
> brute fact" that car are not pulled by invisible horses?

Not directly. He would appeal to a background understanding
of physics which is rooted in contingent, observed facts,
not deduction from first principles.

> Does a molecular biologist have to appeal to the "contingent brute fact" 
> that the vital principle is a crackpot principle?

It is not crackpot in the sense of being logically contradictory,
so its rejection is not a necessary truth but a contingent one -- indeed
the rejection of vitalism leans on Occam's less-than-certain
razor.

> Should all scientists appeal to the "contingent brute fact" that God is 
> most probably neither white, nor black, nor male, nor female, nor
> sitting on a cloud, nor sitting near a cloud ...

What fact would that be? How is it related to empiricism?
Just because some things that are true are not necessarily
true, does not mean anything anyone says qualifies as a contingent
truth.

> Let me be clear on this: comp reduces matter to number relations; it does 
> not make matter impossible, it explains it from something else, like 
> physics explains temperature from molecules' kinetic energy.

Then you cannot say computationalism is false if matter exists.

> And then you come and talk as if physicists had shown 
> temperature to be impossible?
>
> Do I miss something?
>
> Comp makes primary matter dispensable only as thermodynamics makes 
> phlogiston dispensable.
> And I think that's good given that nobody ever succeeded in making those 
> notions clear.
> I still don't know what you mean by "primary matter".

To understand that you would
have to understand what I mean by existence. But, to understand that,
you would have to understand what *you* mean by existence.

> Bruno
> 
> 
> 
> 
> 
> http://iridia.ulb.ac.be/~marchal/





Re: computationalism and supervenience

2006-09-08 Thread Brent Meeker

Bruno Marchal wrote:
> 
> Le 07-sept.-06, à 06:21, Brent Meeker a écrit :
> 
> 
>>>This seems to me very close to saying that every conscious 
>>>computation is
>>>implemented necessarily in Platonia, as the physical reality seems 
>>>hardly
>>>relevant.
>>
>>It seems to me to be very close to a reductio ad absurdum.
> 
> 
> 
> Reductio ad absurdum of what? Comp or  (weak) Materialism?
> 
> Bruno

Dunno.  A reductio doesn't tell you which premise is wrong.

Brent Meeker




Re: computationalism and supervenience

2006-09-08 Thread Brent Meeker

1Z wrote:
> 
> Brent Meeker wrote:
> 
> 
>>>That's not very interesting for non-conscious computations, because
>>>they are only useful or meaningful if they can be observed or interact with 
>>>their
>>>environment. However, a conscious computation is interesting all on its own. 
>>>It
>>>might have a fuller life if it can interact with other minds, but its 
>>>meaning is
>>>not contingent on other minds the way a non-conscious computation's is.
>>
>>Empirically, all of the meaning seems to be referred to things outside the
>>computation.  So if the conscious computation thinks of the word "chair" it 
>>doesn't
>>provide any meaning unless there is a chair - outside the computation.
> 
> 
> What about when a human thinks about a chair ? What about
> when a human thinks about a unicorn? 

He thinks about a white horse with a horn, both of which exist.  What is the 
meaning 
of "Zeus"...it refers through descriptions that have meaningful elements.

>What about a computer thinking
> about a unicorn?

That's what we're puzzling over.  Is it meaningless if the computer isn't 
conscious...but refers to a horse with a horn if the computer is conscious?

Brent Meeker




Re: computationalism and supervenience

2006-09-08 Thread Brent Meeker

Stathis Papaioannou wrote:
> Brent Meeker writes:
> 
> 
>>>A non-conscious computation cannot be *useful* without the 
>>>manual/interpretation,
>>>and in this sense could be called just a potential computation, but a 
>>>conscious
>>>computation is still *conscious* even if no-one else is able to figure this 
>>>out or
>>>interact with it. If a working brain in a vat were sealed in a box and sent 
>>>into
>>>space, it could still be dreaming away even after the whole human race and 
>>>all
>>>their information on brain function are destroyed in a supernova explosion. 
>>>As far
>>>as any alien is concerned who comes across it, the brain might be completely
>>>inscrutable, but that would not make the slightest difference to its 
>>>conscious
>>>experience.
>>
>>Suppose the aliens re-implanted the brain in a human body so they could 
>>interact with
>>it.  They ask it what is was "dreaming" all those years?  I think the answer 
>>might
>>be, "Years?  What years?  It was just a few seconds ago I was in the hospital 
>>for an
>>appendectomy.  What happened?  And who are you guys?"
> 
> 
> Maybe so; even more likely, the brain would just die. But these are 
> contingent facts about 
> human brains, while thought experiments rely on theoretical possibility.

That's generally useful; but when we understand little about something, such as 
consciousness, we should be careful about assuming what's "theoretically 
possible"; 
particularly when it seems to lead to absurdities.  How do we know it's a 
contingent, 
and not essential, fact about brains...and conscious thought?

>  
> 
>>>then it can be seen as implementing more than one computation
>>>simultaneously during the given interval.
>>
>>AFAICS that is only true in terms of dictionaries.
>
>Right: without the dictionary, it's not very interesting or relevant to 
>*us*.
>If we were to actually map a random physical process onto an arbitrary
>computation of interest, that would be at least as much work as building 
>and
>programming a conventional computer to carry out the computation. However,
>doing the mapping does not make a difference to the *system* (assuming we
>aren't going to use it to interact with it). If we say that under a certain
>interpretation - here it is, printed out on paper - the system is 
>implementing
>a conscious computation, it would still be implementing that computation 
>if we
>had never determined and printed out the interpretation.
>>
>>And if you added the random values of the physical process as an appendix in 
>>the
>>manual, would the manual itself then be a computation (the record problem)?  
>>If so
>>how would you tell if it were a conscious computation?
> 
> 
> The actual physical process becomes almost irrelevant. In the limiting case, 
> all of the 
> computation is contained in the manual, the physical existence of which makes 
> no 
> difference to whether or not the computation is implemented, since it makes 
> no difference 
> to the actual physical activity of the system and the theory under 
> consideration is that 
> consciousness supervenes on this physical activity. If we get rid of the 
> qualifier "almost" 
> the result is close to Bruno's theory, according to which the physical 
> activity is irrelevant 
> and the computation is "run" by virtue of its status as a Platonic object. As 
> I understand 
> it, Bruno arrives at this idea because it seems less absurd than the idea 
> that consciousness 
> supervenes on any and every physical process, while Maudlin finds both ideas 
> absurd and 
> thinks there is something wrong with computationalism.

As I understand your argument, the manual doesn't have to be a one-to-one 
translator 
of states, and so it can "translate" from the null event to any string 
whatsoever. 
So the physical event is irrelevant.

Brent Meeker
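
The "manual" at issue can be written out in a few lines. The sketch below
is only an illustration (a toy computation and noise source of my own
choosing, not Putnam's or Maudlin's actual constructions): any sequence
of distinct physical states can be paired off with the state trace of any
chosen computation, which is why the dictionary, rather than the physics,
appears to carry all of the computational content.

    import random

    def computation_trace(x, y):
        # The target computation: repeated addition, recorded state by state.
        trace, acc = [], 0
        for _ in range(x):
            acc += y
            trace.append(acc)
        return trace

    # The "physical process": any five distinct states; here, pure noise.
    physical = random.sample(range(10**6), k=5)

    target = computation_trace(5, 2)        # state trace: 2, 4, 6, 8, 10
    manual = dict(zip(physical, target))    # the arbitrary dictionary

    # Under the manual, the noise "implements" the computation.
    print([manual[s] for s in physical])    # [2, 4, 6, 8, 10]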




Re: computationalism and supervenience

2006-09-08 Thread Bruno Marchal


Le 07-sept.-06, à 16:42, 1Z a écrit :

> Rationalists, and hence everythingists, are no better off because they
> still have to appeal to some contingent brute fact, that *only*
> mathematical
> (or computational) entities exist, even if *all* such entities do.
> (Platonia
> is broad but flat). Since no-one can explain why matter is impossible
> (as opposed to merely unnecessary) the non-existence of matter is
> a contingent fact.


I guess here you mean "primary matter" for "matter".

Would you say that a thermodynamician has to appeal to the "contingent 
brute fact" that car are not pulled by invisible horses?

Does a molecular biologist have to appeal to the "contingent brute fact" 
that the vital principle is a crackpot principle?

Should all scientists appeal to the "contingent brute fact" that God is 
most probably neither white, nor black, nor male, nor female, nor 
sitting on a cloud, nor sitting near a cloud ...

Let me be clear on this: comp reduces matter to number relations; it does 
not make matter impossible, it explains it from something else, like 
physics explains temperature from molecules' kinetic energy.
And then you come and talk as if physicists had shown 
temperature to be impossible?

Do I miss something?

Comp makes primary matter dispensable only as thermodynamics makes 
phlogiston dispensable.
And I think that's good given that nobody ever succeeded in making those 
notions clear.
I still don't know what you mean by "primary matter".

Bruno





http://iridia.ulb.ac.be/~marchal/
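
The thermodynamics analogy above has a standard concrete form, worth
spelling out (textbook kinetic theory with illustrative numbers; nothing
here is specific to comp): for an ideal monatomic gas the mean kinetic
energy per molecule is <E> = (3/2) k_B T, so temperature is recovered
from, rather than eliminated by, molecular motion.

    # Kinetic-theory reduction of temperature (illustrative numbers).
    k_B = 1.380649e-23      # Boltzmann constant, J/K
    T = 300.0               # roughly room temperature, K
    mean_E = 1.5 * k_B * T  # mean kinetic energy per molecule
    print(mean_E)           # ~6.21e-21 J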





Re: computationalism and supervenience

2006-09-08 Thread Bruno Marchal


Le 07-sept.-06, à 14:14, 1Z a écrit :

>
>
> Bruno Marchal wrote:
>> Le 06-sept.-06, à 21:59, 1Z a écrit :
>>
>>> Of course it is not natural, or we would not
>>> have two separate words for "possible" and "actual".
>>
>> Well, Platonist theories are counter-intuitive. Aristotle is the one
>> responsible for making us believe reality is what we measure. Plato says
>> what we observe is the shadow of what is.
>
> yet metaphysics is a *bad* thing...


Who says that?
I dislike the word "metaphysics" *because* that word is almost 
synonymous with "bad thing", but the field is like any other 
field. It is never bad in itself. Some practitioners are bad, just as some 
gardeners can be bad, or some plumbers can be bad.
A field can also become bad when it is appropriated by dishonest 
people, like genetics in the USSR, or theology in the Occident.
But that is contingent on history.

Concerning metaphysics and theology, I think there is just no clearcut 
frontier with physics and math. The position of the Earth relative to 
the Sun was a subject of theology for a long period of time.
Some of my old work on EPR and Bell was discarded a long time ago 
because many scientists just believed it was "metaphysics", even though 
my very point was that EPR showed that some metaphysical questions 
*were* really physical questions. Despite Bell's tremendous clarification 
they were unable to change their minds, and all this because they 
decided to take for granted Bohr's "metaphysics", ...
Same for theology: those who say that theology cannot be scientific 
are more or less the same as those who take for granted Aristotle's 
metaphysics.

That is why I never judge any "field", only particular work by 
particular people. I appreciate when people put their cards on the 
table before the play. Clear assumptions lead to clear refutation or 
genuine reconstruction.

Bruno


http://iridia.ulb.ac.be/~marchal/





Re: computationalism and supervenience

2006-09-08 Thread Bruno Marchal


Le 07-sept.-06, à 06:21, Brent Meeker a écrit :

>
>> This seems to me very close to saying that every conscious 
>> computation is
>> implemented necessarily in Platonia, as the physical reality seems 
>> hardly
>> relevant.
>
> It seems to me to be very close to a reductio ad absurdum.


Reductio ad absurdum of what? Comp or  (weak) Materialism?

Bruno



http://iridia.ulb.ac.be/~marchal/





Re: computationalism and supervenience

2006-09-08 Thread Bruno Marchal


Le 07-sept.-06, à 03:19, Stathis Papaioannou a écrit :

> Why do you disagree that one of the bitstrings is conscious? It seems 
> to
> me that "the subcollection of bitstrings that corresponds to the 
> actions of
> a program emulating a person under all possible inputs" is a 
> collection of
> multiple individually conscious entities, each of which would be just 
> as
> conscious if all the others were wiped out.

To be clear I agree on this.
But we have to keep in mind that the wiping out of the others will 
change the probabilities of what you can be conscious of. This leads 
to the measure problem and the rise of physics from comp.

Bruno


http://iridia.ulb.ac.be/~marchal/





Re: computationalism and supervenience

2006-09-08 Thread Bruno Marchal


Le 07-sept.-06, à 01:56, Russell Standish a écrit :

> The simplest way of addressing this is to use your dovetailer instead
> of quantum multiverses, which tends to confuse people, and get
> associated with quantum mysticism. The dovetailer is obviously
> computable, but not the internal "trace" of one of its branches. (A
> point you have frequently made).


"First person branch" are not computable, yes.



>
> In fact lets go one further and write a program that prints out all
> combinations of 10^{30} bits in Library of Babel style. This is more
> than enough information to encode all possible histories of neuronal
> activity of a human brain, so most of us would bet this level of
> substitution would satisfy "yes doctor".
>
> So does this mean that the entire Library of Babel is conscious, or
> the dovetailer program (which is about 5 lines of Fortran)


I can believe a UD with 5 lines of Prolog. Five lines of Fortran? Send 
us the code.
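
Not five lines, and only a toy, but the scheduling idea behind a
dovetailer is short in Python too. In this sketch (an illustration of the
interleaving only, not Bruno's UD: the "step" is a counting stand-in
rather than a universal machine) every enumerated program eventually
receives arbitrarily many steps, even though no single run ever finishes.

    from itertools import count, islice

    def programs():
        # Enumerate all finite bitstrings, shortest first.
        yield ""
        for n in count(1):
            for i in range(2 ** n):
                yield format(i, "0{}b".format(n))

    def dovetail():
        enum, started = programs(), []
        for phase in count():
            started.append([next(enum), 0])  # start one new program per phase
            for s in started:                # ...then advance every started one
                s[1] += 1                    # stand-in for one execution step
                yield (s[0], s[1])

    print(list(islice(dovetail(), 10)))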



> is
> conscious?


No, of course not. Only people or persons can be conscious. A person can be 
attached to a program, and a person's life can be attached to computations. 
But "attachment" is not identity.




> To me it is an emphatic no!


OK.



> Does it mean that one of the
> 10^{30} length bitstrings is conscious? Again I also say no. The only
> possible conscious thing is the subcollection of bitstrings that
> corresponds to the actions of a program emulating a person under all
> possible inputs. It will have complexity substantially less than
> 10^{30}, but substantially greater than the 5 line dovetailer.
>
>
>>> But I think we are headed in the direction of whether computable
>>> Multiverses really satisfy what we mean by computationalism. If
>>> someone copies the entirety of reality, do I still survive in a 
>>> "folk"
>>> psychology sense. I am still confused on this point.
>>
>> How could you not survive that?
>>
>> Bruno
>>
>
> If I have inoperable brain cancer in reality A, and someone duplicates
> reality A to reality B, then unfortunately I still have inoperable 
> brain
> cancer in reality B.
>
> Maybe I'm being too literal...


You lost me.  If you are still complaining about an inoperable disease 
in reality B, it means you did survive.



>
> I can also never experience your famous Washington-Moscow teleportation
> exercise - in reality B I am still stuck in Brussels.


But in the WM protocol (step 3 of the 8 steps version of UDA, cf SANE 
paper) you are supposed to be annihilated in Brussels. Are you saying 
you just die in Brussels, in that situation? Well, that would 
mean you assume some non-comp hypothesis. It would be nice if you could sum up 
your most basic assumptions.

Bruno


http://iridia.ulb.ac.be/~marchal/





Re: computationalism and supervenience

2006-09-08 Thread 1Z


Brent Meeker wrote:

> >That's not very interesting for non-conscious computations, because
> > they are only useful or meaningful if they can be observed or interact with 
> > their
> > environment. However, a conscious computation is interesting all on its 
> > own. It
> > might have a fuller life if it can interact with other minds, but its 
> > meaning is
> > not contingent on other minds the way a non-conscious computation's is.
>
> Empirically, all of the meaning seems to be referred to things outside the
> computation.  So if the conscious computation thinks of the word "chair" it 
> doesn't
> provide any meaning unless there is a chair - outside the computation.

What about when a human thinks about a chair ? What about
when a human thinks about a unicorn? What about a computer thinking
about a unicorn?





Re: computationalism and supervenience

2006-09-08 Thread 1Z


Quentin Anciaux wrote:
> Le jeudi 7 septembre 2006 14:14, 1Z a écrit :
> > Bruno Marchal wrote:
> > > Le 06-sept.-06, à 21:59, 1Z a écrit :
> > > > Of course it is not natural, or we would not
> > > > have two separate words for "possible" and "actual".
> > >
> > > Well, Platonist theories are counter-intuitive. Aristotle is the one
> > > responsible for making us believe reality is what we measure. Plato says
> > > what we observe is the shadow of what is.
> >
> > yet metaphysics is a *bad* thing...
> 
> Why the hell is metaphysics a bad thing?

Ask Bruno.




