On 8/14/2014 6:45 AM, Pierz wrote:
On Tuesday, August 12, 2014 1:12:10 PM UTC+10, Brent wrote:
On 8/11/2014 7:29 PM, LizR wrote:
On 12 August 2014 12:48, meekerdb <[email protected]> wrote:
On 8/11/2014 4:03 PM, LizR wrote:
I have never got this idea of "counterfactual correctness". It seems to be that the argument goes ...

Assume computational process A is conscious. Take process B, which replays A - B passes through the same machine states as A, but it doesn't work them out; it's driven by a recording of A. B isn't conscious because it isn't counterfactually correct.

I can't see how this works. (Except insofar as if we assume consciousness doesn't supervene on material processes, then neither A nor B is conscious; they are just somehow attached to conscious experiences generated elsewhere, maybe by a UD.)
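The A-versus-B distinction can be sketched in a few lines of Python. Everything here is a hypothetical illustration (the names and the toy transition rule are mine, not part of either thought experiment): A computes each state from its input, while B just plays back a recording, so the two agree on the recorded run but diverge on any counterfactual input.

```python
def process_a(state: int, inp: int) -> int:
    """A computes its next state from the current state and input."""
    return (state + inp) % 7  # stand-in for any state-transition rule

# Record one run of A.
recorded_inputs = [3, 1, 4]
recorded_states = []
s = 0
for i in recorded_inputs:
    s = process_a(s, i)
    recorded_states.append(s)

def process_b(step: int) -> int:
    """B replays the recording - it never consults the input at all."""
    return recorded_states[step]

# On the recorded run, A and B pass through identical machine states.
s = 0
for step, i in enumerate(recorded_inputs):
    s = process_a(s, i)
    assert s == process_b(step)

# But B is not counterfactually correct: change the first input and
# A responds to it, while B still emits the old recorded state.
assert process_a(0, 5) != recorded_states[0]
assert process_b(0) == recorded_states[0]
```

The point of contention in the thread is whether this purely structural difference - responding to inputs versus ignoring them - can bear the weight of deciding which process is conscious.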
It doesn't work, because it ignores the fact that consciousness is about something. It can only exist in the context of thoughts (machine states and processes) referring to a "world" - being part of a representational and predictive model. Without the counterfactuals, it's just a sequence of states and not a model of anything. But in order to be a model it must interact, or have interacted in the past, so that the model is causally connected to the world. It is this connection that gives meaning to the model.
What differentiates A and B, given that they use the same machine states? How can A be more about something than B? Or to put it another way, what is the "meaning" that makes A conscious, but not B?
A makes decisions in response to the world - although, ex hypothesi, the world is repeating its inputs and A is repeating his decisions. Note that this assumes QM doesn't apply at the computational level of A. In the argument we're asked to consider a dream, so that we're led to overlook the fact that the meaning of A's internal processes actually derives from A's interaction with a world. Imagine A as being born and living in a sensory deprivation tank - will A be conscious? I think not.
That is a weird assumption to me and completely contrary to my own intuition. Certainly
a person born and kept alive in sensory deprivation will be extremely limited in the
complexity of the mental states s/he can develop, but I would certainly expect that such
a person would have consciousness, i.e., that there is something it would be like to be
such a person. Indeed I expect that such a person would suffer horribly. Such a
conclusion requires no mystical view of consciousness. It is based purely on biology -
we are programmed with biological expectations/predispositions which when not met, cause
us to suffer. As much as the brain can't be separated completely from other matter, it
*does* seem to house consciousness in a semi-autonomous fashion.
So how did you suffer in the womb?
Indeed I am puzzled by your insistence on consciousness deriving from relationships with
the world, given you seem to be a reductionist materialist. In a reductionist view, such
relationships don't have any intrinsic meaning, so how is it that the presence or
absence of such relationships can make the difference between "having an experience" and
"not having an experience"? What turns the light on as it were, turning the zombie into
the human, the robot into the "real boy" (guess you've seen the movie?)? The fact that
its internal states are meaningfully correlated to some "world", whatever that is? Such
a correlation might define the difference between adaptive and non-adaptive functioning,
but how does that distinction instantiate consciousness (or not)?
OK so that is back to "hard problem", which for people who are fundamentally interested
in engineering is also the "uninteresting problem" or the "pointlessly distracting
problem".
I don't think it's uninteresting, I think it's unsolvable, because it demands an explanation while at the same time ruling out any explanation, since it rejects the engineering-level explanation. Yet the engineering-level explanation is the one we praise and accept as the gold standard in every other field. In fact one of the things I like about Bruno's theory is that it can prove, within the computational paradigm, exactly what is unsolvable about the hard problem and why.
Within a materialist/evolutionist model it is also clear why it is unsolvable, why we
cannot experience the brain processes that produce experience of the world. It would be an
irrelevant and useless and wasteful use of brain resources at best and would be selected
against. At worst it might produce confusion and instability in thought processes. I
think it is really only through language and symbolic thought that the "hard problem" can
be formulated.
Brent
For me, a software engineer by trade and philosopher/psychologist/tripper by nature, it's quite the other way around. The hard problem deeply troubled me even as a kid. I still find
it difficult to comprehend those whom it doesn't bother, or who can't even see it, as if
they're colour blind or something, but I've come to understand their practical
perspective. Still, to call it "uninteresting" (I don't know if you do) is not to make
an objective statement, it's merely to assert the sphere of one's interest - 3p rather
than 1p in the local vernacular.
But in Bruno's and Maudlin's thought experiments A might be aware of Peano's axioms and could prove all provable theorems, plus Godel's incompleteness. Because Bruno is a logician he tends to think of consciousness as performing deductive proofs - executing a proof, in the sense that every computer program is a proof. He models belief as proof. But this overlooks where the meaning of the program comes from. People who want to deny that computers can be conscious point out that the meaning comes from the programmer. But it doesn't have to. If the computer has goals and can learn and act within the world, then its internal modeling and decision processes get meaning through their potential for actions.
This is why I don't agree with the conclusion drawn from step 8. I think the requirement to be counterfactually correct implies that a whole world, a physics, needs to be simulated too, or else the Movie Graph or Klara need to be able to interact with the world to supply the meaning to their program. But if the Movie Graph computer is a counterfactually correct simulation of a person within a simulated world, there's no longer a "reversal". Simulated consciousness exists in simulated worlds - dog bites man.
Are you assuming that the world with which the MG interacts is itself digitally emulable? If so, doesn't Bruno's argument go through for the whole emulated world, if not for a subcomponent of it ("Klara")? ISTM you're saying that a conscious being has to interact with a world - which may be true (people go mad in sensory isolation eventually). But if the world is emulable then the MGA can be applied to it as a whole.
Right.
Or at least I remember Bruno saying that the substitution level and region to be emulated weren't important to the argument, as long as there is some level and region in which it holds. I'm sure he said that it might involve emulating the world, or a chunk of the universe, but that the argument still goes through. Or did I misremember that, or did he say that, but there's a flaw in his argument?
It's not exactly a flaw. He always says: sure, just make the simulation more comprehensive, include more of the environment, even the whole universe. Which is OK, but then when you think about the reversal of physics and psychology you see that it is the physics here, in the non-simulated world, which has been replaced by the psychology PLUS physics in the simulated world. If I say I can replace you with a simulation - I'll probably be greeted with skepticism. But if I say I can replace you with a simulation of you in a simulation of the world - well then it's not so clear what I mean or how hard it will be.
Brent
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.