On 07 Mar 2011, at 19:29, Brent Meeker wrote:
On 3/7/2011 1:11 AM, Bruno Marchal wrote:
On 06 Mar 2011, at 20:21, Brent Meeker wrote:
On 3/6/2011 5:07 AM, 1Z wrote:
The way I see it, the MG consciousness would not be conscious of any world except the virtual world of the MG, which is to say not conscious at all in our terms. It could, provided enough environment is simulated (the UD will provide an arbitrarily large one), be conscious *in this other universe*. But I think that's the example of the conscious rock. It's conscious modulo some interpretation, but that's a reductio against saying it's conscious at all.
I am not a fan of the MG specifically, but I don't see why you need a world to have consciousness "as if" of a world. The BIV argument indicates that you only need to simulate incoming data on peripheral nerves.
But how much of the world do you need to simulate to produce consistent incoming data, and to allow the MG to act? I think a lot. And in any case it is within and relative to this simulated world that consciousness exists (if it does). The MGA tends to obscure this because it helps itself to our intuition about this world: we are the ones simulating it, so we "know" what the simulation means, i.e. we have an interpretation. That's why I referred to the rock-that-computes-everything paradox; it's the same situation except we *don't* have a ready-made intuitive interpretation. Stathis, as I recall, defended the idea that the rock could, by instantiating consciousness, provide its own interpretation. I agreed with the inference, but I regard it as a reductio against the rock that computes everything.
The brain-in-a-vat is somewhat different in that it is usually supposed that it is connected to our world for perception and action. So it can have "real" (our kind of) consciousness.
What about a disconnected dreaming 'brain-in-a-vat'?
If you actually took a human brain and put it "in-a-vat", I think it would quickly go into a loop and no longer be conscious in any meaningful sense.
Any finite machine, when isolated, either stops or goes into a loop. But the mind of a machine is associated with infinities of computations, most of which expand without stopping.
But even in that case, whatever it was conscious of would be derivative from interaction with this world.
What is "this world"? In the comp frame, this cannot be taken for
granted, unless we try some reductio ad absurdum, in some context.
If you "grew" a brain in a vat, one that never had perceptual
experience, you would no more be able to discern consciousness in it
than in a rock.
I always hesitate over how to answer this. The simplest comp answer is that there is no rock, only appearances of rocks.
Or to recall that such an appearance grows out of an infinity of computations (including infinities of universal dovetailings), so that such a consciousness plays a trivial role in the picture.
And, in principle, I can "grow" a brain in a vat, if someone gives me
some finite amount of information.
In a sense, nature does this. There is evidence that ape fetuses already dream that they climb trees in their mother's womb, as a form of training for real life. Of course this is not so easy to verify. Of course you can answer this by telling me that the baby ape's brain is a result of its ancestors interacting with 'real trees', but the point is that such information is coded in a finite way through neuronal structures.
The question "how much of the world do we need to simulate" is not relevant (for the seventh step), insofar as we need only to simulate locally a finite amount of information processing, given that the UD, in a 'real physical world' (or in arithmetic), simulates all such finite information processing.
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to email@example.com.
To unsubscribe from this group, send email to
For more options, visit this group at