Re: Emulation and Stuff

2009-08-23 Thread Flammarion



On 19 Aug, 15:16, Bruno Marchal  wrote:
> On 19 Aug 2009, at 10:33, Flammarion wrote:
>
> > On 19 Aug, 08:49, Bruno Marchal  wrote:
> >> On 19 Aug 2009, at 02:31, Brent Meeker wrote:
>
> >>> Bruno Marchal wrote:
>
>  This is not the point. The point is that if you develop a correct
>  argumentation that you are material, and that what we "see"  
>  around us
>  is material, then the arithmetical P. Jone(s) will also find a
>  correct
>  argumentation that *they* are material, and that what they see is
>  material. The problem is that if you are correct in "our physical
>  reality" their reasoning will be correct too, and false of course.
>  But
>  then your reasoning has to be false too.
>  The only way to prevent this consists in saying that you are not
>  Turing-emulable,
>
> >>> Why can't I just say I'm not Turing emulated?  It seems that your
> >>> argument uses MGA to
> >> conclude that no physical instantiation is needed so Turing-
> >>> emulable=Turing-emulated.  It
> >>> seems that all you can conclude is one cannot *know* that they have
> >>> a correct argument
> >>> showing they are material.  But this is already well known from
> >>> "brain in a vat" thought
> >>> experiments.
>
> >> OK. But this seems to me enough to render invalid any reasoning
> >> leading to our primitive materiality.
> >> If a reasoning is valid, it has to be valid independently of being
> >> published or not, written with ink or carbon, being in or outside the
> >> UD*. I did not use MGA here.
>
> > That is false. You are tacitly assuming that PM has to be argued
> > with the full force of necessity --
>
> I don't remember. I don't find trace of what makes you think so. Where?

Well, if it's tacit you wouldn't find a trace.

Other than that: pointing out that I might be in the UD, and therefore
wrong, doesn't mean I am wrong now, only that I am not necessarily right.
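
For concreteness, the UD in question is the universal dovetailer: a program that
enumerates every program and runs each of them interleaved, a step at a time, so
that no single non-halting program ever blocks the rest. Here is a minimal Python
sketch of the dovetailing schedule only; the toy_programs stream below is a
hand-picked, hypothetical stand-in, not a genuine enumeration of all programs.

from itertools import count

def toy_programs():
    # Stand-in "programs": generator k emits the multiples of k forever (never halts).
    for k in count(1):
        def prog(k=k):
            n = 0
            while True:
                yield k * n
                n += 1
        yield prog()

def dovetail(programs, stages=5):
    # Stage n: admit the n-th program, then run every admitted program one more step.
    # Each program therefore gets unboundedly many steps, and no program can starve
    # the others -- that fair interleaving is the whole point of dovetailing.
    running, trace = [], []
    for n in range(1, stages + 1):
        running.append(next(programs))        # admit program number n
        for i, prog in enumerate(running):
            trace.append((n, i, next(prog)))  # one more step of program i
    return trace

for stage, i, out in dovetail(toy_programs()):
    print("stage", stage, "program", i, "emitted", out)

Running it prints, stage by stage, one more output from each program admitted so
far; the real UD differs only in that what gets enumerated is every program of a
universal machine, not this toy stream.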

If you don't think the UDA is meant to show that
I am not necessarily right, maybe you could say what
it is meant to show.


> > although your own argument does
> > not have that force.
>
> If there is a weakness somewhere, tell us where.

The conclusion of your argument *is* a necessary truth?

> > In fact, PM only has to be shown to be more
> > plausible than the alternatives. It is not necessarily true because of
> > sceptical hypotheses like the BIV and the UD, but since neither of
> > them has much prima-facie plausibility, the plausibility of PM
> > is not impacted much.
>
> ?  ∃x(x = UD) is a theorem of elementary arithmetic.

∃x(x = UD) is indeed true. Schools should not
be teaching that "∃" means ontological existence,
since that is an open question among philosophers.
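
To make what is being conceded explicit (a sketch only, assuming "elementary
arithmetic" is formalised as some standard system EA, and that "UD" abbreviates
the numeral for the Gödel number of a fixed dovetailing program):

\[
\mathrm{EA} \vdash t = t
\quad\Longrightarrow\quad
\mathrm{EA} \vdash \exists x\,(x = t)
\qquad \text{for every closed term } t,
\]

so in particular

\[
\mathrm{EA} \vdash \exists x\,\bigl(x = \overline{n}_{\mathrm{UD}}\bigr).
\]

The theorem itself is not in dispute; what "∃" commits one to ontologically is.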

> I have been taught elementary arithmetic in school, and I don't think  
> such a theory has been refuted since.
>
> You will tell me that mathematical existence is no existence at all.  
> You are the first human who says so.

I am not the first formalist.



Re: Dreaming On

2009-08-23 Thread David Nyman

2009/8/23 Brent Meeker :
>
> Stathis Papaioannou wrote:
>> 2009/8/22 Brent Meeker :
>>
>>> That's an interesting question and one that I think relates to the
>>> importance of context.  A scan of your brain would capture all the
>>> information in the Shannon/Boltzmann sense, i.e. it would determine which
>>> of the possible configurations and processes were realized.  However,
>>> those concerned about the "hard problem", will point out that this
>>> misses the fact that the information represents or "means" something.
>>> To know the meaning of the information would require knowledge of the
>>> world in which the brain acts and perceives, including a lot of
>>> evolutionary history.  Imagine scanning the brain of an alien found in a
>>> crash at Roswell.  Without knowledge of how he acts and the evolutionary
>>> history of his species it would be essentially impossible to guess the
>>> meaning of the patterns in his brain.  My point is that it is not just
>>> computation that is consciousness or cognition, but computation with
>>> meaning, which means within a certain context of action.
>>
>> You wouldn't be able to guess what the alien is thinking by scanning
>> his brain, but you could then run a simulation, exposing it to various
>> environmental stimuli, and it should behave the same way as the
>> original brain (if weak AI is true) and have the same experiences as
>> the original brain (if strong AI is true).
>>
>>
>
> True.  But the point was directed at the MGA.  Part of the simulation
> must be outside the brain - and possibly a very great deal.  So
> while it seems intuitively clear that the brain can be emulated, it's
> not so clear that the brain + enough environment can be.
>
> Brent
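
Brent's "Shannon/Boltzmann sense" can be pinned down with a toy calculation
(a hypothetical illustration with made-up numbers): the information gained by
learning which of the possible configurations is realized is just the entropy
of the prior distribution over them, a quantity indifferent to what any of the
configurations means.

import math

def shannon_entropy(probs):
    # Entropy in bits of a distribution over possible configurations.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical example: eight equally likely brain configurations.
print(shannon_entropy([1/8] * 8))  # 3.0 bits: the scan settles three bits, whatever they "mean"

The number says nothing about what the configurations represent; that gap is
exactly what the discussion of context and meaning that follows is about.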

The point you make about the indispensability of the context of action
to recovering meaning from 'information' is at the heart of my
response to Peter's question.  The 'complete' brain scan is -
literally - meaning-less independent of its embodiment in some
interpretative context of action (and its environment).  Were we to
imagine an 'eliminativist' account of brain events (which to carry
meaning for us would inevitably be an account in terms of alternative
higher-level concepts) it's all too easy to forget (i.e. the 'fish in
water' effect) that the *meaning* of any such reinterpretation is also
solely recoverable by re-embodiment in some interpretative context.
The ultimate attempt to 'eliminate' this entails the shift to the
'view from nowhere', which results in the complete stripping of
meaning from the universe itself, as 'observed' abstracted from any
interpretative context whatsoever.  What can make this hard to see, in
the gedanken experiment, is that we can't help embodying this
impossible view in an imagined context that we carry with us, and
hence we continue to rely - illicitly - on the historical continuity
of our interpretations.

In the example of the alien brain, as has been pointed out, the
context of meaning is to be discovered only in its own local
embodiment of its history and current experience.  In Stathis' example
of *our* hypothesized observation of the alien's behaviour - whether
simulated or 'real' - any meaning to be found is again recoverable
exclusively in the context of either its, or our, historic and current
context of experience and action.  It is obvious, under this analysis,
that information taken-out-of-context is - in that form - literally
meaningless.  The function of observable information is to stabilise
relational causal configurations against their intelligible
reinstantiation in some context of meaning and action.  Absent such
reembodiment, all that remains is noise.

David

