RE: Re: Bruno's argument

2006-07-23 Thread Stathis Papaioannou


Jesse Mazer writes (quoting SP):

 What you seem to be suggesting is that not all computations are equivalent: some give rise to mind, while others, apparently similar, do not. Isn't this similar to the reasoning of people who say that a computer could never be conscious because even if it exactly emulated a human brain, it is a law of nature that only brains can be conscious?

 No, not at all -- where did you get the idea I was saying "apparently similar" computations would not give rise to minds? The psychophysical laws are supposed to insure that a computation which appears completely *dissimilar* to a human mind, like a simulation of the movement of atoms in a rock, does not in fact qualify as an implementation of (or contribute to the measure of) my mind and every other possible mind, as would be concluded by Maudlin's argument or Bruno's movie-graph argument, as I understand them. See Chalmers' paper "Does a Rock Implement Every Finite-State Automaton?" at http://consc.net/papers/rock.html for more on this "implementation problem".
OK, I should have said "apparently dissimilar, but actually similar computations". Chalmers's argument seems to be that the vibration of atoms in a rock does not follow any well-defined causal relationship, as the functioning of a computer or a brain does. It is only by accident, after the fact, that the rock's states map onto computational states, whereas a computer will reliably give a certain output for a certain input. Even if the computer has no input or output (which is the subtype of FSA which Putnam claims a rock implements) there is still a consistent set of rules governing the physical state transitions and mapping them onto computational states, such that had the physical states been different, so would the computation being implemented. The first problem with this idea is that it is an unnecessary complication: the fact that we *can't* observe rocks' solipsistic computing is enough to explain why we *don't* observe it. The second problem is that you would have to say that a system deliberately set up to perform a computation in the usual manner does perform that computation, but that the same system arising at random does not. This sounds almost like magic: why would the system know or care how it came about?
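To make the counterfactual point concrete, here is a toy sketch (Python, with made-up names and states; it is only an illustration of the distinction, not anything taken from Chalmers' or Putnam's papers). A genuine implementation carries its counterfactual structure in the transition rule, whereas an after-the-fact mapping of a rock's states onto a computation fixes only the one history that actually occurred:

  # A two-state FSA implemented by an explicit transition function, versus a
  # "rock" whose recorded state history is mapped onto the same FSA after the fact.

  FSA_TRANSITIONS = {('A', 0): 'A', ('A', 1): 'B',
                     ('B', 0): 'B', ('B', 1): 'A'}

  def fsa_step(state, bit):
      # Reliable implementation: it answers counterfactuals, i.e. it says what
      # the next state *would have been* for any state/input pair.
      return FSA_TRANSITIONS[(state, bit)]

  # One particular history of "rock states" (hypothetical measured values).
  rock_history = [3.1, 4.7, 2.2, 9.6]

  # Putnam-style mapping: label whatever states actually occurred so that they
  # match one particular run of the FSA. The mapping exists only because we
  # already know both sequences; change either one and it must be rebuilt,
  # so no counterfactuals are supported.
  fsa_run = ['A', 'B', 'B', 'A']
  after_the_fact_map = dict(zip(rock_history, fsa_run))

  print(fsa_step('A', 1))         # 'B', and would have been 'A' for input 0
  print(after_the_fact_map[4.7])  # 'B', but only for this exact history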

Stathis Papaioannou




RE: Bruno's argument

2006-07-23 Thread Stathis Papaioannou


Russell Standish writes:

 To refine the problem a little further - we see a brain in our observed reality on which our mind supervenes. And we see other brains, for which we must assume supervenience of other persons (the no zombies assumption).

 What is the cause of this supervenience? It is a symptom of the anthropic principle (observed reality being consistent with our brains), but this is merely transferring the mystery. In my ToN book I advance the argument that this has to be something to do with self-awareness - ie the body is necessary for self-awareness, and self-awareness must therefore be necessary for consciousness.

 Bruno, I know in your theory that introspection is a vital component (the Goedel-like constructions), but I didn't see how this turns back onto the self-awareness issue. Did you develop this side of the argument?
Why is the body necessary for self-awareness? And why are our heads not homogeneously solid like a potato? The answer is straightforward if you say only computers compute, but not if you say everything computes, or every computation is implemented (sans "physical reality") by virtue of its status as a mathematical object in Platonia. One answer is that only those computations which supervene on physical processes in a brain which exists in a universe with orderly physical laws (which universe is just a tiny subset of the computations in Platonia) can result in the kind of orderly structure required to create the effect of a conscious being persisting through time. This does not necessarily mean that the computations underpinning your stream of consciousness are actually implemented in a physical universe, or even in a simulation of a physical universe, since it is impossible to say "where" a computation is being implemented when there are an infinity of them for every possible thought. Rather, it is enough that those computations which have a component in the physical universe (such as it is) are selected out, while those that end in your head turning into a bunch of flowers in the next microsecond are excluded.

The above is of course related to the problem of the failure of induction, which you address more rigorously in your "Why Occam's Razor" paper and (hopefully at a simpler level, when it arrives) in your ToN book.

Stathis Papaioannou




Re: Bruno's argument

2006-07-23 Thread 1Z


Brent Meeker wrote:
 1Z wrote:
 
  Brent Meeker wrote:
 
 
 In other words it is not justified, based on our limited understanding of 
 brains, to say we'll never
 be able to know how another feels based on observation of their brain.
 
 
 
  We don't know how insects or amoebae feel, either.
  It is not just an issue of complexity.
   We don't know where to *start* with qualia.

 We know where to start when it comes to knowing how other people feel, i.e. 
 we empathize.  If we
 knew how our brain worked and how the brain of our friend worked, then we 
 could correlate the
 empathized feeling with the brain events.

Correlation isn't explanation.

 This doesn't mean we would experience our friend's feelings, but we could produce a mapping between his brain processes and his (inferred) feelings. Of course we wouldn't *know* this was right - but scientific knowledge is always uncertain, so I don't see that as an objection to calling it knowledge.

I think you have skated past an important point. Being explanatory is not at all the same as being certain. All scientific knowledge is uncertain; all knowledge worthy of the name is explanatory -- meaning it can provide answers (however uncertain) to how and why questions.

 Then there are homologous structures in our friend's brain to those in a chimpanzee's brain and there are similar behaviors - so I think we could extend our map to the feelings of a chimpanzee. Of course with some really alien life form, say an octopus, this would be difficult to test empirically - but not, I think, impossible.

At best, this answers questions about the circumstances under which an organism might feel a quale. It doesn't say anything about what qualia are -- why red seems red. (Oh well, of course we can't answer that question...)





Re: Bruno's argument

2006-07-23 Thread 1Z


Russell Standish wrote:
 On Sun, Jul 23, 2006 at 06:53:50PM +1000, Stathis Papaioannou wrote:
  Russell Standish writes:
 
   To refine the problem a little further - we see a brain in our observed 
   reality on which our mind supervenes. And we see other brains, for which 
   we must assume supervenience of other persons (the no zombies 
   assumption).  What is the cause of this supervenience? It is a symptom 
   of the anthropic principle (observed reality being consistent with our 
   brains), but this is merely transferring the mystery. In my ToN book I 
   advance the argument that this has to be something to do with 
   self-awareness - ie the body is necessary for self-awareness, and 
   self-awareness must therefore be necessary for consciousness.  Bruno, I 
   know in your theory that introspection is a vital component (the 
   Goedel-like constructions), but I didn't see how this turns back onto 
   the self-awareness issue. Did you develop this side of the argument?
  Why is the body necessary for self-awareness?

 And why are our heads not homogeneously solid like a potato?

 Good question!

  The answer is straightforward if you say only computers compute, but not if you say everything computes, or every computation is implemented (sans physical reality) by virtue of its status as a mathematical object in Platonia.

 But why does our consciousness supervene on any physical object (which we
 conventionally label heads)?

It is easy enough to see why the Easy Problem aspects of consciousness...

# the ability to discriminate, categorize, and react to
environmental stimuli;
# the integration of information by a cognitive system;
# the reportability of mental states;
# the ability of a system to access its own internal states;
# the focus of attention;
# the deliberate control of behavior;
# the difference between wakefulness and sleep.

...do. The question, then, is: why do the Hard Problem aspects...


(The really hard problem of consciousness is the problem of experience.
When we think and perceive, there is a whir of information-processing,
but there is also a subjective aspect. As Nagel (1974) has put it,
there is something it is like to be a conscious organism. This
subjective aspect is experience. When we see, for example, we
experience visual sensations: the felt quality of redness, the
experience of dark and light, the quality of depth in a visual field.
Other experiences go along with perception in different modalities: the
sound of a clarinet, the smell of mothballs. Then there are bodily
sensations, from pains to orgasms; mental images that are conjured up
internally; the felt quality of emotion, and the experience of a stream
of conscious thought. What unites all of these states is that there is
something it is like to be in them. All of them are states of
experience.)


...supervene on the Easy Problem aspects. Of course, the universe would be quite a strange place if reports of red qualia (EP) weren't accompanied by experienced red qualia (HP)!

Which is just the issue Chalmers addresses in another key paper:

Absent Qualia, Fading Qualia, Dancing Qualia

http://consc.net/papers/qualia.html





Re: COMP Self-awareness

2006-07-23 Thread Bruno Marchal


On 20 Jul 2006, at 21:01, Russell Standish wrote:


 On Sat, Jul 22, 2006 at 04:49:04PM +0200, Bruno Marchal wrote:


 On 20 Jul 2006, at 13:46, Russell Standish wrote:

 Bruno, I know in your theory that introspection is a vital component
 (the Goedel-like constructions), but I didn't see how this turns back
 onto the self-awareness issue. Did you develop this side of the
 argument?


  Yes sure. The Goedel-like construction can handle only a 3-person discursive self-reference. A little like if you were reasoning on some 3-description of your brain or body with your doctor, although it could also be a high-level 3-description (like "I have a head").


 ... Removed for brevity


  I will come back on the correspondence later. The key point is that the nuances between p, Bp, Bp & p, Bp & Dp, Bp & Dp & p are imposed by the incompleteness phenomenon, and self-awareness corresponds to the ones having "& p" in their definition. It is the umbilical cord between truth and intellect of the reasonable first person.

 Bruno

  How do we get the "& p" part corresponding to self-awareness? That doesn't seem to make sense at all!

  We could of course be foundering upon my major problem with your work. I have no problems with your UDA, and even think it could be generalised to the functionalist position, but where I come to grief is the later Theaetetical arguments.


Functionalism is the same as comp, except that functionalists traditionally presuppose some knowable high level of substitution (and then, like materialists, presuppose a physical stuffy level).
So I would say comp is just the old functionalism corrected for taking the UDA consequence into account (the level of substitution is unknowable, and physical stuff is either contradictory or devoid of explanatory power and redundant).



  I have studied the book by Boolos, and can appreciate the power of modal logic to handle reasoning about provability. I can also see how you (and others) have extended these logic systems to the Theaetetical notion of knowledge (adding the "& p"), but my (physicist's) intuition riots against this definition capturing what we mean by knowledge. At best, I consider it a description of _mathematical_ knowledge, where indeed we can never know something unless proved.


It depends what you put in the B. It is indeed a sort of scientific knowledge when starting with B = the provability predicate of some fixed theory like Peano arithmetic, but such a theory can (autonomously) transcend itself in the (constructive) transfinite, and the arithmetical meaning of B will evolve, leaving the modal logics G, G*, S4Grz, ... invariant.
Then the justification is that it works. It gives an unnameable creative subject which lives in a non-describable temporal structure, etc. You can take this as a simplification. With comp the simple first person already leads to a notion of arithmetical quantization. Then sensible matter is also given by adding "& p", but on Bp & Dp, ...
I will say more in the road map ...


 General scientific
 knowledge doesn't seem to work that way, let alone knowledge of
 humanities or other types (echoes of John Mike's criticisms here, I 
 know).

  Parenthetically, what about scientific knowledge being captured by DB-p & -B-p? In other words, falsifiable, but not falsified, a statement of Popper's principle.

  Substituting D = -B-, we get -BDp & Dp, which has a similar Theaetetical structure about a statement being possibly true.


Except that Dp always entails ~BDp (by second incompleteness). This would make your refutability notion much too large.
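To spell out the step (a sketch in the standard provability-logic reading of B and D, using Löb's axiom; the presentation here is a reconstruction of "by second incompleteness" rather than a quotation):

\[
\begin{aligned}
&Dp \to (B\neg p \to \neg p) && \text{tautology, since } Dp \equiv \neg B\neg p\\
&BDp \to B(B\neg p \to \neg p) && \text{necessitation and distribution}\\
&B(B\neg p \to \neg p) \to B\neg p && \text{L\"ob's axiom with } q := \neg p\\
&BDp \to B\neg p,\ \text{i.e.}\ BDp \to \neg Dp && \text{chaining the above}\\
&Dp \to \neg BDp && \text{contraposition}
\end{aligned}
\]

Taking p to be a tautology gives the formalized second incompleteness theorem (consistency implies the unprovability of consistency). And since ~BDp already follows from Dp, the proposed notion "-BDp & Dp" collapses to the bare consistency of p, which is presumably the sense in which it is "much too large".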



  Anyway, that's by the bye. If I accept the Theaetetical notion for the sake of argument (since I can see how it might work for mathematical knowledge), I still struggle to see how the "& p" part leads to self-awareness.

To be just a little bit more specific, Bp is 3-self-referential (the machine proves correct propositions on any third person description made at some level, correctly chosen in a serendipitous way).
But by adding "& p", by a theorem similar to Tarski's theorem, we are led to a first person self-reference (Bp & p) without any nameable subject. It is the "I" which has no name. That "I", somehow, could correctly say about himself that he is not a program, that he is not duplicable (and indeed the first person is not duplicable from its first person point of view (despite Chalmers)).
The heart of my criticism of non-computationalists is that they confuse Bp with Bp & p (or Bp with Bp & Dp). Easy confusion, given that G* justifies it, but then G* justifies also that the machine cannot access that equivalence.
Feel free to propose other definitions of person points of view (intensional variants of G). The key is that they are all G*-equivalent and none are G-equivalent, reflecting an explanation gap between the communicable, the intelligible, the sensible, etc.

Bruno




http://iridia.ulb.ac.be/~marchal/



Re: Bruno's argument

2006-07-23 Thread Bruno Marchal


On 22 Jul 2006, at 22:02, Brent Meeker wrote:



 No bigger than the assumption that other minds exists (a key
 assumption in comp if only through the trust to the doctor).

 Aren't those two propositions independent - that there are other minds  
 and that we cannot possibly
 know what their experiences are like?



Not with comp. Other minds have personal experiences, and if they are vehiculated by software having a complexity comparable to yours, those personal experiences are knowable only by empathy, for you. Not by 3-describable knowledge.





 And then it is a theorem that for any correct machine there are true
 propositions about them that the machine cannot prove.

 And there are true propositions about itself that the machine cannot prove - but are they experiences? Certainly there are myriad true propositions about what my brain is doing that I am not, and cannot be, aware of, but they aren't experiences.




I don't try to use a sophisticated theory of knowledge. You mention yourself that knowing can be given by true justified opinion (Theaetetus). I take provability of p as a form of justified opinion of p: Bp. Then I get knowledge by adding that p is true, under the form "& p".
Limiting ourselves to correct machines, we know that Bp and Bp & p are equivalent, but the key (Godelian) point is that the machine itself cannot know that for its own provability predicate, making the logic of Bp & p different. It can be proved that Bp & p acts as a knowledge operator(*) (S4 modal logic), even a temporal one (S4Grz logic), and even a quasi-quantum one with comp: S4Grz1 proves LASE, p -> BDp, necessary to get an arithmetical interpretation of some quantum logic.
So non-provability is not the way I model experience in the lobian interview. I model experiences and experiments with *variants* of G and G*, the logics of provable and true provability respectively.
The variants are obtained by adding "& p" or "& Dp". This could sound technical, it is, sorry.
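For readers keeping track of the notation, the variants can be set out compactly as follows (the glosses are an interpretive summary of how the modalities are used in this discussion, not definitions given in this post):

\[
\begin{array}{ll}
p & \text{truth}\\
Bp & \text{provable (3-person): logics } G,\ G^{*}\\
Bp \wedge p & \text{knowable (1-person): } S4Grz\\
Bp \wedge Dp & \text{the variant obtained by adding } \wedge Dp\\
Bp \wedge Dp \wedge p & \text{sensible matter: adding } \wedge p \text{ on } Bp \wedge Dp
\end{array}
\]

For a correct machine, G* proves Bp, Bp & p, Bp & Dp, and Bp & Dp & p all equivalent, but the machine itself cannot prove those equivalences, which is why the corresponding logics differ.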

Bruno

(*) Which I should have recalled to Russell (it is the best justification for the "& p"). Artemov has shown that it is the only one possible(*) if we decide to restrict ourselves (as I have done) to what Russell calls mathematical knowledge, but if Russell agrees with the UDA, this should not cause a problem (especially knowing that S4Grz describes mathematically a form of knowledge which cannot be put (knowingly) in a mathematical form). That's admittedly counter-intuitive and subtle, and explains why I need to get people familiar with many similar counter-intuitive propositions which all are obtained directly or indirectly from diagonalizations.

(*) http://iridia.ulb.ac.be/~marchal/bxlthesis/Volume4CC/6%20La%20these%20d'Artemov.pdf

http://iridia.ulb.ac.be/~marchal/





Re: This is not the roadmap

2006-07-23 Thread Bruno Marchal


On 23 Jul 2006, at 02:43, 1Z wrote:

 There is no reason to think numbers can describe qualia at
 all, so the question of the  best description hardly arises.

That was my point. But then I can show this is a necessary consequence of comp.
Materialists who use comp as a pretext for not doing serious philosophy of mind take for granted that qualia can be described by numbers or machines or theories. Comp explains how qualia can be related to a mixture of self-reference and unnameable truth.
Numbers cannot 3-describe qualia, but they can build (correct) theories about them, including explanations of why numbers cannot describe them, yet they bet instinctively on them before anything else.

Bruno


http://iridia.ulb.ac.be/~marchal/





Re: COMP Self-awareness

2006-07-23 Thread Russell Standish

On Sun, Jul 23, 2006 at 04:38:01PM +0200, Bruno Marchal wrote:
 
 
  Functionalism is the same as comp, except that functionalists traditionally presuppose some knowable high level of substitution (and then, like materialists, presuppose a physical stuffy level). So I would say comp is just the old functionalism corrected for taking the UDA consequence into account (the level of substitution is unknowable, and physical stuff is either contradictory or devoid of explanatory power and redundant).
 

Hmm - you use the term functionalism quite differently to my
understanding. My take is that functionalism implies if you replace
the parts of my brain with things which were functionally equivalent,
you would end up with a copy of my consciousness. The description
given by Janet Levin on plato.stanford.edu seems to be in agreement
with this notion (even though she uses different words).

Nowhere in this discussion is an assumption of a level of
substitution, nor of stuffy matter.

Suppose I had a non Turing-emulable soul, composed of identical non
Turing-emulable particles called soulons. Functionalism would imply
I can copy my brain by adding in an appropriate arrangement of
physical particles, as well as an appropriate arrangement of
soulons. Yet, by construction, this theory is not computationalist!

So I stand by my remarks that computationalism is a specialised
variant of functionalism.

 
 
   I have studied the book by Boolos, and can appreciate the power of modal logic to handle reasoning about provability. I can also see how you (and others) have extended these logic systems to the Theaetetical notion of knowledge (adding the "& p"), but my (physicist's) intuition riots against this definition capturing what we mean by knowledge. At best, I consider it a description of _mathematical_ knowledge, where indeed we can never know something unless proved.

  It depends what you put in the B. It is indeed a sort of scientific knowledge when starting with B = the provability predicate of some fixed theory like Peano arithmetic, but such a theory can (autonomously) transcend itself in the (constructive) transfinite, and the arithmetical meaning of B will evolve, leaving the modal logics G, G*, S4Grz, ... invariant. Then the justification is that it works. It gives an unnameable creative subject which lives in a non-describable temporal structure, etc. You can take this as a simplification. With comp the simple first person already leads to a notion of arithmetical quantization. Then sensible matter is also given by adding "& p", but on Bp & Dp, ...

I can (sort of) see this. However, it is only one model, and not even a terribly convincing one (to me at least). Do you have any uniqueness results showing that the "& p" is necessary for obtaining the unnameable creative subject or the temporality?

...

 
  Except that Dp always entails ~BDp (by second incompleteness). This would make your refutability notion much too large.

Oh, well another idea bites the dust!

 
 
 
   Anyway, that's by the bye. If I accept the Theaetetical notion for the sake of argument (since I can see how it might work for mathematical knowledge), I still struggle to see how the "& p" part leads to self-awareness.

  To be just a little bit more specific, Bp is 3-self-referential (the machine proves correct propositions on any third person description made at some level, correctly chosen in a serendipitous way). But by adding "& p", by a theorem similar to Tarski's theorem, we are led to a first person self-reference (Bp & p) without any nameable subject. It is the "I" which has no name. That "I", somehow, could correctly say about himself that he is not a program, that he is not duplicable (and indeed the first person is not duplicable from its first person point of view (despite Chalmers)).

You would need to be more specific in your claims, but that would
probably be the subject of a full scientific paper, and perhaps you
are only speculating at present anyway. I will need to be patient.

But even so, I don't see anywhere the necessity of 1st person
self-awareness, which is what I was driving at.

  The heart of my criticism of non-computationalists is that they confuse Bp with Bp & p (or Bp with Bp & Dp). Easy confusion, given that G* justifies it, but then G* justifies also that the machine cannot access that equivalence.

OK - but I hope I'm not doing that.

  Feel free to propose other definitions of person points of view (intensional variants of G). The key is that they are all G*-equivalent and none are G-equivalent, reflecting an explanation gap between the communicable, the intelligible, the sensible, etc.
 
 Bruno
 

I wish I shared your certainty that any n-person POV can be captured
by means of a modal logic. But I don't. All I can say is that I find
it unconvincing, whilst admitting that perhaps you have a point.



Re: This is not the roadmap

2006-07-23 Thread 1Z


Bruno Marchal wrote:
 On 23 Jul 2006, at 02:43, 1Z wrote:

  There is no reason to think numbers can describe qualia at
  all, so the question of the  best description hardly arises.

 That was my point. But then I can show this is a necessary consequence of comp.
 Materialists who use comp as a pretext for not doing serious philosophy of mind take for granted that qualia can be described by numbers or machines or theories. Comp explains how qualia can be related to a mixture of self-reference and unnameable truth.

OTOH, materialism explains how qualia can be unrelated to computation.





Re: Bruno's argument

2006-07-23 Thread Russell Standish

On Mon, Jul 24, 2006 at 12:35:02PM +1000, Stathis Papaioannou wrote: 

 What if we just say that there is no more to the supervenience of the
 mental on the physical than there is to the supervenience of a
 parabola on the trajectory of a projectile under gravity? The
 projectile doesn't create the parabola, which exists in Platonia in
 an infinite variety of formulations (different coordinate systems and
 so on) along with all the other mathematical objects, but there is an
 isomorphism between physical reality and mathematical structure, which
 in the projectile's case happens to be a parabola. So we could say
 that the brain does not create consciousness, but it does happen
 that those mathematical structures isomorphic with brain processes in
 a particular individual are the subset of Platonia that constitutes a
 coherent conscious stream. This is not to assume that there actually
 is a real physical world: simulating a projectile's motion with pencil
 and paper, on a computer, or just the *idea* of doing so will define
 that subset of Platonia corresponding to a particular parabola as
 surely as doing the actual experiment. Similarly, simulating atoms,
 molecules etc. making up a physical brain, or just the idea of doing
 so defines the subset of Platonia corresponding to an individual
 stream of consciousness. Your head suddenly turning into a bunch of
 flowers is not part of the consciousness simulation/reality (although
 it still is part of Platonia), just as the projectile suddenly
 changing its trajectory in a random direction is not part of the
 parabola simulation/reality, or 7 is not an element of the set of
 even numbers.

 Stathis Papaioannou

So you consider it just a coincidence then that incredibly complicated
structures (called brains) are part of our observed reality, even
though by Occam's razor we really should be demanding an explanation
of why such complexity exists.

Cheers

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type application/pgp-signature. Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish                  Phone 8308 3119 (mobile)
Mathematics                              0425 253119 ()
UNSW SYDNEY 2052                         [EMAIL PROTECTED]
Australia                                http://parallel.hpc.unsw.edu.au/rks
International prefix +612, Interstate prefix 02


