Re: free will and mathematics

2012-06-12 Thread R AM
On Mon, Jun 11, 2012 at 6:42 PM, meekerdb meeke...@verizon.net wrote:

 On 6/11/2012 8:45 AM, R AM wrote:

 But what I'm saying here is not ontological determinism but in fact,
 about the subjective experience. I'm defending that we cannot imagine
 ourselves in exactly the same subjective situation and still think that we
 could have done otherwise.


 I can certainly imagine that.  But I wonder if your use of subjective
 situation is ambiguous.  Do you mean exactly the same state, including
 memory, conscious and unconscious thoughts..., or do you just mean
 satisfying the same subjective description?


I would say exactly the same conscious state.

If we are put again in the same conscious state, I don't think that we can
consistently imagine ourselves doing otherwise. If at subjective situation
t we decided x, why would we decide otherwise if *exactly* the same
subjective situation was again the case?

Of course, unconscious processes might make the difference (in fact, they
do), but this is no help for a defender of free will, because he cannot
maintain that decisions have, at bottom, an unconscious origin.


 Brent


  Or something equivalent, if we were put again in exactly the same
 subjective situation, would we do otherwise? I don't think so, but If yes,
 why?


 --
 You received this message because you are subscribed to the Google Groups
 Everything List group.
 To post to this group, send email to everything-list@googlegroups.com.
 To unsubscribe from this group, send email to
 everything-list+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/everything-list?hl=en.






Re: free will and mathematics

2012-06-12 Thread R AM
On Tue, Jun 12, 2012 at 12:18 AM, RMahoney rmaho...@poteau.com wrote:


 I'm assuming you mean by exactly the same situation, every atom in it's
 exact same physical state.


Not really. I mean the same conscious or subjective situation. From the
free will point of view, decisions are conscious and can only be based on
what is available to consciousness at the moment of decision. Defenders of
free will are committed to saying that, no matter how long and deep we ponder
a question before making a decision, if we were put again in exactly the same
subjective situation (after all the pondering, etc.) we could still do
otherwise.



 Now the question that came up, is this person not responsible for his/her
 actions if only at the mercy of the physical laws of the universe (no free
 will). The answers I've been hearing that suggest she/he may not be
 responsible miss the point. The measure of wrongness was defined by
 society.


I agree. People are not responsible in some ontological way. But society
considers us responsible (i.e. punishable). And we take that into account.
The important fact is not whether we have free will or not, but knowing
that we are considered responsible.

It's interesting to note that discussions about free will almost always
go hand in hand with discussions about responsibility and punishment.

If history and experience yields a member of society that does a horrendous
 wrong, he/she is a defect of society and needs to be removed,
 rehabilitated, or whatever society dictates. Here's where I don't agree
 with acquitting someone due to mental defect. If the defect is there, the
 result is the same. Fix it if it's fixable or if it's not fixable remove
 them from society.


I agree, but we have to be careful here, lest we consider people to be
machines (something that has to be fixed or removed, as in the film
A Clockwork Orange).




Re: QTI and eternal torment

2012-06-12 Thread Bruno Marchal


On 11 Jun 2012, at 15:09, David Nyman wrote:


On 11 June 2012 13:04, Bruno Marchal marc...@ulb.ac.be wrote:

Why do you think that pure indexicality (self-reference) is not
enough? It seems clear to me that from the current state of any
universal machine, it will look like a special moment is chosen out of
the others, for the elementary reason that such a state individuates
the present moment here and now from her point of view.


Yes, but the expression from the current state of any universal
machine (different sense of universal, of course) already *assumes*
the restriction of universal attention to a particular state of a
particular machine.


But is that not the result of the fact that each machine has only  
access to its own configuration?




 Hoyle on the other hand is considering a
*universal state of attention* and hence needs to make such isolation
of particulars *explicit*.  The beam stands for the unique, momentary
isolation in experience of that single state from the class of all
possible states (of all possible machines).


Why could not each machine do the same? Consider the WM-duplication.
The body reconstituted in Moscow has access only to the memory
reimplemented in M, plus the further new changes, which include the
feeling "Oh, I am the one in Moscow". From the point of view of the
universal person this is only a particular window, and both are
lived, but not (at this stage at least) from the point of view of the
subject in M. I am not sure a beam has to focus on him to make his
experience more genuine. Would the beam have to dovetail on the two
reconstitutions, recurrently making one of them into a zombie?


It seems to me that the beam introduces only supplementary  
difficulties. The reason why we feel disconnected is related to our  
self-identification with our most recent memories, which become  
disconnected in the differentiation of consciousness.


We are all the same person, in a sense similar to that in which the W-guy
and the M-guy are the same Helsinki-guy, just with different futures, and
by work they can understand the significance of this, or even experience
it through some induced amnesia. The beam seems to reintroduce a
sort of conscious selection of some conscious order, which seems to
me made unnecessary by the use of indexicals (self-reference being
what theoretical computer science handles best).





 Thus, momentarily, the
*single* universal knower can be in possession of a *single* focus of
attention, to the exclusion of all others.


He always focuses on the whole experience of consciousness, which
might be the same for similar creatures, and the *relative* truths
differentiate by themselves. He lives them out of time, and time plus
personal differentiation is the fate of those machines which
individuate themselves through such personal memories. It is useful when
doing shopping or any concrete things locally. No doubt evolution has
put some pressure, and everyday life pushes a bit in that direction,
but eventually your first person identity remains a private matter,
and there is a matter of choice.






 This is the only
intelligible meaning of mutually-exclusive, considered at the
*universal level*.


I don't see this. It looks like adding something which seems to me  
precisely made unnecessary with comp.







If you remove such a principle of isolation, how can the state of
knowledge of the universal knower ever be anything other than a sum
over all experiences, which can never be the state of any single mind?


Hmm... You don't know that!
Jouvet and others, including myself in my dream diary notes, have
described (and experienced) the possibility of awakening from two
simultaneous dreams. I can conceive this easily for any finite
number of experiences, and less easily for an infinite number. The
implementation is simple: just connect the memories so that the common
person in all the different experiences awakens in a state having all those
memories personally accessible. For the two experiences/dreams case,
Jouvet suggested that it might be provoked by a paralysis of the
corpus callosum during some REM sleep.


And the UD generates all possible (consistent) types of corpus callosum.
In such a state we might be able to further relativize the
differences, build on more universal things, and then differentiate
again.






Sure, the states are all there at once, but what principle allows
you, in the person of the universal knower, to restrict your
attention to any one of them?


He looks at all of them, from out of time (arithmetic). It is only  
from each particular perspective that it looks like it is disconnected  
from the others. That is, with comp, just an illusion, easily  
explainable by the locally disconnected memories of machines sharing  
computations/dreams.





 It seems to me that, if one wants to
make sense of the notion of a *universal* locus of experience, with
personal 

Re: QTI and eternal torment

2012-06-12 Thread Bruno Marchal


On 11 Jun 2012, at 17:44, Stephen P. King wrote:


On 6/11/2012 8:04 AM, Bruno Marchal wrote:


On 10 Jun 2012, at 22:57, David Nyman wrote:


On 10 June 2012 17:26, Bruno Marchal marc...@ulb.ac.be wrote:


I am not sure I understand your problem with that simultaneity. The
arithmetical relations are out of time. It would not make sense to say
that they are simultaneously true, because this refers to some time,
and can only be used as a metaphor.


I agree with almost everything you say.  I would say also that the
moments of experience, considered as a class, are themselves out of
time.  What it takes to create (experiential) time - the notorious
illusion - is whatever is held to be responsible for the irreducible
mutual-exclusivity of such moments, from the perspective of the
(universal) knower.  Hoyle does us the service of making this
mutual-exclusivity explicit by invoking his light beam to illuminate
the pigeon holes at hazard; those who conclude that this function is
redundant, and that the structure of pigeon holes itself somehow does
the work of creating personal history, owe us an alternative
explanation of the role of Hoyle's beam.

I understand, of course, that these are just ways of thinking about a
state of affairs that is ultimately not finitely conceivable, but all
the same, I think there is something that cries out for explanation
here and Hoyle is one of the few to have explicitly attempted to
address it.


David, I can't see the role of Hoyle's beam. The reason for the
mutual exclusivity of moments seems to me to be explained (in comp)
by the fact that a machine cannot address the memory of another
machine, or of itself at another moment (except through memory).
Hoyle's beam seems to reintroduce a sort of external reality, which
does not solve anything, it seems to me, and introduces more
complex events into the picture.


Dear Bruno,

Here we seem to be synchronized in our ideas!

This cannot access the memory of another machine and (cannot  
access memory ) of itself at another moment is exactly the way that  
the concurrency problem of computer science is related to space and  
time!



I don't think so. With comp you have to distinguish completely the
easy concurrency problem from the harder physical concurrency
problem. It is easy to emulate interacting programs in the sense I
have to use to explain that a machine cannot access the memory of
another machine. And obviously the UD, or arithmetic, implements ad
nauseam such kinds of interactions. But then the physical laws emerge
from the statistics on *all* computations, and all such interactions,
and from this we must justify physics, including the physical logic of
interaction. But that is a separate problem, and the Z and X logics
suggest how to proceed by already giving a reasonable arithmetical
quantization (it shows also that it is technically difficult to
progress).




But now I am confused as you seem to recognize that a machine has  
and needs the resource of memory;



This is quite typical in computer science. Most machines have
memories, and they often have read and write instructions to handle
those memories.




it was my (mis)understanding that machines are purely immaterial,
existing as a priori given strings of integers.


You were right. This has nothing to do with the fact that the program i
in the list of the phi_i can have memories. A large part of computer
science can be entirely arithmetized.
You might study a good book on theoretical computer science to swallow
that fact definitively. All propositions about machines are either
arithmetical statements, or arithmetically related statements. I work
both in comp and in arithmetic.






How does memory non-access become encoded in a string?


Why would we need to encode the non-access? It is enough that the
numbers involved have no access.


The computation phi_i(j)^k has no access to the computation
phi_i'(j')^k if i ≠ i' and j ≠ j', for example.


But non-access can be implemented in various ways. It is just not  
relevant.
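The point can be pictured with a toy model in Python (a sketch; the stepper function and the two example programs are illustrative, not part of comp): each computation phi_i(j), run for k steps, evolves only its own private state, and nothing in the model gives one computation a handle on another's state.

```python
# Toy model of an indexed family of programs phi_i, run step by step.
# Each computation phi_i(j) owns a private local state; nothing here
# lets one computation read another's state.

def make_computation(program, j):
    """Return a stepper for phi_i(j); its state after k steps is private."""
    state = {"x": j}
    def step():
        state["x"] = program(state["x"])
        return state["x"]
    return step

double = lambda x: 2 * x   # phi_0: doubling
succ = lambda x: x + 1     # phi_1: successor

c0 = make_computation(double, 3)  # phi_0(3)
c1 = make_computation(succ, 3)    # phi_1(3)

for _ in range(2):                # run each for k = 2 steps
    v0 = c0()
    v1 = c1()

print(v0, v1)  # 12 5: each trajectory evolves from its own state only
```

Non-access here is simply the absence of any shared variable, matching the remark that it need not be "encoded" at all.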




Is it the non-existence of a particular Godelization within a  
particular string that would relate to some other portion of a string?


It is more simple. See above.







Why do you think that pure indexicality (self-reference) is not  
enough? It seems clear to me that from the current state of any  
universal machine, it will look like a special moment is chosen out  
of the others, for the elementary reason that such a state  
individuates the present moment here and now from her point of  
view.


How does the index establish a form of sameness? It seems to me
that one needs at least bisimilarity to establish the connectivity.




Of course, the idea that some time exists is very deep in us, and I  
understand that the big comp picture is very counter-intuitive, but  
in this case, it is a kind of difficulty already present in any  
atemporal static view of everything, 

Re: free will and mathematics

2012-06-12 Thread Bruno Marchal

On 11 Jun 2012, at 17:45, R AM wrote:




On Thu, Jun 7, 2012 at 5:34 PM, Bruno Marchal marc...@ulb.ac.be  
wrote:
On Thu, Jun 7, 2012 at 1:37 PM, Bruno Marchal marc...@ulb.ac.be  
wrote:


OK, for the sake of the argument, let's suppose that you ate  
spaghetti because that's what you liked at that moment. Do you  
think you could have done otherwise?


Now, let's suppose a gangster decides to rob a bank after
considering all his options. Later he might be judged and told that
he could have done otherwise. Could he really have done otherwise?


At the level of the arithmetical laws, or physical laws, the answer  
is no. But we don't live at that level, so at the level of its first  
person impression the answer is yes.


OK. So that means that if you (or the gangster) were put again in
exactly the same subjective situation (same beliefs, likings,
emotions, intentions, memories, same everything) you could do
otherwise?


No. But the gangster does not know this determination. So although at
that level he could not have done otherwise, it can still make genuine
sense, from his embedded point of view, that he could have done
otherwise. Only for God does it not make sense, but locally we are not
God.





More specifically. You are in a situation where you crave for  
spaghetti, you haven't had spaghetti in the last month, you know  
spaghetti is good for er ... whatever. You therefore make the  
decision to eat spaghetti. Now, you are put again in exactly the  
same situation and ... do you really think you could choose  
strawberries instead? would you choose strawberries?


If I am craving spaghetti I could not do otherwise. But then I would
not have said it. The situation is when I remember having hesitated,
and the day after, despite the determination, I can think that I could
have done otherwise, because I cannot be aware of the complete
determination. And, indeed, after that hesitation, I might well have
taken the strawberries.


Determinism is just not incompatible with genuine free will, or
will, for the will is not playing at the same level as the
determination. If they were on the same level, you could trivially
justify all your acts by "I am just obeying the physical laws", which
is just false, because you are an abstract person, not a body.






A guy rapes and tortures 10 children: could he have done otherwise?
Well, there is a sense in which some medical expert can say that he
could have done otherwise, if the guy is judged responsible and not
under some mental disease (for example). Now, if the guy defends
himself by saying that he was just obeying the physical laws, he will
convince nobody, and rightly so.


He will convince nobody  because we all believe that he (and all of  
us) could have done otherwise. And we all believe that because, for  
some reason, we believe it is unfair to punish someone if he cannot  
do otherwise. What I'm saying is that belief in free-will is just a  
justification for punishing people.


OK. And rightly so, barring an unfair trial of course.




But in fact, we punish people, not because he could have done  
otherwise but because next time, he will think twice.


Actually this is not proved, and some argue that going to jail can
increase the probability of recurrence of certain types of crime. But
that's not relevant. So OK.




Next time, he will not be in the same subjective situation: he will  
have the memories of his punishment and he will take that into  
account.


He learned to do otherwise.




If next time he is in exactly the same subjective situation, he will  
do exactly the same. Why would he do otherwise? Why didn't he already?


The point is not that he is determined, but that he is aware of his
ignorance of that determination, making him capable of thinking
correctly that he might have done otherwise; perhaps with a slight
change of state of mind, or waking in a different mood, or any detail
he knows that he did not know.


Sometimes, we might become aware of the reasons which might have
invited us to do otherwise, so we prepare ourselves better for the
future hesitation.





Let's suppose that a person forgets everything every morning. Would  
it make any sense to punish someone like that, because he just could  
have done otherwise?


Someone like that must go to a hospital, be cured, and then can be
judged responsible or not. It can depend on many factors. There are no
general rules, nor any scientific criteria, for judging responsibility
with any certainty.





We are determined, but we cannot completely know our
determination, so from our point of view there is a genuine spectrum
of different possibilities and we can choose freely among them. It
does not matter that a God, or a Laplacean daemon, can predict our
actions, for *we* can't, and have no other choice than choosing
without complete information, and in some cases it makes sense that
we could have

Re: inside vs outside

2012-06-12 Thread Bruno Marchal


On 11 Jun 2012, at 20:08, Abram Demski wrote:




On Sun, Jun 10, 2012 at 10:11 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 09 Jun 2012, at 21:53, Abram Demski wrote:


Bruno, Wei,

I've been reading the book Saving Truth from Paradox on and off,
and it has convinced me of the importance of the "inside view" way
of doing foundations research as opposed to the "outside view".


At first, I simply understood Field to be referring to the language  
vs meta-language distinction. He criticises other researchers for  
taking the outside view of the system they are describing,  
meaning that they are describing the theory from a meta-language  
which must necessarily exist outside the theory.


Since Gödel we know that for rich theories we can embed the
metatheory in the theory. That is what Gödel's provability predicate
does, and what Kleene's predicate does for embedding the reasoning on
the Turing machines, and the phi_i, in terms of number relations.


Arithmetic contains its own interpreter(s).
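The "programs as data" side of this can be sketched in a few lines of Python (an analogy only; the tiny instruction set and the name "universal" are illustrative). A universal function interprets any encoded program, and Kleene's theorem says that such an interpreter can itself be coded inside arithmetic:

```python
# A tiny universal function: programs are data (lists of instructions),
# and 'universal' interprets any of them. Kleene's predicate plays the
# analogous role inside arithmetic; this Python sketch only mirrors the
# idea, with an invented two-instruction language.

def universal(program, x):
    """Run an encoded program on input x. Instructions: ('add', n), ('mul', n)."""
    for op, n in program:
        if op == "add":
            x += n
        elif op == "mul":
            x *= n
    return x

p = [("add", 1), ("mul", 3)]   # encodes the map x -> 3 * (x + 1)
print(universal(p, 4))          # 15
```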




Hm, well, such is not the case for Truth. According to Tarski, we
are forbidden from embedding the metalanguage in the language. This
follows from simple, intuitive assumptions, showing that the Liar
paradox will spring up in any 'reasonable' theory of truth. Kripke
showed how to weaken our notion of truth to the point where the
truth predicate could be within the language, but his theory does
not allow us to say everything which seems natural to assert about
truth, so many more theories have been created since. Every theory
seems to suffer from the revenge problem: in order to define a
notion of truth which can fit into the language, a more complicated
semantics for that truth predicate must be described. The
Strengthened Liar paradox is then describable in that semantics, if
we try to fit it within the same language. So, we are again forced
to create a meta-language outside of our language to describe its
semantics. (But who describes the semantics of the meta-language?)


So, the cases for syntactic meta-theories and semantic meta-theories  
diverge widely.




I thought that his complaint was frivolous; of course you need to  
describe a theory of truth via a meta-language. That is part of the  
structure of the problem. Yes, it makes the entire theory dubious;  
but without a concrete alternative, the only reply to this is "such
is life!". So I was confused when he refused to take other
logicians literally (accepting the logic which they put forward as  
the logic which they put forward), and instead claimed that their  
logic corresponded to the 1-higher theory (the metalanguage in  
which they describe their theory).


At some point, though, the technique clicked for me, and I  
understood that he was saying something very different. For  
example, the outside view of Kripke's theory of truth says that  
truth is a 'partial' notion, with an extension and an anti- 
extension, but also a 'gap' between the two where it is undefined.  
(It is a gap theory.)


I am not sure I understand well.


I hope the previous explanation helped. Field claims to get around  
the revenge problem by not really providing a meta-language to give  
a semantics to the truth predicate. He does provide something  
similar, but it is really a semantics for a restricted domain, to  
give an intuition for the working of the theory. (He argues that  
Kripke's semantics must be viewed in this way, too.)


Field wrote a book, Science Without Numbers, which I found not really
convincing, except for Newtonian gravitation, but not for the physical
sciences or the theological sciences in general (rather trivially once
you assume the comp hypothesis). You might elaborate on his argument.


I am problem driven, and I think comp entails big changes in
fundamental science that we have to take into account. I am not sure
it makes sense to interpret Kripke semantics literally, as opposed to
arithmetic and most of computer science. The right ([]*) modal
hypostases have no Kripke semantics, for example.


For arithmetic, I use Tarski's theory of truth. So ExP(x) is true if
there exists a natural number n such that it is the case that P(n).
That is enough to describe the behavior of machines (but not their
mind!). Then I use, implicitly thanks to Solovay's theorem, what simple
arithmetical machines (relations) can prove and not prove about
themselves, and how they can interpret their relation with some other
universal numbers. I let the machine develop her own semantics, and
facilitate my task by studying only the correct ones (by definition).
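The Tarskian clause for an existential sentence suggests a direct procedure: ExP(x) is true exactly when a witness n exists, so searching n = 0, 1, 2, ... verifies the sentence whenever it is true. A small sketch in Python (the predicate and the bound are illustrative):

```python
# Tarski-style truth for ExP(x): true iff some natural number n
# satisfies P(n). A witness search halts exactly when the sentence is
# true, which is why such sentences are verifiable (semi-decidable).

def witness(P, bound=10**6):
    """Search n = 0, 1, 2, ... for P(n); the bound is only a practical cutoff."""
    for n in range(bound):
        if P(n):
            return n
    return None  # no witness below the bound (which proves nothing in general)

# 'ExP(x)' with P(n): n is a perfect square greater than 40
n = witness(lambda n: n > 40 and int(n ** 0.5) ** 2 == n)
print(n)  # 49
```

Note the asymmetry the paragraph relies on: truth of ExP(x) can be confirmed by a finite search, but falsity cannot, which is one reason truth outstrips provability.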


Some Kripke semantics occur more or less naturally, but not all  
arithmetical modalities have a Kripke semantics. Note that for G* you  
can develop a semantics is term of sequences of Kripke models. Then  
p is satisfied by such a sequence if p is eventually satisfy in the  
models in that models sequence.


Bruno







On the inside view, however, it does not make 

Re: free will and mathematics

2012-06-12 Thread meekerdb

On 6/12/2012 1:31 AM, R AM wrote:



On Mon, Jun 11, 2012 at 6:42 PM, meekerdb meeke...@verizon.net 
mailto:meeke...@verizon.net wrote:


On 6/11/2012 8:45 AM, R AM wrote:

But what I'm saying here is not ontological determinism but in fact, 
about the
subjective experience. I'm defending that we cannot imagine ourselves 
in exactly
the same subjective situation and still think that we could have done 
otherwise.


I can certainly imagine that.  But I wonder if your use of subjective 
situation is
ambiguous.  Do you mean exactly the same state, including memory, conscious 
and
unconscious thoughts..., or do you just mean satisfying the same subjective 
description?


I would say exactly the same conscious state.

If we are put again in the same conscious state, I don't think that we can consistently 
imagine ourselves doing otherwise.


Well then it seems to come down to a question of timing.  If this 'same conscious state' 
is before the action, then I can certainly imagine changing my mind.  And this holds all 
the way up to the action, which is why you are even unpredictable by yourself.  You don't 
know (for sure) what you'll do until you do it.  If the 'same conscious state' is at the 
moment of action, then it's not so clear.  It's not the usual case, but sometimes we are 
surprised by our own action.


If at subjective situation t we decided x, why would we decide otherwise if *exactly* 
the same subjective situation was again the case?


Of course, unconscious processes might make the difference (in fact, they do), but this 
is no help for a defender of free will, because he cannot maintain that decisions have, 
at bottom, an unconscious origin.


Why not?  That's the compatibilist view of 'free will', and that's apparently why Sam 
Harris disagrees with compatibilism: he defines 'free will' to be *conscious* authorship 
of decisions.  In the course of a day almost all my decisions are made without conscious 
thought, like which keys to strike in typing the previous line.  Earlier today I had to 
enter a computer-generated random security code; I had to think about each character.  So 
was the latter an exercise of free will while the former wasn't?


Brent




Re: modal logic's meta axiom

2012-06-12 Thread Bruno Marchal


On 12 Jun 2012, at 00:47, Russell Standish wrote:


On Thu, Jun 07, 2012 at 01:33:48PM +0200, Bruno Marchal wrote:


In fact we have p/p for any p. If you were correct we would have []p
for any p.


This is what I thought you said the meta-axiom stated?

How else do we get p/[]p for Kripke semantics?



Because if p is true in all worlds, then []p is true in all worlds. OK?
If p is true in all worlds (validity), then p is true in particular in
all worlds accessible from alpha, and so []p is true in alpha; and
this works for any alpha, so []p is true in all worlds.
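That argument can be checked mechanically with a tiny Kripke-frame evaluator (a sketch; the three-world frame and the function names are illustrative): when p's truth set is all of W, the worlds where []p holds are again all of W.

```python
# Kripke check of the necessitation step: if p is true in every world,
# then []p is true in every world, since []p holds at alpha iff p holds
# at every world accessible from alpha.

worlds = {0, 1, 2}
R = {(0, 1), (1, 2), (2, 0)}   # an arbitrary accessibility relation

def box(truth_set):
    """Worlds where []p holds, given the set of worlds where p holds."""
    return {w for w in worlds
            if all(v in truth_set for (u, v) in R if u == w)}

p_everywhere = set(worlds)           # p valid: true in all worlds
print(box(p_everywhere) == worlds)   # True: []p is valid as well
```

Note that this is a fact about validity, not truth at a single world: p being true at alpha alone does not make []p true at alpha.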


 In a deduction, without proviso, p means that we have already proved
p (or in some cases, that it is an assumption, but you have to be
careful).
The meta-semantics of a rule depends on the theory, and on the way
we define what constitutes a proof. In modal logic it usually means,
in p/[]p, that p occurs at the end of a proof.


Self-reference is confusing because the deduction p/q gets represented
by [](p -> q), or []p -> []q.


[]p is the machine's language for "the machine asserts (believes,
proves) p". All universal machines, when they assert p, will sooner or
later assert []p. So []p -> [][]p is true about all universal
(correct) machines.


The Löbian machines are those for which []p -> [][]p is not only true,
but which can actually justify why they know that []p -> [][]p. It is
the kind of truth they can communicate.


So the fact p/q is modeled by []p -> []q, in the language of the machine.

Löbian machines are closed under the Löb rule: if they assert []p -> p,
they will inevitably sooner or later assert p.
And they know that, which means they can prove []([]p -> p) -> []p, for
each arithmetical sentence p, in the arithmetical interpretations.
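The Löb formula can also be checked semantically by brute force on a small frame (a sketch; the three-world frame and the helper names are illustrative). On finite, transitive, irreflexive frames, which are the frames of the provability logic G/GL, []([]p -> p) -> []p holds at every world under every valuation of p:

```python
# Brute-force check of Löb's formula []([]p -> p) -> []p over every
# valuation of p on a finite, transitive, irreflexive frame.

worlds = [0, 1, 2]
R = {(0, 1), (0, 2), (1, 2)}   # a strict (transitive, irreflexive) order

def box(S):
    """Worlds where []p holds, given the set S of worlds where p holds."""
    return {w for w in worlds
            if all(v in S for (u, v) in R if u == w)}

def implies(S, T):
    """Worlds where p -> q holds, given the truth sets S and T."""
    return {w for w in worlds if w not in S or w in T}

ok = True
for bits in range(2 ** len(worlds)):          # every valuation of p
    S = {w for w in worlds if bits >> w & 1}  # truth set of p
    loeb = implies(box(implies(box(S), S)), box(S))
    ok = ok and loeb == set(worlds)

print(ok)  # True: Löb's formula is valid on this frame
```

Adding a reflexive point (a world that can see itself) makes the check fail, which matches the fact that Löb's formula forces irreflexivity.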


Bruno





--


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au






http://iridia.ulb.ac.be/~marchal/






Re: One subject

2012-06-12 Thread Bruno Marchal


On 12 Jun 2012, at 04:19, Pierz wrote:




On Monday, June 11, 2012 10:46:42 PM UTC+10, Bruno Marchal wrote:

On 11 Jun 2012, at 03:12, Pierz wrote:

 I'm starting this as a new thread rather than continuing under 'QTI
 and eternal torment', where this idea came up, because it's really a
 new topic.
 It seems to me an obvious corollary of comp that there is in reality
 (3p) only one observer, a single subject that undergoes all possible
 experiences. In a blog post I wrote a while back (before I learned
 about comp) I put forward this 'one observer' notion as the only
 solution to a paradox that occurred to me when thinking about the
 idea of cryogenic freezing and resuscitation. I started wondering
 how I could know whether the consciousness of the person being
 resuscitated was the 'same consciousness' (whatever that means) as
 the consciousness of the person who was frozen. That is, is a new
 subject created with all your memories (who will of course swear
 they are you), or is the new subject really you?
 This seems like a silly or meaningless point until you ask yourself
 the question, If I am frozen and then cryogenically resurrected
 should I be scared of bad experiences the resurrected person might
 have? Will they be happening to *me*, or to some person with my
 memories and personality I don't have to worry about? It becomes
 even clearer if you imagine dismantling and reassembling the brain
 atom by atom. What then provides the continuity between the pre-
 dismantled and the reassembled brain? It can only be the continuity
 of self-reference (the comp assumption) that makes 'me' me, since
 there is no physical continuity at all.
 But let's say the atoms are jumbled a little at reassembly,
 resulting in a slight personality change or the loss of some or all
 memories. Should I, about to undergo brain disassembly and
 reassembly, be worried about experiences of this person in the
 future who is now not quite me? What then if the reassembled brain
 is changed enough that I am no longer recognizable as me? Following
 this through to its logical conclusion, it becomes clear that the
 division between subjects is not absolute. What separates
 subjectivities is the contents of consciousness (comp would say the
 computations being performed), not some kind of other mysterious
 'label' or identifier that marks certain experiences as belonging to
 one subject and not another (such as, for instance, being the owner
 of a specific physical brain).
 I find this conclusion irresistible - and frankly terrifying. It's
 like reincarnation expanded to the infinite degree, where 'I' must
 ultimately experience every subjective experience (or at least every
 manifested subjective experience, if I stop short of comp and the
 UD). What it does provide is a rationale for the Golden Rule of
 morality. Treat others as I would have them treat me because they
 *are* me, there is no other! If we really lived with the knowledge
 of this unity, if we grokked it deep down, surely it would change
 the way we relate to others. And if it were widely accepted as fact,
 wouldn't it lead to the optimal society, since
 everyone would know that they will be/are on the receiving end of
 every action they commit? Exploitation is impossible since you can
 only steal from yourself.

I can agree, but it is not clear if it is assertable (it might belong
to a variant of G*, and not of G), making that kind of moral proposition
true but capable of becoming false if justified too much, like all
Protagorean virtues (happiness, free examination, intelligence, goodness,
etc.). Cf. the road to hell is paved with good intentions.

Also, a masochist might become a sadist by the same reasoning, which,
BTW, illustrates that the (comp) moral is not don't do to the others
what you don't want the others to do to you, but don't do to the others
what *the others* don't want you to do to them.
In fact, unless you are defending your life, just respect the other
adult's possible No Thanks. (It is more complex with children; you must
add nuances like as far as possible.)


I don't know what G* and G are, but I get the gist, and I agree. In  
fact, questions like how to deal with punishment become interesting  
when considered through this 'one subject' lens. When 'I' am the  
offender, I don't want to be punished for my crimes, but 'I' as the  
victim and the broader community think the offender should be. We  
have to balance competing views. Also, there is sense in looking  
after oneself ahead of others to the extent that I of all people am  
best equipped to look after my own needs, and I have the same rights  
to happiness, material wellbeing etc as others. The question is,  
what course of action brings the greatest good if all adopt it as  
their moral code? It's no use everybody giving away all their  
worldly goods to charity - there will be no-one to receive them!



  Of course, if comp is true, moral action becomes meaningless in one
 sense since everything happens anyway, so you 

Re: free will and mathematics

2012-06-12 Thread R AM
On Tue, Jun 12, 2012 at 7:44 PM, meekerdb meeke...@verizon.net wrote:


 Well then it seems to come down to a question of timing.  If this 'same
 conscious state' is before the action, then I can certainly imagine
 changing my mind.


Yes, but why would you do that? You didn't change your mind in the first
situation. Why would you change your mind if exactly the same conscious
state is repeated?


 And this holds all the way up to the action, which is why you are even
 unpredictable by yourself.  You don't know (for sure) what you'll do until
 you do it.


I agree, but that's not exactly what I'm saying. I'm trying to make sense
of the I could have done otherwise. What does it mean? Or in other words,
if the same situation is repeated I would do otherwise. But it's
difficult to explain (I might be wrong too).

OK, let's suppose that exactly the same conscious state is repeated N
times. If each time we do a different action, even opposite ones (such as
killing or not killing someone), then our decision making is basically
random. I don't think that is what is meant by free will.

Let's go to an extreme case. We have to make an important decision. We
spend one year pondering our alternatives, and a decision is reached (we
will kill someone). We are pretty certain about it. Do you think that if
we repeat the same conscious state of just before making the decision, we
would conclude not to kill?




Re: free will and mathematics

2012-06-12 Thread R AM
On Tue, Jun 12, 2012 at 7:23 PM, Bruno Marchal marc...@ulb.ac.be wrote:


 No. But the gangster does not know this determination. So although at that
 level he could not do otherwise, it still can make genuine sense, from
 our embedded point of view, that he could have done otherwise. Only for
 God does it not make sense, but locally we are not
 God.



 More specifically. You are in a situation where you crave for spaghetti,
 you haven't had spaghetti in the last month, you know spaghetti is good for
 er ... whatever. You therefore make the decision to eat spaghetti. Now, you
 are put again in exactly the same situation and ... do you really think you
 could choose strawberries instead? would you choose strawberries?




If I am craving spaghetti I could not do otherwise.


Well, parents routinely punish their children for eating too much candy.
Why do they do that, if their children could not do otherwise?


 But then I would not have said it. The situation is when I remember having
 hesitated, and the day after, despite the determination, I can think that I
 could have done otherwise, because I cannot be aware of the complete
 determination. And, indeed, after that hesitation, I might well have taken
 the strawberry.


Yes, but for the sake of the argument, I wanted you to consider the case
where you are pretty certain about eating spaghetti. Defenders of free will
would say that free will is active whenever you make a decision, hesitating
or not hesitating.







 Determinism is just not incompatible with genuine free will or will,
 for the will is not playing at the same level as the determination. If
 they were on the same level, you could trivially justify all your acts by I
 am just obeying the physical laws, which is just false, because you are an
 abstract person, not a body.



I am not really talking about physical determination. But in any case, I
think the justification is correct. This is not important, though, because
we do not actually punish people because they could have done otherwise. We
punish people so that they will not repeat their bad behaviour in the
future (among other reasons).


 He will convince nobody  because we all believe that he (and all of us)
 could have done otherwise. And we all believe that because, for some
 reason, we believe it is unfair to punish someone if he cannot do
 otherwise. What I'm saying is that belief in free-will is just a
 justification for punishing people.


 OK. And rightly so, unless unfair trial of course.



What I'm saying is that we believe in free will (although it is a false
belief) so that we can punish people without feeling guilty. Usually, the
opposite is claimed: we punish people because they have free will (but I'm
claiming that's wrong).



 Actually this is not proved, and some argue that going to jail can augment
 the probability of recurrence of certain types of crime. But that's not
 relevant. So OK.



I agree, but if that's the case, we should change the punishment.




 He learned to do otherwise.



Agreed. But that's what I'm saying. Making people responsible has nothing
to do with their free will, but with reinforcement and learning. Belief in
free will is just an excuse to discipline people.


 Let's suppose that a person forgets everything every morning. Would it
 make any sense to punish someone like that, because he just could have done
 otherwise?

 Someone like that must go to a hospital, be cured, and then can be judged
 responsible or not. It can depend on many factors. There are no general
 rules, nor any scientific criteria for judging with any certainty the
 responsibility.




Agreed. However, if we punish people because they have free will (i.e. they
could have done otherwise), then this person should also be punished. Again
and again. It's not his free will that is failing, it's his memory.
However, it makes no sense to punish such a person, because having no
memory, the punishment will not change his future behavior.






 But exactly the same subjective experience is ambiguous. Our doing
 depends also on unconscious processing, on the luminosity of the sky, on
 possible subliminal messages from peers, on hormone concentration, and all
 those factors might be unknown.


But that's basically randomness! You cannot be sent to Hell because of the
luminosity of the sky! I don't think that would be considered free will.
Free will should be the result of deliberation, even if in the end you
decide to do something random.



 Or something equivalent, if we were put again in exactly the same
 subjective situation, would we do otherwise? I don't think so, but If yes,
 why?


 We can't. Given your condition. But the determination being unknown, we
 can correctly conceive of having done otherwise, for a little unknown
 reason which would have influenced the choice made after some hesitation.
 Even without hesitation, there is still, even more, free will.


If we make up our mind, and we are 

Re: free will and mathematics

2012-06-12 Thread R AM
On Tue, Jun 12, 2012 at 7:44 PM, meekerdb meeke...@verizon.net wrote:


 Why not.  That's the compatibilist view of 'free will' and that's
 apparently why Sam Harris disagrees with compatibilism: he defines 'free
 will' to be *conscious* authorship of decisions.


I think that is what is meant by typical defenders of free will too.


 In the course of a day almost all my decisions are made without conscious
 thought, like which keys to strike in typing the previous line.  Earlier
 today I had to enter a computer generated random security code; I had to
 think about each character.  So was the latter an exercise of free will and
 the former wasn't??


That's a good question for defenders of free will to answer. I think they
would say that you can always consciously stop your unconscious will
(that's one of the defences against Libet's experiments). However, most of
the day we are not even conscious that we could exercise that kind of free
will, so ... I guess 99% of the time our decisions are not free willed.
And it makes no difference, of course.




Re: free will and mathematics

2012-06-12 Thread meekerdb

On 6/12/2012 11:42 AM, R AM wrote:



On Tue, Jun 12, 2012 at 7:44 PM, meekerdb meeke...@verizon.net 
mailto:meeke...@verizon.net wrote:



Well then it seems to come down to a question of timing.  If this 'same 
conscious
state' is before the action, then I can certainly imagine changing my mind.

Yes, but why would you do that? You didn't change your mind in the first situation. Why 
would you change your mind if exactly the same conscious state is repeated?


And this holds all the way up to the action, which is why you are even 
unpredictable
by yourself.  You don't know (for sure) what you'll do until you do it.

I agree, but that's not exactly what I'm saying. I'm trying to make sense of the I 
could have done otherwise. What does it mean?


It means that, in retrospect, I can't trace back to external (to me) causes a 
deterministic sequence that inevitably led me to do that.  Conceivably we could make an 
intelligent machine that could keep a record of all its internal states so that when it did 
something it could then cite the sequence of internal states and say, See I had to do 
it.  It was just physics.
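[Editor's sketch: such a self-recording machine is easy to illustrate. The following is a hypothetical toy, not anything described in the thread; the update rule and the names `LoggingAgent` and `step` are arbitrary choices. It shows a deterministic agent whose every action can later be traced to a logged chain of internal states.]

```python
# Hypothetical sketch of the self-recording machine: a deterministic
# agent that logs every internal state, so afterwards it can cite the
# exact causal chain behind any action ("See, I had to do it").

class LoggingAgent:
    def __init__(self, state):
        self.state = state
        self.trace = [state]  # record of all internal states

    def step(self, stimulus):
        # Purely illustrative deterministic update rule.
        self.state = (self.state * 31 + stimulus) % 97
        self.trace.append(self.state)
        return "act" if self.state % 2 == 0 else "wait"

agent = LoggingAgent(state=5)
actions = [agent.step(s) for s in (3, 7, 7, 2)]
# agent.trace is the machine's complete justification of its actions.
print(actions, agent.trace)
```

Because the update is deterministic, replaying the same stimuli from the same initial state reproduces the identical trace: the machine's "choices" are fully accounted for by its logged history.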


Or in other words, if the same situation is repeated I would do otherwise. But it's 
difficult to explain (I might be wrong too).
OK, let's suppose that exactly the same conscious state is repeated N times. If each 
time we do a different action, even opposite ones (such as killing or not killing 
someone), then our decision making is basically random. I don't think that is what is 
meant by free will.


I think that's wrong. You are equating unpredictable with random.  Suppose the same 
conscious state is repeated and one second later you either shoot someone or you punch 
him.  In that second different unconscious processes may determine what you do; so that 
which you do is unpredictable.  But it is only 'random' within a range which is determined 
by who you are - and in this case you are very angry with the someone - so it is still an 
exercise of your will.  And it's not constrained or coerced, so it's 'free will'.


Let's go to an extreme case. We have to make an important decision. We spend one year 
pondering our alternatives, and a decision is reached (we will kill someone). We are 
pretty certain about it. Do you think that if we repeat the same conscious state of just 
before making the decision, we would conclude not to kill?


Yes, it's possible. Of course there are feelings of resolve or hesitancy that make it more 
or less likely we will carry out a plan. But in that moment some different unconscious 
process could change our mind; or an external event, such as seeing a child who reminds us 
that our intended victim was once an innocent child, might change our mind.


Brent




Re: free will and mathematics

2012-06-12 Thread R AM
On Tue, Jun 12, 2012 at 9:39 PM, meekerdb meeke...@verizon.net wrote:


 I means that, in retrospect, I can't trace back to external (to me)
 causes, a deterministic sequence that inevitably led me to do that.


Isn't that randomness?


   Conceivably we could make an intelligent machine that could keep a
 record of all its internal states so that when did something it could then
 cite the sequence of internal states and say, See I had to do it.  It was
 just physics.


And the machine would be right ...



  Or in other words, if the same situation is repeated I would do
 otherwise. But it's difficult to explain (I might be wrong too).

 OK, let's suppose that exactly the same conscious state is repeated N
 times. If each time we do a different action, even opposite ones (such as
 killing or not killing someone), then our decision making is basically
 random. I don't think that is what is meant by free will.


 I think that's wrong. You are equating unpredictable with random.  Suppose
 the same conscious state is repeated and one second later you either shoot
 someone or you punch him.  In that second different unconscious processes
 may determine what you do; so that which you do is unpredictable.


Agreed, but then the reason is unconscious. To me, that's not free will.


 But it is only 'random' within a range which is determined by who you are
 - and in this case you are very angry with the someone -


OK, but I think a defender of free will would say that you could have also
kissed that person instead of attacking him.


 so it is still an exercise of your will.  And it's not constrained or
 coerced, so it's 'free will'.


But you are removing all possible decisions except different ways of
attacking the victim, so it is not free will, at least not the feeling that
I could have done anything no matter what.








Re: QTI and eternal torment

2012-06-12 Thread David Nyman
On 12 June 2012 17:36, Bruno Marchal marc...@ulb.ac.be wrote:


Yes, but the expression from the current state of any universal
 machine (different sense of universal, of course) already *assumes*
 the restriction of universal attention to a particular state of a
 particular machine.


 But is that not the result of the fact that each machine has only access
 to its own configuration?


That's too quick for me.  To say that each machine has only access to its
own configuration, is still merely to generalise; to go from this to *some
particular machine* requires one instance to be discriminated from the
whole class.  So what, you may retort, your states just discriminate
themselves as you.  The problem, to my mind, with looking at things in this
way, is that for there to be a *universal* knower, each state must *
primarily* belong to you qua that knower (which is what makes it
universal) and only secondarily to you qua some local specification.  If
this be so, it is circular to invoke those secondary characteristics, which
become definite only after discrimination, to justify the discrimination in
the first place.

ISTM that the two of us must actually be thinking of something rather
different when we conceive a universal person or knower.  For you, IIUC,
this idea is consistent with many different states of consciousness
obtaining all together; consequently the viewpoint of this species of
universal person can never be reducible to any particular single
perspective.  I'm unsatisfied with this (as presumably was Hoyle) because
it leaves me with no way of justifying why am I David that isn't
circular.  I can of course say that I'm David because the given state
(here, now) happens to be one of David's states of mind, but the problem in
this view is that this is completely consistent, mutatis mutandis, with
Bruno's saying exactly the same. By contrast, Hoyle's heuristic allows me
to say I'm David because a state of David happens momentarily to be the *unique
perspective* of the whole.  As Schrödinger puts it, not a *piece* of the
whole, but in a *certain sense* the whole; Hoyle's heuristic makes explicit
that certain sense.

I suspect that the difference between us is that it is not your intention
to justify the feeling of change directly from your mathematical treatment,
but rather to demonstrate the existence of an eternal structure from which
that experience could be recovered extra-mathematically.  You often refer
to the inside view of numbers in this rather inexplicit manner (forgive me
if I have inadvertently missed your making the details explicit elsewhere).
 Hoyle however seemed to be directly concerned with rationalising this
feeling by associating it with a unique dynamic process operating over the
system as whole.  That's the difference, I think, and it may be
irreconcilable.

David




Re: modal logic's meta axiom

2012-06-12 Thread Russell Standish
On Tue, Jun 12, 2012 at 08:17:38PM +0200, Bruno Marchal wrote:
 
 On 12 Jun 2012, at 00:47, Russell Standish wrote:
 
 On Thu, Jun 07, 2012 at 01:33:48PM +0200, Bruno Marchal wrote:
 
  In fact we have p/[]p for any p. If you were correct we would have []p
 for any p.
 
 This is what I thought you said the meta-axiom stated?
 
 How else do we get p/[]p for Kripke semantics?
 
 
 Because if p is true in all worlds, then []p is true in all worlds
 OK? 

No.  I didn't say that. p means p is true in a world. p true in all
worlds would be written []p.
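[Editor's sketch: the disagreement can be made concrete with a toy Kripke model. The frame and valuation below are hypothetical, chosen only for illustration. Evaluating []p at a world as "p holds at every accessible world" shows both halves of the point: when p is valid (true at all worlds), []p comes out valid too, which is all the rule p/[]p claims; but p holding at a single world does not make []p hold there.]

```python
# Toy Kripke-semantics sketch (hypothetical 3-world frame).
# []p holds at world w iff p holds at every world accessible from w.

def box(worlds, access, p_worlds):
    """Return the set of worlds where []p holds, given the set where p holds."""
    return {w for w in worlds if all(v in p_worlds for v in access[w])}

worlds = {"w1", "w2", "w3"}
access = {"w1": {"w2", "w3"}, "w2": {"w3"}, "w3": {"w3"}}

# If p is valid (true in all worlds), []p is valid too: the rule p / []p.
assert box(worlds, access, worlds) == worlds

# But p true at just one world does not license []p:
print(box(worlds, access, {"w1"}))
```

The distinction is between a rule of inference applied to validities and a truth condition at a single world: necessitation operates at the former level, which is why p true "in a world" never yields []p there.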


-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au





Re: One subject

2012-06-12 Thread Pierz


On Wednesday, June 13, 2012 12:14:26 AM UTC+10, Stephen Paul King wrote:

  On 6/11/2012 10:19 PM, Pierz wrote:
  


 On Monday, June 11, 2012 10:46:42 PM UTC+10, Bruno Marchal wrote: 


 On 11 Jun 2012, at 03:12, Pierz wrote: 

  [...]

  
 Hi Pierz,

 A few comments. What is the process or relation that defines the I? 
 If there is one I, as you discuss here, would not that I have 
 experiences that are mutually contradictory? How would this not do damage 
 to the idea that a conscious experience is an integrated whole and thus 
 contains no contradiction?

 The idea of a single mind or observer does not imply that everything is 
happening at once in that mind - or rather, it does not imply that the I is 
aware of everything at once. That is patently not the case. It is hard to 
define in objective terms what is meant by the 'I', because the I is the 
process of subjectivity itself and so not amenable to objectification. But 
one way I have conceptualised it is as follows. Our normal view posits the 
existence of multiple separate minds, each of which has extension in time 
(but, oddly, not space - we aren't talking about brains). Whereas the one 
mind view would see that all apparently separate minds are as it were 
different perspectives of and on the same single mind. An examination of 
the logical consequences of an extension of mind in time (the cryogenic 
paradox or the disassembly/reassembly thought experiment) shows that there 
can be no hidden identity to consciousness beyond the contents of that 
consciousness. No mutual contradiction occurs in the same way that the 
shape of the underside of an elephant does not 

Re: One subject

2012-06-12 Thread Pierz


On Wednesday, June 13, 2012 4:27:29 AM UTC+10, Bruno Marchal wrote:


 On 12 Jun 2012, at 04:19, Pierz wrote:



 On Monday, June 11, 2012 10:46:42 PM UTC+10, Bruno Marchal wrote:


 On 11 Jun 2012, at 03:12, Pierz wrote: 

  [...]

 I can agree, but it is not clear if it is assertable (it might belong   
 to variant of G*, and not of G making that kind of moral proposition   
 true but capable of becoming false if justified  too much, like all   
 protagorean virtues (happiness, free-exam, intelligence, goodness,   
 etc.). Cf hell is paved with good intentions. 

 Also, a masochist might become a sadist by the same reasoning, which,   
 BTW, illustrates that the (comp) moral is not don't do to the others   
 what you don't want the others do to you, but don't do to the others   
 what *the others* don't want you do to them. 
 In fact, unless you defend your life,  just respect the possible adult   
 No Thanks.  (It is more complex with the children, you must add   
 nuances like as far as possible). 


 I don't know what G* and G are, but I get the gist, and I agree. In fact, 
 questions like how to deal with punishment become interesting when 
 considered through this 'one subject' lens. When 'I' am the offender, I 
 don't want to be punished for my crimes, but 'I' as the victim and the 
 broader community think the offender should be. We have to balance 
 competing views. Also, there is sense in looking after oneself ahead of 
 others to the extent that I of all people am best equipped to look after my 
 own needs, and I have the same rights to happiness, material wellbeing etc 
 as others. The question is, what course of 

Re: free will and mathematics

2012-06-12 Thread meekerdb

On 6/12/2012 1:06 PM, R AM wrote:



On Tue, Jun 12, 2012 at 9:39 PM, meekerdb meeke...@verizon.net 
mailto:meeke...@verizon.net wrote:


I means that, in retrospect, I can't trace back to external (to me) causes, 
a
deterministic sequence that inevitably led me to do that.

Isn't that randomness?


No, it's unpredictability - something we may fruitfully model by a mathematical theory of 
randomness even though the dynamics are perfectly deterministic, when we don't know enough 
to use the dynamics to predict results.  Except in quantum mechanics, where events may be 
inherently random, 'randomness' is just modeling uncertainty due to ignorance and so it is 
relative to what is known.
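This point, that 'randomness' can be a model of an observer's ignorance of a perfectly deterministic process, can be illustrated with a toy sketch (my own, not from the thread; the names and the parity rule are invented for illustration):

```python
# Toy sketch (not from the thread): a perfectly deterministic process
# that looks "random" to an observer who does not know the hidden state.

def deterministic_step(hidden_state):
    # The outcome is fully determined by the hidden state.
    return "shoot" if hidden_state % 2 else "punch"

# The observer cannot see hidden_state, so they spread a probability
# distribution over the possibilities: randomness as ignorance,
# relative to what is known.
possible_states = range(100)
p_shoot = sum(deterministic_step(s) == "shoot" for s in possible_states) / 100
print(p_shoot)  # 0.5 reflects the observer's uncertainty, not the dynamics
```

Nothing in the dynamics is chancy; the probability lives entirely in the observer's description, which is Brent's sense of 'relative to what is known'.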



  Conceivably we could make an intelligent machine that could keep a record 
of all
its internal states, so that when it did something it could then cite the 
sequence of
internal states and say, "See, I had to do it.  It was just physics."

And the machine would be right ...



Or in other words, if the same situation is repeated I would do 
otherwise. But
it's difficult to explain (I might be wrong too).
OK, let's suppose that exactly the same conscious state is repeated N 
times. If
each time we do a different action, even opposite ones (such as killing or 
not
killing someone), then our decision-making is basically random. I don't 
think that
is what is meant by free will.


I think that's wrong. You are equating unpredictable with random.  Suppose 
the same
conscious state is repeated and one second later you either shoot someone 
or you
punch him.  In that second, different unconscious processes may determine 
what you
do, so that what you do is unpredictable.

Agreed, but then the reason is unconscious. To me, that's not free will.


That's a problem with 'free will'.  Some people, like Sam Harris, insist that it means the 
same thing it did in the Middle Ages: a supernatural ability to do the nomologically 
impossible by conscious thought.  Some people, like Daniel Dennett, look at how the 
concept functions in society and redefine it so that it doesn't require the supernatural but 
has the same extension in social and legal discourse.



But it is only 'random' within a range which is determined by who you are - 
and in
this case you are very angry with that someone -

OK, but I think a defender of free will would say that you could have also kissed that 
person instead of attacking him.


But would he be wrong?  We, as external observers, might say that if his brain had been in 
exactly that state a second earlier it's extremely unlikely that he could have done 
differently (he might have been hit by a gamma ray, but...).  Or suppose we, as external 
observers, knew exactly that part of the state of his brain that determined his *conscious* 
purpose, but not the other part, the unconscious.  Then we would assign a probability 
measure over the unconscious part; some of it would result in him doing the same and 
some in him doing differently, and we could assign probabilities to the various outcomes.  
We're modeling our ignorance of the unconscious part by a random model.  And so we'd 
conclude he could (in light of our imperfect knowledge) have done xi with probability pi 
for i=0,1,...  And that's exactly the position he is in.  He has access to the 
conscious part of his brain, but not the unconscious.



so it is still an exercise of your will.  And it's not constrained or 
coerced, so
it's 'free will'.

But you are removing all possible decisions except different ways of attacking the 
victim, so it is not free will, at least not that feeling that I could have done 
anything no matter what.


But you know that's not the case.  You have a certain character, a certain consistency of 
behavior, so that your friends can trust you NOT to do anything at random.  And having this 
consistency is essentially part of defining you, and of defining who it is who has 
compatibilist free will.  The fact that almost all of this character is subconscious is 
irrelevant to the social meaning of 'free will'.


Brent

--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: One subject

2012-06-12 Thread meekerdb

On 6/12/2012 4:40 PM, Pierz wrote:
I didn't say that we would all turn into self-deniers concerned only to help others. I 
said we would achieve an optimal moral society. Such a society would always bear in mind 
the absolute equality of all subjects (not in the 'royal subject' sense!), with each 
person knowing their actions are received by none other than themselves. The best moral 
action would be the selfish action, seen from the perspective of the entire self rather 
than the fragmentary self. Imagine you share an island with a person for one year, and 
you know that the next year, you will become the other person on the island at the start 
of the same year again - i.e., you will experience everything from their perspective. How 
will it change the way you behave?


So does this universal person include dogs? apes? spiders? rocks?

Brent
