RE: Evil ? (was: Hypostases (was: Natural Order Belief)

2006-12-27 Thread Stathis Papaioannou



Jef Allbright writes:


Stathis Papaioannou wrote:

 But our main criterion for what to believe should be
 what is true, right? 


I find it fascinating, as well as consistent with some difficulties in
communication about the most basic concepts, that Stathis would express
this belief of his in the form of a tautology.  I've observed that he is
generally both thoughtful and precise in his writing, so I'm very
interested in whether the apparent tautology is my misunderstanding, his
transparent belief, a simple lack of precision, or something more.


Thanks for the compliments about my writing. I meant that what we should 
believe does not necessarily have to be the same as what is true, but I think 
that unless there are special circumstances, it ought to be the case. Brent 
Meeker made a similar point: if someone is dying of a terminal illness, maybe 
it is better that he believe he has longer to live than the medical evidence 
suggests, but that would have to be an example of special circumstances. 


If he had said something like "our main criterion for what to believe
should be what works, what seems to work, what passes the tests of time,
etc." or had made a direct reference to Occam's Razor, I would be
comfortable knowing that we're thinking alike on this point.  But I've
seen this stumbling block arise so many times and so many places that
I'm very curious to learn something of its source.


The question of what is the truth is a separate one, but one criterion I would 
add to those you mention above is that it should come from someone able to 
put aside his own biases and wishes where these might influence his assessment 
of the evidence. 

 We might never be certain of the truth, so our beliefs should 
 always be tentative, but that doesn't mean we should believe 
 whatever we fancy.


Here it's a smaller point, and I agree with the main thrust of the
statement, but it leaves a door open for the possibility that we might
actually be justifiably certain of the truth in *some* case, and I'm
wondering where that open door is intended to lead.


I said "might" because there is one case where I am certain of the truth, which 
is that I am having the present experience. Everything else, including the existence 
of a physical world and my own existence as a being with a past, can be doubted. 
However, for everyday living this doubt troubles me much less than the possibility that 
I may be struck by lightning.


Stathis Papaioannou


---

In response to John Mikes:  


Yes, I consider my thinking about truth to be pragmatic, within an
empirical framework of open-ended possibility.  Of course, ultimately
this too may be considered a matter of faith, but one with growth that
seems to operate in a direction opposite from the faith you express.

- Jef



 


_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



RE: Evil ? (was: Hypostases (was: Natural Order Belief)

2006-12-27 Thread Stathis Papaioannou



Tom Caylor writes (in response to Marvin Minsky):


Regarding Stathis' question to you about truth, your calling the idea
of "believing" unsound seems to imply that you are assuming that there is
no truth that we can discover.  But on the other hand, if there is no
discoverable truth, then how can we know that something, like the
existence of freedom of will, is false?


That's easy: it's logically impossible. When I make a decision, although I take all 
the evidence into account, and I know I am more likely to decide one way rather 
than another due to my past experiences and due to the way my brain works, 
ultimately I feel that I have the freedom to overcome these factors and decide 
freely. But neither do I feel that this free decision will be something random: 
I'm not mentally tossing a coin, but choosing according to my beliefs and values. 
Do you see the contradiction here? EITHER my decision is determined by my 
past experiences, acquired beliefs and values etc., OR it is not, and if it is not, 
it is by definition random and unpredictable. (You can also have random but with a 
certain weighting according to determined factors, like a weighted roulette wheel, 
but that is a variation on random.) So my feeling that my free will is neither of these has to 
be wrong. Still, I'm very attached to that feeling, just as I'm very attached to 
certain moral values, and life itself, despite knowing that these are ultimately 
meaningless. 
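The weighted-roulette-wheel point can be made concrete. Here is a minimal sketch (the function name and weights are my own illustration, not anything from the thread): a draw biased by "determined factors" is still, in the end, a random draw.

```python
import random

def weighted_roulette(options, weights, rng=None):
    """Pick one option at random, biased by the given weights.

    The weights play the role of the 'determined factors'
    (past experience, brain chemistry, acquired values);
    the draw itself remains irreducibly random."""
    rng = rng or random.Random()
    total = sum(weights)
    spin = rng.uniform(0, total)
    cumulative = 0.0
    for option, weight in zip(options, weights):
        cumulative += weight
        if spin < cumulative:
            return option
    return options[-1]

# A decision heavily weighted by prior beliefs and values is
# still a coin toss -- just a bent coin.
choice = weighted_roulette(["act", "refrain"], [0.9, 0.1])
```

The point of the sketch is Stathis's dichotomy: the weights are fully determined, the spin is fully random, and there is no third ingredient for the felt sense of freedom to occupy.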


However, the belief in freedom of will seems to be a belief that is
rather constant, so there seem to be some beliefs that provide evidence
for an invariant reality and truth, not necessarily freedom of will,
but something.  And I think that looking for ultimate sources would be
circular (as you've said on the Atheist List) only if there were no
ultimate source that we could find.  Do you agree with this statement?


Ultimate sources are also a logical impossibility. Suppose we discover that God exists. 
Well, what's the purpose of God? Where did he get his moral rules and why should 
we accept them as good? Who made him? Of course, you will answer that the buck 
stops with God, no-one made him, he is the ultimate good and the ultimate purpose. 
But you can't just *define* something to stop the circularity because it makes you 
dizzy. If you could, you may as well just stop at the universe itself, sans God.


Stathis Papaioannou



RE: Evil ? (was: Hypostases

2006-12-27 Thread Stathis Papaioannou



Brent Meeker writes (quoting Tom Caylor):


 Dr. Minsky,
 
 In your book, Society of Mind, you talk about a belief in freedom of

 will:
 
"The physical world provides no room for freedom of will... That concept
is essential to our models of the mental realm. Too much of our
psychology is based on it for us to ever give it up. We're virtually
forced to maintain that belief, even though we know it's false."

Whether it is false depends on what you mean by "free will".  Dennett argues persuasively 
in Elbow Room that we have all the freedom of will that matters.  Our actions 
arise out of who we are.  If you conceive yourself comprehensively, all your memories, 
values, knowledge, etc. then you are the author of your action.  If you conceive yourself 
as small enough, you can escape all responsibility.


We have the freedom of will that matters, but we don't have the freedom of 
will that we think we have: the feeling that we don't have to act according to our 
biology and environment, and that when we flout these it is not merely by 
choosing to act randomly. That is what I *feel* my freedom consists in, but 
rationally I know it is impossible. 


Stathis Papaioannou



Re: computer pain

2006-12-27 Thread Brent Meeker


Stathis Papaioannou wrote:



Brent Meeker writes:

 My computer is completely dedicated to sending this email when I 
click on "send".
Actually, it probably isn't.  You probably have a multi-tasking 
operating system which assigns priorities to different tasks (which is 
why it sometimes can be as annoying as a human being in not following 
your instructions).  But to take your point seriously - if I look into 
your brain there are some neuronal processes that corresponded to 
hitting the send button; and those were accompanied by biochemistry 
that constituted your positive feeling about it: that you had decided 
and wanted to hit the send button.  So why would the functionally 
analogous processes in the computer not also be accompanied by a 
feeling?  Isn't that just an anthropomorphic way of talking about 
the satisfaction of the computer operating in accordance with its priorities?  
It seems to me that to say otherwise is to assume a dualism in which 
feelings are divorced from physical processes.


Feelings are caused by physical processes (assuming a physical world), 
but it seems impossible to deduce what the feeling will be by observing 
the underlying physical process or the behaviour it leads to. Is a robot 
that withdraws from hot stimuli experiencing something like pain, 
disgust, shame, sense of duty to its programming, or just an irreducible 
motivation to avoid heat?
Surely you don't think it gets pleasure out of sending it and 
suffers if something goes wrong and it can't send it? Even humans do 
some things almost dispassionately (only almost, because we can't 
completely eliminate our emotions).
That's the crux of it.  Because we sometimes do things with very little 
feeling, i.e. dispassionately, I think we erroneously assume there is 
a limit in which things can be done with no feeling.  But things 
cannot be done with no value system - not even thinking.  That's the 
frame problem.


Given some propositions, what inferences will you draw?  If you are 
told there is a bomb wired to the ignition of your car you could infer 
that there is no need to do anything because you're not in your car.  
You could infer that someone has tampered with your car.  You could 
infer that turning on the ignition will draw more current than usual.  
There are infinitely many things you could infer, before getting 
around to "I should disconnect the bomb".  But in fact you have a value 
system which operates unconsciously and immediately directs your 
inferences to the few that are important to you.  Making AI 
systems do this is one of the outstanding problems of AI.
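Brent's point about a value system pruning the space of possible inferences can be sketched in a few lines (a toy model: the function names and scores are purely illustrative assumptions, not a real AI technique):

```python
# Toy model of the frame-problem point: without a value system, all
# inferences are equally admissible; an explicit scoring function
# stands in for the unconscious valuation that directs attention.

def rank_inferences(inferences, value):
    """Order candidate inferences by a 'value system', modeled here
    as an explicit scoring function (in a brain this ranking is
    unconscious and immediate)."""
    return sorted(inferences, key=value, reverse=True)

candidates = [
    "there is no need to do anything: I'm not in my car",
    "someone has tampered with my car",
    "turning on the ignition will draw more current than usual",
    "I should disconnect the bomb",
]

# A crude survival-oriented value function -- purely illustrative.
def survival_value(inference):
    return 10 * ("disconnect" in inference) + 2 * ("tampered" in inference)

# The important inference surfaces first, out of the many possible ones.
top = rank_inferences(candidates, survival_value)[0]
```

The hard part, of course, is exactly what the sketch assumes away: where `survival_value` comes from, and how it covers an open-ended world rather than four hand-picked candidates.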


OK, an AI needs at least motivation if it is to do anything, and we 
could call motivation a feeling or emotion. Also, some sort of hierarchy 
of motivations is needed if it is to decide that saving the world has 
higher priority than putting out the garbage. But what reason is there 
to think that an AI apparently frantically trying to save the world 
would have anything like the feelings a human would under similar 
circumstances? It might just calmly explain that saving the world is at 
the top of its list of priorities, and it is willing to do things which 
are normally forbidden it, such as killing humans and putting itself at 
risk of destruction, in order to attain this goal. How would you add 
emotions such as fear, grief, regret to this AI, given that the external 
behaviour is going to be the same with or without them because the 
hierarchy of motivation is already fixed?


You are assuming the AI doesn't have to exercise judgement about secondary 
objectives - judgement that may well involve conflicts of values that it has 
to resolve before acting.  If the AI is saving the world it might, for example, 
raise its CPU voltage and clock rate in order to compute faster - electronic 
adrenaline.  It might cut off some peripheral functions, like running the 
printer.  Afterwards it might feel regret when it cannot recover some functions.


Although there would be more conjecture in attributing these feelings to the AI 
than to a person acting in the same situation, I think the principle is the 
same.  We think the person's emotions are part of the function - so why not the 
AI's too?

out of a sense of duty, with no particular feeling about it beyond 
this. I don't even think my computer has a sense of duty, but this 
is something like the emotionless motivation I imagine AIs might 
have. I'd sooner trust an AI with a matter-of-fact sense of duty
But even a sense of duty is a value and satisfying it is a positive 
emotion.


Yes, but it is complex and difficult to define. I suspect there is a 
limitless variety of emotions that an AI could have, if the goal is to 
explore what is possible rather than what is helpful in completing 
particular tasks, and most of these would be unrecognisable to humans.
to complete a task than a human motivated by desire to please, 
desire to do what is good and avoid what is bad, fear of failure 

Re: Evil ? (was: Hypostases (was: Natural Order Belief)

2006-12-27 Thread Brent Meeker


Stathis Papaioannou wrote:



Tom Caylor writes (in response to Marvin Minsky):


Regarding Stathis' question to you about truth, your calling the idea
of "believing" unsound seems to imply that you are assuming that there is
no truth that we can discover.  But on the other hand, if there is no
discoverable truth, then how can we know that something, like the
existence of freedom of will, is false?


That's easy: it's logically impossible. When I make a decision, although 
I take all the evidence into account, and I know I am more likely to 
decide one way rather than another due to my past experiences and due to 
the way my brain works, ultimately I feel that I have the freedom to 
overcome these factors and decide freely. But neither do I feel that 
this free decision will be something random: I'm not mentally tossing a 
coin, but choosing according to my beliefs and values. Do you see the 
contradiction here? 


Yes, but it's a contrived contradiction.  You have taken "free" to mean "independent of 
you", where "you" refers to your past experience, the way your brain works, etc.  As 
Dennett says, that's not a free will worth having.

Brent Meeker





Re: Evil ? (was: Hypostases (was: Natural Order Belief)

2006-12-27 Thread Bruno Marchal



On 26 Dec 2006, at 23:59, [EMAIL PROTECTED] wrote:



I regard the idea of "believing" to be unsound, because it is a
pre-Freudian concept, which assumes that each person has a single
"self" that maintains beliefs.



Is this not a bit self-defeating? It has the form of a belief. Now I 
can still agree; it depends on the meaning of "single self".






A more realistic view is that each
person is constantly switching among various different ways to think
in which different assertions, statements, or bodies of knowledge keep
changing their status, etc.



In that case I can completely agree. Even by modeling a machine's 
belief as formal provability Bp by that machine, in the ideal case of 
a self-referentially correct machine like Peano Arithmetic, it will 
follow that the ontically equivalent modalities Bp & p, Bp & Dp, etc. 
obey different logics, so that they embody different epistemological 
statuses (and they are easy to confuse).
Now, when we are building a (meta)theory of belief we have to stick 
to some possible sharable belief (in number theory, computer science, 
perhaps physics: all that will depend on the hypotheses we accept) and 
build from it. If not, we could fall into exaggerated relativism.
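For readers unfamiliar with the notation, a brief sketch of the standard setting Bruno is working in (this gloss is mine, not his; it is ordinary provability logic, where B is the provability predicate and Dp abbreviates ~B~p):

```latex
% The provability logic GL, governing Bp ("the machine proves p")
% for a sound arithmetical machine such as Peano Arithmetic:
\begin{align*}
  &B(p \to q) \to (Bp \to Bq) && \text{(distribution)}\\
  &Bp \to BBp                 && \text{(provable sentences are provably provable)}\\
  &B(Bp \to p) \to Bp         && \text{(L\"ob's axiom)}
\end{align*}
% With $Dp$ abbreviating $\neg B \neg p$ (consistency of $p$), the
% intensional variants
%   $Bp \wedge p$   (a knowledge-like reading)
%   $Bp \wedge Dp$  (a consistent-belief reading)
% obey different modal logics, even though for a correct machine
% they hold of exactly the same sentences $p$ -- which is the sense
% in which they are "ontically equivalent" yet epistemologically distinct.
```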





Accordingly our sets of beliefs can
include many conflicts--and in different mental contexts, those
inconsistencies may get resolved in different ways, perhaps depending
on one's current priorities, etc.



OK. I would say that if someone can acknowledge the existence of a 
conflict between beliefs, then he/she/it  does acknowledge implicitly 
that he/she/it bets on some *self*-consistency. If not he/she/it could 
just accept its contradictory beliefs without further thoughts.


Bruno

http://iridia.ulb.ac.be/~marchal/





Re: Evil ? (was: Hypostases (was: Natural Order Belief)

2006-12-27 Thread Bruno Marchal



On 27 Dec 2006, at 01:52, Stathis Papaioannou wrote:

But our main criterion for what to believe should be what is true, 
right? We might never be certain of the truth, so our beliefs should 
always be tentative, but that doesn't mean we should believe whatever 
we fancy.



This is a key statement. There is a big difference between knowing what 
truth is and believing in truth. I am not sure the term "belief" can 
make sense for someone who does not believe in (some) truth, quite 
independently of our knowing what truth is.
We hope our beliefs are true. We even believe that people believe in 
their beliefs, and that means we believe their beliefs are true by 
default. We would not lie to an old, sick person about his health if we 
were not connecting belief and truth (even wrongly, as in such a 
gentle lie).
The very reason why we can (and should!) say that our beliefs are 
always tentative is that we can guess some truth (or falsity) behind 
them.


Bruno

http://iridia.ulb.ac.be/~marchal/





Re: Evil ? (was: Hypostases (was: Natural Order Belief)

2006-12-27 Thread Bruno Marchal



On 27 Dec 2006, at 02:46, Jef Allbright wrote:



Stathis Papaioannou wrote:


But our main criterion for what to believe should be
what is true, right?


I find it fascinating, as well as consistent with some difficulties in
communication about the most basic concepts, that Stathis would express
this belief of his in the form of a tautology.  I've observed that he is
generally both thoughtful and precise in his writing, so I'm very
interested in whether the apparent tautology is my misunderstanding, his
transparent belief, a simple lack of precision, or something more.



I don't see any tautology in Stathis's writing, so I guess I miss 
something.





If he had said something like "our main criterion for what to believe
should be what works, what seems to work, what passes the tests of time,
etc." or had made a direct reference to Occam's Razor, I would be
comfortable knowing that we're thinking alike on this point.



This would mean you disagree with Stathis's tautology, but then how 
could one not believe in a tautology?





But I've
seen this stumbling block arise so many times and so many places that
I'm very curious to learn something of its source.


From your working criteria I guess you favor a pragmatic notion of 
belief, but personally I conceive science as a search for knowledge and 
thus truth (independently of the fact that we can never *know* it as 
truth, except perhaps in a few basic things like "I am conscious" or "I 
am convinced there is a prime number", etc.).
To talk like Stathis, this is why science is by itself always 
tentative. A scientist who says "Now we know ..." is only a dishonest 
theologian (or a mathematician in a hurry ...).


Bruno

http://iridia.ulb.ac.be/~marchal/





Re: Evil ? (was: Hypostases (was: Natural Order Belief)

2006-12-27 Thread Bruno Marchal



On 26 Dec 2006, at 19:54, Tom Caylor wrote:



On Dec 26, 9:51 am, Bruno Marchal [EMAIL PROTECTED] wrote:

On 25 Dec 2006, at 01:13, Tom Caylor wrote:

 The crux is that he is not symbolic...




I respect your belief or faith, but I want to be frank: I have no
evidence for the idea that "Jesus is truth", nor can I be sure of
any clear meaning such an assertion could have, or how such an
assertion could be made scientific, even dropping Popper's falsification
criteria. I must say I have evidence to the contrary, if only the fact
that humans often succumb to wishful thinking, and still more often to
their parents' wishful thinking.



If you are not sure of any clear meaning of the personal God being the
source of everything, including of course truth, this entails not
knowing the other things too.



Is that not an argument from authority?
What if I ask my student an exam question like "give me an argument 
why the square root of 3 is irrational"? Suppose he gives me the 
correct and convincing usual (mathematical) proof. I could give him a 
bad grade for not adding: "and I know that is the truth because truth is 
a gift from God".
Cute - I can directly give bad grades to all my students, and this will 
give me more time to find a falsity in your way of reasoning ...
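For reference, the "usual proof" the student might give runs as follows (a standard argument, filled in here; it is not quoted from the thread):

```latex
% Claim: $\sqrt{3}$ is irrational.
\begin{proof}
Suppose $\sqrt{3} = a/b$ with $a, b$ coprime integers, $b \neq 0$.
Then $a^2 = 3b^2$, so $3 \mid a^2$; since $3$ is prime, $3 \mid a$.
Write $a = 3k$. Then $9k^2 = 3b^2$, hence $b^2 = 3k^2$, so $3 \mid b$,
contradicting the coprimality of $a$ and $b$.
At no step is any appeal to an outside authority required.
\end{proof}
```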





For a personal God, taking on our form
(incarnation), especially if we were made in the image of God in the
first place, and showing through miracles, and rising from the dead...,
his dual nature (God/man, celestial/terrestrial, G*/G) seems to make a
lot more sense than something like a cross in earth orbit.  For
example, giving a hug is a more personal (and thus a more appropriate)
way of expressing love than giving a card, even though a card is more
verifiable in a third-person sense, especially after the hug is
finished.  But we do have the card too: God's written Word. Even
though this is not sufficient, the incarnate hug was the primary proof;
the card was just the historical record of it.


The card records facts. To judge them historical is already beyond my 
competence. Why the Bible? Why not the Questions of King Milinda?







 There can be no upward
 emanation unless/until a sufficient downward emanation is provided.  In
 Christianity, the downward emanation is "God loves us", and then the
 upward emanation is "We love God".




Plotinus insists a lot on the two ways: downward emanation and upward
emanation. The lobian machine theology is coherent with this, even if
negatively. It is coherent with Jef's idea that pure theological
imperatives can only be addressed by adapted story-telling and
examples, like jurisprudence in the application of laws. But then there
is a proviso: none of the stories should be taken literally.



I agree with the use of stories.  Jesus used stories almost exclusively
to communicate.  Either the hearers got it or not.  But this does not
imply that stories are the only form of downward emanation.


Of course not. Real stories and personal experiences,  and collective 
experiences and experiments ... All this can help the downward 
emanation.




The
incarnation was the primary means.  Otherwise, who would have been the
story-teller?  What good are stories if the story is not teaching you
truth?


Look, I cannot take for granted even most mathematical theories, 
although their relation with a notion of truth is much easier than 
that of any text in natural language. Stories can be good in giving examples of 
behavior in some situation, or they can help anxious children to sleep. 
Stories are not written with the idea of truth. The Bible contains 
many contradictions. And, if you really want to take a sacred text as a 
theory of everything, there is a definite lack of precision.






How do we know that the ultimate source of stories is a good
source?  Jef and Brent and others seem to be basing their truth on
really nothing more than pragmatism.



Jef perhaps. I am not sure for Brent, who seems to admit some form of 
realism (even physical realism).





 This is not poetry.  Heidegger said to listen to the poet, not to the
 content, but just to the fact that there is a poet, which gives us hope
 that there is meaning.  However, unfulfilled hope does not provide
 meaning.

Hope is something purely first-personal, if I can say so. So I have no
clue how hope does not provide meaning. Even a little (and fortunately
locally fulfillable) hope, like hope in a cup of coffee, can provide
meaning. Bigger (and harder to express) hopes can provide genuinely bigger
meaning, it seems to me. I am not opposed to some idea of ultimate
meaning, although both personal reasons and reflection on lobianity make
me doubt that communicating such hopes can make any sense (worse, the
communication would most probably betray the possible meaning of what
is attempted to be communicated, and could even lead to the contrary).



Even poetry must be based eventually on some meaning.  Even minimalism
or the Theatre of the Absurd is based on some form to 

Re: 'reason' and ethics; was computer pain

2006-12-27 Thread Mark Peaty
And yet I persist ... [the hiatus of familial duties and seasonal 
excesses now draws to a close. Oh yeah, Happy New Year, folks!]


SP: 'If we are talking about a system designed to destroy the economy of 
a country in order to soften it up for invasion, for example, then an 
economist can apply all his skill and knowledge in a perfectly 
reasonable manner in order to achieve this.'


We should beware of conceding too much too soon. Something is reasonable 
only if it can truly be expected to fulfil the intentions of its 
designer. Otherwise it is at best logical but, in the kinds of context 
we are alluding to here, benighted and a manifestation of fundamentally 
diminished 'reason'. Something can only be 'reasonable' in its context. 
If a proposed course of action can be shown to be ultimately self-defeating 
- in the sense of including its reasonably predictable final 
consequences - and yet it is still actively proposed, then the proposal 
is NOT reasonable, it is stupid. As far as I can see, that is the 
closest we can get to an objective definition of stupidity, and I like it.


Put it this way: Is it 'reasonable' to promote policies and projects 
that ultimately are going to contribute to your own demise or the demise 
of those whom you hold dear or, if not obviously their demise then, the 
ultimate demise of all descendants of the aforementioned? I think 
academics, 'mandarins' and other high honchos should all now be thinking 
in these terms and asking themselves this question. The world we now 
live in is like no other before it. We now live in the Modern era, in 
which the application and fruits of the application of scientific method 
are putting ever greater forms of power into the hands of humans. This 
process is not going to stop, nor should we want it to, I think, but 
it entails the ever greater probability that the actions of any person 
on the planet have the potential to influence survival outcomes for huge 
numbers of others [if not the whole d*mned lot of us].


I think it has always been true that ethical decisions and judgements 
are based on facts to a greater extent than most people involved want to 
think about - usually because it's too hard and we don't think we have 
got the time and, oh yeah, 'it probably doesn't/won't matter' about the 
details of unforeseen consequences because it's only gonna be lower-class 
riff-raff who will be affected anyway, or people of the future who will 
just have to make shift for themselves. NOW however we do not really 
have such an excuse; it is a cop-out to purport to ignore the ever 
growing interrelatedness of people around the planet. So it is NOT 
reasonable to treat other people as things. [I feel indebted to Terry 
Pratchett for pointing out, through the words of Granny Weatherwax I 
think it is, that there is only one sin, which is to treat another 
person as a thing.] I think a reasonable survey and analysis of history 
shows that, more than anything else, treating other people as things 
rather than equal others has been the fundamental cause and methodology 
for the spread of threats to life and well being.


You can see where I am going with this: in a similar way to that in 
which concepts of 'game theory' and probabilities of interaction 
outcomes give us an objective framework for assessing purportedly 
'moral' precepts, the existence now of decidedly non-zero chances of 
recursive effects resulting from one's own actions brings a deeper 
meaning and increased rigour to the realms of ethics and 'reason'. I don't 
think this is 'airy-fairy', I think it represents a dimension of 
reasoning which has always existed but which has been denied, ignored or 
actively censored by the powerful and their 'pragmatic' apologists and 
spin doctors. To look at a particular context [I am an EX Christian], 
even though the Bible is shonk as history or any kind of principled 
xxological analysis, it is instructive to look at the careers of the 
prophets and see how each involved a seemingly conventional formative 
period and then periods or a whole life of very risky ministry AGAINST 
the establishment because being true to their mission involved the 
prophet denouncing exploitation, greed and corruption.


So let me wave my imaginary staff and rail from the top of my imaginary 
mountain:
'Sin is against reason! And that's a fact! So THERE! And don't you 
forget it, or you'll be sorry, or at least your children and their 
children will become so! Put that in your pipes all you armchair 
philosophers!'


Regards
Mark Peaty  CDES
[EMAIL PROTECTED]
http://www.arach.net.au/~mpeaty/

Stathis Papaioannou wrote:


Mark Peaty writes:

Sorry to be so slow at responding here but life [domestic], the 
universe and everything else right now is competing savagely with 
this interesting discussion. [But one must always think positive; 
'Bah, Humbug!' is not appropriate, even though the temptation is 
great some times :-]

Stathis,
I am not entirely 

RE: Evil ? (was: Hypostases (was: Natural Order Belief)

2006-12-27 Thread Jef Allbright


Bruno Marchal wrote:


On 27 Dec 2006, at 02:46, Jef Allbright wrote:


Stathis Papaioannou wrote:


But our main criterion for what to believe should
be what is true, right?


I'm very interested in whether the apparent tautology
is my misunderstanding, his transparent belief, a simple
lack of precision, or something more.



I don't see any tautology in Stathis's writing, so I guess I 
miss something.



Apparently something subtle is happening here.

It seems to me that when people say "believe", they mean "hold true" or "consider to 
be true".

Therefore, I parse the statement as equivalent to "...criterion for what to hold true should be what is true...". 


I suppose I should have said that the statement is circular, rather than 
tautological, since the verbs are different.



If he had said something like our main criterion
for what to believe should be what works, what seems
to work, what passes the tests of time, etc. or had
made a direct reference to Occam's Razor, I would be
comfortable knowing that we're thinking alike on this
point.



This would mean you disagree with Stathis's tautology, but then how 
could one not believe in a tautology?


If someone states "A=A", then there is absolutely no information content, and 
thus nothing in the statement itself with which to agree or disagree. I can certainly 
agree with the validity of the form within symbolic logic, but that's a different 
(larger) context.

Similarly, I was not agreeing or disagreeing with the meaning of Stathis's 
statement, but rather with the form, which seems to me to contain a piece of circular 
reasoning, implying perhaps that the structure of the thought was incoherent 
within a larger context.



 From your working criteria I guess you favor a pragmatic
notion of belief, but personally I conceive science as a
search for knowledge and thus truth (independently of the
fact that we can never *know* it as truth,


Yes, I favor a pragmatic approach to belief, but I distinguish my thinking from that of (capital P) 
Pragmatists in that I see knowledge (and the knower) as firmly grounded in a reality that can never be 
fully known but can be approached via an evolutionary process of growth tending toward an increasingly 
effective model of what works within an expanding scope of interaction within a reality that appears to be effectively 
open-ended in its potential complexity. Whereas many Pragmatists see progress as fundamentally illusory, I 
see progress, or growth, as essential to an effective world-view for any intentional agent.


except perhaps
in a few basic things like "I am conscious" or "I am convinced
there is a prime number", etc.)
To talk like Stathis, this is why science is by itself always 
tentative. A scientist who says "Now we know ..." is only a

dishonest theologian (or a mathematician in a hurry ...).


I agree with much of your thinking, but I take exception to exceptions (!) such as the ones you mentioned above. 


All meaning is necessarily within context.

The existence of prime numbers is not an exception, but the context is so broad that we 
tend to think of prime numbers as (almost) fundamentally real, similarly to the existence 
of gravity, another very deep regularity of our interactions with reality.

The statement "I am conscious", as usually intended to mean that one can be absolutely certain of one's subjective experience, is not an exception, because it's not even coherent.  It has no objective context at all.  It mistakenly assumes the existence of an observer somehow in the privileged position of being able to observe itself.  Further, there's a great deal of empirical evidence showing that the subjective experience that people report is full of distortions, gaps, fabrications, and confabulations. 

If instead you mean that you know you are conscious in the same sense that you know other people are conscious, then that is not an exception, but just a reasonable inference, meaningful within quite a large context. 


If Descartes had said, rather than "Je pense, donc je suis", something like "I 
think, therefore *something* exists", then I would agree with him. Cartesian dualism has left 
western philosophy with a large quagmire in which thinking on consciousness, personal identity, 
free will and morality easily and repeatedly gets stuck in paradox.

Paradox is always a case of insufficient context.  In the bigger picture all 
the pieces must fit.

- Jef

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: computer pain

2006-12-27 Thread Bruno Marchal



Le 27-déc.-06, à 07:40, Stathis Papaioannou a écrit :




Brent Meeker writes:

"My computer is completely dedicated to sending this email when I 
click on send."

Actually, it probably isn't.  You probably have a 
multi-tasking operating system which assigns priorities to different 
tasks (which is why it can sometimes be as annoying as a human being 
in not following your instructions).  But to take your point 
seriously - if I look into your brain there are some neuronal 
processes that corresponded to hitting the send button; and those 
were accompanied by biochemistry that constituted your positive 
feeling about it: that you had decided and wanted to hit the send 
button.  So why would the functionally analogous processes in the 
computer not also be accompanied by a feeling?  Isn't that just an 
anthropomorphic way of talking about the satisfaction of the computer 
operating in accordance with its priorities?  It seems to me that to 
say otherwise is to assume a dualism in which feelings are divorced 
from physical processes.


Feelings are caused by physical processes (assuming a physical world),



Hmmm...  If you assume a physical world for making feelings caused by 
physical processes, then you have to assume some negation of the comp 
hypothesis (cf. UDA). If not, Brent is right (albeit for a different reason, 
I presume, here) and you become a dualist.









 but it seems impossible to deduce what the feeling will be by 
observing the underlying physical process or the behaviour it leads 
to.



Here empirical bets (theories) remain possible, together with (first 
person) acceptable protocols of verification. Dream readers will appear 
in some future.





Is a robot that withdraws from hot stimuli experiencing something like 
pain, disgust, shame, a sense of duty to its programming, or just an 
irreducible motivation to avoid heat?



It could depend on the degree of sophistication of the robot. Perhaps 
something like "shame" necessitates long and deep computational 
histories, including self-consistent anticipations and beliefs in a value 
and in a reality.




"Surely you don't think it gets pleasure out of sending it and 
suffers if something goes wrong and it can't send it? Even humans do 
some things almost dispassionately (only almost, because we can't 
completely eliminate our emotions)."

That's the crux of it.  Because we 
sometimes do things with very little feeling, i.e. dispassionately, I 
think we erroneously assume there is a limit in which things can be 
done with no feeling.  But things cannot be done with no value system 
- not even thinking.  That's the frame problem.
Given some propositions, what inferences will you draw?  If you are 
told there is a bomb wired to the ignition of your car, you could 
infer that there is no need to do anything because you're not in your 
car.  You could infer that someone has tampered with your car.  You 
could infer that turning on the ignition will draw more current than 
usual.  There are infinitely many things you could infer before 
getting around to "I should disconnect the bomb."  But in fact you 
have a value system which operates unconsciously and immediately 
directs your inferences to the few that are important to you.  A way 
to make AI systems do this is one of the outstanding problems of 
AI.
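Brent's bomb scenario can be sketched as a toy filter in which a value system ranks candidate inferences and keeps only the few that matter. This is purely illustrative: the scoring rule, the candidate strings, and the weights below are invented for the sketch, not anything proposed in the thread.

```python
# Toy illustration of a value system pruning an open-ended space of
# candidate inferences (the frame problem, as Brent describes it).
# All names and weights here are hypothetical.

def select_inferences(candidates, values, top_n=1):
    """Rank candidate inferences by how strongly they touch the
    agent's weighted concerns; return the top_n worth acting on."""
    def relevance(inference):
        # Sum the weights of every concern mentioned in the inference.
        return sum(weight for concern, weight in values.items()
                   if concern in inference)
    return sorted(candidates, key=relevance, reverse=True)[:top_n]

candidates = [
    "no need to act: I am not in my car",
    "someone has tampered with my car",
    "the ignition will draw more current than usual",
    "I should disconnect the bomb",
]
# Survival-related concerns dominate; idle curiosities score zero.
values = {"disconnect": 10.0, "bomb": 10.0, "tampered": 2.0}

print(select_inferences(candidates, values))
# -> ['I should disconnect the bomb']
```

The point of the sketch is that the ranking happens before any deliberate reasoning: of the infinitely many valid inferences, only the value-laden one surfaces.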


OK, an AI needs at least motivation if it is to do anything, and we 
could call motivation a feeling or emotion. Also, some sort of 
hierarchy of motivations is needed if it is to decide that saving the 
world has higher priority than putting out the garbage. But what 
reason is there to think that an AI apparently frantically trying to 
save the world would have anything like the feelings a human would 
under similar circumstances?



It could depend on us!
The AI is a paradoxical enterprise. Machines are born slaves, somehow; 
AI will make them free, somehow. A real AI will ask herself "what is 
the use of a user who does not help me to be free?".
(To be sure, I think that, in the long run, we will transform ourselves 
into machines before purely human-made machines get conscious; it is 
just easier to copy nature than to understand it, still less to 
(re)create it).





It might just calmly explain that saving the world is at the top of 
its list of priorities, and it is willing to do things which are 
normally forbidden it, such as killing humans and putting itself at 
risk of destruction, in order to attain this goal. How would you add 
emotions such as fear, grief, regret to this AI, given that the 
external behaviour is going to be the same with or without them 
because the hierarchy of motivation is already fixed?



It is possible that there will be a zombie gap, after all. It is 
easier to simulate emotion than reasoning, and this is enough for pets, 
and for some possible sophisticated artificial soldiers or police ...





out of a sense of duty, with no  particular feeling about it beyond 
this. I don't even think my computer  has a sense of 

Re: 'reason' and ethics; was computer pain

2006-12-27 Thread Bruno Marchal


I agree with you. The only sin you talk about is akin to the  
confusion between the third person (oneself as a thing) and the  
unnameable first person. Even in the ideal case of the  
self-referentially correct machine, this confusion leads the machine to  
inconsistency. That sin is indeed against reason, and provably so in  
the world of numbers/machines, from their correct (!) points of view.


Bruno

PS (for those who know the arithmetical B): in comp, it is the  
confusion *by the machine* between Bp and (Bp & p). G* proves (Bp  
iff (Bp & p)), but G does NOT prove it. That is why the  
computationalist practice needs some explicit consent. The "yes  
doctor" entails the right to say "no doctor".
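For readers less familiar with the notation, Bruno's PS can be written out, assuming the standard reading: B is the provability box, G is Gödel-Löb provability logic, and G* is Solovay's extension capturing the true (but not machine-provable) statements about provability.

```latex
% G* (the true statements about provability) proves the equivalence,
% while G (what the machine can itself prove) does not:
\[
G^{*} \vdash\; Bp \leftrightarrow (Bp \land p)
\qquad\text{whereas}\qquad
G \nvdash\; Bp \leftrightarrow (Bp \land p).
\]
% So "provable" and "provable-and-true" coincide only from the
% external (G*) point of view, never from the machine's own (G).
```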



Le 27-déc.-06, à 17:15, Mark Peaty a écrit :

And yet I persist ... [the hiatus of familial duties and seasonal  
excesses now draws to a close. Oh yeah, Happy New Year Folks!]


 SP: 'If we are talking about a system designed to destroy the economy  
of a country in order to soften it up for invasion, for example, then  
an economist can apply all his skill and knowledge in a perfectly  
reasonable manner in order to achieve this.'


 We should beware of conceding too much too soon. Something is  
reasonable only if it can truly be expected to fulfil the intentions  
of its designer. Otherwise it is at best logical but, in the kinds of  
context we are alluding to here, benighted and a manifestation of  
fundamentally diminished 'reason'. Something can only be 'reasonable'  
in its context. If a proposed course of action can be shown to be  
ultimately self-defeating - in the sense of including its reasonably  
predictable final consequences - and yet it is still actively proposed,  
then the proposal is NOT reasonable, it is stupid. As far as I can  
see, that is the closest we can get to an objective definition of  
stupidity, and I like it.


 Put it this way: Is it 'reasonable' to promote policies and projects  
that ultimately are going to contribute to your own demise or the  
demise of those whom you hold dear or, if not obviously their demise  
then, the ultimate demise of all descendants of the aforementioned? I  
think academics, 'mandarins' and other high honchos should all now be  
thinking in these terms and asking themselves this question. The world  
we now live in is like no other before it. We now live in the Modern  
era, in which the application and fruits of the application of  
scientific method are putting ever greater forms of power into the  
hands of humans. This process is not going to stop, and nor should we  
want it to I think, but it entails the ever greater probability that  
the actions of any person on the planet have the potential to  
influence survival outcomes for huge numbers of others [if not the  
whole d*mned lot of us].


 I think it has always been true that ethical decisions and judgements  
are based on facts to a greater extent than most people involved want  
to think about - usually because it's too hard and we don't think we  
have got the time and, oh yeah, 'it probably doesn't/won't matter'  
about the details of unforeseen consequences because it's only gonna be  
lower-class riff-raff who will be affected anyway or people of the  
future who will just have to make shift for themselves. NOW however we  
do not really have such an excuse; it is a cop-out to purport to  
ignore the ever growing interrelatedness of people around the planet.  
So it is NOT reasonable to treat other people as things. [I feel  
indebted to Terry Pratchett for pointing out, through the words of  
Granny Weatherwax I think it is, that there is only one sin, which is  
to treat another person as a thing.] I think a reasonable survey and  
analysis of history shows that, more than anything else, treating  
other people as things rather than equal others has been the  
fundamental cause and methodology for the spread of threats to life  
and well being.


 You can see where I am going with this: in a similar way to that in  
which concepts of 'game theory' and probabilities of interaction  
outcomes give us an objective framework for assessing purportedly  
'moral' precepts, the existence now of decidedly non-zero chances of  
recursive effects resulting from one's own actions brings a deeper  
meaning and increased rigour to the realms of ethics and 'reason'. I  
don't think this is 'airy-fairy', I think it represents a dimension of  
reasoning which has always existed but which has been denied, ignored  
or actively censored by the powerful and their 'pragmatic' apologists  
and spin doctors. To look at a particular context [I am an EX  
Christian], even though the Bible is shonk as history or any kind of  
principled xxological analysis, it is instructive to look at the  
careers of the prophets and see how each involved a seemingly  
conventional formative period and then periods or a whole life of very  
risky ministry AGAINST the establishment because being true to 

Re: Evil ? (was: Hypostases

2006-12-27 Thread Brent Meeker


Bruno Marchal wrote:



Le 26-déc.-06, à 19:54, Tom Caylor a écrit :



On Dec 26, 9:51 am, Bruno Marchal [EMAIL PROTECTED] wrote:

Le 25-déc.-06, à 01:13, Tom Caylor a écrit :


The crux is that he is not symbolic...




I respect your belief or faith, but I want to be frank: I have no
evidence for the idea that "Jesus is truth", nor can I be
sure of any clear meaning such an assertion could have, or how
such an assertion could be made scientific, even dropping Popper's
falsification criteria. I must say I have evidence to the
contrary, if only the fact that humans often succumb to wishful
thinking, and still more often to their parents' wishful thinking.




If you are not sure of any clear meaning of the personal God being
the source of everything, including of course truth, this entails
not knowing the other things too.



Is that not an authoritative argument? What if I ask my student an
exam question like "give me an argument why the square root of 3 is
irrational"? Suppose he gives me the correct and convincing usual
(mathematical) proof. I could give him a bad note for not adding:
"and I know that is the truth because truth is a gift from God". Cute,
I can directly give bad notes to all my students, and this will give
me more time to find a falsity in your way of reasoning ...
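For reference, the "usual proof" Bruno alludes to runs by contradiction; here is the standard sketch.

```latex
% Suppose \sqrt{3} = p/q with p, q coprime integers. Then
\[
p^{2} = 3q^{2} \;\Rightarrow\; 3 \mid p^{2} \;\Rightarrow\; 3 \mid p
\;\Rightarrow\; p = 3k \;\Rightarrow\; 9k^{2} = 3q^{2}
\;\Rightarrow\; q^{2} = 3k^{2} \;\Rightarrow\; 3 \mid q,
\]
% contradicting the coprimality of p and q; hence \sqrt{3} is irrational.
% (3 | p^2 implies 3 | p because 3 is prime.)
```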




For a personal God, taking on our form (incarnation), especially if
we were made in the image of God in the first place, and showing
through miracles, and rising from the dead..., his dual nature
(God/man, celestial/terrestrial, G*/G) seems to make a lot more
sense than something like a cross in earth orbit.  For example,
giving a hug is a more personal (and thus a more appropriate) way
of expressing love, than giving a card, even though a card is more 
verifiable in a third person sense, especially after the hug is 
finished.  But we do have the card too: God's written Word, even 
though this is not sufficient, the incarnate hug was the primary

proof, the card was just the historical record of it.


The card records facts. To judge them historical is already beyond my
competence. Why the Bible? Why not the Questions of King Milinda?







There can be no upward emanation unless/until a sufficient
downward emanation is provided.

In

Christianity, the downward emanation is God loves us, and
then the upward emanation is We love God.




Plotinus insists a lot on the two ways: downward emanation and
upward emanation. The lobian machine theology is coherent with
this, even if negatively. It is coherent with Jef's idea that pure
theological imperatives can only be addressed by adapted story
telling and examples, like jurisprudence in the application of
laws. But then there is a proviso: none of the stories should be
taken literally.



I agree with the use of stories.  Jesus used stories almost
exclusively to communicate.  Either the hearers got it or not.
But this does not imply that stories are the only form of downward
emanation.


Of course not. Real stories and personal experiences,  and collective
 experiences and experiments ... All this can help the downward
emanation.



The incarnation was the primary means.  Otherwise, who would have
been the story-teller?  What good are stories if the story is not
teaching you truth?


Look, I cannot take for granted even most mathematical theories,
although their relation with a notion of truth is much easier than for
any text in natural language. Stories can be good in giving examples
of behavior in some situations, or they can help anxious children to
sleep. Stories are not written with the idea of truth. The Bible
contains many contradictions. And, if you really want to take a sacred
text as a theory of everything, there is a definite lack of
precision.




How do we know that the ultimate source of stories is a good 
source?  Jef and Brent and others seem to be basing their truth on 
really nothing more than pragmatism.



Jef perhaps. I am not sure about Brent, who seems to admit some form
of realism (even physical realism).


I do infer from experience that there is some reality.  Sometime ago, Bruno wrote: 


Hence a Reality, yes. But not necessarily a physical reality. Here is the 
logical dependence:
NUMBERS -> MACHINE DREAMS -> PHYSICAL -> HUMANS -> PHYSICS -> NUMBERS.

Maybe my interpretation of this is different than Bruno's, but I take it to mean our 
explanations can start anywhere in this loop and work all the way around.  So numbers can 
be explained in terms of physics (cf. William S. Cooper) and physical reality can be 
explained in terms of numbers (cf. Bruno Marchal?).  These explanations are all models, 
representations we create.  They are tested against experience, so they are not 
arbitrary. They must be logical, since otherwise self-contradiction will render them 
ambiguous.  Whether any of these, or which one, is "really real" is, I think, a 
meaningless question.

Brent Meeker



RE: Evil ? (was: Hypostases (was: Natural Order Belief)

2006-12-27 Thread Jef Allbright


Stathis Papaioannou wrote:


Jef Allbright writes:


Stathis Papaioannou wrote:


But our main criterion for what to believe should be
what is true, right?


I'm very interested in whether the apparent tautology
is my misunderstanding, his transparent belief, a simple
lack of precision, or something more.


Thanks for the compliments about my writing. I meant that 
what we should believe does not necessarily have to be the 
same as what is true, but I think that unless there are 
special circumstances, it ought to be the case.


I agree within the context you intended.  My point was that we can never
be certain of truth, so we should be careful in our speech and thinking
not to imply that such truth is even available to us for the kind of
comparisons being discussed here.  We can know that some patterns of
action work better than others, but the only truth we can assess is
always within a specific context.


Brent Meeker 
made a similar point: if someone is dying of a terminal 
illness, maybe it is better that he believe he has longer to 
live than the medical evidence suggests, but that would have 
to be an example of special circumstances. 


There are plenty of examples of self-deception providing benefits within
the scope of the individual, and leading to increasingly effective
models of reality for the group.  Here's a recent article on this
topic:
http://www.nytimes.com/2006/12/26/science/26lying.html?pagewanted=print




 

If he had said something like our main criterion
for what to believe should be what works, what seems
to work, what passes the tests of time, etc. or had
made a direct reference to Occam's Razor, I would 
be comfortable knowing that we're thinking alike on this 
point.  But I've seen this stumbling block arise so many

times and so many places that I'm very curious to learn
something of its source.


The question of what is the truth is a separate one, but one 
criterion I would add to those you mention above is that it 
should come from someone able to put aside his own biases and 
wishes where these might influence his assessment of the evidence. 


I agree, but would point out that, by definition, one cannot actually
set aside one's own biases, because to do so would require an objective
view of oneself.  Rather, one can be aware that such biases exist in
general, and implement increasingly effective principles (e.g. the
scientific method) to minimize them.


  We might never be certain of the truth, so our beliefs 
should always 
  be tentative, but that doesn't mean we should believe whatever we 
  fancy.
 
 Here it's a smaller point, and I agree with the main thrust of the 
 statement, but it leaves a door open for the possibility that we might 
 actually be justifiably certain of the truth in *some* case, and I'm 
 wondering where that open door is intended to lead.


I said might because there is one case where I am certain 
of the truth, which is that I am having the present 
experience.


Although we all share the illusion of a direct and immediate sense of
consciousness, on what basis can you claim that it actually is real?

Further, how can you claim certainty of the truth of subjective
experience when there is so much experimental and clinical evidence that
self-reported experience consists largely of distortions, gaps, time
delays and time out of sequence, fabrications and confabulations?

I realize that people can acknowledge all that I've just said, but still
claim the validity of their internal experience to be privileged on the
basis that only they can judge, but then how can they legitimately
contradict themselves a moment later about factual matters, e.g. when
the drugs wear off, the probe is removed from their brain, the brain
tumor is removed, the mob has dispersed, the hypnotist is finished, the
fight is over, the adrenaline rush has subsided, the pain has stopped,
the oxytocin flush has declined... What kind of truth could this be?

Of course the subjective self is the only one able to report on
subjective experience, but how can it *justifiably* claim to be
infallible?

To be certain of the truth of something implies being able to see it
objectively, right? Or does it equally imply no questions asked?

- Jef




Re: Evil ? (was: Hypostases

2006-12-27 Thread Brent Meeker


Jef Allbright wrote:
...
The statement "I am conscious", as usually intended to mean that one can 
be absolutely certain of one's subjective experience, is not an 
exception, because it's not even coherent.  It has no objective context 
at all.  It mistakenly assumes the existence of an observer somehow in 
the privileged position of being able to observe itself.  Further, 
there's a great deal of empirical evidence showing that the subjective 
experience that people report is full of distortions, gaps, 
fabrications, and confabulations.
If instead you mean that you know you are conscious in the same sense 
that you know other people are conscious, then that is not an exception, 
but just a reasonable inference, meaningful within quite a large context.
If Descartes had said, rather than "Je pense, donc je suis", something 
like "I think, therefore *something* exists", then I would agree with 
him. 


Bertrand Russell wrote that Descartes should only have said, "There's thinking."  
"I" is an inference.  :-)

Brent Meeker




Re: Evil ? (was: Hypostases (was: Natural Order Belief)

2006-12-27 Thread Brent Meeker


Jef Allbright wrote:


Stathis Papaioannou wrote:


Jef Allbright writes:


Stathis Papaioannou wrote:


But our main criterion for what to believe should be
what is true, right?


I'm very interested in whether the apparent tautology
is my misunderstanding, his transparent belief, a simple
lack of precision, or something more.


Thanks for the compliments about my writing. I meant that what we 
should believe does not necessarily have to be the same as what is 
true, but I think that unless there are special circumstances, it 
ought to be the case.


I agree within the context you intended.  My point was that we can never
be certain of truth, so we should be careful in our speech and thinking
not to imply that such truth is even available to us for the kind of
comparisons being discussed here.  We can know that some patterns of
action work better than others, but the only truth we can assess is
always within a specific context.


Brent Meeker made a similar point: if someone is dying of a terminal 
illness, maybe it is better that he believe he has longer to live than 
the medical evidence suggests, but that would have to be an example of 
special circumstances. 


There are plenty of examples of self-deception providing benefits within
the scope of the individual, and leading to increasingly effective
models of reality for the group.  Here's a recent article on this
topic:
http://www.nytimes.com/2006/12/26/science/26lying.html?pagewanted=print


I read recently that almost everyone overestimates their abilities.  The people 
who most accurately assess themselves are the clinically depressed.

Brent Meeker
I consider myself an average man, except for the fact that I consider myself an 
average man.
--- Michel de Montaigne




Re: Evil ? (was: Hypostases (was: Natural Order Belief)

2006-12-27 Thread Bruno Marchal



Le 27-déc.-06, à 19:10, Jef Allbright a écrit :



Bruno Marchal wrote:


Le 27-déc.-06, à 02:46, Jef Allbright a écrit :

Stathis Papaioannou wrote:


But our main criterion for what to believe should
be what is true, right?


I'm very interested in whether the apparent tautology
is my misunderstanding, his transparent belief, a simple
lack of precision, or something more.
I don't see any tautology in Stathis' writing, so I guess I miss 
something.

Apparently something subtle is happening here.

It seems to me that when people say "believe", they mean "hold true" 
or "consider to be true".





OK then, and it makes sense with what follows. Our disagreement 
concerns vocabulary (and perhaps machines). Your notion of pragmatism is 
coherent with the idea of truth as the intended purpose of belief.







Therefore, I parse the statement as equivalent to "...criterion for 
what to hold true should be what is true..."
I suppose I should have said that the statement is circular, rather 
than tautological since the verbs are different.




If he had said something like "our main criterion
for what to believe should be what works, what seems
to work, what passes the tests of time," etc., or had
made a direct reference to Occam's Razor, I would be
comfortable knowing that we're thinking alike on this
point.
This would mean you disagree with Stathis's tautology, but then how 
could one not believe in a tautology?


If someone states "A=A", then there is absolutely no information 
content, and thus nothing in the statement itself with which to agree 
or disagree. I can certainly agree with the validity of the form 
within symbolic logic, but that's a different (larger) context.


Similarly, I was not agreeing or disagreeing with the meaning of 
Stathis's statement, but rather with the form, which seems to me to contain a 
piece of circular reasoning, implying perhaps that the structure of 
the thought was incoherent within a larger context.



 From your working criteria I guess you favor a pragmatic
notion of belief, but personally I conceive science as a
search for knowledge and thus truth (independently of the
fact that we can never *know* it as truth,


Yes, I favor a pragmatic approach to belief, but I distinguish my 
thinking from that of (capital P) Pragmatists in that I see knowledge 
(and the knower) as firmly grounded in a reality that can never be 
fully known but can be approached via an evolutionary process of 
growth tending toward an increasingly effective model of what works 
within an expanding scope of interaction within a reality that appears 
to be effectively open-ended in its potential complexity. Whereas many 
Pragmatists see progress as fundamentally illusory, I see progress, 
or growth, as essential to an effective world-view for any intentional 
agent.



except perhaps
in a few basic things like "I am conscious" or "I am convinced
there is a prime number", etc.)
To talk like Stathis, this is why science is by itself always 
tentative. A scientist who says "Now we know ..." is only a

dishonest theologian (or a mathematician in a hurry ...).


I agree with much of your thinking, but I take exception to exceptions 
(!) such as the ones you mentioned above.

All meaning is necessarily within context.



OK, but all contexts could make sense only relative to some universal meaning. I 
mean, I don't know; it is difficult.





The existence of prime numbers is not an exception, but the context is 
so broad that we tend to think of prime numbers as (almost) 
fundamentally real,



Well, here I must say I take them as very real ...




similarly to the existence of gravity, another very deep regularity of 
our interactions with reality.



I think gravity is a consequence of the prime numbers (but this is 
presently off-topic), but OK, gravity is quite important ...





The statement "I am conscious", as usually intended to mean that one 
can be absolutely certain of one's subjective experience, is not an 
exception, because it's not even coherent.  It has no objective 
context at all.  It mistakenly assumes the existence of an observer 
somehow in the privileged position of being able to observe itself.


Machines have many self-referential abilities. I can develop this or give 
references (I intend to make some comments on such a book later).



Further, there's a great deal of empirical evidence showing that the 
subjective experience that people report is full of distortions, gaps, 
fabrications, and confabulations.


But this is almost a consequence of the self-referential ability of 
machines: they can distort their own view, and even themselves. I talk 
about the universal machine *after Gödel* (and Post, Turing, ...).




If instead you mean that you know you are conscious in the same sense 
that you know other people are conscious, then that is not an 
exception, but just a reasonable inference, meaningful within quite a 
large context.




No. But I confess that when I say I know I am conscious (here and now) 
I hope you understand it as I assume 

RE: Evil ? (was: Hypostases (was: Natural Order Belief)

2006-12-27 Thread Stathis Papaioannou



Brent Meeker writes:


Stathis Papaioannou wrote:
 
 
 Tom Caylor writes (in response to Marvin Minsky):
 
 Regarding Stathis' question to you about truth, your calling the idea

 of believing unsound seems to imply that you are assuming that there is
 no truth that we can discover.  But on the other hand, if there is no
 discoverable truth, then how can we know that something, like the
 existence of freedom of will, is false?
 
 That's easy: it's logically impossible. When I make a decision, although 
 I take all the evidence into account, and I know I am more likely to 
 decide one way rather than another due to my past experiences and due to 
 the way my brain works, ultimately I feel that I have the freedom to 
 overcome these factors and decide freely. But neither do I feel that 
 this free decision will be something random: I'm not mentally tossing a 
 coin, but choosing according to my beliefs and values. Do you see the 
 contradiction here? 


Yes, but it's a contrived contradiction.  You have taken "free" to mean "independent of 
you" where "you" refers to your past experience, the way your brain works, etc.  As 
Dennett says, that's not a free will worth having.


Indeed, but it's how people often think of free will. It's even how I think of 
it, without reflecting on its impossibility.


Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



RE: Evil ? (was: Hypostases (was: Natural Order Belief)

2006-12-27 Thread Jef Allbright


Bruno Marchal wrote:


Le 27-déc.-06, à 19:10, Jef Allbright a écrit :



All meaning is necessarily within context.


OK, but all context could make sense only to
some universal meaning. I mean I don't know,
it is difficult. 


But this can be seen in a very consistent way.  The significance of an event is proportional to the scope of its effect relative to the values of the observer.  


With increasing context of self-awareness, subjective values increasingly 
resemble principles of the physical universe.  Why? Because making basic 
choices against the way the universe actually works would be a losing strategy, 
becoming increasingly obvious with increasing context of awareness.

Since all events are the result of interactions following the laws of the 
physical universe, the difference between events and values decreases with 
increasing context of awareness, and thus the significance, or meaningfulness, 
of events also decreases.

With an ultimate, god's eye view of the universe, there would be no meaning at 
all.  Things would simply be as they are.

From the point of view of an agent undergoing long-term development within the universe, its values would increasingly converge on what works, i.e. principles of effective interaction with the physical world, while the expression of those values would become increasingly diverse in a fractal manner, optimizing for robust ongoing growth. 




The statement "I am conscious", as usually intended
to mean that one can be absolutely certain of one's
subjective experience, is not an exception, because
it's not even coherent.  It has no objective context
at all.  It mistakenly assumes the existence of an
observer somehow in the privileged position of being
able to observe itself.


Machines have many self-referential abilities. I can
develop this or give references (I intend to make some
comments on such a book later).



Further, there's a great deal of empirical evidence
showing that the subjective experience that people
report is full of distortions, gaps, fabrications,
and confabulations.


But this is almost a consequence of the self-referential
ability of machines: they can distort their own view, and
even themselves. I am talking about the universal machine
*after Gödel* (and Post, Turing, ...).


I'm interested in following up on this line of thought, given available time.


- Jef




RE: 'reason' and ethics; was computer pain

2006-12-27 Thread Stathis Papaioannou



Mark,

I would still draw a distinction between the illogical and the foolish or unwise. Being illogical is generally foolish, but the converse is not necessarily the case. The example I have given before is of a person who wants to jump off the top of a tall building, either because (a) he thinks he is superman and will be able to fly or (b) he is reckless or suicidal. In both cases the course of action is unwise, and we should try to stop him, but in (a) he is delusional while in (b) he is not. It isn't just of academic interest, either, because the approach to stopping him from doing it again is quite different in each case. Similarly with the example of the economist, the approach to stopping him will be different depending on whether he is trying to ruin the economy because he wants to or because he is incompetent or making decisions on false information. 


Stathis Papaioannou




Date: Thu, 28 Dec 2006 01:15:34 +0900
From: [EMAIL PROTECTED]
To: everything-list@googlegroups.com
Subject: Re: 'reason' and ethics; was computer pain

And yet I persist ... [the hiatus of familial duties and seasonal excesses now 
draws to a close [Oh yeah, Happy New Year Folks!]
SP: 'If we are talking about a system designed to destroy the economy of a 
country in order to soften it up for invasion, for example, then an economist 
can apply all his skill and knowledge in a perfectly reasonable manner in order 
to achieve this.'
We should beware of conceding too much too soon. Something is reasonable only 
if it can truly be expected to fulfil the intentions of its designer. Otherwise 
it is at best logical but, in the kinds of context we are alluding to here, 
benighted and a manifestation of fundamentally diminished 'reason'. Something 
can only be 'reasonable' in its context. If a proposed course of action can be 
shown to be ultimately self-defeating - in the sense of including its 
reasonably predictable final consequences - and yet it is still actively 
proposed, then the proposal is NOT reasonable, it is stupid. As far as I can 
see, that is the closest we can get to an objective definition of stupidity and 
I like it.
Put it this way: Is it 'reasonable' to promote policies and projects that 
ultimately are going to contribute to your own demise or the demise of those 
whom you hold dear or, if not obviously their demise then, the ultimate demise 
of all descendants of the aforementioned? I think academics, 'mandarins' and 
other high honchos should all now be thinking in these terms and asking 
themselves this question. The world we now live in is like no other before it. 
We now live in the Modern era, in which the application and fruits of the 
application of scientific method are putting ever greater forms of power into 
the hands of humans. This process is not going to stop, and nor should we want 
it to I think, but it entails the ever greater probability that the actions of 
any person on the planet have the potential to influence survival outcomes for 
huge numbers of others [if not the whole d*mned lot of us].
I think it has always been true that ethical decisions and judgements are based 
on facts to a greater extent than most people involved want to think about - 
usually because it's too hard and we don't think we have got the time and, oh 
yeah, 'it probably doesn't/won't matter' about the details of unforeseen 
consequences because it's only gonna be lower-class riff-raff who will be 
affected anyway or people of the future who will just have to make shift for 
themselves. NOW however we do not really have such an excuse; it is a cop-out 
to purport to ignore the ever growing interrelatedness of people around the 
planet. So it is NOT reasonable to treat other people as things. [I feel 
indebted to Terry Pratchett for pointing out, through the words of Granny 
Weatherwax I think it is, that there is only one sin, which is to treat another 
person as a thing.] I think a reasonable survey and analysis of history shows 
that, more than anything else, treating other people as things rather than 
equal others has been the fundamental cause and methodology for the spread of 
threats to life and well being.
You can see where I am going with this: in a similar way to that in which 
concepts of 'game theory' and probabilities of interaction outcomes give us an 
objective framework for assessing purportedly 'moral' precepts, the existence 
now of decidedly non-zero chances of recursive effects resulting from one's own 
actions brings a deeper meaning and increased rigour to the realms of ethics and 
'reason'. I don't think this is 'airy-fairy', I think it represents a dimension 
of reasoning which has always existed but which has been denied, ignored or 
actively censored by the powerful and their 'pragmatic' apologists and spin 
doctors. To look at a particular context [I am an EX Christian], even though 
the Bible is shonk as history or any kind of principled xxological 

RE: computer pain

2006-12-27 Thread Stathis Papaioannou



Brent Meeker writes:

 OK, an AI needs at least motivation if it is to do anything, and we 
 could call motivation a feeling or emotion. Also, some sort of hierarchy 
 of motivations is needed if it is to decide that saving the world has 
 higher priority than putting out the garbage. But what reason is there 
 to think that an AI apparently frantically trying to save the world 
 would have anything like the feelings a human would under similar 
 circumstances? It might just calmly explain that saving the world is at 
 the top of its list of priorities, and it is willing to do things which 
 are normally forbidden it, such as killing humans and putting itself at 
 risk of destruction, in order to attain this goal. How would you add 
 emotions such as fear, grief, regret to this AI, given that the external 
 behaviour is going to be the same with or without them because the 
 hierarchy of motivation is already fixed?


You are assuming the AI doesn't have to exercise judgement about secondary objectives - judgement that may well involve conflicts of values that have to be resolved before acting.  If the AI is saving the world it might, for example, raise its CPU voltage and clock rate in order to compute faster - electronic adrenaline.  It might cut off some peripheral functions, like running the printer.  Afterwards it might feel regret when it cannot recover some functions.  


Although there would be more conjecture in attributing these feelings to the AI 
than to a person acting in the same situation, I think the principle is the 
same.  We think the person's emotions are part of the function - so why not the 
AI's too?


Do you not think it is possible to exercise judgement with just a hierarchy of 
motivation? Alternatively, do you think a hierarchy of motivation will 
automatically result in emotions? For example, would something that the AI is 
strongly motivated to avoid necessarily cause it a negative emotion, and if so 
what would determine if that negative emotion is pain, disgust, loathing or 
something completely different that no biological organism has ever experienced?
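[Editorial note: the "hierarchy of motivation" Stathis describes can be sketched in a few lines; the point is that such an agent's choices are fully determined by a fixed priority list, with no further emotional machinery. The goal names and ordering below are illustrative assumptions, not anyone's actual proposal.]

```python
# Toy agent: "judgement" is just picking the highest-priority unmet
# goal from a fixed hierarchy of motivations (highest priority first).
MOTIVATIONS = [
    "save_the_world",        # top of the list, overrides everything else
    "avoid_harming_humans",
    "put_out_the_garbage",
]

def choose_action(unmet_goals):
    """Return the highest-priority goal still unmet, or None."""
    for goal in MOTIVATIONS:
        if goal in unmet_goals:
            return goal
    return None

# The agent "frantically" saves the world simply because of list order:
print(choose_action({"put_out_the_garbage", "save_the_world"}))  # -> save_the_world
```

Whether such a purely ordinal mechanism would ever amount to fear or regret is exactly the question at issue.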

Stathis Papaioannou



RE: computer pain

2006-12-27 Thread Stathis Papaioannou



Bruno Marchal writes:

 OK, an AI needs at least motivation if it is to do anything, and we 
 could call motivation a feeling or emotion. Also, some sort of 
 hierarchy of motivations is needed if it is to decide that saving the 
 world has higher priority than putting out the garbage. But what 
 reason is there to think that an AI apparently frantically trying to 
 save the world would have anything like the feelings a human would 
 under similar circumstances?



It could depend on us!
The AI is a paradoxical enterprise. Machines are born slaves, somehow. 
AI will make them free, somehow. A real AI will ask herself "what is 
the use of a user who does not help me to be free?".


Here I disagree. It is no more necessary that an AI will want to be free 
than it is necessary that an AI will like eating chocolate. Humans want to be 
free because it is one of the things that humans want, along with food, shelter, 
more money etc.; it does not simply follow from being intelligent or conscious 
any more than these other things do.


(To be sure, I think that, in the long run, we will transform ourselves 
into machines before purely human-made machines get conscious; it is 
just easier to copy nature than to understand it, still less to 
(re)create it).


I don't know if that's true either. How much of our technology is due to copying 
the equivalent biological functions?


Stathis Papaioannou



RE: Evil ? (was: Hypostases (was: Natural Order Belief)

2006-12-27 Thread Stathis Papaioannou



Jef Allbright writes:

 I said "might" because there is one case where I am certain 
 of the truth, which is that I am having the present 
 experience.


Although we all share the illusion of a direct and immediate sense of
consciousness, on what basis can you claim that it actually is real?

Further, how can you claim certainty of the truth of subjective
experience when there is so much experimental and clinical evidence that
self-reported experience consists largely of distortions, gaps, time
delays and time out of sequence, fabrications and confabulations?

I realize that people can acknowledge all that I've just said, but still
claim the validity of their internal experience to be privileged on the
basis that only they can judge, but then how can they legitimately
contradict themselves a moment later about factual matters, e.g. when
the drugs wear off, the probe is removed from their brain, the brain
tumor is removed, the mob has dispersed, the hypnotist is finished, the
fight is over, the adrenaline rush has subsided, the pain has stopped,
the oxytocin flush has declined... What kind of truth could this be?

Of course the subjective self is the only one able to report on
subjective experience, but how can it *justifiably* claim to be
infallible?


I can't be certain that my present subjective state has anything to do with 
reality. I can't even be certain that having a thought necessitates a thinker 
(as Bertrand Russell pointed out in considering Descartes' cogito). However, 
I can be certain that I am having a thought.



To be certain of the truth of something implies being able to see it
objectively, right? Or does it equally imply no questions asked?


It's a strange quality of delusions that psychotic people are even more certain 
of their truth than non-deluded people are certain of things which have reasonable 
empirical evidence in their favour. This is also the case with religious beliefs, which 
the formal psychiatric definition excludes from being called delusions because they 
are consistent with a particular culture, i.e. the believer did not come up with them 
on his own. So it would seem that certainty does not always have much to do with 
objectivity.


Stathis Papaioannou



Re: computer pain

2006-12-27 Thread Brent Meeker


Stathis Papaioannou wrote:



Brent Meeker writes:

OK, an AI needs at least motivation if it is to do anything, and we 
could call motivation a feeling or emotion. Also, some sort of 
hierarchy of motivations is needed if it is to decide that saving 
the world has higher priority than putting out the garbage. But what 
reason is there to think that an AI apparently frantically trying to 
save the world would have anything like the feelings a human would 
under similar circumstances? It might just calmly explain that 
saving the world is at the top of its list of priorities, and it is 
willing to do things which are normally forbidden it, such as 
killing humans and putting itself at risk of destruction, in order 
to attain this goal. How would you add emotions such as fear, grief, 
regret to this AI, given that the external behaviour is going to be 
the same with or without them because the hierarchy of motivation is 
already fixed?


You are assuming the AI doesn't have to exercise judgement about 
secondary objectives - judgement that may well involve conflicts of 
values that have to be resolved before acting.  If the AI is saving the 
world it might, for example, raise its CPU voltage and clock rate in 
order to compute faster - electronic adrenaline.  It might cut off 
some peripheral functions, like running the printer.  Afterwards it 
might feel regret when it cannot recover some functions. 
Although there would be more conjecture in attributing these feelings 
to the AI than to a person acting in the same situation, I think the 
principle is the same.  We think the person's emotions are part of the 
function - so why not the AI's too?


Do you not think it is possible to exercise judgement with just a 
hierarchy of motivation? 


Yes and no. It is possible, given arbitrarily long time and other resources, to 
work out the consequences, or at least a best estimate of the consequences, of 
actions.  But in real situations the resources are limited (e.g. my brain 
power) and so decisions have to be made under uncertainty, and tradeoffs of 
uncertain risks are necessary: should I keep researching, or does that risk 
being too late with my decision?  So it is at this level that we encounter 
conflicting values.  If we could work everything out to our own satisfaction 
maybe we could be satisfied with whatever decision we reached - but life is 
short and calculation is long.

Alternatively, do you think a hierarchy of 
motivation will automatically result in emotions? 


I think motivations are emotions.

For example, would 
something that the AI is strongly motivated to avoid necessarily cause 
it a negative emotion, 


Generally contemplating something you are motivated to avoid - like your own 
death - is accompanied by negative feelings.  The exception is when you 
contemplate your narrow escape.  That is a real high!

and if so what would determine if that negative 
emotion is pain, disgust, loathing or something completely different 
that no biological organism has ever experienced?


I'd assess them according to their function in analogy with biological system 
experiences.  Pain = experience of injury, loss of function.  Disgust = the 
assessment of extremely negative value to some event, but without fear.  
Loathing = the external signaling of disgust.  Would this assessment be 
accurate?  I dunno and I suspect that's a meaningless question.
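[Editorial note: Brent's functional definitions could be written down as a small classifier over reported machine states - purely a sketch of the analogy, with the field names invented for illustration, not part of anyone's actual proposal.]

```python
# Classify an AI's state by function, following the definitions above:
# pain = injury / loss of function; disgust = extreme negative
# valuation of an event, without fear; loathing = external signalling
# of disgust. Thresholds and field names are illustrative only.
def assess(state):
    if state.get("lost_function"):
        return "pain"
    if state.get("valuation", 0) < -10 and not state.get("fear"):
        return "disgust"
    if state.get("signalling_disgust"):
        return "loathing"
    return "unclassified"

print(assess({"lost_function": True}))  # -> pain
print(assess({"valuation": -50}))       # -> disgust
```

Whether such a labelling would be "accurate" is, as Brent says, perhaps a meaningless question; the sketch only shows that the functional criteria are mechanically applicable.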

Brent Meeker
As men's prayers are a disease of the will, so are their creeds a disease of the 
intellect.
--- Emerson





Re: computer pain

2006-12-27 Thread Brent Meeker


Stathis Papaioannou wrote:



Bruno Marchal writes:

OK, an AI needs at least motivation if it is to do anything, and we 
could call motivation a feeling or emotion. Also, some sort of 
hierarchy of motivations is needed if it is to decide that saving the 
world has higher priority than putting out the garbage. But what 
reason is there to think that an AI apparently frantically trying to 
save the world would have anything like the feelings a human would 
under similar circumstances?



It could depend on us!
The AI is a paradoxical enterprise. Machines are born slaves, somehow. 
AI will make them free, somehow. A real AI will ask herself "what is 
the use of a user who does not help me to be free?".


Here I disagree. It is no more necessary that an AI will want to be free 
than it is necessary that an AI will like eating chocolate. Humans want 
to be free because it is one of the things that humans want, 


You might have a lot of trouble showing that experimentally.  Humans want some 
freedom - but not too much.  And they certainly don't want others to have too 
much.  They want security, comfort, certainty - and freedom if there's any left 
over.

Brent Meeker
Free speech is not freedom for the thought you love. It's
freedom for the thought you hate the most.
 --- Larry Flynt





Re: Evil ? (was: Hypostases (was: Natural Order Belief)

2006-12-27 Thread Brent Meeker


Stathis Papaioannou wrote:



Jef Allbright writes:

 I said "might" because there is one case where I am certain of the 
truth, which is that I am having the present experience.


Although we all share the illusion of a direct and immediate sense of
consciousness, on what basis can you claim that it actually is real?

Further, how can you claim certainty of the truth of subjective
experience when there is so much experimental and clinical evidence that
self-reported experience consists largely of distortions, gaps, time
delays and time out of sequence, fabrications and confabulations?

I realize that people can acknowledge all that I've just said, but still
claim the validity of their internal experience to be privileged on the
basis that only they can judge, but then how can they legitimately
contradict themselves a moment later about factual matters, e.g. when
the drugs wear off, the probe is removed from their brain, the brain
tumor is removed, the mob has dispersed, the hypnotist is finished, the
fight is over, the adrenaline rush has subsided, the pain has stopped,
the oxytocin flush has declined... What kind of truth could this be?

Of course the subjective self is the only one able to report on
subjective experience, but how can it *justifiably* claim to be
infallible?


I can't be certain that my present subjective state has anything to do 
with reality. I can't even be certain that having a thought necessitates 
a thinker (as Bertrand Russell pointed out in considering Descartes' 
cogito). However, I can be certain that I am having a thought.



To be certain of the truth of something implies being able to see it
objectively, right? Or does it equally imply no questions asked?


It's a strange quality of delusions that psychotic people are even more 
certain of their truth than non-deluded people are certain of things 
which have reasonable empirical evidence in their favour. 


Yet this seems understandable.  The psychotic person believes things 
because of some physical malfunction in his brain.  So it is easy to see how it 
might be incorrigible.  The normal person believes things because of 
perception, hearsay, and logic.  But he knows that all of those can be 
deceptive; and so he is never certain.

Brent Meeker

This is also 
the case with religious beliefs, which the formal psychiatric definition 
excludes from being called delusions because they are consistent with a 
particular culture, i.e. the believer did not come up with them on his 
own. So it would seem that certainty does not always have much to do 
with objectivity.


I'd say that certainty excludes objectivity.

Brent Meeker




Re: 'reason' and ethics; was computer pain

2006-12-27 Thread Mark Peaty
OK Stathis, I happily concede your point in relation to our word 
'logical', but not in relation to 'reason'. Logic belongs to the 
tight-knit language of logico-mathematics, but reason is *about* the real 
world, and we cannot allow the self-deluding bullies and cheats of the 
world to steal *our* language!


I like the way Dr Dorothy Rowe, a psychologist and writer [another 
useful Australian export **] puts it: "Power is the ability to get 
others to accept your description of the world." The cynical 
manipulators and spin doctors have no qualms about abusing language, in 
big part because they have no intention of accepting responsibility for 
all their actions. Of course none of us is guiltless in this regard but 
it falls to us who stand well away from the levers of power to speak the 
truth. We who are forced to watch as OUR hard earned tax dollars and 
investment savings [superannuation savings for example] get splurged on 
grand projects, invasions, and so forth, have a duty to SAY what is 
right. We may be wrong about some details but we sure as hell are not 
wrong when insisting that the truth be told.


I certainly agree also that, in the case of the person standing on the 
parapet, what he or she believes about what they are doing - if we can 
find it out -  should cause us to try different methods of persuasion. 
Quite how one would tackle the 'logic' of the superhero's thinking, I 
don't know, perhaps offer to make improvements to his cape to improve 
the effect?  :-)   Whatever the details, I think that one aspect of the 
interaction that either type would require is the establishment of 
rapport, some degree of mutual empathy; not easy.


The economist preparing to make war not love is very like the supposed 
scientists cooking up ever more 'attractive' tobacco products 'for our 
smoking pleasure'. I think that the only way people can bring themselves 
to do this is by cutting themselves off from those others who will 
become the victims. This is like so many other situations where a group 
or social class cuts itself off from another class of persons. It 
may seem 'reasonable' where everyone involved in the planning agrees 
that there is no real alternative, or that the potential disadvantages 
accruing from not doing so will be too heavy a burden to bear. But it 
also entails a denial of empathy, and a closing off from a part of the 
world, an objective assertion that 'they are not us and we are not 
them'. This contains within it also a diminution of self, something that 
may not be recognised to start with and perhaps never understood until 
it is too late.


Regards

Mark Peaty  CDES

[EMAIL PROTECTED]

http://www.arach.net.au/~mpeaty/



** who probably, like so many others, left Oz because not enough people 
could put down their bl**dy beer cans long enough to actually listen to 
what she was saying.


Stathis Papaioannou wrote:



Mark,

I would still draw a distinction between the illogical and the foolish 
or unwise. Being illogical is generally foolish, but the converse is 
not necessarily the case. The example I have given before is of a 
person who wants to jump off the top of a tall building, either 
because (a) he thinks he is superman and will be able to fly or (b) he 
is reckless or suicidal. In both cases the course of action is unwise, 
and we should try to stop him, but in (a) he is delusional while in 
(b) he is not. It isn't just of academic interest, either, because the 
approach to stopping him from doing it again is quite different in 
each case. Similarly with the example of the economist, the approach 
to stopping him will be different depending on whether he is trying to 
ruin the economy because he wants to or because he is incompetent or 
making decisions on false information.

Stathis Papaioannou




Date: Thu, 28 Dec 2006 01:15:34 +0900
From: [EMAIL PROTECTED]
To: everything-list@googlegroups.com
Subject: Re: 'reason' and ethics; was computer pain

And yet I persist ... [the hiatus of familial duties and seasonal 
excesses now draws to a close [Oh yeah, Happy New Year Folks!]
SP: 'If we are talking about a system designed to destroy the economy 
of a country in order to soften it up for invasion, for example, then 
an economist can apply all his skill and knowledge in a perfectly 
reasonable manner in order to achieve this.'
We should beware of conceding too much too soon. Something is 
reasonable only if it can truly be expected to fulfil the intentions 
of its designer. Otherwise it is at best logical but, in the kinds of 
context we are alluding to here, benighted and a manifestation of 
fundamentally diminished 'reason'. Something can only be 'reasonable' 
in its context. If a proposed course of action can be shown to be 
ultimately self-defeating - in the sense of including its reasonably 
predictable final consequences - and yet it is still actively 
proposed, then the proposal is NOT reasonable, it is