On Mon, Jun 23, 2003 at 07:46:46PM -0500, Dan Minette wrote:

> OK, so what is the meaning of the word "ought?"  For example, that a
> man "ought not to torture, rape, and kill a 5 year old girl."  Is it
> simply that his desire to do so conflicts with your desire to have him
> not do so?

At some level, yes. But all moralities are not created equal. Some are
clearly better than others, in that some will almost surely lead to
a society that almost no one would want to live in. If everyone went
around indiscriminately hurting or killing each other, it would be an
awful world indeed. Also, some moralities are parasitic, in that if
everyone followed those morals, the desired result would not obtain
-- in other words, such a morality is only desirable to someone if
the majority do not follow it. This can make for an interesting game
theory problem, but in general the "golden rule" strategy is frequently
the best game theory tactic. The whole thing is a meme competition,
and it seems to me that the meme that provides the most "pleasantness"
for the greatest number of people is likely to win. Of course,
"pleasantness" is subjective, but since humans share a lot of the same
genetic heritage and similar environments, most of us will have similar
enough definitions to have compatible morals.
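The parasitic-morality and golden-rule points above can be sketched as
a toy iterated prisoner's dilemma tournament (a minimal illustration,
not from the original post; the payoffs and strategy names are the
standard textbook ones): a tit-for-tat reciprocator prospers among
fellow reciprocators, while the unconditional defector -- the
"parasitic" strategy -- only does well when surrounded by naive
cooperators, and loses badly once reciprocators predominate.

```python
# Toy iterated prisoner's dilemma (illustrative sketch).
# Standard payoffs: T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
PAYOFF = {  # (my_move, their_move) -> my score; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opp_history):
    """The 'golden rule' reciprocator: cooperate first, then copy the opponent's last move."""
    return opp_history[-1] if opp_history else 'C'

def always_defect(opp_history):
    """The 'parasitic' strategy: exploit cooperators unconditionally."""
    return 'D'

def play(strat_a, strat_b, rounds=100):
    """Play one match; each strategy sees only the opponent's move history."""
    a_hist, b_hist = [], []  # each player's own past moves
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(b_hist)
        b = strat_b(a_hist)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        a_hist.append(a)
        b_hist.append(b)
    return score_a, score_b

# A population of mostly reciprocators plus one defector; every pair plays once.
players = [("tft-1", tit_for_tat), ("tft-2", tit_for_tat),
           ("tft-3", tit_for_tat), ("defector", always_defect)]
totals = {name: 0 for name, _ in players}
for i in range(len(players)):
    for j in range(i + 1, len(players)):
        (na, fa), (nb, fb) = players[i], players[j]
        sa, sb = play(fa, fb)
        totals[na] += sa
        totals[nb] += sb

print(totals)  # each tit-for-tat scores 699; the defector only 312
```

The defector wins its opening move against each reciprocator (5 vs. 0)
but is then punished for the remaining 99 rounds, which is exactly the
sense in which a parasitic strategy pays off only while the majority
does not play it.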

> What I am getting at is that most people explicitly or implicitly
> have understandings of universals when they discuss things like human
> rights, morality, etc.

But they aren't really universal, are they? The origin is mostly the
result of shared genetics and environment, logical thought, and rational
extrapolation. And of course, self-perpetuating memes arising from those
causes, since many people do not think these things through but rather
do as they were taught or indoctrinated.

> The criterion for every decision is "what's in it for me?"

As you have presented it, this is a short-sighted philosophy. As I
alluded to above, if EVERYONE followed such a philosophy, then life
would be miserable for everyone. If, instead, some people followed
a "what's in it for me" strategy rationally, extrapolating what
would happen if it became universal, then they would NOT act in
short-sightedly selfish ways, since in the long run that is NOT in
their best interests. Many things cannot be accomplished efficiently
alone -- cooperation is frequently the best strategy to achieve a
goal. Competition and greed are strong motivators, but if there isn't
also a strong degree of cooperation (teamwork, fairness, rule of law,
etc.) then progress will be agonizingly slow.

> You are willing to sacrifice your own direct interest to help others.

Yes, but usually because I believe it is in my own long-term direct
interest, and when it is ambiguous, I tend to err on the side of
cooperation rather than competition (if some others are following a
strict tit-for-tat strategy, the cooperative side is the better bet).
Human progress is NOT a zero-sum game -- the pie can be greatly
enlarged by cooperation.
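The non-zero-sum point can be shown with one line of arithmetic on the
standard prisoner's dilemma payoffs (T=5, R=3, P=1, S=0 -- the textbook
convention, not numbers from the original post): the joint payoff, the
"size of the pie," is largest when both sides cooperate.

```python
# Standard prisoner's dilemma payoffs (textbook convention, illustrative):
# T = temptation, R = reward, P = punishment, S = sucker's payoff.
T, R, P, S = 5, 3, 1, 0

# Joint (summed) payoff for each outcome: the "size of the pie".
pie = {
    "both cooperate": R + R,  # 6
    "one defects":    T + S,  # 5
    "both defect":    P + P,  # 2
}

# Cooperation strictly enlarges the pie -- the game is not zero-sum.
print(pie)
```

The condition 2R > T + S is in fact part of the standard definition of
an iterated prisoner's dilemma: it guarantees that mutual cooperation
beats taking turns exploiting each other.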

> Best for whom?  If not for you, why bother?  You see, I'm guessing
> that there are assumptions by which you judged Banks's world.

But it IS best for me, long-term. Maybe I will live forever and see
it. But you are right, there is another assumption: it is not a black
and white, Culture good, not-quite-Culture bad world. Taking steps
closer towards that world is better for me, even if it isn't completely
attainable in my lifetime.

> But, its really that one assumption that is critical.

Agreed.

> My basis for morality is religious, and it's that humans are created
> in the image and likeness of God, and must be treated in a manner that
> is consistent with this.  Human rights, the Golden Rule, etc. all flow
> from this postulate as theorems.  So, my assumption is also quite
> simple.

No, it is NOT so simple. William already replied to that:

  Even if man is 'created in the image and likeness of God' that says
  nothing about how men should treat each other without an additional
  assumption that 'those created in the image and likeness of God must
  be treated in such and such ways'.  So you might as well ditch the
  'image and likeness of God' part and go directly to the 'must be
  treated in such and such ways' part.  God is a redundant assumption
  that adds nothing to the line of argument.

I would add that although the concept of god IS redundant to that
argument, it may have been useful in persuading people to the 'must
be treated in such and such ways' point of view. But I question its
usefulness for that purpose today in places where we are enlightened
enough not to need fear and superpower to motivate and comfort us. Are
we not mature enough to persuade people to morality by honest argument,
trusting them to make their choices with their eyes open, rather than
tricking them into believing in fairy tales and fearing boogey-men?

> This, IMHO, makes morality somewhat moot.  It makes no more sense
> saying a man ought not to kill another man in cold blood than would
> make sense to argue that a lightning bolt ought not to have killed
> that golfer.  Both things just happened, the idea that the person made
> a choice and the lightning bolt didn't would just be an illusion.
>
> So, in addition to morality, it seems clear to me that free will and
> responsibility have to be dropped to embrace logical positivism.

We've had this discussion before -- the concept of free-will as you use
it is just as useless a concept as god. But morality, as I've argued
above, is quite useful in progressing towards goals.

It is absurd to compare a mind -- which is complex in a way that cannot
be modeled by a few simple equations, and is capable of abstraction,
logic, and calculation -- to something like a star or a lightning bolt,
which CAN be modeled and predicted accurately by a few equations. You can't
accurately predict what a mind will do with a simple model: you need to
simulate it in its full complexity, essentially creating another copy of
the mind. Furthermore, you can persuade a person not to do something;
but you cannot persuade a lightning-bolt not to strike. You are allowing
yourself to be afflicted by the dreaded physics-cyst spherical-cow
disease (modelitis), thinking that a simplistic model is an accurate
representation of a complex phenomenon.

I know you like to try out models until they "stick" (you tried equating
a mind to a star last time, now you are trying a lightning bolt), but
the last time this came up I offered about as useful a model as you're
likely to get: humans have free will in the same sense that Chamlis
Amalk-ney (or Mawhrin-Skel) has free will. No doubt you will complain
that that is not a very useful model. Yes! That is the point! Free will,
as you bandy the term around, is a poor concept and mostly useless.


-- 
"Erik Reuter" <[EMAIL PROTECTED]>       http://www.erikreuter.net/
_______________________________________________
http://www.mccmedia.com/mailman/listinfo/brin-l
