On Sat, Jul 05, 2003 at 11:02:26PM -0500, Dan Minette wrote:

> From: "Erik Reuter" <[EMAIL PROTECTED]>
>
> > On Mon, Jun 23, 2003 at 07:46:46PM -0500, Dan Minette wrote:
>
> > At some level, yes. But all moralities aren't created equal. Some
> > are clearly better than others, in that some will almost surely lead
> > to a society that almost no one would want to live in.
>
> It depends on what is desired from morality.  Some are better than
> others for reaching particular goals, certainly.  But, that naturally
> leads to the question "what goals?"  It's easy to label your goals
> "rational" and another's goals as "irrational."

I did not label goals rational and irrational -- in fact, in this thread
I specifically stated that my goal was subjective. Your reply does
not address my comment...I was just watching The Godfather. If
everyone behaved as the dons in that movie, almost no one would want
to live in the resulting world. And sure enough, the crime bosses are
largely gone now. Most people realized what would happen if such a
system were allowed to expand. This is not rational or irrational, it
is just that most people don't like to live in such a world. As I said
previously, this is mostly an accident of evolution and environment,
but it is certainly true that most people share some of these basic
sensibilities about what is desirable and what is not.

> I'll agree if you show that the conflict between the goals of
> different people is an illusion (i.e. you show that rational self
> interest is served by considering the needs of others as just as
> important as one's own), then you will have reduced the question of
> morality to a question of accurately gauging one's own self interest.
>
> But, that premise really doesn't match observation.  The question is
> complicated enough, so that it is probably not possible to actually
> falsify that hypothesis, but the overwhelming amount of evidence is
> against it.

Actually, the overwhelming amount of evidence is for it. That is why
humans have progressed from animal-like apes, to tribes, feudalism,
and finally liberal democracy with the rule of law. And progress has
accelerated, especially with the transition to liberal democracy and
rule of law.

> Part of the reason for that is the fact that, by the nature of the
> premise, you have set yourself a very high standard for proof.  The
> existence of win-win situations, where the predominant strategy
> for the individual benefits all is not sufficient.  Rather, it
> is necessary to show that win-lose scenarios do not exist to any
> significant extent.

No, you are thinking much too small. There are indeed many win-lose
scenarios if you look at things myopically. But if you consider both
the long-term and the interaction of others if they all followed a
similar strategy, then the world is a big win-win scenario. I mentioned
this previously, but again you failed to address it. Surely you don't
think we could have made as much progress as we did in the 20th century
with everyone acting myopically in their own self interest? How do you
explain the huge growth in GDP per capita in the Western world in the
last 150 years?

> Let me give just one counter example now.  (Only one for space
> limitation, not for lack of examples.)  Tonight, on the local news,
> there was an apartment fire.  One man was taken to the hospital for
> smoke inhalation. He was at risk because, instead of just yelling fire
> and getting out of the complex, he went door to door knocking on the
> doors telling people to get out.
>
> He is up for a hero's award, which I think is reasonable. From a
> Christian standpoint, his actions are an example of the greatest
> form of love possible.  But, from the standpoint of enlightened
> self-interest, his actions were irrational.  On a cost/benefits basis,
> it was the wrong decision to make.

Not necessarily. If I thought my neighbor(s) would be likely to take
the same risks for me in the future, I would do it, and it would be
in my self-interest, unless I could trick my neighbors (or they could
trick me) into thinking I (they) would do it but really would not. Of
course, then honesty and trustworthiness come into it. If I didn't
think my neighbors were honest and trustworthy in these matters, then
I would be less likely to do it since it would be much less in my own
self-interest.

> Sure, there are actions that can be identified as beneficial for the
> whole community if everyone does this. But, this begs the question
> "why worry about what benefits others?"

Because if you don't (and enough others don't), they won't, and everyone
loses.

> >This can make for an interesting game theory problem, but in general
> >the "golden rule" strategy is frequently the best game theory tactic.

> I looked up game theory, and found what seems to be a pretty decent source
> for it at:
> 
> http://william-king.www.drexel.edu/top/eco/game/game.html

That site is incomplete. Here are some key words for you: repeated
Prisoner's Dilemma, Robert Axelrod, Tit-for-Tat.

http://www.gametheory.net/News/Items/026.html
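
To make the keywords concrete, here is a minimal sketch of Axelrod's
repeated Prisoner's Dilemma in Python (my own illustration, using the
standard payoff matrix: mutual cooperation 3, mutual defection 1,
sucker 0, temptation 5):

```python
# Standard Prisoner's Dilemma payoffs: (my payoff, their payoff)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's previous move.
    return their_hist[-1] if their_hist else "C"

def always_defect(my_hist, their_hist):
    return "D"

def play(strat_a, strat_b, rounds=100):
    # Repeat the game, accumulating each side's score.
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

Over 100 rounds, two Tit-for-Tat players score 300 each, two pure
defectors only 100 each, and Tit-for-Tat loses only the first round
against a pure defector. That is the sense in which the cooperative
strategy is robust in repeated play.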

> The idea that following the golden rule is a good general
> self-interest strategy runs against so many clear historical examples,
> that it would take a very detailed explanation to show why the first
> order inconsistency between this model and data is really meaningless.

Actually, it agrees with so many clear historical examples and
simulations that it is the best simple strategy there is.

> Historically, it hasn't.

Historically, it has. Unless you'd rather go back and live in the Middle
Ages? Or perhaps live like a Neanderthal?

> There are many examples in which systems that favor the welfare of
> the elite have survived for a very long period of time.  The idea of
> representative government, "of the people, by the people, and for
> the people" is a relatively new one.  For example, it is not clear
> that, if someone of Lincoln's stature was not president at the time,
> representative government would have survived.  IMHO, Lincoln was not
> engaged in hyperbole at Gettysburg.

Your examples support the cooperative strategies. How much progress have
we made in America, in the last say 200 years, compared to the 2000
years before?

> So, we have a brief period of human history, over which the type of
> society that is more like the one you suggest has barely survived and
> then prospered.  That certainly doesn't constitute a proof.

I don't claim proof, but it is very suggestive. And it is rather
interesting that you are demanding proof. Perhaps it is time for you to
answer the questions I asked earlier, and provide proof?

> Indeed, in an ironic twist, this system was created by a number of
> people who acted irrationally by your standards.

Nope, they acted rationally.

> Erik, with that sorta arm waving I could prove almost any system.
> I've seen at least three radically different systems "proven" with
> that type of argument.

Dan, with your sort of fantasy morality, you never even have to submit
your ideas to test, let alone proof. Why do you demand hard proof from
me when you are unwilling or unable to provide proof yourself?

> > As you have presented it, this is a short-sighted philosophy. As I
> > alluded to above, if EVERYONE followed such a philosophy, then life
> > would be miserable for everyone.
>
> That doesn't stop it from being the best strategy for each individual.

Yes, it does.

> >Human progress is NOT a zero-sum game -- the pie can be greatly
> >enlarged by cooperation.
>
> So?  That's not the point.  The point is whether actions that hurt the
> group can be beneficial for the individual.

So? That is not the point. Short-term, it may be best for an individual,
but not long-term if everyone else also does it. Of course, if everyone
cooperates then it can become beneficial to act selfishly. But the key
is how everyone else reacts. If everyone started acting selfishly, then
everyone loses. If almost everyone cooperates and works to identify the
selfish ones and prevent them from being selfish, everyone does better
than if everyone were selfish.
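
The point about everyone losing when everyone defects can be shown
with a toy public-goods model (my own numbers, not anything Dan or I
cited): each cooperator puts 1 unit into a common pot, the pot is
doubled and split evenly among all players.

```python
def payoffs(choices, multiplier=2.0):
    # Each "C" contributes 1 unit; the pot is multiplied and shared.
    pot = multiplier * sum(1 for c in choices if c == "C")
    share = pot / len(choices)
    # Cooperators paid 1 into the pot; defectors paid nothing.
    return [share - 1 if c == "C" else share for c in choices]

all_coop = payoffs(["C", "C", "C", "C"])       # everyone nets 1.0
one_free_rider = payoffs(["C", "C", "C", "D"]) # defector nets 1.5, cooperators 0.5
all_defect = payoffs(["D", "D", "D", "D"])     # everyone nets 0.0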

> Come on, Erik, you know better than that. If you were to live 10,000
> years, and the laws of physics were found to not even be a good
> approximation, then maybe you will live in a Culture world.  But, what
> are the odds on that?  Multiply that probability by the perceived
> benefit, and you will get the value of this possibility to you.  Its
> very small.

Come on Dan, your reading comprehension is better than that. I was
talking about the Culture SOCIETY. Any sufficiently advanced technology
could support such a society. The probability of the SPECIFIC technology
in his novels is small, but the probability of a technology sufficient
to support such a society in 10,000 years is quite good.

> Banks's work is good SF; with one suspension of disbelief, one can
> the world.  Well and good.  But, one does not deal with reality that
> way.

Heh. Dan talking about dealing with reality. Heh.

> Sure if everyone does that.  But again, that ignores the obvious.  You
> can do little to change the overall condition of society, but you can
> do a lot to help yourself.  If you look at self interest alone, there
> will be a number of times when you can get away with harming society
> more than you help yourself, but help yourself more than the loss to
> society harms you.

That is what laws and rules are for, Dan. In the short-term in a
cooperative society, it is frequently beneficial to be selfish. But
long-term, if everyone behaves that way, everyone loses. So we have
laws, rules, deterrents, and punishments to align majority long-term
interests with individual short-term interests.
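
How enforcement does that alignment can be sketched in a couple of
lines (the payoff numbers are assumptions of mine for illustration,
not anything from the thread): defecting pays more than cooperating
until the expected fine makes it pay less.

```python
COOPERATE_NET = 1.0   # assumed payoff for playing by the rules
DEFECT_NET = 1.5      # assumed payoff for free-riding, before any fine

def defector_payoff(fine, detection_prob):
    # Expected payoff of defecting under enforcement.
    return DEFECT_NET - fine * detection_prob

# With no enforcement, defection pays more than cooperation.
# With a 1-unit fine levied 80% of the time, it no longer does,
# so the selfish short-term choice now matches the group's interest.
```

The deterrent does not need to catch every defector; it only needs to
make the expected value of defecting fall below the value of
cooperating.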



On Sat, Jul 05, 2003 at 11:19:43PM -0500, Dan Minette wrote:

> As a useful fiction to persuade people, certainly (actually persuade
> assumes free will,

If you say so. Of course, that is a meaningless statement.

> But, "ought" is rather meaningless without free will.

That's okay. Free will is rather meaningless.

> I'll be happy to admit that the causal chain in people's actions
> includes hearing words.  But, that doesn't seem all that critical to
> me.

I see. So humans would behave the same as they do if they never learned
to communicate with each other?

That is absurd, Dan. I can predict what a goldfish will do in response
to something one year from now with much better accuracy than what a
human will do one year from now. Much of the difference is that the
human will be communicating with other humans during the year.

> I think your argument relies on complexity changing the fundamentals.

You think wrong. We've been through this before. I guess you forgot.

> > It is absurd to compare a mind -- which is complex in a way
> > that cannot be modeled by a few simple equations, is capable of
> > abstraction, logic, and calculation -- to something like a star or
> > a lightning bolt which can be modeled and predicted accurately by a
> > few equations.

> No, it is not absurd. I chose lightning and stars for a reason, not
> just because I was grasping for metaphors. It is impossible to predict
> where lightning will strike at a given time on a given day. I'm rather
> surprised you claim that it is simple; the inability to ever predict
> popup thunderstorms is classic. It is one of the best examples of
> macroscopic indeterminacy.

Your exact words were:

  "It makes no more sense saying a man ought not to kill another man in
  cold blood than would make sense to argue that a lightning bolt ought
  not to have killed that golfer."

No, it IS absurd. Following your lead, I was not discussing predictions
of WHERE a lightning bolt would strike. I was discussing how the
lightning bolt behaves when it strikes a golfer. If the golfer is
standing on the tallest hill in a lightning storm holding his metal
golf club straight above his head, I can make a pretty good prediction
with some simple models of what will happen. And I cannot persuade the
lightning to act otherwise. On the other hand, if a policeman approaches
the golfer, I cannot predict accurately what the policeman will do, but
I have a good chance of being able to make a difference in what the
policeman will do by my actions (assuming I am nearby).  I'm rather
surprised that you didn't understand this simple concept, Dan.

> Indeed, the behavior of stars, humans, and lightning bolts is
> dependent on gravity and the physics of the standard model.  One could
> even argue that the star takes more physics to explain than humans,
> since one may have to consider QCD as well as the standard model.

No, Dan, you are making a couple of mistakes. First, it is not necessary
to model the star so precisely to be able to make useful predictions of
what it will do. A few equations give a lot of useful results.

Second, even if you do use QCD, you are still basically simulating a
bunch of atomic (or sub-atomic) particles, each of which is obeying the
same equations. So the number of lines of code is fairly small, even if
you need a lot of memory and number crunching power. In contrast, to
model a mind you will need millions or billions of lines of code. That
is one reason why we still don't have very smart programs, but we can
describe stars much better.

>  I really expected you to know this, since you have a BA in physics.

You really need to work on the accuracy of your insults, Dan. I do NOT
have "a BA in physics". But mistakes ARE what I expect on this subject
from someone who believes his fantasy world is real -- after all, why
worry about accuracy when your fantasies are true?

> Complexity doesn't add anything; it just makes it harder to calculate.
> A very complex perpetual motion machine is no more likely to work than
> a simple one.  There are occasions, indeed, where complexity results
> in counter-intuitive results.  There has never been a verified case
> where complexity introduces something truly new.

Which partly explains why you have so much trouble with free-will
concepts. It is apparently quite counter-intuitive to you.

> > accurately predict what a mind will do with a simple model: you need
> > to simulate it in its full complexity, essentially creating another
> > copy of the mind. Furthermore, you can persuade a person not to do
> > something; but you cannot persuade a lightning-bolt not to strike.
>
> That is a convenient fiction.

No, it is fact, for the common usage of persuade.

> Persuade is a convenient shorthand

True enough. Until we can fully understand the algorithms and procedures
of the mind, all we have is shorthands like this.

> No, I'd complain that stars and lightning bolts and people are real,
> Chamlis Amalk-ney is a fictional creation.

Hmmm, it is fiction, but it is easy to see how something similar may be
created some day. Which is more real, Chamlis or god?

> So, you are willing to give up any description of human beings that is
> not directly reducible to QED?

Unless and until someone has an experiment that demonstrates that there
is more than that, sure.

-- 
"Erik Reuter" <[EMAIL PROTECTED]>       http://www.erikreuter.net/
_______________________________________________
http://www.mccmedia.com/mailman/listinfo/brin-l
