In this message, I want to support neither the ASSA nor utilitarianism.
But I will argue that the former has remarkable consequences for the latter.
To give a short overview of the concepts, I remind you that
utilitarianism is a doctrine that measures the morality of an action
only by its outcome: actions are said to be more moral than others if
they cause a greater sum of happiness or pleasure (for all people
involved). Though this theory seems attractive, it has to cope with a
lot of problems. Maybe the most fundamental one is to define how
'happiness' and 'pleasure' are measured: in order to decide which
action is the most moral one, we need a 'felicific calculus'. However,
there seems to be no chance of finding a unique felicific calculus
everyone would agree upon. To this day, there is a lot of controversy
over questions such as:
- How do we measure happiness?
- How do we compare the happiness of different people?
- How do we account for pain and suffering? What weight is assigned to them?
- Even maximizing 'the sum of happiness' in some felicific calculus
does not necessarily determine a unique action. It may be possible to
increase the happiness of some individuals and decrease the happiness
of other individuals without changing the 'sum of happiness'. Which
outcome is preferable?
Most of us have a mathematical or scientific background, so we know
that such a situation can lead to an infinity of possible felicific
calculi, each one defined by arbitrary measures and parameters. In the
sciences, one would usually discard a theory that contains so much
arbitrariness (philosophy, however, is not that rigorous).
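To make the arbitrariness concrete, here is a toy sketch of my own (the
actions, names, and numbers are invented for illustration): two equally
"plausible" felicific calculi, differing only in one arbitrary
parameter, rank the same pair of actions in opposite orders.

```python
# Each action's outcome: (pleasure, pain) per person affected.
# All values are invented purely for illustration.
action_a = [(5, 1), (5, 1)]    # moderate pleasure for two people
action_b = [(12, 6), (1, 0)]   # large pleasure and large pain, unevenly spread

def calculus(outcomes, pain_weight):
    """Sum of happiness with an arbitrary weight assigned to suffering."""
    return sum(pleasure - pain_weight * pain for pleasure, pain in outcomes)

# Calculus 1 weights suffering fully; calculus 2 weights it at half.
print(calculus(action_a, pain_weight=1.0))  # 8.0
print(calculus(action_b, pain_weight=1.0))  # 7.0  -> calculus 1 prefers A
print(calculus(action_a, pain_weight=0.5))  # 9.0
print(calculus(action_b, pain_weight=0.5))  # 10.0 -> calculus 2 prefers B
```

Nothing in the theory itself tells us which pain weight is correct, so
each choice of parameter yields a different utilitarianism.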
Applying the ASSA can help to surmount these conceptual difficulties.
Assuming the ASSA, we are able to define a uniquely determined
utilitarianism. Nonetheless, the practical problem of deciding which
action to prefer remains largely unchanged.
1st step: Reducing the number of utilitarianisms to the number of human
observers.
The ASSA states that my next experience is randomly chosen out of all
observer moments. For deciding my action, only those observer moments
are of interest that are significantly influenced by my decision (e.g.
observer moments in the past aren't). Since my next observer moment can
be any of those observer moments, I am driven to a utilitarian action.
Utilitarianism arises directly whenever an observer wants to act
rationally while assuming the ASSA. I could say that utilitarianism is
'egoism + ASSA'.
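The 'egoism + ASSA' step can be sketched as follows (my own minimal
illustration; the action names and happiness values are invented, and I
assume each action affects the same number of observer moments): if my
next observer moment is drawn uniformly from the N affected observer
moments, my expected happiness is the mean over those moments, so the
action maximizing my own expectation is exactly the one maximizing the
sum.

```python
# Hypothetical happiness values of the affected observer moments under
# each candidate action (same number of moments per action by assumption).
actions = {
    "help_many": [3, 3, 3, 3],
    "help_self": [10, 0, 0, 0],
}

def expected_happiness(moments):
    # Uniform random sampling over observer moments: this is the ASSA.
    return sum(moments) / len(moments)

# The rational egoist maximizes their own expected happiness; the
# utilitarian maximizes the sum. With a fixed number of moments, the
# two criteria rank actions identically.
egoist_choice = max(actions, key=lambda a: expected_happiness(actions[a]))
utilitarian_choice = max(actions, key=lambda a: sum(actions[a]))
print(egoist_choice == utilitarian_choice)  # True: the criteria coincide
```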
2nd step: The unique utilitarianism.
Starting from the definition that utilitarianism is egoism in
combination with the ASSA, I argue that all observers will agree upon
the same action. At first you might think that the preferred action
depends on the individual preferences of the deciding individual. For
example, if I were suffering from hunger, I could perform an action to
minimize hunger in the world. But this conclusion is wrong: when I
experience another observer moment, I am no longer affected by my
former needs. Put directly: since all observers must expect to draw
their next observer moment from the same ensemble of observer moments,
there is no reason to insist on different preferences.
But there is still one problem left: different observers have different
states of knowledge about the consequences of a potential action. In
theory, we can exclude this problem by defining utilitarianism as the
rational decision of a hypothetical observer who knows all the
consequences of all potential actions (of course, while assuming the
ASSA).
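As a sketch of this idealized definition (again my own illustration,
with invented action labels and happiness values): the unique
utilitarian action is whatever a fully informed ASSA observer would
pick, given the happiness of every observer moment resulting from every
potential action.

```python
def ideal_choice(consequences):
    """consequences maps each potential action to the happiness values of
    all observer moments it produces; return the action a hypothetical,
    fully informed observer assuming the ASSA would choose."""
    return max(consequences, key=lambda a: sum(consequences[a]))

# Invented example: action "A" yields total happiness 6, "B" yields 4.
print(ideal_choice({"A": [2, 2, 2], "B": [5, -2, 1]}))  # A
```

Since the hypothetical observer's knowledge is fixed by assumption, the
choice no longer depends on what any particular real observer happens to
know.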
It's a nice feature of the ASSA that it naturally leads to a theory of
morality. The RSSA does not seem to provide such a result. Still, I
would like to derive similar concepts from the RSSA (according to
Stathis, I belong to the RSSA camp).
You received this message because you are subscribed to the Google Groups
"Everything List" group.