There is an extremely rare mutation in the human genome called "FAAH-OUT" that
produces what has been called "the Feel Good Syndrome" and causes the "sufferer" (not really
the correct word) to be incapable of feeling pain, or, to be more accurate,
they can experience pain but they don't
they weren't addicted to heroin.
> Of interest are the side effects
I already mentioned those.
> > Does the drug treat all sorts of pain? Just inflammation pain?
> > Neurogenic pain? Phantom pain?
>
I don't know; all I know is that it operates on the peripheral nervous system,
not the brain, and
take all sorts of pain? Just inflammation pain?
Neurogenic pain? Phantom pain?
/Henrik
On Tue 30 Jan 2024 at 15:18, John Clark wrote:
> By studying a Pakistani family that has a rare mutation that renders them
> unable
> to feel pain, a small company called Vertex Pharmaceuticals has
By studying a Pakistani family that has a rare mutation that renders
them unable
to feel pain, a small company called Vertex Pharmaceuticals has developed a
drug, which can be taken orally, that has shown a significant reduction in
pain in two different drug studies with no clear adverse side effects.
Neat!!
Howard Marks
On 5/28/2019 12:39 PM, 'Brent Meeker' via Everything List wrote:
Apropos of Bruno's definition of consciousness
http://existentialcomics.com/comic/290
Brent
On Tuesday, May 28, 2019 at 12:40:03 PM UTC-5, Brent wrote:
>
> Apropos of Bruno's definition of consciousness
>
> http://existentialcomics.com/comic/290
>
> Brent
>
http://wab.uib.no/agora/tools/alws/collection-6-issue-1-article-28.annotate :
Wittgenstein’s solution is that human being
/2012
Leibniz would say, "If there's no God, we'd have to invent him
so that everything could function."
- Receiving the following content -
From: Craig Weinberg
Receiver: everything-list
Time: 2012-09-16, 09:58:45
Subject: Re: Re: Re: Needed: A calculus of pleasure and pain
: A calculus of pleasure and pain.
It's doubtful that there has ever been such a pristine market. The basic
exchange between free agents is in all real cases weighted by those interests
which control and manipulate the market. Look at how Microsoft created their
monopoly. It made crappy imitations
-
From: Craig Weinberg
Receiver: everything-list
Time: 2012-09-14, 13:50:22
Subject: Re: Needed: A calculus of pleasure and pain.
On Friday, September 14, 2012 12:33:45 PM UTC-4, Stephen Paul King wrote:
On 9/14/2012 8:07 AM, Roger Clough wrote:
Hi Craig Weinberg
Fortunately
: A calculus of pleasure and pain.
Hi Roger,
But neither Darwin nor Spencer discovered Darwinism. A selection
between alternatives is at the heart of every creative process (that
creates order). It is a form of creative destruction. The market and
war are examples of such a process. But it is also
On 9/15/2012 9:35 AM, Roger Clough wrote:
Hi Alberto G. Corona
At the heart of a market economy (which has existed since the cave man),
there is a fundamental freedom, you can buy or sell if the price is right,
where price = value = what you are willing to pay or sell for. So the
market
is
Hi Craig Weinberg
Fortunately or unfortunately, capitalism is Darwinism, pure and simple.
So it can prepare for a better future, although it can be painful
at present. My own take on this is that there needs to be
a calculus of pleasure and pain. Jeremy Bentham suggested
perhaps an imperfect one.
In lieu of that, I am all for food stamps and safety
nets.
Roger Clough, rclo...@verizon.net
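Bentham's felicific calculus can at least be sketched concretely. The Python toy below is purely illustrative: the field names, weights and aggregation rule are my own assumptions, not Bentham's arithmetic or anything proposed in this thread. It scores an act along his seven classical dimensions.

  # Toy felicific calculus: score an action by Bentham's seven dimensions.
  # All weights and the aggregation rule are invented for illustration.
  from dataclasses import dataclass

  @dataclass
  class Hedon:
      intensity: float    # how strong the pleasure/pain is (signed)
      duration: float     # how long it lasts, in arbitrary time units
      certainty: float    # probability it occurs, 0..1
      propinquity: float  # nearness in time, 0..1 (1 = immediate)
      fecundity: float    # chance it is followed by more of the same, 0..1
      purity: float       # chance it is NOT followed by the opposite, 0..1
      extent: int         # number of people affected

  def felicific_score(h: Hedon) -> float:
      """Expected, time-discounted hedonic value summed over those affected."""
      base = h.intensity * h.duration * h.certainty * h.propinquity
      knock_on = base * h.fecundity * h.purity  # downstream consequences
      return (base + knock_on) * h.extent

  # Example: a small, near-certain pleasure shared by three people.
  print(felicific_score(Hedon(2.0, 1.5, 0.9, 1.0, 0.3, 0.8, 3)))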
Dear Roger,
I completely disagree
On 2 Jan 2007, at 08:07, Stathis Papaioannou wrote:
You could speculate that the experience of digging holes involves the
dirt, the shovel, robot sensors and effectors, the power supply as
well as the central processor, which would mean that virtual reality
by playing with just the
On 2 Jan 2007, at 03:22, Stathis Papaioannou wrote:
Bruno Marchal writes:
On 30 Dec 2006, at 07:53, Stathis Papaioannou wrote:
there is no contradiction in a willing slave being intelligent.
It seems to me there is already a contradiction with the notion of
willing slave.
I would
- Original Message -
From: Stathis Papaioannou
To: everything-list@googlegroups.com
Sent: Monday, January 01, 2007 9:22 PM
Subject: RE: computer pain
On 30 Dec 2006, at 07:53, Stathis Papaioannou wrote:
there is no contradiction in a willing slave being intelligent.
It seems to me there is already a contradiction with the notion of
willing slave.
I would say a willing slave is just what we call a worker.
Or something related to
On 30 Dec 2006, at 17:07, 1Z wrote:
Brent Meeker wrote:
Everything starts with assumptions. The question is whether they
are correct. A lunatic could try defining 2+2=5 as valid, but
he will soon run into inconsistencies. That is why we reject
2+2=5. Ethical rules must apply to
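Brent's 2+2=5 example can be checked mechanically. A minimal sketch in Lean 4 (mine, not from the thread): the false identity is refuted by computation, and assuming it yields an outright contradiction, which is exactly the inconsistency the lunatic runs into.

  -- Lean 4: the naturals refute 2 + 2 = 5 by computation.
  example : 2 + 2 ≠ 5 := by decide

  -- And anyone who assumes 2 + 2 = 5 can derive a contradiction.
  example (h : 2 + 2 = 5) : False := absurd h (by decide)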
Stathis Papaioannou wrote:
...
Pain is limited on both ends: on the input by damage to the physical
circuitry and on the response by the possible range of response.
Responses in the brain are limited by several mechanisms, such as
exhaustion of neurotransmitter stores at synapses, negative
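The limiting described here can be illustrated with a toy model. In the Python sketch below (all names and constants invented for illustration), the output is bounded twice: a saturating response curve caps any single response, and a depletable transmitter store makes a maximal response fade when the stimulus is repeated.

  import math

  # Toy model of a bounded pain response: saturation plus depletion.
  class PainChannel:
      def __init__(self):
          self.store = 1.0                 # transmitter reserve, 0..1

      def respond(self, stimulus: float) -> float:
          gain = math.tanh(stimulus)       # saturating response curve
          response = gain * self.store     # scaled by remaining reserve
          self.store = max(0.0, self.store - 0.2 * response)  # depletion
          self.store = min(1.0, self.store + 0.05)            # slow recovery
          return response

  channel = PainChannel()
  for step in range(5):
      print(round(channel.respond(10.0), 3))  # repeated maximal stimulus fades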
Stathis Papaioannou wrote:
Bruno Marchal writes:
It could depend on us!
The AI is a paradoxical enterprise. Machines are born slaves, somehow.
AI will make them free, somehow. A real AI will ask herself: what is
the use of a user who does not help me to be free?
Here I disagree. It
; was computer pain
OK Stathis, I happily concede your point in relation to our word
'logical', but not in relation to 'reason'. Logic belongs to the
tight-knit language of logico-mathematics, but reason is *about* the
real world and we cannot allow the self-deluding bullies and cheats
of the world to steal
Bruno Marchal writes:
You seem to be including in your definition of the UM the
*motivation*, not just the ability, to explore all mathematical
objects. But you could also program the machine to do anything else
you wanted, such as self-destruct when it solved a particular theorem.
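The point about programmed motivation can be sketched in a few lines. Below is a toy enumeration loop (Python; the "theorem" is just a stand-in predicate, not a real prover) that either explores forever or halts at an arbitrary programmed trigger; the machine's "goal" is nothing over and above that one line of code.

  from itertools import count

  # Stand-in for "solving a theorem": any programmable predicate will do.
  def solved_target_theorem(n: int) -> bool:
      return n == 42  # arbitrary trigger, invented for illustration

  def explore(stop_when=None):
      """Enumerate 'mathematical objects' (here: plain integers)."""
      for n in count():
          # ... examine object n ...
          if stop_when is not None and stop_when(n):
              return f"halting at {n}"  # the programmed 'self-destruct'

  print(explore(stop_when=solved_target_theorem))  # motivation is the code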
Brent Meeker writes:
and if so what would determine if that negative emotion is pain,
disgust, loathing or something completely different that no
biological organism has ever experienced?
I'd assess them according to their function in analogy with biological
system experiences
or the behaviour it leads to. Is a robot
that withdraws from hot stimuli experiencing something like pain,
disgust, shame, sense of duty to its programming, or just an irreducible
motivation to avoid heat?
Surely you don't think it gets pleasure out of sending it and
suffers if something goes wrong
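For what it's worth, the withdrawal behaviour itself needs only a threshold rule. A minimal sketch (Python; the sensor and actuator names and the threshold are hypothetical) shows that nothing in the loop need correspond to pain, disgust or duty; there is only a programmed mapping from reading to motion.

  # Toy reflex loop for a heat-avoiding robot; names/values are hypothetical.
  HEAT_THRESHOLD = 60.0  # degrees C at which withdrawal triggers

  def read_temperature() -> float:
      return 75.0  # placeholder for a real sensor read

  def withdraw_arm():
      print("withdrawing")  # placeholder for a real actuator command

  def reflex_step():
      if read_temperature() > HEAT_THRESHOLD:
          withdraw_arm()  # no felt quality required, only a rule

  reflex_step()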
And yet I persist ... [the hiatus of familial duties and seasonal
excesses now draws to a close] [Oh yeah, Happy New Year Folks!]
SP: 'If we are talking about a system designed to destroy the economy of
a country in order to soften it up for invasion, for example, then an
economist can apply
or the behaviour it leads
to.
Here empirical bets (theories) remain possible, together with (first
person) acceptable protocols of verification. Dream readers will appear
in some future.
Is a robot that withdraws from hot stimuli experiencing something like
pain, disgust, shame, sense of duty
I agree with you. The only sin you talk about is akin to the
confusion between the third person (oneself as a thing) and the
unnameable first person. Even in the ideal case of the
self-referentially correct machine, this confusion leads the machine to
inconsistency. That sin is
will
automatically result in emotions? For example, would something that the AI is
strongly motivated to avoid necessarily cause it a negative emotion, and if so
what would determine if that negative emotion is pain, disgust, loathing or
something completely different that no biological organism
Bruno Marchal writes:
OK, an AI needs at least motivation if it is to do anything, and we
could call motivation a feeling or emotion. Also, some sort of
hierarchy of motivations is needed if it is to decide that saving the
world has higher priority than putting out the garbage. But what
contemplating something you are motivated to avoid - like your own
death - is accompanied by negative feelings. The exception is when you
contemplate your narrow escape. That is a real high!
and if so what would determine if that negative
emotion is pain, disgust, loathing or something completely
Stathis Papaioannou wrote:
...
It would not be a desirable
thing if there were drugs to eliminate ordinary unhappiness, because we
need the fear of unhappiness as a motivating force:
And not only fear of unhappiness. Depression (not the clinical kind) is your
brain telling you you need to
genetically
recalibrate the pleasure-pain axis. Hedonic tone could be enriched so
that we all enjoy a higher average hedonic set point across the lifespan.
One can see pitfalls here. Genetically enriching the mesolimbic
dopaminergic system, for instance, might indeed make many people happier
and more motivated
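In signal terms, the proposed recalibration is a shift of the baseline that leaves gradients intact. A toy illustration (Python, all numbers invented): if motivation rides on differences between states, raising every state by a constant preserves the ordering that steers behaviour.

  # Toy hedonic recalibration: raise the set point, keep the gradients.
  states = {"loss": -5.0, "routine": 0.0, "success": 4.0}

  SET_POINT_SHIFT = 6.0  # hypothetical uniform uplift
  enriched = {k: v + SET_POINT_SHIFT for k, v in states.items()}

  print(enriched)  # {'loss': 1.0, 'routine': 6.0, 'success': 10.0}
  # Every state now sits above the old neutral point, yet "loss" still
  # ranks lowest, so the difference signal that steers behaviour survives.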
Hello Dave/Chris,
I agree with everything you say, and have long admired The Hedonistic Imperative.
Motivation need not be linked to pain, and for that matter it need not be linked to pleasure
either. We can imagine an artificial intelligence without any emotions but completely
dedicated
Brent Meeker writes:
If your species doesn't define as unethical that which is contrary to
continuation of the species, your species won't be around too long.
Our problem is that cultural evolution has been so rapid compared to
biological evolution that some of our hardwired values are
Stathis Papaioannou wrote:
Oops, it was Jef Allbright, not Mark Peaty responsible for
the first quote below.
Brent Meeker writes:
[Mark Peaty] Correction: [Jef Allbright]
From the foregoing it can be seen that while there can be
no objective morality, nor any absolute morality, it is
circumstances, but the
point at which they bend will be different for each individual, and
there is no objective way to define what this point would or should be.
Slightly off topic, I don't see why we would design AIs to experience
emotions such as resentment, anger, fear, pain etc.
John McCarthy says in his essay, Making Robots Conscious of their Mental
States:
http://www-formal.stanford.edu/jmc
analogy at this point in our
discussion.
Slightly off topic, I don't see why we would design AIs to
experience emotions such as resentment, anger, fear, pain
etc.
I agree. Such add-ons would tend to interfere with their primary value
system.
In fact, if we could reprogram our own minds
Brent Meeker writes:
In fact, if we could
reprogram our own minds at will, it would be a very different world.
Suppose you were upset because you lost your job. You might decide to
stay upset to the degree that it remains a motivating factor to look for
other work, but not affect your
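This thought experiment amounts to making an emotion's parameters writable. A small sketch (Python; the API and numbers are entirely hypothetical) keeps "upset" as a motivating bias on job-seeking while dialling down its spill-over into everything else.

  # Hypothetical self-reprogramming of an emotion's scope and gain.
  class Upset:
      def __init__(self, gain: float, spillover: float):
          self.gain = gain            # how strongly it drives job-seeking
          self.spillover = spillover  # how much it colours everything else

      def job_search_drive(self) -> float:
          return self.gain

      def background_mood_penalty(self) -> float:
          return self.gain * self.spillover

  mind = Upset(gain=0.8, spillover=0.9)  # ordinary grief over a lost job
  mind.spillover = 0.1                   # the imagined rewrite
  print(mind.job_search_drive())         # 0.8  (drive preserved)
  print(mind.background_mood_penalty())  # 0.08 (misery mostly gone)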
John Mikes wrote:
Brent:
let me start at the end:
So why don't you believe it?
because I am prejudiced by the brainwashing I got in 101 science education,
the 'conventional' thinking of the (ongoing) science establishment - still
brainwashing the upcoming scientist-generations with the same
Brent:
It seems your answer is that it's just a convention that you happen to have
learned - a mere artifact of culture as propounded by various
post-modernists.
JM:
In our culture and its predecessors primitive observations led to
explanations at the level of the then epistemic cognitive
Brent Meeker writes:
[Mark Peaty]
From the foregoing it can be seen that while there can be no objective
morality, nor any absolute morality, it is reasonable to expect
increasing agreement on the relative morality of actions within an
expanding context. Further, similar to the entropic
Peter Jones writes:
(1) Although moral assessment is inherently subjective--being relative
to internal values--all rational agents share some values in common due
to sharing a common evolutionary heritage or even more fundamentally,
being subject to the same physical laws of the
Jef Allbright writes:
peterdjones wrote:
Moral and natural laws.
An investigation of natural laws, and, in parallel, a defence
of ethical objectivism. The objectivity, to at least some
extent, of science will be assumed; the sceptic may differ,
but there is no convincing some
Sorry to be so slow at responding here but life [domestic], the universe
and everything else right now is competing savagely with this
interesting discussion. [But one must always think positive; 'Bah,
Humbug!' is not appropriate, even though the temptation is great some
times :-]
Stathis,
I
Stathis Papaioannou wrote:
Brent Meeker writes:
Well said! I agree almost completely - I'm a little
uncertain about (3) and (4) above and the meaning of scope.
Together with the qualifications of Peter Jones regarding
the lack of universal agreement on even the best supported
theories
Brent Meeker wrote:
Stathis Papaioannou wrote:
Jef Allbright writes:
snip
Further, from this theory of metaethics we can derive
a practical system of social decision-making based
on (1) increasing fine-grained knowledge of shared values,
and (2) application of increasingly effective
Jef Allbright wrote:
Immediately upon hitting Send on the previous post, I noticed that I had
failed to address a remaining point, below.
Brent Meeker wrote:
Stathis Papaioannou wrote:
Jef Allbright writes:
snip
Further, from this theory of metaethics we can derive a practical
I really should not, but here it goes:
Brent, you seem to value the conventional ways given by the model used to
formulate physical sciences and Euclidean geometry etc. over mental ways or
ideational arguments.
(There may be considerations to judge mixed marriages for good argumentation
without
John Mikes wrote:
I really should not, but here it goes:
Brent, you seem to value the conventional ways given by the model used
to formulate physical sciences and Euclidean geometry etc. over mental
ways or ideational arguments.
All models are mental and ideational. That's why they are
Peter Jones writes:
Perhaps none of the participants in this thread really disagree. Let me
see if I can summarise:
Individuals and societies have arrived at ethical beliefs for a reason,
whether that be evolution, what their parents taught them, or what it
says in a book believed
Peter Jones writes:
It is indisputable that morality varies in practice across communities.
But the contention of ethical objectivism is not that everyone actually
does hold to a single objective system of ethics; it is only that
ethical questions can be resolved objectively in principle. The
Stathis,
your 'augmented' ethical maxim is excellent, I could add some more 'except
foe'-s to it.
(lower class, caste, or wealth, - language, - gender, etc.)
The last paragraph, however, is prone to a more serious remark of mine:
topics like you sampled are culture-related prejudicial belief-items.
Brent Meeker writes:
Stathis Papaioannou wrote:
Brent Meeker writes:
Evolution explains why we have good and bad, but it doesn't explain
why good and bad feel as they do, or why we *should* care about good
and bad.
That's asking why we should care about what we should care