On Feb 22, 7:42 am, Craig Weinberg <whatsons...@gmail.com> wrote:
> Has someone already mentioned this?
> I woke up in the middle of the night with this, so it might not make sense.
> The idea of saying yes to the doctor presumes that we, in the thought
> experiment, bring to the thought experiment universe:
> 1. our sense of own significance (we have to be able to care about
> ourselves and our fate in the first place)
I can't see why you would think that is incompatible with CTM.
> 2. our perceptual capacity to jump to conclusions without logic (we
> have to be able to feel what it seems like rather than know what it
> simply is.)
Whereas that seems to be based on a mistake. It might be
that our conclusions ARE based on logic, just logic that
we are consciously unaware of. Alternatively, they might
just be illogical...even if we are computers. It is a subtle
fallacy to say that computers run on logic: they run on rules,
and there is no guarantee that the rules are rational. If the
rules are wrong, you have bugs, and humans are known to have
any number of cognitive bugs. The "jumping" thing could be
implemented by real or pseudo-randomness, too.
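The point that rule-following is not the same as rationality can be sketched in a few lines of code. This is a hypothetical toy, not any real cognitive model: the rule table, the observations, and the fallback behaviour are all invented for illustration. A deliberately buggy rule still gets applied mechanically, and an unmatched input triggers a pseudo-random "jump" to a conclusion with no logic behind it:

```python
import random

# Toy rule-following "reasoner". The rules are just a lookup table;
# nothing checks whether they are rational. One rule is deliberately
# buggy, illustrating that rule-following != logic.
RULES = {
    "wet pavement": "it rained",   # plausible rule
    "dark sky": "it is night",     # buggy rule: could just be a storm
}

CONCLUSIONS = ["it rained", "it is night", "no idea"]

def conclude(observation, rng=random):
    """Apply a matching rule; otherwise 'jump' pseudo-randomly."""
    if observation in RULES:
        return RULES[observation]
    # No rule matches: jump to a conclusion without any logic at all.
    return rng.choice(CONCLUSIONS)

print(conclude("wet pavement"))                  # rule-driven conclusion
print(conclude("loud noise", random.Random(0)))  # pseudo-random jump
```

The system never "reasons": it either fires whatever rule happens to be installed, correct or not, or falls back on pseudo-randomness, yet from the outside both look like a confident verdict.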
> Because of 1, it is assumed that the thought experiment universe
> includes the subjective experience of personal value - that the
> patient has a stake, or 'money to bet'.
What's the problem? The experience (the quale) or the value?
Do you know the value to be real? Do you think a computer
could not be deluded about value?
> Because of 2, it is assumed
> that libertarian free will exists in the scenario
I don't see that free will of a specifically libertarian sort is
posited in the scenario. It just assumes you can make a choice.
You received this message because you are subscribed to the Google Groups
"Everything List" group.