Let us suppose you're right and... but hold on! We can't do that. That would be "circular". That would be sneaking in the assumption that you're right from the outset. That would be "shifty", "fishy", etc., etc. You just don't seem to grasp the rudiments of philosophical reasoning. 'Yes doctor' is not an underhand move. It asks you up front to assume that comp is true in order to then examine the implications, whilst acknowledging (by calling it a 'bet') that this is just a hypothesis, an unprovable leap of faith. You complain that using the term 'bet' assumes non-comp (I suppose because computers can't bet, or care about their bets), but that is just daft. You might as well argue that the UDA is invalid because it is couched in natural language, which no computer can (or, according to you, could ever) understand. If we accepted such arguments, we'd be incapable of debating comp at all.
Saying 'no' to the doctor is anyone's right - nobody forces you to accept that first step or tries to pull the wool over your eyes if you choose to say 'yes'. Having said no, you can then either say "I don't believe in comp because (I just don't like it, it doesn't feel right, it's against my religion, etc.)" or you can present a rational argument against it. That is to say, if asked to justify why you say no, you can either provide no reason and simply say that you choose to bet against it - which is OK but uninteresting - or you can present some reasoning which attempts to refute comp.

You've made many such attempts, though to be honest all I've ever really been able to glean from your arguments is a sort of impressionistic revulsion at the idea of humans being computers, and one which seems founded on a fundamental misunderstanding of what a computer is. You repeatedly mistake the mathematical construct for the concrete, known object you use to type up your posts. This has been pointed out many times, but you still make arguments like the one about closed eyes being unlike a switched-off screen, which verged on the ludicrous.

I should say I'm no comp proponent, as my previous posts should attest. I'm agnostic on the subject, but at least I understand it. Your posts can make exasperating reading.

On Feb 24, 8:14 am, Craig Weinberg <[email protected]> wrote:
> On Feb 23, 3:25 pm, 1Z <[email protected]> wrote:
> > On Feb 22, 7:42 am, Craig Weinberg <[email protected]> wrote:
> > > Has someone already mentioned this?
> > >
> > > I woke up in the middle of the night with this, so it might not make
> > > sense...or...
> > >
> > > The idea of saying yes to the doctor presumes that we, in the thought
> > > experiment, bring to the thought experiment universe:
> > >
> > > 1. our sense of own significance (we have to be able to care about
> > > ourselves and our fate in the first place)
> >
> > I can't see why you would think that is incompatible with CTM
>
> It is not posed as a question of 'Do you believe that CTM includes X',
> but rather, 'using X, do you believe that there is any reason to doubt
> that Y(X) is X.'
>
> > > 2. our perceptual capacity to jump to conclusions without logic (we
> > > have to be able feel what it seems like rather than know what it
> > > simply is.)
> >
> > Whereas that seems to be based on a mistake. It might be
> > that our conclusions ARE based on logic, just logic that
> > we are consciously unaware of.
>
> That's a good point but it could just as easily be based on
> subconscious idiopathic preferences. The patterns of human beings in
> guessing and betting vary from person to person whereas one of the
> hallmarks of computation is to get the same results. By default,
> everything that a computer does is mechanistic. We have to go out of
> our way to generate sophisticated algorithms to emulate naturalistic
> human patterns. Human development proves just the contrary. We start
> out wild and willful and become more mechanistic through
> domestication.
>
> > Altenatively, they might
> > just be illogical...even if we are computers. It is a subtle
> > fallacy to say that computers run on logic: they run on rules.
>
> Yes! This is why they have a trivial intelligence and no true
> understanding. Rule followers are dumb. Logic is a form of
> intelligence which we use to write these rules that write more rules.
> The more rules you have, the better the machine, but no amount of
> rules make the machine more (or less) logical. Humans vary widely in
> their preference for logic, emotion, pragmatism, leadership, etc.
> Computers don't vary at all in their approach. It is all the same rule
> follower only with different rules.
>
> > They have no guarantee to be rational. If the rules are
> > wrong, you have bugs. Humans are known to have
> > any number of cognitive bugs. The "jumping" thing
> > could be implemented by real or pseudo randomness, too.
>
> > > Because of 1, it is assumed that the thought experiment universe
> > > includes the subjective experience of personal value - that the
> > > patient has a stake, or 'money to bet'.
> >
> > What's the problem ? the experience (quale) or the value?
>
> The significance of the quale.
>
> > Do you know the value to be real?
>
> I know it to be subjective.
>
> > Do you think a computer
> > could not be deluded about value?
>
> I think a computer can't be anything but turned off and on.
>
> > > Because of 2, it is assumed
> > > that libertarian free will exists in the scenario
> >
> > I don't see that FW of a specifically libertarian aort is posited
> > in the scenario. It just assumes you can make a choice in
> > some sense.
>
> It assumes that choice is up to you and not determined by
> computations.
>
> Craig

