2012/2/23 Craig Weinberg <whatsons...@gmail.com>

> On Feb 23, 4:32 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> > On 23 Feb 2012, at 06:42, Craig Weinberg wrote:
> >
> > > On Feb 22, 6:10 pm, Pierz <pier...@gmail.com> wrote:
> > >> 'Yes doctor' is merely an establishment of the assumption of comp.
> > >> Saying yes means you are a computationalist. If you say no, then you are
> > >> not one, and one cannot proceed with the argument that follows -
> > >> though then the onus will be on you to explain *why* you don't
> > >> believe
> > >> a computer can substitute for a brain.
> >
> > > That's what is circular. The question cheats by using the notion of a
> > > bet to put the onus on us to take comp for granted in the first place
> > > when there is no reason to presume that bets can exist in a universe
> > > where comp is true. It's a loaded question, but in a sneaky way. It is
> > > to say 'if you don't think the computer is happy, that's fine, but you
> > > have to explain why'.
> >
> > It is circular only if we said that "saying yes" was an argument for
> > comp, which nobody claims.
>
> I'm not saying comp is claimed explicitly. My point is that the
> structure of the thought experiment implicitly assumes comp from the
> start. It seats you at the Blackjack table with money and then asks if
> you want to play.
>
> > I agree with Stathis and Pierz comment.
> >
> > You do seem to have some difficulties in understanding what an
> > assumption or a hypothesis is.
>
> From my perspective it seems that others have difficulties
> understanding when I am seeing through their assumptions.
>
> >
> > We defend comp against invalid refutations; this does not mean that
> > we conclude that comp is true. It is our working hypothesis.
>
> I understand that is how you think of it, but I am pointing out your
> unconscious bias. You take consciousness for granted from the start.
>

Because it is... I don't know or care about your case, but I'm conscious...
the existence of consciousness, from my own POV, is not up for debate.


> It may seem innocent, but in this case what it does is preclude the
> subjective thesis from being considered fundamental. It's a straw man
>

Read what a straw man is... a straw man takes the opponent's argument and
distorts it into something else that is easy to disprove.

http://en.wikipedia.org/wiki/Straw_man


> of the possibility of unconsciousness.
>
> >
> >
> >
> > >> If you've said yes, then this
> > >> of course entails that you believe that 'free choice' and 'personal
> > >> value' (or the subjective experience of them) can be products of a
> > >> computer program, so there's no contradiction.
> >
> > > Right, so why ask the question? Why not just ask 'do you believe a
> > > computer program can be happy'?
> >
> > That a machine could think (the Strong AI thesis) does not entail comp
> > (that we are machines).
>
> I understand that, but we are talking about comp. The thought
> experiment focuses on the brain replacement, but the argument is
> already lost in the initial conditions which presuppose the ability to
> care or tell the difference and have free will to choose.


But I have that ability and don't care to discuss it further. I'm
conscious, I'm sorry you're not.


> It's subtle,
> but so is the question of consciousness. Nothing whatsoever can be
> left unchallenged, including the capacity to leave something
> unchallenged.
>
> > The fact that a computer program can be happy does not logically
> > entail that we are ourselves computer programs. Maybe angels and Gods
> > (non-machines) can be happy too. To sum up:
> >
> > COMP implies STRONG-AI
> >
> > but
> >
> > STRONG-AI does not imply COMP.
>
> I understand, but Yes Doctor considers whether STRONG-AI is likely to
> be functionally identical and fully interchangeable with human
> consciousness. It may not say that we are machines, but it says that
> machines can be us


It says machines could be conscious as we are, without us being machines.

==> strong AI.

Comp says that we are machines; this entails strong AI, because if we are
machines, and we are conscious, then of course machines can be conscious...
But even if you knew machines could be conscious, that wouldn't mean humans
are machines... we could be more than that.


> - which is really even stronger, since we can only
> be ourselves but machines apparently can be anything.
>

No, read above.


>
> >
> > > When it is posed as a logical
> > > consequence instead of a decision, it implicitly privileges the
> > > passive voice. We are invited to believe that we have chosen to agree
> > > to comp because there is a logical argument for it rather than an
> > > arbitrary preference committed to in advance. It is persuasion by
> > > rhetoric, not by science.
> >
> > Nobody tries to advocate comp. We assume it. So if we get a
> > contradiction we can abandon it. But we find only weirdness, even
> > testable weirdness.
>
> I understand the reason for that though. Comp itself is the rabbit
> hole of empiricism. Once you allow it the initial assumption, it can
> only support itself.


Then you could never show a contradiction for any hypothesis that you
consider true... and that is simply false, hence you cannot be correct.


> Comp has no ability to contradict itself,


You say so.


> but the
> universe does.
>
> >
> >
> >
> > >> In fact the circularity
> > >> is in your reasoning. You are merely reasserting your assumption that
> > >> choice and personal value must be non-comp,
> >
> > > No, the scenario asserts that by relying on the device of choice and
> > > personal value as the engine of the thought experiment. My objection
> > > is not based on any prejudice against comp I may have, it is based on
> > > the prejudice of the way the question is posed.
> >
> > The question is used to give a quasi-operational definition of
> > computationalism, by its acceptance of a digital brain transplant.
> > This makes it possible to reason without solving the hard task of defining
> > consciousness or thinking. This belongs to the axiomatic method
> > usually favored by mathematicians.
>
> I know. What I'm saying is that the axiomatic method precludes any
> useful examination of consciousness axiomatically. It's a screwdriver
> instead of a hot meal.
>
> >
> >
> >
> > >> but that is exactly what
> > >> is at issue in the yes doctor question. That is precisely what we're
> > >> betting on.
> >
> > > If we are betting on anything then we are in a universe which has not
> > > been proved to be supported by comp alone.
> >
> > That is exactly what we try to make precise enough so that it can be
> > tested. Up to now, comp is 'saved' by the quantum weirdness it implies
> > (MW, indeterminacy, non locality, non-cloning), without mentioning the
> > candidate for consciousness, qualia, ... that is, the many things that
> > a machine can produce as 1p-true without any 3p-means to justify them.
>
> It's not precise though, it's emotional. Precise would be to say 'true'
> to the doctor.
>
> Craig
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To post to this group, send email to everything-list@googlegroups.com.
> To unsubscribe from this group, send email to
> everything-list+unsubscr...@googlegroups.com.
> For more options, visit this group at
> http://groups.google.com/group/everything-list?hl=en.
>
>


-- 
All those moments will be lost in time, like tears in rain.
