2012/2/23 Craig Weinberg <whatsons...@gmail.com>

> On Feb 23, 9:26 am, Quentin Anciaux <allco...@gmail.com> wrote:
> >
> > > I understand that is how you think of it, but I am pointing out your
> > > unconscious bias. You take consciousness for granted from the start.
> >
> > Because it is... I don't know/care about you, but I'm conscious... the
> > existence of consciousness, from my own POV, is not up for discussion.
>
> The whole thought experiment has to do specifically with testing the
> existence of consciousness and POV. If we were being honest about the
> scenario, we would rely only on known comp truths to arrive at the
> answer. It's cheating to smuggle in human introspection in a test of
> the nature of human introspection. Let us think only in terms of
> 'true, doctor'. If comp is valid, there should be no difference
> between 'true' and 'yes'.
>
> >
> > > It may seem innocent, but in this case what it does is preclude the
> > > subjective thesis from being considered fundamental. It's a straw man
> >
> > Read what a straw man is... a straw man takes the opponent's argument
> > and deforms it to mean something else that is easy to disprove.
> >
> > http://en.wikipedia.org/wiki/Straw_man
>
> "a superficially similar yet unequivalent proposition (the "straw
> man")"
>
> I think that Yes Doctor makes a straw man of the non-comp position. It
> argues that we have to choose whether or not we believe in comp, when
> the non-comp position might be that with comp, we cannot choose to
> believe in anything in the first place.
>
> > > of the possibility of unconsciousness.
> >
> > > > >> If you've said yes, then this
> > > > >> of course entails that you believe that 'free choice' and 'personal
> > > > >> value' (or the subjective experience of them) can be products of a
> > > > >> computer program, so there's no contradiction.
> >
> > > > > Right, so why ask the question? Why not just ask 'do you believe a
> > > > > computer program can be happy'?
> >
> > > > That a machine could think (the Strong AI thesis) does not entail
> > > > comp (that we are machines).
> >
> > > I understand that, but we are talking about comp. The thought
> > > experiment focuses on the brain replacement, but the argument is
> > > already lost in the initial conditions, which presuppose the ability
> > > to care, to tell the difference, and to choose freely.
> >
> > But I have that ability and don't care to discuss it further. I'm
> > conscious, I'm sorry you're not.
>
> But you aren't in the thought experiment.
>
> > > It's subtle,
> > > but so is the question of consciousness. Nothing whatsoever can be
> > > left unchallenged, including the capacity to leave something
> > > unchallenged.
> >
> > > > The fact that a computer program can be happy does not logically
> > > > entail that we are ourselves computer programs. Maybe angels and Gods
> > > > (non-machines) can be happy too. To sum up:
> >
> > > > COMP implies STRONG-AI
> >
> > > > but
> >
> > > > STRONG-AI does not imply COMP.
> >
> > > I understand, but Yes Doctor considers whether STRONG-AI is likely to
> > > be functionally identical and fully interchangeable with human
> > > consciousness. It may not say that we are machines, but it says that
> > > machines can be us
> >
> > It says machines could be conscious, as we are, without us being machines.
> >
> > ==> strong AI.
>
> That's what I said. That makes machines more flexible than organically
> conscious beings. They can be machines or like us, but we can't fully
> be machines, so we are less than machines.
>
> Either we are machines or we are not... If machines can be conscious and
we're not machines then we are *more* than machines... not less.


> >
> > Comp says that we are machines; this entails strong AI, because if we
> > are machines, and we are conscious, then of course machines can be
> > conscious... But if you knew machines could be conscious, that wouldn't
> > mean that humans are machines... we could be more than that.
>
> More than that in what way?


We must contain infinite components if we are not emulable machines. So
we are *more* than machines if machines can be conscious and we're not
machines.
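
To make the logical shape of this explicit, here is a minimal sketch in
propositional form (the formalization and the shorthand names are only
illustrative, not the thread's own notation):

\begin{align*}
  \mathrm{COMP} &: \text{we are emulable machines} \\
  \mathrm{SAI} &: \text{some machine can be conscious (strong AI)} \\
  \mathrm{COMP} \land \text{we are conscious} &\Rightarrow \mathrm{SAI} \\
  \mathrm{SAI} &\not\Rightarrow \mathrm{COMP} \\
  \mathrm{SAI} \land \lnot\mathrm{COMP} &\Rightarrow \text{we exceed machine emulation (e.g., infinite components)}
\end{align*}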


> Different maybe, but Strong AI by
> definition makes machines more than us, because we cannot compete with
> machines at being mechanical but they can compete as equals with us in
> every other way.
>
> >
> > > - which is really even stronger, since we can only
> > > be ourselves but machines apparently can be anything.
> >
> > No, read above.
>
> No, read above.
>
> > > > > When it is posed as a logical
> > > > > consequence instead of a decision, it implicitly privileges the
> > > > > passive voice. We are invited to believe that we have chosen to agree
> > > > > to comp because there is a logical argument for it rather than an
> > > > > arbitrary preference committed to in advance. It is persuasion by
> > > > > rhetoric, not by science.
> >
> > > > Nobody tries to advocate comp. We assume it. So if we get a
> > > > contradiction we can abandon it. But we find only weirdness, even
> > > > testable weirdness.
> >
> > > I understand the reason for that though. Comp itself is the rabbit
> > > hole of empiricism. Once you grant it its initial assumption, it can
> > > only support itself.
> >
> > Then you could never show a contradiction for any hypothesis that you
> > consider true... and that's simply false, hence you cannot be correct.
>
> You are doing exactly what I just said. You assume initially that all
> truths are bound by Aristotelian logic. You cannot contradict any
> hypothesis that says you aren't a zombie, hence you are a zombie. My
> whole point is that consciousness is not like any other subject. You
> cannot stand aloof from it and point to its parts in a PowerPoint.
> It is the elephant in every room.
>
> >
> > > Comp has no ability to contradict itself,
> >
> > You say so.
>
> Is it not true?
>
> Craig
>


-- 
All those moments will be lost in time, like tears in rain.
