On Apr 27, 9:14 pm, meekerdb <meeke...@verizon.net> wrote:
> On 4/27/2012 11:57 AM, 1Z wrote:
> > On Apr 27, 7:13 pm, meekerdb<meeke...@verizon.net>  wrote:
> >> On 4/27/2012 11:07 AM, 1Z wrote:
>
> >>> On Apr 27, 6:50 pm, meekerdb<meeke...@verizon.net>    wrote:
> >>>> On 4/27/2012 10:42 AM, 1Z wrote:
> >>>>> On Apr 27, 6:13 pm, meekerdb<meeke...@verizon.net>      wrote:
> >>>>>> On 4/27/2012 7:29 AM, 1Z wrote:
> >>>>>>> On Apr 25, 10:25 pm, meekerdb<meeke...@verizon.net>        wrote:
> >>>>>>>> On 4/25/2012 11:45 AM, Evgenii Rudnyi wrote:
> >>>>>>>>> On 24.04.2012 22:22 meekerdb said the following:
> >>>>>>>>> ...
> >>>>>>>>>> As I've posted before, when we know how to look at a brain and
> >>>>>>>>>> infer what it's thinking, and we know how to build a brain that
> >>>>>>>>>> behaves as we want, in other words when we can do consciousness
> >>>>>>>>>> engineering, the "hard problem" will be bypassed as a metaphysical
> >>>>>>>>>> non-question, like "Where did the elan vital go?"
> >>>>>>>>>> Brent
> >>>>>>>>> This is a position expressed by Jeffrey Gray as follows (he does
> >>>>>>>>> not share it): What looks like a Hard Problem will cease to be one
> >>>>>>>>> when we have understood the errors in our ways of speaking about
> >>>>>>>>> the issues involved. If the route were successful, we would rejoin
> >>>>>>>>> the normal stance: once our heads have been straightened out,
> >>>>>>>>> science could again just get on with the job of filling in the
> >>>>>>>>> details of empirical knowledge.
> >>>>>>>>> Evgenii
> >>>>>>>>> http://blog.rudnyi.ru/tag/jeffrey-a-gray
> >>>>>>>> I think the main mistake in formulating the 'hard problem' is 
> >>>>>>>> thinking that we can't
> >>>>>>>> explain consciousness with mathematical theories like mechanics, 
> >>>>>>>> astrophysics, quantum
> >>>>>>>> mechanics.  The mistake isn't that we can explain consciousness, 
> >>>>>>>> it's supposing that we
> >>>>>>>> can explain physics.  We don't explain mechanics or gravity or 
> >>>>>>>> electrodynamics - we have
> >>>>>>>> models for them that work, they are predictive and can be used to 
> >>>>>>>> control and design
> >>>>>>>> things.  Bruno points out that *primitive matter* doesn't add 
> >>>>>>>> anything to physics.  When
> >>>>>>>> asked what explained the gravitational force, Newton said, "Hypotheses 
> >>>>>>>> non fingo".  Someday,
> >>>>>>>> consciousness will be looked at similarly.
> >>>>>>>> Brent
> >>>>>>> Is that any different to regarding consc. as fundamental, as dualists
> >>>>>>> do?
> >>>>>> I think it is.  We don't regard elan vital as fundamental, we just 
> >>>>>> gave up looking for
> >>>>>> it.  We decided life is a process, not a substance.
> >>>>>> Brent
> >>>>> So if I decide consc. is a process not a substance, will my pains stop
> >>>>> hurting and my food stop tasting and my vision stop being colourful?
> >>>> Not unless that stops the process.
> >>>> Brent
> >>> And will ceasing to look for any kind of consc. beyond the process mean
> >>> I can explain why pains hurt, etc.? I seem to recall that we stopped
> >>> looking for Elan Vital after we came up with better explanations, not
> >>> vice versa.
> >> I said that we'd stop asking the 'hard question' when we had consciousness 
> >> engineering.
>
> > There's a HQ *about* engineering. We don't know how to get started on
> > engineering qualia, although we can get started on memory, cognition,
> > pattern recognition, language, etc.
>
> > We can engineer conscious-style behaviour, but there is still the doubt
> > that an AI has real phenomenality: no behaviour can prove it does.
>
> >> Being able to manipulate and synthesize something is a 'better
> >> explanation' in a different sense of 'explanation'.
> > Manipulate and synthesise what? How do you tell that your manipulations
> > are having the desired effect on phenomenality? Don't you need
> > qualiometers in a properly equipped Consciousness Engineering lab?
>
> That's why I said, except for people who believe in philosophical zombies.
>
> Brent

A qualia-less AI isn't a p-zombie. A p-zombie is physically identical to
a human. An AI will be made out of silicon or something, which could
naturalistically explain its lack of qualia. That is a different matter.
With the possible exception of Craig, we all think our toasters are
zombies.
