On 20 May 2014 05:06, meekerdb <[email protected]> wrote:

>  On 5/19/2014 2:38 AM, LizR wrote:
>
>  His main interest is the mind-body problem; and my interest in that
>> problem is more from an engineering viewpoint.  What does it take to make a
>> conscious machine and what are the advantages or disadvantages of doing
>> so?  Bruno says a machine that can learn and do induction is conscious,
>> which might be testable - but I think it would fail.  I think that might be
>> necessary for consciousness, but for a machine to appear conscious it must
>> be intelligent and it must be able to act so as to convince us that it's
>> intelligent.
>>
>>  That is fair enough, but it (of course) assumes primary materialism -
>
>  No it doesn't.  Why do you think that?  I think "assuming primary
> materialism" is a largely imaginary fault Bruno accuses his critics of.
> Sure, physicists study physics and it's a reasonable working hypothesis; but
> nobody even tries to define "primary matter"; they just look to see whether
> another layer of physics will be a better one or not.
>

Ah, OK. I assumed that taking an engineering viewpoint was akin to assuming
something like Max Tegmark's conscious matter was involved, but if not, not.

otherwise a conscious machine, as commonly understood, might have other
attributes that can't be deduced from its structure, and hence the
engineering approach will fail. (Hence to be fully confident in this
approach you should perhaps show what is wrong with Bruno's starting
assumptions, or his deductions.)

>  I'm assuming it doesn't, and that I can make a conscious machine from any
> assemblage that can interact with the world in a certain way.
>
> And I have shown what I think is wrong with Bruno's deductions.  In his
> MGA he relies on the MG being isolated, not part of a world - or when
> challenged on the point he says it can be expanded to be as large as the
> whole universe, i.e. to be a world.  But I think it makes a difference.  I
> think the MG can only be conscious relative to a world in which it can
> learn and act.  Bruno (being a logician and mathematician) thinks that
> consciousness doesn't need any external referents.  It's not a conclusive
> refutation, but a point of evidence is that humans in sensory-deprivation
> tanks tend to have their thoughts enter a loop - which I would say shows
> that they need external referents.
>

Assuming I understand the MGA, all the inputs to the consciousness that
occurred on the "original run" (or whatever it should be called) are
present in the subsequent runs, the ones that are supposed to duplicate the
consciousness of the original run. Unless one assumes that under identical
conditions a conscious mind will operate differently, the experiences
should be identical. (Surely QM would indicate that it is at least possible
that a mind WILL operate differently on successive runs? But perhaps not a
robust digital mind with suitable error correction, etc.) I'm not sure what
you mean by "relative to a world..." etc., or rather, I can't see why it's
significant when the MGA involves replaying a series of conscious
experiences under identical conditions. Surely whatever influence was due
to the world it was relative to will also be recreated? If not, why not?
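The determinism point can be made concrete with a toy sketch (my own
illustration, not anything from Bruno's papers - the function, seed, and
inputs are all invented for the example): any deterministic program fed the
same recorded inputs produces an identical step-by-step trace, so whatever
the world contributed on the original run is contributed again on replay.

```python
import random

def run_mind(seed, inputs):
    """Toy deterministic 'mind': a seeded PRNG plus a state updated by
    external inputs. Stands in for any digital computation whose only
    apparent randomness comes from a fixed, replayable source."""
    rng = random.Random(seed)          # fixed seed: no hidden entropy
    state = 0
    trace = []
    for x in inputs:
        state = (state * 31 + x + rng.randrange(100)) % 10_000
        trace.append(state)            # the 'experience' at each step
    return trace

recorded_inputs = [3, 1, 4, 1, 5, 9]   # hypothetical inputs from the original run
original = run_mind(seed=42, inputs=recorded_inputs)
replay = run_mind(seed=42, inputs=recorded_inputs)
assert original == replay              # identical conditions -> identical trace
```

The QM caveat above is exactly what this sketch assumes away: if the
hardware injects genuine (unseeded) randomness, successive runs can
diverge unless error correction masks it.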


>   I tend to agree with JKC that intelligence is harder (and more
> important) than consciousness.
>
Well, a lot of animals can apparently do both - they can *certainly* do
intelligence, and at least appear to act as though conscious. So I don't
know which is harder.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
