On 9/24/2013 9:21 PM, LizR wrote:
On 25 September 2013 16:03, meekerdb <meeke...@verizon.net> wrote:


    On 9/24/2013 8:44 PM, LizR wrote:
    On 25 September 2013 15:41, meekerdb <meeke...@verizon.net> wrote:

        On 9/24/2013 6:32 PM, LizR wrote:
        On 25 September 2013 13:38, Russell Standish <li...@hpcoders.com.au> wrote:

            This is also true of materialism. Whether you think this is a
            problem or not depends on whether you think the "hard problem"
            is a problem or not.


        Indeed. I was about to say something similar (to the effect that
        it's hard to imagine how "mere atoms" can have sights, sounds,
        smells etc either).

        As a rule, if you want to explain X you need to start from
        something without X.

    Absolutely.

    If you know of such an explanation, or even the outlines of one, I'd
    be interested to hear it. As Russell said, this is the so-called
    "hard problem" so any light (or sound, touch etc) on it would be
    welcome.

    My 'solution' to the hard problem is to prognosticate that when we
    have built intelligent robots we will have learned the significance
    of having an internal narrative memory.  We will have learned what
    emotions and feelings are at the level of sensors and computation and
    action. And when we have done that 'the hard problem' will be seen to
    have been an idle question - like "What is life?" proved to be in the
    20th century.

Yes, that certainly seems like a possible solution (or maybe "dissolution") to the problem, although I wouldn't say that "what is life?" proved to be an idle question. Some of the proposed /solutions/ turned out to be "idle" (such as ones involving an "élan vital", which of course didn't answer the question at all). Life is intimately bound up with at least two major fields (evolution and computation) and we've learned a lot about a lot of things by studying it (and we aren't finished yet, I would say).

Right. "Idle" isn't exactly the right word. I think that, like "life", consciousness will be seen to be several different things: different kinds of consciousness will be distinguished, and we'll design robots to have more or less of this kind or that. And we'll design drugs and brain implants to change and augment human brains based on our understanding of these different things that we now tend to lump under "consciousness".


So that answers Russell's question, at least from your point of view as (I assume) a "primary materialist", and (interestingly, IMHO) equally answers the criticism (if it was intended as such) that Craig leveled against comp.

As I understand Bruno's theory it also 'dissolves' the hard problem by reducing it to a property of certain logics, namely that a computational system is (or can be) conscious if it is Löbian, i.e. if it can prove Gödel's incompleteness theorem about itself. This seems too narrowly technical to me (does it actually have to have done the proof, or just be potentially able to do it?), but I can see that it would be a facet of intelligence that would contribute a certain aspect of consciousness.
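
(To make that concrete - the following rendering in provability logic is my own gloss, not a quote from Bruno. Write \Box p for "the machine proves p". A machine is Löbian when its provability satisfies Löb's theorem,

    \Box(\Box p \to p) \to \Box p

and taking p = \bot gives Gödel's second incompleteness theorem, \Box(\neg\Box\bot) \to \Box\bot: a machine that proves its own consistency is inconsistent.)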

Of course Bruno proposes that the logic (as in arithmetic, for example) exists in Platonia and that the physical world is just an aspect of relations in arithmetic. I'm not sure about that, but I suspect that, if fully worked out, the derivative physical world will prove necessary for the logic to produce consciousness - so physics is maybe not so derivative.

Brent

