On 10/9/2018 9:45 PM, Philip Thrift wrote:


On Tuesday, October 9, 2018 at 8:16:59 PM UTC-5, John Clark wrote:

    On Tue, Oct 9, 2018 at 7:54 PM Pierz <pie...@gmail.com> wrote:

        > I refuse to accept that "axiom", and I also do not feel
        compelled to embrace solipsism.


    You are able to function in the world, so you must have some method
    of deciding when something is conscious and when it is not; if it's
    not intelligent behavior, what is it?

        > I think it is entirely possible - and indeed sensible - to
        believe that some entities that behave "intelligently", like
        the chess app on my iPhone, are insentient.


    I don't know what the quotation marks in the above mean, but if
    something acts intelligently then it is sensible to say it has
    some degree of sentience.

        > Whereas some entities that behave unintelligently (like
        Donald Trump (sorry, I really shouldn't)) are sentient.


    I admit it's an imperfect tool, but it's all we've got and all we'll
    ever have, so we just have to make do with what we have. A
    failure to act intelligently does not necessarily mean something is
    non-sentient; perhaps both a rock and Donald Trump are really
    brilliant but are just pretending to be stupid. If so then both
    are conscious and both are very good actors.

        > The absence of an objective test for third-party sentience
        does not force one into solipsism. It may point to 1) a
        problem with your ontology (qualia aren't "real")


    That means nothing. I detect qualia from direct experience, and
    that outranks everything; it even outranks the scientific method.
    So if qualia aren't real then nothing is real, which would be
    equivalent to everything being real, which is equivalent to "real"
    having no meaning, because meaning needs contrast.

        > or 2) a deficient state of knowledge with respect to the
        (pre)conditions of consciousness.


    I don't know what that means either.

        > Seeing as you have no theory of consciousness at all,


    Yes I do. My theory is that consciousness is the way data feels
    when it is being processed and that is a brute fact, meaning it
    terminates a chain of "why is that?" questions.

        > statements like "you have no alternative but to..." don't
        have much force. There are plenty of alternatives,


    Name one! I ask once more: in your everyday life, when you're not
    being philosophical, you must have some method of determining when
    something is conscious; if it's not intelligent behavior, what on
    earth is it?

        > a refusal to engage it as a problem, in spite of the
        increasingly widespread acceptance among scientists that it
        /is/ a real problem, and possibly the biggest problem of all
        in our current state of knowledge


    I think intelligence implies consciousness, but consciousness does
    not necessarily imply intelligence, so the problem I want answered
    is about how intelligence works, not consciousness.

    John K Clark



One could look at it that way. In terms of biological evolution, the beings that turned out to be intelligent (us!) are also conscious beings. When we started making computers and programming languages and such (inventing a field called Artificial Intelligence), it got a little confusing. Is IBM Watson [ https://en.wikipedia.org/wiki/Watson_(computer) ] "intelligent"? Some might say yes, others no. There are some AI scientists (or SI - Synthetic Intelligence, to contrast with AI [ https://en.wikipedia.org/wiki/Synthetic_intelligence ]) who say that to make truly intelligent artifacts, they must be conscious.

So the question remains, no matter how one parses intelligence and consciousness: how do you make a conscious robot?

I'm obviously not sure, but here's an idea of how consciousness might occur, based on Jeff Hawkins's ideas in his book “On Intelligence”. I refer to the intuition pump of an AI Mars Rover:

[Diagram: Mars Rover control loop, with a rainbow pyramid of Maslow-style values used in evaluating plans.]

The sensors of the MR would define the current status, both internal and external. This goes into a predictor that estimates how the current status will change if there's no change in the current plan. The prediction from the previous cycle is compared to the new current status. If there's no significant difference, it's “Ho hum” and action proceeds as planned. But if the comparison shows a deviation from expectation, that is something to take note of. It's noted in long-term memory, a searchable database that can be learned from, and it initiates a need to update the plan. So what rises to the level of consciousness is something that is surprising and may need a change of plan. And if you ask the MR what happened, it will refer to its long-term memory to give an account based on what it saw as significant.
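
For concreteness, here's a minimal sketch of that predict-compare-replan loop in Python. Everything in it is an illustrative assumption (a single scalar status, a trivial "nothing changes" predictor, the threshold value); it's just the shape of the loop described above, not Hawkins's actual architecture:

import random

SURPRISE_THRESHOLD = 0.5    # assumed tolerance for prediction error


class Rover:
    def __init__(self):
        self.memory = []        # long-term, searchable record of surprises
        self.plan = "drive ahead"
        self.predicted = None   # prediction carried over from the last cycle

    def sense(self):
        # Stand-in for the internal + external sensors: a noisy reading,
        # with an occasional large disturbance (a rock, a wheel slip, ...).
        reading = random.gauss(0.0, 0.1)
        if random.random() < 0.05:
            reading += 5.0      # the rare surprising event
        return reading

    def predict(self, status):
        # Estimate the next status assuming the plan doesn't change.
        # Here: a trivial "nothing changes" model.
        return status

    def step(self):
        status = self.sense()
        if self.predicted is not None:
            error = abs(status - self.predicted)
            if error > SURPRISE_THRESHOLD:
                # Deviation from expectation: this is what "rises to the
                # level of consciousness". Note it and update the plan.
                self.memory.append((round(status, 2), self.plan, round(error, 2)))
                self.plan = "replan around obstacle"
            # else: "Ho hum", action proceeds as planned.
        self.predicted = self.predict(status)

    def recount(self):
        # Asked "what happened?", the rover answers from long-term memory,
        # i.e. from the events it flagged as significant.
        return self.memory


rover = Rover()
for _ in range(200):
    rover.step()
print(rover.recount())

Over a few hundred cycles the printed memory holds only the rare large disturbances - exactly the "surprising" events that would prompt a new plan and that later make up the rover's account of what happened.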

Brent
P.S. That rainbow pyramid thing is a hierarchy of values, per Maslow, that is used in evaluating a plan.
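
A hedged guess at what that evaluation might look like (the value names and weights below are invented, ordered so that lower, survival-level values dominate):

def evaluate(plan_effects):
    # plan_effects maps each value to how well the plan serves it (0..1).
    # Lower levels of the hierarchy carry heavier weights, so a plan that
    # risks power (survival) loses to one that merely delays the science.
    hierarchy = [("power", 100.0), ("mobility", 10.0), ("science", 1.0)]
    return sum(w * plan_effects.get(name, 0.0) for name, w in hierarchy)

safe  = {"power": 1.0, "mobility": 0.8, "science": 0.2}
risky = {"power": 0.3, "mobility": 1.0, "science": 1.0}
print(evaluate(safe) > evaluate(risky))   # True: survival dominates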

This comports with the idea that conscious thought is a kind of post-hoc commentary on what you're thinking, an explanation you can tell yourself and other people. Remember that one of the criticisms of neural nets is that they don't explain themselves. That means if you want an explanation from an ANN, it has to be a separate function, which can itself be implemented by another NN. But then you have no guarantee that the explanation is the real one.
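
A toy illustration of that last point, with plain Python functions standing in for trained networks (both rules here are made up): since the explainer is a separate function from the decider, nothing forces its story to match the decider's actual computation.

def decide(features):
    # The real (opaque) decision rule: a weighted sum nobody inspects.
    weights = [0.9, -0.2, 0.05]
    return sum(w * f for w, f in zip(weights, features)) > 0

def explain(features, decision):
    # A separately bolted-on "explainer": it just blames the feature with
    # the largest magnitude, whether or not that feature drove the decision.
    i = max(range(len(features)), key=lambda j: abs(features[j]))
    return f"decided {decision} mainly because of feature {i} ({features[i]})"

features = [2.0, -3.0, 0.2]
d = decide(features)          # True: feature 0's contribution (1.8) dominates
print(explain(features, d))   # ...but the explainer blames feature 1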

Brent
