On 10/30/2015 11:36 AM, Quentin Anciaux wrote:


2015-10-30 19:20 GMT+01:00 Brent Meeker <meeke...@verizon.net>:



    On 10/30/2015 9:30 AM, Quentin Anciaux wrote:


    2015-10-30 17:13 GMT+01:00 Quentin Anciaux <allco...@gmail.com>:



        2015-10-30 17:01 GMT+01:00 John Clark <johnkcl...@gmail.com>:

            On Thu, Oct 29, 2015 at 11:55 AM, Quentin Anciaux
            <allco...@gmail.com> wrote:

                    >> And I repeat, If the microprocessor made of
                    matter that obeys the laws of physics can't sense
                    **any** information in the AI program then the AI
                    program is not running, it's not intelligent, it's
                    just an inert list of instructions DOING nothing.


                > Re-read what is written above...

            OK:

            "And I repeat, If the microprocessor made of matter that
            obeys the laws of physics can't sense **any** information
            in the AI program then the AI program is not running,
            it's not intelligent, it's just an inert list of
            instructions DOING nothing."

                        > How can that affect it? How can it know
                        that external world?


                    >> It could have memories of that external world
                    before the sensors were detached; if they were
                    never attached, then it could have no knowledge of
                    that world.


                > That's the point... no information about that
                "external" world is fed to it.


            Not true. The AI's physical memory banks are in that
            external world, and the information in them is sure as
            hell fed into it, as are the results of calculations made
            by the physical microprocessors that are also in that
            external physical world.


        That's *not* information *about the physical world*... you
        are confusing levels, as usual.


    I'll try again... fooling myself again into believing you're
    honest here... with a simple real-life example.

    So let's pretend our "AI" is in fact a Nintendo Entertainment
    System game... that game can be run on a physical NES, or in an
    emulator running on a physical computer... or in an emulator
    running on an emulator running on a physical computer...

    From the POV of the game, it is unable to distinguish those cases,
    because all the information it gets from the substrate (the
    machine running it) is the same; the NES game cannot know it is
    not really running on a physical NES...

    Same thing with the AI: from the information it has access to, it
    cannot tell the ontological status of what that information
    represents... that information could come from a really real
    ontological physical world... or from an emulation of the really
    real ontological physical world, or from an emulation of an
    emulation of the really real ontological physical world... it has
    no way to decide the ontological status of that "external world".

    I think you are confusing a virtual world with an AI.


No, I'm not.

    An AI must be embedded in a world, a context, which it interacts
    with.


Where does this "must" come from? An AI has to have a context, sure... but a world "as in our everyday world"... why would it need one?


    It can only be intelligent in the sense of interacting intelligently.


Either it is conscious or it is not... if it is conscious, it knows it; it doesn't care whether you see it behaving "intelligently" or not.

    A virtual world can't be intelligent.


What are you talking about? I'm talking about the ability of an AI to assign an ontological status to the information it has about something external to it... it simply has no way of doing so, as my example with the game program illustrates: the program cannot know whether it is run directly on the "metal" or on an emulation of it... which means that, from its POV, the direct "metal" and a virtual "metal" have the same "realness"... which makes the "metal", from its POV, a hypothetical (it may or may not be ontological; the AI simply has no way to tell). A toy sketch of what I mean follows below.
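
To make that concrete, here is a minimal Python sketch of the point (my own toy illustration, not real NES internals; all names in it are hypothetical): a guest program that can only act through a fixed substrate interface gets identical observations whether that interface is backed by the "metal" directly or by any number of emulation layers.

class Substrate:
    """The only interface the guest program ever sees."""
    def read(self, address: int) -> int:
        raise NotImplementedError

class Metal(Substrate):
    """Stands in for the physical machine."""
    def __init__(self):
        self.memory = {0: 42}

    def read(self, address: int) -> int:
        return self.memory.get(address, 0)

class Emulator(Substrate):
    """Runs on top of another substrate, faithfully forwarding reads."""
    def __init__(self, host: Substrate):
        self.host = host

    def read(self, address: int) -> int:
        # A faithful emulator returns exactly what its host would.
        return self.host.read(address)

def guest(substrate: Substrate) -> int:
    # Everything the guest can ever learn arrives through read();
    # nothing in that interface reveals how many layers lie below.
    return substrate.read(0)

direct   = Metal()
one_deep = Emulator(Metal())
two_deep = Emulator(Emulator(Metal()))

# Identical observations at every depth: the guest has no way to
# decide which substrate it is "really" running on.
assert guest(direct) == guest(one_deep) == guest(two_deep)

The guest's POV is exhausted by the values read() returns, so the three runs are indistinguishable from inside; that is all "confusing levels" means here.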

OK. I thought you were saying the video game was intelligent. I agree that the AI within a virtual reality must form theories about what it interacts with and can't know that its reality is virtual. But I doubt that there is anything that can be inferred from "we might be in the Matrix". It's just another way of saying we can't know Kant's Ding an sich.

Brent
