Quentin Anciaux wrote:



2010/1/12 Brent Meeker <meeke...@dslextreme.com>

    Quentin Anciaux wrote:



        2010/1/12 Brent Meeker <meeke...@dslextreme.com>

           Quentin Anciaux wrote:



              2010/1/12 Brent Meeker <meeke...@dslextreme.com>


                  Stathis Papaioannou wrote:

                      2010/1/12 Brent Meeker <meeke...@dslextreme.com>:


                          I know.  I'm trying to see what exactly is being
                          assumed about the computation being "the same".
                          Is it the same Platonic algorithm?  Is it one
                          that has the same steps as described in FORTRAN,
                          but not those in LISP?  Is it just one that has
                          the same input-output?  I think these are
                          questions that have been bypassed in the "yes
                          doctor" scenario.  Saying "yes" to the doctor
                          seems unproblematic when you think of replacing
                          a few neurons with artificial ones - all you
                          care about is the input-output.  But then when
                          you jump to replacing a whole brain, maybe you
                          care about the FORTRAN/LISP differences.  Yet on
                          this list there seems to be an assumption that
                          you can just jump to the Platonic algorithm, or
                          even a Platonic computation that's independent
                          of the algorithm.  Bruno pushes all this aside
                          by referring to "at the appropriate level" and
                          by doing all possible algorithms.  But I'm more
                          interested in the question of what I would have
                          to do to make a conscious AI.  Also, it is the
                          assumption of a Platonic computation that allows
                          one to slice it discretely into OMs.
                      Start by replacing neurons with artificial neurons
                      which are driven by a computer program and whose
                      defining characteristic is that they copy the I/O
                      behaviour of biological neurons.  The program has to
                      model the internal workings of a neuron down to a
                      certain level.  It may be that the position and
                      configuration of every molecule needs to be
                      modelled, or it may be that shortcuts such as a
                      single parameter for the permeability of ion
                      channels in the cell membrane make no difference to
                      the final result.  In any case, there are many
                      possible programs even if the same physical model of
                      a neuron is used, and the same basic program can be
                      written in any language and implemented on any
                      computer: all that matters is that the artificial
                      neuron works properly.  (As an aside, we don't need
                      to worry about whether these artificial neurons are
                      zombies, since that would lead to absurd conclusions
                      about the nature of consciousness.)  From the single
                      neuron we can progress to replacing the whole brain,
                      the end result being a computer program interacting
                      with the outside world through sensors and
                      effectors.  The program can be implemented in any
                      way - any language, any hardware - and the
                      consciousness of the subject will remain the same as
                      long as the brain behaviour remains the same.


                  You're asserting that neuron I/O replication is the
                  "appropriate level" to make "brain behavior" the same;
                  and I tend to agree that would be sufficient (though
                  perhaps not necessary).  But that's preserving a
                  particular algorithm; one more specific than the
                  Platonic computation of its equivalence class.  I
                  suppose a Turing machine could perform the same
                  computation, but it would perform it very differently.
                  And I wonder how the Turing machine would manage
                  perception.  The organs of perception would have their
                  responses digitized into bit strings and these would be
                  written to the TM on different tapes?  I think this
                  illustrates my point that, while preservation of
                  consciousness under the digital neuron substitution
                  seems plausible, there is still another leap in
                  substituting an abstract computation for the digital
                  neurons.

                  Also, such an AI brain would not permit slicing the
                  computations into arbitrarily short time periods,
                  because there is communication time involved and
                  neurons run asynchronously.


              Yes you can: freeze the computation, dump memory... then
              load the memory back and unfreeze.  If the time inside the
              computation is an internal feature (a counter inside the
              program), the AI associated with the computation cannot
              notice anything.  If, on the other hand, the time inside
              the computation is an input parameter from something
              external, then it can notice... but I can always wrap the
              whole thing and feed that external time from another
              program or whatever.
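The freeze/dump/restore point above can be made concrete with a sketch (illustrative only; the toy `step` function and state layout are invented for the example, not taken from the discussion):

```python
import pickle

def step(state):
    # One step of a toy computation: an internal counter plus an accumulator.
    state["t"] += 1
    state["acc"] = (state["acc"] * 31 + state["t"]) % 10_000_019

state = {"t": 0, "acc": 1}
for _ in range(100):
    step(state)

# "Freeze": dump the entire internal state of the computation.
snapshot = pickle.dumps(state)

# Let the original run continue uninterrupted.
for _ in range(100):
    step(state)
final_direct = state["acc"]

# "Defreeze": restore the snapshot and run the same 100 steps again.
state2 = pickle.loads(snapshot)
for _ in range(100):
    step(state2)
final_restored = state2["acc"]

# Nothing inside the computation can distinguish the two histories:
# time here is purely an internal feature (the "t" counter).
assert final_direct == final_restored and state == state2
```

If `step` instead read a wall clock, the program could notice the gap; wrapping it in an outer program that supplies a simulated clock restores the invariance, as Quentin says.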

          That assumes that the AI brain is running synchronously,
          i.e. at a clock rate small compared to c/R, where R is the
          radius of the brain.  But I think the real brain runs
          asynchronously, so the AI brain must do the simulation at a
          lower level to take account of transmission times, etc., and
          run at a much higher clock rate than do neurons.  But is it
          then still "the same" computation?
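For scale, a back-of-envelope sketch of the c/R bound (the 10 cm radius is an assumed round number, not from the text):

```python
c = 3.0e8        # speed of light, m/s
R = 0.1          # assumed radius of the brain-sized computer: ~10 cm
limit = c / R    # signal-traversal bound on a synchronous clock rate
print(f"c/R = {limit:.0e} Hz")   # 3e+09 Hz, i.e. about 3 GHz
```

A clock rate small compared to ~3 GHz is easy for a machine imitating neurons, which operate far slower; the tension Brent raises is about asynchrony, not raw speed.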



              The fact that you can disrupt a computation and restart it
              with some different parameters doesn't mean you can't
              restart it with *exactly* the same parameters as when you
              froze it.


          That's arbitrarily excluding the physical steps in "freezing"
          and "starting" a computation, as though you can pick out the
          "real" computation as separate from the physical processes.
          Which is the same as assuming that consciousness attaches to
          the Platonic "real" computation and those extra physical steps
          somehow don't count as "computations".


        But it is the same...  When I write a program in, say, Java,
        it's not my "program" that is run on the machine; it is a
        translation into the target machine's language.  Yet it is the
        same program; that's what we use UTMs for.

        Also, the restarted computation has no knowledge at all of
        having been frozen and restarted in the first place.
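Quentin's translation point can be illustrated with a toy example (everything here is invented for illustration): the "same" computation written natively in the host language, and written as data for a tiny interpreted machine, agree on every input.

```python
# The computation written natively in the host language...
def f_native(x):
    return (x + 3) * 2

# ...and the "same" program as data for a tiny stack machine,
# run by an interpreter (a minimal stand-in for a UTM).
PROGRAM = [("push", 3), ("add", None), ("push", 2), ("mul", None)]

def run(program, x):
    stack = [x]
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# Two very different physical processes, one input-output function:
assert all(f_native(x) == run(PROGRAM, x) for x in range(-50, 50))
```

Whether input-output agreement is enough to make them "the same computation" in the sense that matters for consciousness is exactly what Brent questions below.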


    Well maybe I'm confused about this, but you're talking about a
    program that has the same input/output, and therefore all the
    processes after the input and before the program halts are
    irrelevant.  You're saying it's "the same" if the same inputs halt
    with the same outputs - which is the abstract Turing machine or
    Platonic meaning of "the computation".  I don't see that as being
    the same as attaching consciousness to brain processes, which don't
    halt.  How are you mapping the Turing computation to the brain
    process?  If you replace each neuron with a Turing machine that
    reproduces the same input/output, I can understand the mapping.
    But then you reproduce that collection of interacting Turing
    machines with a single, super-Turing machine which includes some
    simulation of the local environment too.  Because the neuron-level
    Turing machines ran asynchronously, the super-Turing machine has
    to include a lot of additional computation to keep track of the
    signal passing.  Yet we are to bet that it instantiates the same
    stream of consciousness because - exactly why?  ...because it will
    halt on the same outputs for the same inputs?
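The "additional computation to keep track of the signal passing" is essentially discrete-event simulation: one sequential machine replaying many asynchronous units by scheduling every message delivery explicitly. A sketch (the two-unit ping-pong, delays, and all names are invented for illustration, not a neuron model):

```python
import heapq
from itertools import count

def simulate(initial_events, handler, t_end=10.0):
    # One sequential machine emulating asynchronous units by explicitly
    # scheduling message deliveries -- the bookkeeping a single
    # super-machine needs to reproduce asynchronous signal passing.
    seq = count()  # tie-breaker so simultaneous events have a fixed order
    queue = [(t, next(seq), unit, msg) for t, unit, msg in initial_events]
    heapq.heapify(queue)
    trace = []
    while queue:
        t, _, unit, msg = heapq.heappop(queue)
        if t > t_end:
            break
        trace.append((t, unit, msg))
        for delay, target, out in handler(unit, msg):
            heapq.heappush(queue, (t + delay, next(seq), target, out))
    return trace

# Two toy "neurons" that ping each other with a fixed transmission delay.
def handler(unit, msg):
    if msg < 3:  # stop after a few hops
        other = "B" if unit == "A" else "A"
        return [(0.5, other, msg + 1)]  # 0.5 time units of signal delay
    return []

trace = simulate([(0.0, "A", 0)], handler)
# The serialized schedule is deterministic, hence repeatable and freezable,
# even though the units it models run "at the same time".
```

The open question in the thread is whether this scheduling overhead is an inert implementation detail or part of what the computation *is*.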

Because it is isomorphic to the "platonic" computation.

But there's my hang-up.  What does "isomorphic" mean in this context?  It certainly doesn't mean going through the same physical states, since a computer could do arithmetic in base 3 instead of base 2.  I can understand isomorphic to the program ADD: it means that, given two input numbers n and m, it halts with n+m in the output, i.e. it is functionally isomorphic.  But how does that meaning translate to an AI brain that presumably doesn't halt?
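One way to cash out "functionally isomorphic" for the halting case, and its natural extension to non-halting processes (compare output *streams* on every finite prefix), sketched with invented example functions:

```python
def add_builtin(n, m):
    return n + m

def add_successor(n, m):
    # The same function computed by repeated successor: a very different
    # procedure, but extensionally identical -- functionally isomorphic.
    while m > 0:
        n, m = n + 1, m - 1
    return n

assert all(add_builtin(n, m) == add_successor(n, m)
           for n in range(30) for m in range(30))

# For a non-halting process, the analogue compares input/output streams:
# two transducers agree if they agree on every finite prefix.
def running_sum_a(xs):
    total = 0
    for x in xs:
        total += x
        yield total

def running_sum_b(xs):
    # Same stream function, different internals (recomputes from a log).
    log = []
    for x in xs:
        log.append(x)
        yield sum(log)

xs = list(range(50))
assert list(running_sum_a(xs)) == list(running_sum_b(xs))
```

Stream equivalence answers "what replaces halting?" formally, though it leaves Brent's question open: whether consciousness attaches to the stream function or to one particular realization of it.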

If that were not the case... I don't see how I could write a program in any language in the first place, if a compiler couldn't make an equivalent program for the target machine.

Just incidentally (it probably has no bearing on your point), but compilers generally don't realize total functions - i.e. for some inputs the compiled programs crash.  And it seems likely to me that for some inputs brains crash too.

Also what you're saying is that somehow the machine which executes it adds something to the computation...

I'm saying the implementation adds something to the Platonic abstraction.

that somehow a super AI could know (without having the necessary "sensors" for it) information outside the computation.  That it could sense that it is run on a super Quantum Computer 360 and not on a super x86_2048, or on a virtual machine running on ... ?

Or written out on a piece of paper.  I'm not saying it could sense anything that specific, but that it might make a difference in the consciousness realized.  To say otherwise is simply to assume that consciousness = the Platonic abstract computation, since that is the only thing invariant among the different realizations.

This thread really originated with a discussion of observer-moments, though.  Of course any synchronous computer run can in principle be stopped and started and, modulo some initialization, go through the same computational steps.  But what bothered me was the idea that a state of such a computation, which realistically for a human being would have a cycle time of a nanosecond, would correspond to a "thought" or an OM.  ISTM that a thought would have a duration of many millions of cycles and hence might be said to overlap preceding and succeeding "thoughts", and this would provide an ordering that did not depend on some memory content inherent in the OM.
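The arithmetic behind "many millions of cycles", with thoughts as overlapping intervals of cycles (the nanosecond cycle time is from the text; the ~100 ms thought duration is an assumed figure):

```python
cycle_time = 1e-9        # assumed machine cycle: 1 ns (from the text)
thought_duration = 0.1   # assumed duration of one "thought": ~100 ms
cycles_per_thought = thought_duration / cycle_time
print(f"{cycles_per_thought:.0e} cycles per thought")  # 1e+08

# Overlapping "thoughts" as intervals of cycle indices: thoughts that
# share cycles are ordered by their overlap, with no appeal to memory
# content inside either one.
thought_a = (0, 100_000_000)
thought_b = (60_000_000, 160_000_000)
overlap = max(0, min(thought_a[1], thought_b[1])
                 - max(thought_a[0], thought_b[0]))
assert overlap > 0  # a overlaps b, so a precedes-and-overlaps b
```

So on these (assumed) numbers an OM-as-machine-state is some eight orders of magnitude finer-grained than a thought, which is the mismatch Brent is pointing at.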

Brent




Quentin

    Brent

    --
    You received this message because you are subscribed to the Google
    Groups "Everything List" group.
    To post to this group, send email to everything-list@googlegroups.com.
    To unsubscribe from this group, send email to
    everything-list+unsubscr...@googlegroups.com.
    For more options, visit this group at
    http://groups.google.com/group/everything-list?hl=en.






--
All those moments will be lost in time, like tears in rain.
