Engineering. Yes.

But we do want to build artificial birds, for example to deploy
against leaf-eating insects and to rebalance other effects of
anthropization.

Other industries studied birds too. The aircraft industry studied
(underwater) boats more than birds. If it studied birds more, planes
would be less shaky in strong winds.


On 01.07.2019 00:08, Matt Mahoney wrote:
Colin, you seem confused. AGI is not science. It is engineering.
Science is about finding theories that make useful predictions and
testing them with experiments. Engineering is about designing and
building solutions to problems.

The goal of AGI is not to understand brains or build an artificial
brain. It is to figure out how to make machines that can do all the
things that humans can do. The funding for AGI comes from the desire
to automate human labor.

We study neurobiology and cognitive science in the same way that the
aircraft industry studied birds. It's not to build artificial birds.
It's because birds solved the problem we wanted to solve. In the same
way, we discovered neural networks by studying brains. But now, most
of the work in AGI is coming up with better designs.

So stop telling us we are doing science wrong.

On Sat, Jun 29, 2019, 11:22 PM Colin Hales <[email protected]> wrote:


    You will find below a proof that the science of AGI is deformed
    and that the real origins of AGI are in the creation and
    comparison of both the left and the right side of (e)
    neuroscience, done primarily and initially to prove the nature of
    the similarities and differences. (e) left chips have brain
    physics (causality) on them. (e) right chips have the causality of
    a computer or that of a hardware model of the brain physics.
    Confusing these two with each other and the brain itself is the
    actual problem.

    In the next installment I more fully detail the nature of the (e)
    neuroscience activity to include the science of consciousness. I
    revisit the ‘silicon replacement thought experiment’. By re-doing
    the experiment you can, by making the right choices, end up
    either in (e) left or (e) right. It’s a lead-up to the full
    chip design that includes its role in a science of consciousness,
    which is also an (e) activity.

    Please comment.

    Thanks.

    Colin
    Let's add two new rows to the depiction of science in the previous
    post: (e) Neuroscience and (f) Computer Science.

    [image.png: the science diagram extended with rows (e) Neuroscience
    and (f) Computer Science]


      The underlying structure of the science of Artificial General
      Intelligence

    The question is “Where in the diagram is the science of AGI
    depicted, including its empirical and its theoretical halves?”.
    Here we have the original science diagram extended to two more
    situations: Neuroscience and Computer science. These are two
    separate disciplines. Neuroscience is a little over 100 years
    old. Computer science is about half its age at this time. They are
    separate disciplines and inhabited by different practitioners.


      (f) Computer Science

    Computer science is responsible for the ‘theory’ used to create
    the first electronic computers that made it into commercial
    production. This is the von Neumann architecture, which still
    dominates all the world’s computers. Call this scientific ‘theory’
    t_fvn. It was created by human imagination using mathematics,
    largely building on Alan Turing’s work (it realises a universal
    ‘Turing Machine’), and is an example of theoretical science created
    for something that
    does not exist in nature. This is why (f) middle is missing and
    does not exist. Computers conceptually originate in an act of
    theoretical science.

    Before electronic computers were built and proved, computer
    science had no formal empirical component. Empirical computer
    science occurs when a new computer is designed and built. In the
    case of the von Neumann architecture, the empirical science
    literally, physically built the rules t_fvn into physics
    (causality). That causality literally is the computer. That is,
    the causality of the computer follows the rules in t_fvn. Turing
    machines have a program (we call it software). The program is not
    inside the rules, t_fvn, used to build the computer. That is the
    reason for the power of this kind of computer: the causality of the
    computer physics (e.g. transistors organised as per rules t_fvn)
    is completely disconnected from and degenerately related to the
    program it runs. The ‘causality’ of a program is different. It is,
    instead, implicit in and emergent from the syntax and grammar of
    the essentially infinite number of sequences of instructions the
    computer runs. For example, the program may be an exploration of
    the physics of oxidation in the science of (a) right. The
    formalism (symbol system) is ultimately that of arithmetic that
    follows the rules of a calculus at the descriptive resolution of
    what must be called a ‘combustion simulator’. We can see how the
    real causality of the computer substrate is being used to explore
    the ‘virtual causality’ of the program/model it is running. The
    causality of oxidation is entirely missing.
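    As a toy sketch of the above (my own illustration; the function
    name, rate constant and first-order rate law are illustrative
    assumptions, not any real simulator), a ‘combustion simulator’
    might look like this. The substrate only performs arithmetic; the
    causality of oxidation exists solely as an update rule inside the
    program:

```python
import math

# Toy 'combustion simulator': first-order oxidation kinetics
#   d[fuel]/dt = -k * [fuel]
# The computer's transistors only perform arithmetic; the 'virtual
# causality' of oxidation lives entirely in the update rule below.
def simulate_oxidation(fuel0=1.0, k=0.5, dt=0.01, t_end=10.0):
    """Euler integration of first-order fuel consumption."""
    fuel = fuel0
    for _ in range(int(round(t_end / dt))):
        fuel += -k * fuel * dt  # the rule IS the (virtual) oxidation
    return fuel

# The result tracks the analytic solution fuel0 * exp(-k * t_end),
# yet no oxidation physics is present anywhere in the machine.
print(simulate_oxidation(), math.exp(-0.5 * 10.0))
```

    The point is purely structural: the same substrate causality
    (arithmetic) can host any such virtual causality.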

    Thus, we see that when a new kind of computer is built from an
    abstract design for a computer like t_fvn , the work becomes the
    empirical work of computer science. That’s its primary job:
    creation of computers. A secondary job is then the creation of the
    program (models, abstract symbol system) explored by the computer.
    When the primary job is done, the causality of the computer is
    validated as correctly implementing the rules t_fvn. That done,
    the science is complete. Companies like IBM and Intel
    do this on an industrial scale.

    The most recent and pertinent example in computer science is the
    creation of the ‘neuromorphic computer’. This is a modernised
    version of an analogue computer. It is a very different form of
    computer. Let’s call the rules of its causality t_fnm. This basis
    for a computer results in a highly parallel (fast) computer that
    runs only a very limited set of ‘programs’. The programs
    themselves are actually part of the definition of the causality of
    the neuromorphic computer (models are effectively hard-coded into
    the causality of the chip and are also within t_fnm). This is
    different to the von Neumann architecture t_fvn and is a property
    shared with the original form of electronic analogue computers,
    where macroscopic hardware building blocks are literally wired to
    each other by humans. In modern neuromorphic
    chips, there is an equivalent process run by a user on a host
    computer. It supplies operating parameters that can control the
    causality on the chip within some range that is built into the
    hardware. There is no ‘language’. Instead the (specifically
    adaptive) causality of the model is on the chip and the variables
    in the model are representations ‘written in voltage ink’ rather
    than in binary coded digital numbers.
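    A rough software analogue of this arrangement (my own hypothetical
    sketch; all names and parameter values are assumptions): the leaky
    integrate-and-fire equation below plays the role of the causality
    baked into the chip, and the host-supplied operating parameters
    are the only degrees of freedom, in place of a program:

```python
# Sketch of a fixed on-chip model: leaky integrate-and-fire.
# On a neuromorphic chip the equation would be hard-wired in silicon;
# the host computer supplies only operating parameters (tau, v_th, i_in).
def lif_run(tau=0.02, v_th=1.0, i_in=1.5, dt=0.001, t_end=1.0):
    """Integrate dV/dt = (-V + i_in)/tau; spike and reset at v_th."""
    v, spikes = 0.0, 0
    for _ in range(int(round(t_end / dt))):
        v += dt * (-v + i_in) / tau
        if v >= v_th:      # threshold crossing: emit a spike
            spikes += 1
            v = 0.0        # reset, as fixed by the 'hardware'
    return spikes

print(lif_run())  # behaviour is set by the parameters, not by a program
```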

    The inspiration (basic architecture) of the neuromorphic chip came
    from (e) neuroscience. Some of the diffusion-currents in
    transmembrane ion transport biophysics obey rules similar to
    sub-threshold field-effect transistor physics. This specialised
    transistor physics (causality) is utilised to allow efficient
    exploration of models of brain signalling biophysics. Notice that
    I said MODELS of brain signalling biophysics. It is possible for
    the neuromorphic chip to express voltage waveforms that look
    qualitatively similar to real voltages found in the brain. But
    these voltages are not expressed by brain signalling biophysics.
    That is gone. Instead, the voltage is the representational output
    of a model. These two things are fundamentally different. The
    causality of the model is not the causality of the physics
    (membrane bioelectromagnetism) of the brain. That two entirely
    different forms of physics/causality can result in an outwardly
    identical voltage waveform does not make the two things (the brain
    and the model) identical. Classical electromagnetism allows this
    kind of degeneracy in voltage.
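    The degeneracy claimed here can be made concrete with a toy
    example of my own: two categorically different generating
    processes (an explicit rule, and mere playback of stored samples)
    express sample-for-sample identical waveforms, so the waveform
    alone cannot identify its cause:

```python
import math

# Process A: a waveform produced by an explicit generating rule.
rule_waveform = [math.sin(2 * math.pi * 10 * n / 1000) for n in range(1000)]

# Process B: the same samples merely replayed from storage;
# no generating rule is present at playback time.
stored = list(rule_waveform)
replayed_waveform = [stored[n] for n in range(1000)]

# Outwardly identical outputs from entirely different 'causalities'.
print(rule_waveform == replayed_waveform)  # True
```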

    The net result is that we now have a new kind of computer. A
    neuromorphic computer in location (f) left. What we cannot do is
    claim we have replicated that part of the original brain that
    inspired it. If we did that, the causality of the brain itself
    would be on the chip. Not the causality of the model, and it would
    be located at (e) left, not (f) left. It would be done by a
    neuroscientist for very different reasons.

    However, once the neuromorphic chip exists, it becomes another
    ‘computer’ for use in location (e) Right. It can be used to
    explore models of the natural brain in neuroscience. This very
    thing is literally being created on a large scale by the ‘Human
    Brain Project’. It is not a replicated brain (e) left. It is an
    exploration of models of the brain as part of (e) Right and all of
    the practitioners know that.


    (e) Neuroscience

    Unlike computer science, neuroscience has a natural phenomenon,
    (e) middle, to study. Outwardly it operates just like (a)…(d). On
    the right are neuroscience’s deliverables: models of the brain
    labelled t_e . Just like every other physical science, it can
    explore these models using computers built by computer science and
    then configured to explore the ‘syntax’ and ‘grammar’ of the
    formal model. A compartmental electrical equivalent circuit model
    of neurons is one such example. Software for this abounds (e.g.
    the NEURON platform).
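    For concreteness, a minimal single-compartment version of such an
    equivalent-circuit model (a sketch under my own parameter choices;
    this is ordinary Python, not NEURON code): a leak conductance and
    a membrane capacitance driven by an injected current, integrated
    with Euler’s method:

```python
# Minimal passive-membrane equivalent circuit:
#   C_m * dV/dt = -g_L * (V - E_L) + I_inj
# This is the single-compartment core of the compartmental models that
# platforms like NEURON explore; parameter values here are illustrative.
def passive_membrane(g_l=1e-8, e_l=-0.070, c_m=1e-10, i_inj=2e-10,
                     dt=1e-5, t_end=0.2):
    """Return the membrane voltage (volts) after t_end seconds."""
    v = e_l
    for _ in range(int(round(t_end / dt))):
        v += dt * (-g_l * (v - e_l) + i_inj) / c_m
    return v

# Steady state approaches E_L + I_inj/g_L = -0.070 + 0.020 = -0.050 V
print(passive_membrane())
```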

    The computer exploration of models (a)…(d) right is a simulation
    of the appropriate nature. It is not an instance of the nature. By
    extension, (e) right is also a simulation of the brain (or parts of it).
    It cannot (prima facie) be claimed to literally be an instance of
    (part of) brain (e) middle. That such a model, suitably embodied,
    may usefully stand in for the function of a brain is moot to the
    formal structure of the science. It remains theoretical science in
    (e) right. If a use in technology is found for it (such as pattern
    recognition on images using convolutional neural network models),
    then it can be claimed to be automation inspired by the brain,
    even if it is a ‘learning’ (adaptive) model (machine learning). At
    least, at this level that is the only claim that can be made about
    the application. That is, it cannot obviously be claimed to be an
    actual instance of ‘artificial version of the natural
    intelligence’ upon which it is based. None of (a)…(d) do that, so
    (e) has to have a special case if such a claim is made. To a
    certain extent this is just quibbling about nomenclature. However,
    calling the computer-exploration of models inspired by natural
    brains ‘artificial intelligence’ is the universal behaviour found
    in mainstream descriptions of the process. This use of the term is
    premature and may ultimately be proved incorrect. It cannot be
    formally claimed until the relationship between neuroscience (e)
    and (f) computer science is completed.

    Just like its counterparts in (a)…(d) left, (e) left is a
    replication of the (e) middle nature, to some level of fidelity.
    That is, the physics (causality) used in (e) middle is present in
    (e) left. If operating normally, the empirical science of (e) left
    feeds back into the formal science deliverables in (e) right
    (t_e). (e) left, too, can be used to create technology. However, in
    (e) left, the technology emerges with a prima facie stronger claim
    to literally be an artificial (engineered) instance of natural
    brain (e) middle. Insofar as brains are responsible for
    intelligence, that means (e) left has a stronger claim to be
    creating artificial intelligence than (e) right.

    Notice that unlike (a)…(d) left, there are no instances of (e)
    left that can be found proposed or implemented in the relevant
    neuroscience. That is the reason for the question mark. It has
    never been done. Reasons for this are easy to find. First, the
    brain’s natural causality (bioelectromagnetism) has only recently
    been described well enough. Second, the technology needed to
    replicate it inorganically is also a recent invention. Chip
    foundry technology with feature sizes similar to the brain
    (~5 nm) is now real.

    Has any detectable AGI practitioner been waiting around for the
    technical hurdles to be overcome, so that (e) left can start? It
    can be proved (by absence) that (e) left has never been proposed
    or discussed by neuroscience in a manner that solidly locates an
    awareness of the distinctness of the (e) left neuroscience. As a
    result, all of the potential knowledge gained by neuroscience from
    the standard practice of science on the left side of the diagram,
    is missing. Also missing is the actual location of real artificial
    general intelligence, created as part of science and for
    deployment of technology in suitable contexts. To disprove this
    claim all that is needed is one instance in the literature of any
    discussion or instance of (e) left in a neuroscience context.
    Equally, a discussion that proves conclusively why (e) left should
    be completely abandoned would do.


      The Science of AGI: its structural malfunction

    The deformation in the science has been made evident. The AGI
    industry is currently located in (e) right, exploring theoretical
    models with computers built in (f) and then thinking that
    ‘artificial intelligence’ is the result. Meanwhile, real
    artificial (engineered) general intelligence is in (e) left and
    the relevant practitioners don’t know it.

    The neuroscience omission of (e) left is systemic. Throughout the
    literature and its commentary, the term ‘artificial intelligence’
    and use of the word ‘computer’ are deeply and problematically
    entangled. Telling someone that you are doing artificial
    intelligence, but not using a computer, puts you at odds with a
    gigantic cultural meme in science and out among the laity. If
    computers had never been invented, and neuroscience and chip
    technology had progressed by using vast armies of human
    ‘computers’, AGI done on (e) left could still be alive and well.

    What kind of assumed principle might be used to uniquely confine
    an entire discipline, configuring it into this deformed state?
    When you ask practitioners to identify the principle, you
    get different answers, amongst which are:

    ·“Brains are computers”

    ·“Brains are processing information, so do computers, therefore
    brains are computers”

    ·The “substrate independence hypothesis”

    ·The Church-Turing thesis

    ·“Can’t see the difference between a computer model of sufficient
    detail and a natural brain”

    ·The ‘black box’ concept

    ·“An AGI is just a whole lot of different models (narrow AI) on
    computers, piled up together and linked properly”.

    If there were a consistently understood, technically sound,
    scientifically (empirically) proved reason (a scientifically
    proved principle) for deforming the neuroscience discipline this
    way,
    then the industry should be able to cite it and you should be able
    to find a trail in the literature linking that ‘principle’ in
    support of a deliberate pattern of behaviour that avoids doing (e)
    left in neuroscience. Avoiding (e) left is what happened and no
    such literature trail can be found.

    If there was such a principle justifying the exclusion of (e) left
    from neuroscience, how would it be scientifically proved? To do
    that you’d have to do (e) left and then compare it with (e) right.
    Then the two would have to be proved indistinguishable, or at
    least a plausible case made for accepting the equivalence,
    specifying when theory and nature part company and in what
    context. Once
    proved, then you would have a scientifically proved reason for not
    bothering with (e) left. You’d know exactly what the nature of the
    difference between (e) left and (e) right is. You’d know how to
    recognise when (e) right fails and what to do about it.

    That is the nature of the deformation of the science of artificial
    general intelligence. “Brains are computers” is the most general
    claim, and this took root at the birth of cognitive science, where
    computers were accepted as a metaphor for a brain in psychology
    (e.g. Pylyshyn, 1980). Somehow the metaphor became so strong it
    has caused a loss of a branch of empirical neuroscience. Computers
    are a useful metaphor that guides theoretical brain models.
    However, scientific proof they are literally functionally
    indistinguishable from brains requires backing by the appropriate
    empirical work done jointly in (e) left and (e) right.

    The malformation of the science can also be couched this way: AGI
    is a science discipline carried out by using computers in
    activities that are theoretical neuroscience and that have been
    confused, for complex cultural reasons, with the empirical
    neuroscience that is charged with the creation of artificial
    general intelligence. The confusion is equivalent to an unproved
    assumption that (e) left and (e) right are literally identical
    where such an identity exists nowhere else in science. It can also
    be described as a ‘map/territory’ confusion. However you describe
    it, it is a real observable deformation.

    If one is to claim (e) left is missing, then what would constitute
    an instance of (e) left that further demonstrates the deformity?
    The basis for the difference is conceptually simple: on (e) left
    you will find brain physics (causality) on the chips. On (e) right
    you will find the physics (causality) of a computer running its
    operational program/model. These two things are categorically
    different outcomes and should not be regarded as identical without
    doing the science correctly.

    Finally, where does the science of AGI sit in the figure? The
    answer is now clear: it is a branch of empirical neuroscience
    carried out by neuroscientists, brain biophysics experts and
    engineers that can put that physics on chips. Computer science
    operates to help do the comparative science that determines the
    equivalence described at length above. Once that science is done,
    then computer science will know the extent to which AGI can be
    claimed to be within their ambit.

    None of this discussion denies that there may be a context in
    which (e) left and (e) right are equivalent and functionally
    identical to (e) middle. This discussion has revealed that the
    science that
    tests the equivalence is deformed, and that the state of affairs
    goes unnoticed. We are denying ourselves valuable science
    knowledge and are unaware of it at a cultural/systemic level in
    science.





*Artificial General Intelligence List
<https://agi.topicbox.com/latest>* / AGI / see discussions
<https://agi.topicbox.com/groups/agi> + participants
<https://agi.topicbox.com/groups/agi/members> + delivery options
<https://agi.topicbox.com/groups/agi/subscription> Permalink
<https://agi.topicbox.com/groups/agi/T87761d322a3126b1-M686bebc9ceb7f695c1b04c6d>

