On Tue, Dec 22, 2020 at 1:56 PM Steve Richfield <steve.richfi...@gmail.com>
wrote:

> Colin,
>
> On Mon, Dec 21, 2020 at 1:11 PM Colin Hales <col.ha...@gmail.com> wrote:
>
>> Hi Steve,
>> OK. Let's try:
>>
>
> GREAT - some text to kick back and forth. Here goes...
>
>>
>> Page 2:
>> "In scientific behavior, empirical observation and theoretical science
>> face-off normally in the following three familiar science contexts:
>>
>> (i) Observation of a natural context (*empirical science*).
>>
>> (ii) Observation of artificial versions of the natural context. Call this
>> engineered or replicated nature a ‘scientifically-artificial’ version of
>> nature (*empirical science*).
>>
>>
> This was pioneered with the "Harmon Neuron", but the work then quickly
> moved onto programmable digital computers as neural networks.
>
> Neural network practitioners are cleanly divided into THREE camps, each
> with its own obvious limitations, one being MUCH larger than the others:
> 1. 99% Pure empiricists, who twiddle with characteristics and properties
> to optimize some measure of performance.
> 2. 1% Pure mathematicians, who solve for the best network to optimize some
> measure of performance, and then propose characteristics and properties
> that parallel their mathematics. I used to be in this camp, until I
> discovered that neurons do an interesting sort of highly efficient
> bidirectional computation that is VERY different than what conventional
> digital computers are good at. I tried discussing this here, but apparently
> no one was able to carry on this particular conversation. I think I see a
> way to make "general purpose" computers that can do this and MUCH more, but
> with no one else on this bandwagon, it will probably pass when I eventually
> pass. There is considerable intersection between your field-theory view and
> my bidirectional computing view; they are nearly two sides of the same coin.
> 3. Groups doing biological research, who attempt to simulate neurons, or
> parts thereof, as accurately as possible. I was once part of such an
> effort at the University of Washington Department of Neurological Surgery.
>
> There is a computational method known as quadruple ledger accounting that
> is practiced by the World Bank and others to model the world economy, where
> people, instead of neurons, interact with each other in nonlinear and
> non-directional ways. It might be possible to "build out" quadruple ledger
> accounting methods to encompass both bidirectional and field computing, but
> the end result would probably be unrecognizable to everyone.
>
> I might be the only one, but I completely agree with you that fields are a
> BIG part of this. I even go a bit further, as I suspect that other field
> effects like the Hall effect are probably also involved, and the Hall
> effect can NOT be directly simulated except at the same physical scale. It
> is all really complicated, but simply ignoring it can NEVER EVER lead to
> AGI as the others on this forum now hope. It appears to me that simulation
> methods CAN simulate field effects, but ONLY after they have been fully
> understood. While I suspect your efforts won't directly lead to AGI, I DO
> suspect that your efforts might be absolutely necessary to EVER make an
> AGI.
>
> I see the path forward a little differently, but we might be converging on
> the same place:
> 1. We should publish a definition of "neurological simulation" that
> encompasses both field and bidirectional effects, and "expose" efforts that
> fall short of this.
> 2. Once people see just HOW difficult it is to simulate real-world neurons
> in any useful way, they will start tackling the bidirectional problem.
> The bidirectional problem is a challenge, but doesn't look insurmountable.
> Electric circuit simulators like SPICE easily handle the bidirectional
> problem, at an *n log n* cost in time, which would be crushing for a
> large system like a brain, but which might be tolerable for simulating a
> flatworm's brain. I suspect you could simulate your theories on fields in
> SPICE.
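
For concreteness, the "bidirectional" handling you describe in SPICE-class
tools comes from assembling every node coupling into one conductance matrix
and solving the whole system simultaneously at each step, rather than
propagating values in one direction. Below is a minimal sketch of that style
of nodal solve, in Python; the three-node resistive network and all of its
values are made up purely for illustration:

    # Minimal sketch of the simultaneous (bidirectional) nodal solve a
    # SPICE-class simulator performs: a toy three-node resistive network
    # with one current source. Every node influences every other through
    # the shared conductance matrix, so nothing is computed "feed-forward".
    import numpy as np

    # Illustrative conductances (siemens): node-to-node and node-to-ground.
    g12, g13, g23 = 1e-3, 5e-4, 2e-3
    g1g, g2g, g3g = 1e-4, 1e-4, 1e-4

    # Nodal conductance matrix G: diagonal = sum of conductances at a node,
    # off-diagonal = minus the conductance linking the two nodes.
    G = np.array([
        [g12 + g13 + g1g, -g12,             -g13            ],
        [-g12,             g12 + g23 + g2g, -g23            ],
        [-g13,            -g23,              g13 + g23 + g3g],
    ])

    # Current injected at node 1 only (amps).
    I = np.array([1e-6, 0.0, 0.0])

    # Solve G v = I for all node voltages at once: the coupling is mutual,
    # so changing any single conductance changes every voltage.
    v = np.linalg.solve(G, I)
    print("node voltages (V):", v)

Changing any one conductance and re-solving shifts all three voltages, which
is the mutual, non-directional interaction that a one-way, feed-forward
update cannot capture without iteration.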
>

My project is prototyping the EM field signalling: just the bare-bones
physics of one patch of neuron membrane. Fully implemented (later), it will
do the dromic and antidromic propagation you mention, as well as ephaptic
coupling. But I'll be focussing on the bare bones of the basic EM field
physics for now. It operates under science framework (ii). No models. No
emulation. No simulation. No software.

I am hoping this will push the issue over the line into mainstream thinking
and correct the currently distorted use of the science framework, in which
(ii) is missing.


>
>> (iii) Creation of abstract models predictive of properties of the natural
>> context observable in (i) and (ii) (*theoretical science*)."
>>
>> This process is literally drawn in Figure 1 for 5 different science
>> contexts, all of which do exactly this (i)/(ii)/(iii) process EXCEPT in
>> (e), for the brain where:
>>
>> (A) (ii) empirical science, in neuroscience and 'artificial
>> intelligence', *is missing from the science.*
>> (B) It just so happens that if you decide to do (ii), brain EM is the
>> thing that has been lost and the thing you replicate for the purpose. If
>> you do the science to explore that, then you are not using a general
>> purpose computer. You are exploring actual EM physics. It is empirical
>> science.
>> (C) If you claim (iii) is all you need, then you are distorting the
>> science in one place: *a unique, anomalous and unprecedented lack for
>> which empirical proof is required*. That proof arises through using (ii)
>> and (iii) *together*.
>>
>
> It looks to me like some of (iii) absolutely MUST precede (ii), or at
> least be intertwined with (ii), to provide enough guidance to ever make and
> debug anything that actually works. The last decade of AI "research" has
> absolutely PROVEN (at least to me) that even highly intelligent people
> can't blindly stumble onto the secret sauce for AGI.
>

I don't think we're quite there yet ... I am talking about getting the
neuroscience established properly in *all three* traditional areas by
restoring (ii), so that neuroscience/AI operates like a normal science with
normal empirical work. It currently does not do that. To clarify this, let
me cite a more complete definition of science from the paper. Page 2 again:

"In scientific behavior, empirical observation and theoretical science
face-off normally in the following three familiar science contexts:

(i) Observation of a natural context (empirical science).

(ii) Observation of artificial versions of the natural context. Call this
engineered or replicated nature a ‘scientifically-artificial’ version of
nature (empirical science).

(iii) Creation of abstract models predictive of properties of the natural
context observable in (i) and (ii) (theoretical science). *Activities
(i)-(iii) meet each other in a mutual, reciprocating distillation that
converges on empirically proved ‘laws of nature’ that are then published in
the literature* (Rosenblueth and Wiener, 1945; Hales, 2014)."

It is likely that most of the people on the AGI forum have never
encountered (ii). (i) and (ii) provide empirical evidence for comparison
with (iii) predictions. (iii) provides theoretical model predictions tested
under (i) and (ii). It reciprocates. This is how science works everywhere
*except in neuroscience/AI*. We do not do (ii) in neuroscience/AI, and not
for any principled reason: it is an accident/cultural habit handed down
from the 1950s and industrialised. Mistaking (iii) activities for (ii) is
what the paper is all about. Everything described with abstract equivalent
circuits (neuromorphic chips) and symbolic models (software) fits under
(iii). The natural (i)/(ii) physics is gone under (iii). In (iii),
theoretical science is emulation, simulation, models, and software. In (ii)
there is only the (i) physics and no models/software/emulation/simulation.
I describe how the (i)/(ii)/(iii) framework operates in great detail in
Supplementary 2.
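
For concreteness, here is the kind of thing that sits entirely under (iii):
an abstract symbolic model of a membrane patch run as software on a
general-purpose computer. It is only a minimal sketch, a passive RC patch
with illustrative parameters not taken from the paper; under (ii) there is
no such code at all, only the replicated physics itself.

    # Minimal sketch of a (iii)-style abstract model: a passive RC membrane
    # patch integrated as software on a general-purpose computer. The EM
    # field physics of (i)/(ii) is absent; only a symbolic description runs.
    import numpy as np

    C_m   = 1.0e-2   # membrane capacitance, F/m^2 (illustrative)
    g_L   = 3.0      # leak conductance, S/m^2 (illustrative)
    E_L   = -0.065   # leak reversal potential, V
    I_in  = 0.05     # injected current density, A/m^2, on from 10 to 40 ms
    dt    = 1e-5     # time step, s
    t_end = 0.05     # simulate 50 ms

    t = np.arange(0.0, t_end, dt)
    V = np.empty_like(t)
    V[0] = E_L
    for k in range(1, len(t)):
        I = I_in if 0.010 <= t[k] < 0.040 else 0.0
        # Forward-Euler step of C_m * dV/dt = -g_L * (V - E_L) + I
        V[k] = V[k - 1] + dt * (-g_L * (V[k - 1] - E_L) + I) / C_m

    print("resting V: %.3f V, peak V: %.3f V" % (V[0], V.max()))

Whatever its predictive merits, nothing in that loop is the natural EM field
physics of (i); that is the entire (ii)/(iii) distinction.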

The proposed neuromimetic Xchip is the first such proposition for (ii) in
the literature. It retains the (likely) critically necessary natural (EM)
physics of (i), for the purposes of scientific characterisation of the
brain under (ii) and so that neuroscience/AI is normalised. Then and only
then can the science properly examine the anomalous, unique and
unprecedented equivalence of (i) and (iii): an unproved assumption made
only in neuroscience/AI that may actually be true. But we can't test it
without (ii), which we have never done.

There is a professional obligation on all of us to recognise and accept a
flaw in our science conduct when we find it. The article details such a
situation. Can I suggest reading the conclusion? I can cite again:

Page 17. The way we conduct the science without (ii) ...

"... is methodologically equivalent to expecting to fly while never
actually using any flight physics and assuming, without any principled
reason explored by experimentation with flight physics, that flight can be
achieved by disposing of flight physics through completely replacing it
with the physics of a general-purpose computer, a state of
‘physics-independence’ not found in any other physics context. This sounds
like a harsh depiction of the science. It is merely a realistic description
of the situation."

OK. Over the word limit we go. Turns out it takes many words to fix the
most complicated science mess in the history of science messes.

cheers,
colin
