Derek,

Finally - a good opposing opinion...

On Mon, Jul 9, 2012 at 12:28 PM, Derek Zahn <[email protected]> wrote:

> >  Differential equations describe a system. Errors in the equations
> > map to errors in the system being described, and can be directly
> > corrected by adjusting the equations to eliminate the differences
> > between simulated/solved and observed results.
>
> You may not be getting as much engagement on this as you hope for because,
> perhaps, others are as baffled as me by what you are even aiming for here.
> Differential equations are *one* way of describing a system.


The BIG question here is whether diffy-Qs are appropriate for AGI use.
Clearly, some parts of us MUST work this way, e.g. the hypothalamus and
other control system elements. Beyond that, we are both speculating.

> We care about that particular way because it has very cool and important
> correspondences with certain aspects of physics,


Diffy-Qs are the notation of preference for ALL physical systems. Since we
use our intelligence to operate within a physical environment, it seems
obvious (to me) that at minimum we need some diffy-Q capabilities. Sure,
beyond that we probably need more, but I can't imagine an intelligent
system capable of doing much useful without having some diffy-Q internals.


> which makes it a very useful way to model those physical phenomena.  Why
> this should be so is rather an awesome question


Actually it is quite simple. Pretty much everything works by integrating
the forces that affect it. You can write this in integral calculus form, or
differentiate both sides to get a diffy-Q.
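To make this concrete, here's a toy sketch (my own example, function names invented): integrating a constant force gives v(t) = v(0) + (F/m)*t, and differentiating both sides gives the diffy-Q dv/dt = F/m. Both describe the same system:

```python
# Toy sketch: the integral form v(t) = v(0) + (1/m) * integral of F dt and
# the differential form dv/dt = F/m describe the same system.  Here we step
# the diffy-Q numerically (Euler) and check it against the closed-form
# integral for a constant force.

def integrate_velocity(force, mass, t_end, dt=1e-4):
    """Euler integration of dv/dt = F/m, starting from v(0) = 0."""
    v, t = 0.0, 0.0
    while t < t_end:
        v += (force / mass) * dt   # one Euler step of the diffy-Q
        t += dt
    return v

v_numeric = integrate_velocity(force=10.0, mass=2.0, t_end=3.0)
v_exact = (10.0 / 2.0) * 3.0       # closed-form: (F/m) * t
```

For a constant force, Euler stepping is exact up to the final partial step, so the two agree to within one step's worth of change.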


> (see of course "The Unreasonable Effectiveness of Mathematics in the
> Natural Sciences" and subsequent discussion)...
>
> But what does that have to do with AGI?


Let me get this straight: you expect to build intelligent systems while
willfully ignoring the very principles that govern all changes in our
environment?!


> "Differential Equations" are composed of particular types of terms
> representing particular mathematical functions... in domains that are
> readily described in those exact terms there is a chance for diffy-Q's to
> make a decent model, but thoughts aren't numerical,


Oops, there goes Bayesian math.


> concepts aren't continuous;


Neither are most systems of complex differential equations. Indeed, even
old vacuum tube analog computers had "comparators" that produced one of two
outputs, depending on the sign of the input. Using these, you could
fabricate almost any imaginable discontinuous function.
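A quick sketch of what I mean (my own toy example, names invented): a comparator produces one of two outputs depending on the sign of its input, and dropping one into a diffy-Q gives you discontinuous, bang-bang dynamics:

```python
# Toy sketch: an analog-computer style "comparator" -- two-valued output
# keyed to the sign of the input -- wired into a diffy-Q to produce
# discontinuous (bang-bang) dynamics, here a crude thermostat.

def comparator(x, low=-1.0, high=1.0):
    """Output `high` if the input is non-negative, else `low`."""
    return high if x >= 0.0 else low

def simulate_thermostat(setpoint, temp0, steps=5000, dt=0.01, gain=1.0):
    """Euler-step dT/dt = gain * comparator(setpoint - T):
    heat at full rate below the setpoint, cool at full rate above it."""
    temp = temp0
    for _ in range(steps):
        temp += gain * comparator(setpoint - temp) * dt
    return temp

final_temp = simulate_thermostat(setpoint=20.0, temp0=0.0)
```

The trajectory climbs linearly to the setpoint and then chatters around it within one time step's change, which is exactly the discontinuous behavior a smooth closed-form solution can't capture.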


> the whole idea looks like a giant category error.
>

Obviously, you haven't grokked this yet.

>
> Of course it is true that neurons are made of atoms and brains are made of
> neurons and (biological) thoughts are the product of brains, so there is a
> sense that a good modelling method for the lowest level kind of models the
> higher level as well, but in practice this kind of reductionism very rarely
> actually works


I agree that it takes more than mere modeling. Even old vacuum tube analog
computers had supervisory functions, e.g. switching capacitors to change
the time base for faster or slower than real time solutions, the ability to
freeze a state for debugging, etc.

> and when it does it is because the higher level is naturally described with
> the same kind of language as the lower level,


Agreed that this is the "sweet spot" of high efficiency. THIS has been my
guiding motivation to find a better way.


> so you get occasional useful cases like the fluid dynamics of weather
> prediction, but barring some revolutionary new perspective on cognition,


Which is exactly what we are looking for here.

> it just isn't the case for the things we are interested in on this list.
>
> It's exactly why Cybernetics never went very far.
>

... on computers that were a millionth as fast and had a millionth the
memory of a modern Intel micro.

>
> If you want to revive or extend that avenue of investigation you have to
> explain how to fit cognitive phenomena into that particular mathematical
> construct, and why it would be appropriate to do so.


I have posted before about the fundamental fallacy of cognitive psychology
- that it studies our model of how we think, and not at all how we actually
think.

My point is that you (and others who haven't discarded cognitive
psychology) want the answers before you are willing to do the homework to
find them. I see no prospect for success with this 'tude.


> Nobody has ever been able to do it, even a little bit -- beyond some vague
> analogies leading to simplistic models that were abandoned because nobody
> could get them to correspond to the cognitive "territory" even as well as
> languages of logic or statistics (which also haven't yet proven adequate
> but seem at least to have gotten somewhat further).
>

Agreed. However, I have yet to see a credible attempt, because the
underpinnings are not yet in place. We are at the beginning of this
journey, not at the middle as you suggest.

We can't productively discuss ANY approach that isn't capable of
self-organizing (and not just learning), and no approach is there yet. We
haven't even made it to the starting gate. Kurzweil's exponential curves
ignore the fact that they have yet to start.

>
> I do find it a consistent and possibly correct hypothesis that there can
> in fact be no such higher-level model of cognition that has sufficient
> correspondence with the "territory" that we vaguely see and describe in
> terminology of cognitive psychology (etc) -- the Physical Symbol System
> Hypothesis (and other similar assumptions underlying AGI work) could be
> wrong.


I don't see that diffy-Qs are at all incompatible with PSSH. Indeed, most
present real-world diffy-Qs involve representing changes in PHYSICAL
entities. I have already published a paper showing how most things in the
brain must now be represented by derivatives rather than integrated
quantities - and that was even before diffy-Qs entered the discussion.
Hence, on this point there is no argument. It is the *algebra* of computing
the physical states that is at issue here. AGI is now short on answers as
to how this self-organizes, but diffy-Qs may explain much of this. We just
don't know enough yet to have the answers you would like to see.

But if that is so, then there is no language about thinking or intelligence
> that has any use whatsoever.


All potentially practical proposals that I have seen represent numerical
quantities in some way, whether probabilities, diffy-Q terms, or something
else. We never ever know anything absolutely for sure, so no simplistic
symbol manipulation can work well.
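For instance (my numbers, purely illustrative): a single Bayesian update is nothing but arithmetic on numerically represented beliefs, with no all-or-nothing symbols anywhere:

```python
# Toy sketch of numerical belief: Bayes' rule P(H|E) = P(E|H) * P(H) / P(E).
# All the "reasoning" here is arithmetic on real-valued degrees of belief.

def bayes_update(prior, likelihood, marginal):
    """Posterior probability of hypothesis H given evidence E."""
    return likelihood * prior / marginal

prior = 0.01                                        # P(H): rare hypothesis
likelihood = 0.9                                    # P(E|H): strong evidence if H is true
marginal = likelihood * prior + 0.05 * (1 - prior)  # P(E), with a 5% false-positive rate
posterior = bayes_update(prior, likelihood, marginal)
```

Even strong evidence for a rare hypothesis only lifts the belief to about 0.15, which is exactly the kind of graded answer a purely symbolic true/false system cannot express.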

Stick to physics at the level for which there seems to be principled
> correspondence between map and territory, and figure out how to make it
> scale by 25 or so orders of magnitude, and good luck with that!
>

Between the first mechanical analog computers (e.g. the one at NOAA that
computed tide tables for a century) and modern multi-core microcomputers,
there is ~10 orders of magnitude in speed.

Your brain has ~13 orders of magnitude more active connections than an
abacus.

So, I don't know where you got your 25 orders of magnitude.

>
> I'm even skeptical of approaches that want to use such modelling language
> applied to intermediate levels of description -- like neuron modelling.  On
> what principled basis do we decide that the "important" features of those
> blobs of atoms are adequately described by any particular mathematical
> formalism?
>
> All we have is empirical justifications supporting pet intuitions.


You are preaching to the choir here. I absolutely agree that present
"modeling" efforts are doomed for exactly the reasons you stated.


> It could work out, but there already is a field called "neuroscience" that
> is working on that with billions of dollars of resources.


Not really. It is SO misdirected that there is virtually nothing useful to
computing coming from present-day neuroscience.


> You could try telling *them* to use differential equations; maybe they
> haven't thought of it, I don't know.
>

I have known lots of these people, and I can't think of any of them who
would grok this discussion.

>
> This got rather long... but put more briefly, if you want anybody to
> engage this (or any!) idea, you have to explain what you are trying to do a
> little better, because it is baffling.


This is a standard problem at the beginning of any discussion regarding new
directions.


> In this case, differential equations are specific kinds of expressions
> involving specific mathematical constructs.  So, just to start:  what
> exactly are you thinking the variables in differential equations should
> refer to,


Whatever works, as this MUST be a self-organizing system.

> such that they can model anything of interest to AGI researchers?
>

If I am right (and I think I am) present AGI directions are doomed.
Eventually, these people will either grok and evaluate the alternatives, or
fall into the dustbin of history.

Again, very good comments.

Steve


