Steve,

 

this is precisely the critical point where we part. You follow the crowd in
thinking that *you yourself* are going to "self-organize" something and then
tell the computer how to do it. 

 

If you do that, the computer will remain as unable to self-organize as it
was before. 

 

Sorry, but you need to find out how *nature* self-organizes things. 

 

Sergio

 

 

From: Steve Richfield [mailto:[email protected]] 
Sent: Wednesday, July 11, 2012 11:28 AM
To: AGI
Subject: Re: [agi] Analog Computation

 

Sergio, Ben, et al,

Once in the analog world, I suspect the "trick" to making self-organization
work will be to have the network self-organize into a model that in effect
PREDICTS the coming inputs, complete with sometimes multivalued
uncertainties, etc. The "balance" in the "equations" would come from balancing
the errors, seeking to eliminate any consistency in the differences between
predicted and actual observations. When "done", the errors will appear to
these neurons (which are ever SO good at finding consistencies) to be
completely random.
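The "organize until the residuals look random" idea can be sketched in a few
lines. This is my own minimal illustration, not anything from the thread: a
plain online least-mean-squares predictor that adapts until little consistency
remains in the difference between predicted and actual inputs.

```python
import math

# Minimal sketch (my own, not Steve's actual proposal): an online linear
# predictor that learns to forecast the next input from the last few samples.
# Once it has "self-organized", the prediction errors carry little remaining
# structure, i.e. they look random.

def lms_predict(signal, order=4, lr=0.01):
    """Online least-mean-squares prediction; returns the error sequence."""
    w = [0.0] * order
    errors = []
    for t in range(order, len(signal)):
        window = signal[t - order:t]
        pred = sum(wi * xi for wi, xi in zip(w, window))
        err = signal[t] - pred
        # Adapt weights to remove any consistent structure in the error.
        w = [wi + lr * err * xi for wi, xi in zip(w, window)]
        errors.append(err)
    return errors

# A predictable input: a slow sine wave.
signal = [math.sin(0.1 * t) for t in range(5000)]
errors = lms_predict(signal)

early = sum(e * e for e in errors[:500]) / 500
late = sum(e * e for e in errors[-500:]) / 500
print(f"mean squared error, early: {early:.6f}, late: {late:.6f}")
```

The prediction error shrinks toward zero as the weights settle, which is the
"balance" described above in its simplest linear form.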

However, this sort of prediction is a very dynamic continuous-time sort of
thing, and hence is difficult to simulate using present-day methods. For
example, you can't just show it isolated pictures, because every sudden
change going from one picture to the next would be a discontinuity that
would screw up the feedback in any continuous-time system. Instead, it would
have to learn to recognize things in a physically reasonable scene that
smoothly changes in a physically reasonable way to show various objects.

I don't think this limitation has anything at all to do with the analog
approach; rather, it is an inherent limitation of self-organization
methods, and hence the reason that no one has yet made it work (right).
Time is the fourth dimension, and once you slice it up, you have in effect
discarded most of the information in one of the four dimensions, so OF
COURSE past efforts to self-organize have failed.

The thing that has woefully misdirected unsupervised learning research is
that it is just barely possible, with millions of frames of training, to
ever so gradually train a network with isolated images. If this didn't work
at all, it would have been abandoned long ago. Now, researchers think that
better algorithms are needed, without realizing that they are all working on
the wrong problem.

Note that newborn animals train their vision QUICKLY, like in seconds. This
natural process is orders of magnitude faster THAN IS CONCEIVABLY POSSIBLE
with isolated images.

I suspect that this need for temporal consistency to self-organize carries
over to self-organizing higher-level functions. Note that this cannot be a
"statistical" process, because in natural systems it works too fast to have
collected statistics. You only need to see something new once to "get it".

Fast self-organization is now the "grand challenge" of AGI, because without
it AGI can't go anywhere at all, and present AGI efforts don't have a clue
how to do it. 

Ben: do you see any problem with the above sentence? Why take another step
in AGI before you have a solution to fast self-organization?

I can see how to quickly learn and organize using derivatives (which appear
to be what we compute with), which can't work with stable values (as used in
present AGI code). However, derivatives are discontinuous (and hence
unusable) unless the entire system operates in continuous time. I see no
prospective substitute for operating in continuous time to preserve the
information in the 4th dimension, and it seems obvious (to me) that no one
now in AGI has yet understood this apparently fundamental challenge. Once
everyone gets onto the same page, we might be able to have a productive
conversation about this.
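To make the discontinuity problem concrete, here is a toy numerical
illustration (mine, with made-up numbers): a finite-difference derivative is
well behaved on a smoothly changing signal, but the "scene cut" you get when
jumping between isolated images produces an enormous spike.

```python
import math

# Toy illustration (not from the thread): derivative estimates behave well
# on a smoothly changing signal, but blow up at the discontinuity created
# by jumping from one isolated picture to the next.

dt = 0.01
smooth = [math.sin(t * dt) for t in range(200)]
# Same values, but with an abrupt "scene cut" in the middle.
cut = smooth[:100] + [s + 5.0 for s in smooth[100:]]

def max_abs_derivative(x, dt):
    """Largest finite-difference derivative estimate over the signal."""
    return max(abs(b - a) / dt for a, b in zip(x, x[1:]))

print(max_abs_derivative(smooth, dt))  # bounded, roughly |cos| <= 1
print(max_abs_derivative(cut, dt))     # huge spike at the cut
```

Any derivative-based feedback loop fed the second signal sees that spike,
which is the point about isolated pictures screwing up a continuous-time
system.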

Nyquist says that you only need to sample at twice the maximum frequency, so
perhaps it IS possible to chop time up into slices. However, working with
samples does NOT excuse anyone from having to understand how
self-organization is done in real time. On the contrary, it is yet another
complication that must be dealt with. Hence, the possibility of
cumbersomely working with finely divided time slices does NOT change any
of the above discussion.

Note that the sampling rate needed to satisfy Nyquist may be quite high,
because there is NO FUNDAMENTAL LIMIT to input frequency. Systems depending
on Nyquist typically require low-pass filters to remove "noise" that is
higher than the highest frequency of interest, because this noise, when
sampled at lower rates, can "alias" as low-frequency signals. I suspect that
real-world neurons look at higher-order derivatives to look for curvature in
the signals, etc., which would be greatly affected by higher input frequency
components. Hence, real-world vision systems may need to slice time at 10 kHz
or more to work like we do.
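A toy example with hypothetical numbers makes the aliasing point concrete: a
900 Hz "noise" component sampled at 1 kHz (Nyquist limit 500 Hz) produces
exactly the same samples as a 100 Hz tone, so the sampled system cannot tell
them apart.

```python
import math

# Hypothetical numbers for illustration: a 900 Hz component sampled at
# 1 kHz (Nyquist limit fs/2 = 500 Hz) aliases down to 1000 - 900 = 100 Hz.
fs = 1000.0             # sampling rate, Hz
f_noise = 900.0         # above Nyquist
f_alias = fs - f_noise  # 100 Hz

for n in range(50):
    t = n / fs
    high = math.cos(2 * math.pi * f_noise * t)
    low = math.cos(2 * math.pi * f_alias * t)
    # Sample-for-sample, the 900 Hz tone is indistinguishable from 100 Hz.
    assert abs(high - low) < 1e-9

print("900 Hz sampled at 1 kHz is indistinguishable from 100 Hz")
```

This is why the anti-aliasing filter has to run *before* sampling; once the
slices are taken, the high-frequency information is not merely lost but
corrupts the low frequencies.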

Steve
=============

On Wed, Jul 11, 2012 at 7:27 AM, Sergio Pissanetzky <[email protected]>
wrote:

 

Steve, Dorian, 

 

DORIAN> In addition, the first prototype does not need to be entirely
analog. 

I entirely agree; actually, we have no other choice. 

 

I feel fully obliged to respond at length, but my time is limited. For now,
here is what I propose. The goal of demonstrating the capabilities of analogs
is highly desirable but will have to wait a little. A more modest goal needs
to be set, and this would be to simulate an analog on a PC and use it to
demonstrate how self-organization would work on an analog. A paper could be
published comparing the PC with an equivalent analog. 

 

My own immediate plans include doing just that, perhaps with a simple
problem of OO analysis or image recognition. I already have the algorithm,
fresh from the oven, and will write software for it soon. If you want, you
can do the same thing on your PC, and I can help (you must be a good
programmer); the advantage for you is that you will end up having your
own platform. 

 

 

 

Ben, 

 

I am not sure if you are aware that algorithms that halt are causets. The
relations between variables in any algorithm satisfy the same conditions
postulated in the definition of causets. 

 

What this tells me is that causets are a better way of describing the world
than algorithms, because everything in the world halts. So causets turn
the halting problem on its head. Do we really need algorithms at all, and
to pay the price of introducing the halting problem? Comments? 

 

Sergio

 

 

 

From: Dorian Aur [mailto:[email protected]] 
Sent: Tuesday, July 10, 2012 4:38 PM


To: AGI
Subject: Re: [agi] Analog Computation

 

In addition, the first prototype does not need to be entirely analog.

Steve's terms "weak AGI" and "strong AGI" make sense in this context; he
is making history. Indeed, the "weak AGI" framework does not seem to
move far from current AI. It is limited by:
 (i) the Turing framework;
 (ii) fairly good math components added onto a distorted interpretation of
experimental data - many biological misconceptions - the digital spike, the
connectionist paradigm - everything is between neurons - completely
untrue!

Dorian

On Tue, Jul 10, 2012 at 1:49 PM, Sergio Pissanetzky <[email protected]>
wrote:

Steve,

 

you are not alone. How big can one go with an FPGA that is currently
available? 1K? 10K? 10K would already be nearing some practical applications
with EI, but 100K would be better. I am thinking of EI because I am sure
that, if EI can be demonstrated, for example in image recognition, then it
would attract attention immediately, including from the chip makers.
"General computation" is too vague. Or, better, I propose to start "general
computation" with EI, then one could expand. 

 

Also, personally I believe this would be "hyper-Turing", but I would be very
careful with that term because there is too much hype about it. Ben has
strong reasons why it is better not to use the term for now. I am very happy
that such things can be calculated, and there is plenty of time to find out
if they are hyper or not. 

 

Do you do these things? Do you build analogs from components? I don't have
any money, but just saying. 

 

Sergio

 

 

 

From: Steve Richfield [mailto:[email protected]] 
Sent: Tuesday, July 10, 2012 2:48 PM


To: AGI
Subject: Re: [agi] Analog Computation

 

Sergio,

On Tue, Jul 10, 2012 at 12:30 PM, Sergio Pissanetzky
<[email protected]> wrote:

how do you do millions with analogs? 


The technology is well known and would be fairly easy to build, but the
chips aren't (yet) available because there is no market (yet) for them!!!
This is obviously a chicken-or-egg problem.

Basically, you would build it just like an FPGA, where the interconnections
are made with programmed transmission gates. However, instead of switching
logic gates, you would be switching integrators and other analog building
blocks.

Note that people have already done this, but switched "artificial neuron
synapses" instead of more general purpose analog building blocks.

Such a device attached to a PC as an outboard processor could enable really
general purpose hyper-Turing computation at pretty much full unhindered
speeds. I see the promise here, but so far I seem to stand alone in this.

Steve






-- 
Full employment can be had with the stroke of a pen. Simply institute a
six-hour workday. That will easily create enough new jobs to bring back full
employment.

 



 




-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
