Loosemore, et al.,

Just to get this discussion out of esoteric math, here is a REALLY SIMPLE
way of doing unsupervised learning with dp/dt that looks like it ought to
work.

Suppose we record each occurrence of the inputs to a neuron, keeping
counters to track how many times each combination has occurred. For this
discussion, each input will be classified as having a substantially
positive, substantially negative, or nearly zero dp/dt. When we reach a
threshold of, say, 20 identical occurrences of the same combination of
dp/dt values that is NOT accompanied by lateral inhibition, we proclaim THAT
to be the "principal component" function that neuron will compute for the
rest of its "life". Thereafter, the neuron will require the previously
observed positive and negative inputs to match its program, but will ignore
all inputs that were nearly zero.
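The counting-and-commit scheme above can be sketched in a few lines of
Python. This is only an illustration, not a biological model: the names
(Neuron, quantize) and the zero-band width eps are my own choices, and
the threshold of 20 comes straight from the text.

```python
from collections import Counter

THRESHOLD = 20  # identical occurrences before a combination is locked in

def quantize(dpdt, eps=0.1):
    """Classify each input's dp/dt as +1 (substantial positive),
    -1 (substantial negative), or 0 (nearly zero). eps is an assumed band."""
    return tuple(1 if x > eps else -1 if x < -eps else 0 for x in dpdt)

class Neuron:
    def __init__(self):
        self.counts = Counter()  # occurrences of each quantized combination
        self.program = None      # committed combination; None while learning

    def observe(self, dpdt, inhibited=False):
        """Count one frame; commit at THRESHOLD, unless laterally inhibited."""
        if self.program is not None or inhibited:
            return
        combo = quantize(dpdt)
        self.counts[combo] += 1
        if self.counts[combo] >= THRESHOLD:
            self.program = combo  # fixed for the rest of the neuron's "life"

    def fires(self, dpdt):
        """After commitment: require the programmed positive and negative
        inputs to match, but ignore the inputs that were nearly zero."""
        if self.program is None:
            return False
        return all(p == 0 or p == x
                   for p, x in zip(self.program, quantize(dpdt)))
```

After 20 clean presentations of the same sign pattern, the neuron is
programmed; thereafter a frame matching on the non-zero inputs fires it
regardless of what the "don't care" inputs do.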

Of course, many frames will be "corrupted" by overlapping phenomena,
sampling on dp/dt edges, noise, fast phenomena, etc. However, there will be
few if any precise repetitions of corrupted frames, whereas clean frames
should be quite common.

First, the most common "frame" (all zeros - nothing there) will be
recognized, followed by each of the most common simultaneously occurring
temporal patterns, recognized by successive neurons, all identified in
order of decreasing frequency - exactly as needed for Huffman or PCA coding.
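The frequency ordering falls out of the threshold automatically: whichever
combination occurs most often reaches 20 counts first. A toy simulation
(the frame labels "zero", "edge", "blob" are stand-ins for quantized dp/dt
combinations, and the 3:2:1 mix is an invented example) shows commitment
happening in decreasing order of frequency:

```python
from collections import Counter

THRESHOLD = 20  # same commitment threshold as in the text

# An assumed input stream: "zero" frames three times as common as "blob".
frames = ["zero", "zero", "zero", "edge", "edge", "blob"] * 20

counts = Counter()
committed = []  # combinations, in the order they reach the threshold
for frame in frames:
    counts[frame] += 1
    if counts[frame] == THRESHOLD:
        committed.append(frame)

# committed == ["zero", "edge", "blob"]: most frequent frame first,
# exactly the ordering a Huffman-style code would want.
```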

This process won't start until all of a neuron's inputs are accompanied by
an indication that they have already been programmed by this process, so
programming will proceed layer by layer without corruption from inputs that
are only partially developed (a common problem in multi-layer NNs).
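The layer-by-layer gate can be sketched as a readiness check: a node
ignores its input stream entirely until every upstream node reports that it
is already programmed. The class and method names here are illustrative,
and the counter is a stand-in for the full per-combination counting above.

```python
THRESHOLD = 20  # occurrences needed to program a node (value from the text)

class Node:
    def __init__(self, inputs=()):
        self.inputs = list(inputs)  # upstream nodes; sensory nodes have none
        self.programmed = False
        self.count = 0              # stand-in for per-combination counters

    def ready(self):
        # counting starts only once every input is already programmed
        return all(n.programmed for n in self.inputs)

    def observe(self):
        if self.programmed or not self.ready():
            return
        self.count += 1
        if self.count >= THRESHOLD:
            self.programmed = True

# Two sensory nodes (trivially ready) feeding one second-layer node.
a, b = Node(), Node()
h = Node(inputs=[a, b])

# First 20 frames: a and b program; h stays gated the whole time.
for _ in range(THRESHOLD):
    h.observe(); a.observe(); b.observe()
# Next 20 frames: h's inputs are now programmed, so h programs itself.
for _ in range(THRESHOLD):
    h.observe()
```

Although h was offered 40 frames, it counted only the last 20 - the first
layer finished before the second layer started, as the text requires.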

While clever math might make this work a little faster, and wet neurons
certainly can't store many previous patterns, this should be guaranteed to
work and to produce substantially perfect unsupervised learning. It would
probably be slower than better-math methods, but probably faster than wet
neurons, which can't save thousands of combinations during early
programming.

Of course, this would be completely unworkable outside of dp/dt space; in
"object space" it would probably exhaust a computer's memory before
completing.

Does this get the Loosemore Certificate of No Objection as being an
apparently workable method for substantially optimal unsupervised learning?

Thanks for considering this.

Steve Richfield



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/