Convincing other people to accept or use (or even think about) the
mathematics that I am thinking about is not one of my goals. My ideas are
sketchy at best, and they would only be useful to me if I could actually
use them. Getting people to comment helps me to think outside of my own
personal 'box'.
The idea of Conceptual Mathematics does not demand a fundamental temporal
characteristic, but I can see how one might be useful. And the idea of a
temporal differential is relevant to what I was trying to explain to
rouncer81. The mathematics of concepts would not be limited to 'strengths',
'magnitudes', or 'weights'.
I started thinking about this idea and somehow got sucked back into my
personal vortex of the P=NP problem. I had to stop and try to remember
what I was thinking about when I started this thread, and then what I was
thinking about before I started thinking about this topic.
I was at the eye doctor's office and he had a book on art and the eye.
There was an interesting section on how the eye uses activation and
inhibition of cells to detect lines (transition lines). A similar process
can detect other features, including relative movements and areas of color.
I began thinking about analogous methods that might be applied to various
kinds of concepts other than imagery and motion. There is something about
dealing with transitional features that can be likened to differentiating
and integrating elemental sensor inputs. So I am not really thinking of an
Artificial Neural Network but of a Conceptual Net that could detect, react
to, and learn to work with high-level structured information - and I wonder
whether I could express this system, or at least part of it, mathematically
to make it efficient. The mathematics would have to be simple enough for me
to figure out.
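As a rough sketch of that activation/inhibition idea (my own toy illustration in Python, not anything from the book): each cell responds with its own input minus a fraction of its neighbors' inputs, so uniform regions cancel out and the transition lines stand out.

```python
def lateral_inhibition(signal, inhibition=0.5):
    """Each cell's activation minus a fraction of its neighbors' activations.

    Uniform regions largely cancel; transitions (edges) produce
    responses that stand out from their surroundings.
    """
    out = []
    for i, x in enumerate(signal):
        # At the ends, treat the missing neighbor as the cell itself.
        left = signal[i - 1] if i > 0 else x
        right = signal[i + 1] if i < len(signal) - 1 else x
        out.append(x - inhibition * (left + right) / 2)
    return out

# A step edge: a flat dark region followed by a flat bright region.
print(lateral_inhibition([1, 1, 1, 5, 5, 5]))
# -> [0.5, 0.5, -0.5, 3.5, 2.5, 2.5] (the response spikes at the transition)
```

The flat regions give small uniform responses, while the two cells on either side of the transition give the extreme values - which is the sense in which this is like differentiating the sensor input.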
While my dream program would be able to work with high-level symbolic
information, it would also be able to work with less processed input, so
temporal differentiation could be more important in that scenario. But the
important thing to me is the idea that there might be an analogous kind of
thing with abstractions in general. There might be a mathematical
differentiation and integration of abstractions (like generalizations) that
are derived from symbolic information or from more unprocessed input data.
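To make the temporal part concrete (again just my own minimal sketch, with hypothetical names): differentiating a sequence of values emphasizes change over magnitude, and integrating the changes recovers the original sequence.

```python
def temporal_difference(seq):
    """First differences of a sequence: emphasize change over magnitude."""
    return [b - a for a, b in zip(seq, seq[1:])]

def temporal_integral(deltas, start=0):
    """Running sum of changes: recovers a sequence from its differences."""
    out = [start]
    for d in deltas:
        out.append(out[-1] + d)
    return out

values = [2, 2, 5, 5, 1]
print(temporal_difference(values))
# -> [0, 3, 0, -4]: zeros where nothing changed, spikes where something did

# Integration undoes differentiation, given the starting value.
assert temporal_integral(temporal_difference(values), start=values[0]) == values
```

The open question, for me, is whether an analogous pair of operations exists for abstractions - something that highlights where a generalization changes rather than what its 'magnitude' is.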
Thank you for your comments, Steve; they helped me to think about this
from a slightly different perspective.

Jim Bromer


On Sun, Jun 9, 2019 at 5:53 PM Steve Richfield <steve.richfi...@gmail.com>
wrote:

> Jim, et al.,
>
> This discussion was greatly extended and played out in the early days on
> neural networks. I will outline how it went. Someone (me?) should write a
> paper about it. Anyway...
>
> Me (and some others): There MUST be a math of adaptive learning, etc., and
> if so it almost certainly must make sense not only in value, but also in
> significance and dimensionality. Further, given more than a hundred million
> years of evolution, many of the answers are waiting to be seen in the lab -
> if only an astute mathematician would provide guidance.
>
> Several people showed how NNs could be created that fit ALL of these
> criteria - but it took different kinds of neurons to do different kinds of
> jobs, e.g. logic vs PID mechanical control.
>
> It soon became obvious that maybe 5% of the people in the field could
> follow the math - leaving the other 95% clueless about what was even being
> discussed. The mathematicians were ignored or shouted down (sound
> familiar?), and with some interesting exceptions, could not find funding,
> in part because they were being actively blocked by the non-mathematicians.
>
> My own major contributions were:
>
> 1.  Showing that one of the two biological neurons whose transfer function
> had ever been characterized was almost certainly computing based on the
> logarithms of probabilities, while the other neuron was computing something
> ELSE.
>
> 2.  Showing how and why differentiating the inputs and integrating the
> outputs of a NN, as is routinely done in biology, left its transfer
> function unchanged but made its learning temporal.
>
> These are HUGE, but less than worthless in a world that disparages
> mathematics, disparages biology, disparages..., etc. Doing this, I have
> become a sort of pariah in NN, AI, AGI, etc.
>
> So, I hereby pass this torch on to you, because there appears to be no
> way you could have less success with it than I have had.
>
> Steve Richfield
>
> On Sat, Jun 8, 2019, 8:43 PM <keghnf...@gmail.com> wrote:
>
>>
>> Evolution 2.0 Prize
>>
>> https://www.herox.com/evolution2.0
>>
>>
>>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T395236743964cb4b-M1fd1b13535beb31e33dbff8d
Delivery options: https://agi.topicbox.com/groups/agi/subscription
