Demo movie ahead:

Imagine this running on a neuromorphic chip. Very parallel.

And energy-efficient, because all activations come from input stimuli that leak 
away, plus a few self-igniting nodes that generate their own energy/reward, 
which also leaks. These self-igniting nodes are easy to activate.
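To make the mechanism concrete, here is a minimal sketch of leaky activation with a self-igniting node. Every name and constant (Node, LEAK, IGNITE, THRESHOLD) is my own assumption for illustration, not a description of any existing implementation:

```python
# Sketch: leaky activation plus self-ignition, as described above.
# All constants are assumed values, chosen only to make the demo run.

LEAK = 0.9        # fraction of activation kept each step; the rest leaks away
IGNITE = 0.2      # energy a self-igniting node generates on its own each step
THRESHOLD = 1.0   # activation needed to fire

class Node:
    def __init__(self, self_igniting=False):
        self.activation = 0.0
        self.self_igniting = self_igniting

    def step(self, stimulus=0.0):
        """Leak, add input stimulus and any self-generated energy, report firing."""
        self.activation = self.activation * LEAK + stimulus
        if self.self_igniting:
            self.activation += IGNITE
        fired = self.activation >= THRESHOLD
        if fired:
            self.activation = 0.0   # reset after firing
        return fired

auto = Node(self_igniting=True)
plain = Node()
auto_fires = [auto.step() for _ in range(20)]
plain_fires = [plain.step() for _ in range(20)]
print(any(auto_fires), any(plain_fires))  # -> True False
```

With no input, the self-igniter's activation climbs toward IGNITE / (1 - LEAK) = 2.0, crossing the threshold after a few steps, while the ordinary node stays silent until it receives stimuli — which is what makes these nodes "easy to activate" and the whole system cheap when inputs are sparse.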

It's almost too simple.

Yeah, the nodes aren't distributed representations like in an ANN, but other 
than that everything is realistic. Actually, the nodes are representations, 
because multiple different contexts can activate them, e.g. "myqcatq" lights 
up "dog".

https://www.youtube.com/watch?v=TL1AQR8oLSU

I have thought about this for a long time. ANNs are basically layers of nodes 
that zig-zag activations upward based on weights (connections); some weights 
send inhibition. The network looks at the context to select the next token. 
The contexts that activate a node are similar to the contexts as they were 
when stored: their positions (the delay of the activated letters) and 
alternative words. Frequency on a node makes it more pre-activated. So to me, 
energy in the system leads to activation.
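The selection step above can be sketched as a toy context matcher. Everything here — the memory structure, the delay penalty, the frequency weighting — is my own assumed illustration of the idea, not a real implementation:

```python
# Toy next-token selection by context matching: matching words contribute
# more energy when their positions (delays) line up with the stored context,
# and a node's stored frequency pre-activates it. All assumed, for illustration.

from collections import defaultdict

# memory[next_word][stored_context] = frequency count
memory = defaultdict(lambda: defaultdict(int))

def store(context, nxt):
    memory[nxt][tuple(context)] += 1

def energy(context, stored):
    """Overlap score: a matched word counts more the closer the delays match."""
    score = 0.0
    for i, w in enumerate(context):
        for j, s in enumerate(stored):
            if w == s:
                score += 1.0 / (1 + abs(i - j))  # positional-delay penalty
    return score

def next_token(context):
    best, best_e = None, -1.0
    for node, contexts in memory.items():
        e = sum(freq * energy(context, stored)      # frequency pre-activates
                for stored, freq in contexts.items())
        if e > best_e:
            best, best_e = node, e
    return best

store(["my", "cat", "chases", "the"], "dog")
store(["i", "walk", "my"], "dog")
store(["the", "stock"], "market")
print(next_token(["my", "cat", "likes", "the"]))  # -> dog
```

A new context that only partly overlaps a stored one ("likes" instead of "chases") still puts the most energy on the right node, because the remaining words sit at the same delays.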

The brain stores sequences of letter features. The hierarchy lets them 
integrate their contexts so that all of them are 'connected'. Recognition 
(understanding/translation) is how data/evolution evolves; the more context, 
the better and faster the understanding/evolution. Every word I say, in my 
brain, just has a relative delay (position) and nodes it activates/stores to.
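The letters-with-relative-delays idea can be sketched in a few lines. The encoding and vocabulary below are my own assumptions, just to show how delay-aligned letter features make recognition robust to a corrupted input:

```python
# Sketch: a word as a set of (letter, relative position) features.
# Recognition = picking the stored word sharing the most delay-aligned letters.
# Encoding and vocabulary are assumed, purely for illustration.

def encode(word):
    """A word becomes its letter features with relative delays (positions)."""
    return {(ch, i) for i, ch in enumerate(word)}

def similarity(a, b):
    """Number of shared (letter, delay) features."""
    return len(a & b)

vocab = {w: encode(w) for w in ["cat", "dog", "car"]}

def recognize(noisy):
    feats = encode(noisy)
    return max(vocab, key=lambda w: similarity(vocab[w], feats))

print(recognize("cqt"))  # -> cat ("c" and "t" at matching delays still win)
```

Even with the middle letter garbled, the surviving letters at the right delays activate the stored word — the same reason a typo-ridden context can still light up the right node.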

I'm looking for funding so I can hire a freelancer from Upwork; that has been 
easy and fast for me. The total milestones will cost ~400 USD. However, if 
anyone here has a lot of time and experimental desire, you could put this 
together in a day.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T409fc28ec41e6e3a-Md16c1b6fcf776528bf4e4ebe
Delivery options: https://agi.topicbox.com/groups/agi/subscription