Any level of complexity creates a natural wall behind it, where the previous level cannot be constructed from the same concepts as the current level; this is just a law of reality. Ergo, no human-devised construct, like language, can be used to create a true human-level AGI. Language, set theory and graph theory have no inherent intelligence; without the intelligence of the user they are totally useless. They are all products of an intelligent system, a higher level of complexity.


IMHO the lowest level of abstraction at which to consider any AGI schema is the spike, the electrochemical synapse and the neuron. We are the only highly intelligent species we currently know of, so it just makes sense to use our own nervous system as a template for creating an AGI. I needed to simulate the whole system, including dimensional space/time, electrochemical effects, etc., in order to get insights into how the brain functions.
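To give a feel for what "the spike level" means in practice, here's a toy leaky integrate-and-fire neuron in Python. It's only an illustrative sketch; the constants and the constant-current drive are arbitrary choices for the example, and my actual simulation (3D space, electrochemical effects, etc.) is far more involved.

```python
import numpy as np

# Toy leaky integrate-and-fire neuron: an illustrative stand-in for the
# spike/synapse level of abstraction, not the actual simulator.
def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-65e-3,
                 v_reset=-70e-3, v_threshold=-50e-3, r_m=10e6):
    """Return the membrane voltage trace and spike times for a current input."""
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration of the injected current.
        v += (-(v - v_rest) + r_m * i_in) * (dt / tau)
        if v >= v_threshold:          # threshold crossed -> emit a spike
            spikes.append(step * dt)
            v = v_reset               # reset after the spike
        voltages.append(v)
    return np.array(voltages), spikes

# Constant 2 nA drive for 100 ms.
current = np.full(100, 2e-9)
trace, spike_times = simulate_lif(current)
print(f"{len(spike_times)} spikes in 100 ms")
```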


I think it’s important to draw some distinctions. The brain doesn’t use our traditional concepts of data storage or computation, so no binary gates, numbers, etc. Neither is the brain built complete like a traditional mechanism; it grows and develops along with its senses over time. There are no programs, just the GTP, though I suppose for the purposes of this explanation the GTP could be considered the main program loop. The GTP never stops; it’s constantly cycling and phasing. The brain has a relevant 3D volume/shape, unlike a silicon wafer, so the brain exists in both space and time, which is important. Local/regional electrochemical effects contribute to the processing, as do both the internal and external feedback loops. You can consider reality as part of the feedback process: the motor cortex/cerebellum moves muscles and alters reality, the visual, auditory and tactile senses bring the loop back into the system, etc. Memories are ingrained in the structure of the connectome, relative to the current state of the GTP, so no linear/serial searching is required; everything is parallel. There is no inherent learning mechanism; the connectome learns to learn. It’s a structure that has the capacity to learn anything within the frequency domains of its senses.


The external sensory cortex re-encodes incoming sensory streams by applying spatiotemporal compression and then re-maps the temporal phase to a spatial dimension/location; time is irrelevant to the mapping. So the external sensory cortex areas recognise/encode against external reality time, but once the sparse re-encoded signals enter the GTP they are independent of external reality time; internal GTP time is regulated/utilised by the connectome. Our sense of time is provided by another brain structure entirely and can differ greatly from external/reality time; the brain can change its concept of time depending on the experience or the state of the GTP.
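As a rough sketch of what I mean by re-mapping temporal phase onto a spatial location (illustrative only; the carrier period and number of spatial bins here are arbitrary, not values from my model):

```python
import numpy as np

# Toy illustration: take spike times from a sensory stream and re-map each
# spike's phase within a carrier cycle onto a position along a 1-D "cortical"
# axis, so downstream processing sees a sparse spatial pattern rather than an
# external-time signal. Period and bin count are assumptions for the example.
def phase_to_position(spike_times, cycle_period=0.025, n_positions=64):
    """Map each spike's phase within the cycle to one of n spatial bins."""
    phases = (np.asarray(spike_times) % cycle_period) / cycle_period   # 0..1
    positions = (phases * n_positions).astype(int)
    activity = np.zeros(n_positions, dtype=int)    # sparse spatial activity map
    np.add.at(activity, positions, 1)
    return activity

spikes = [0.0012, 0.0090, 0.0263, 0.0301, 0.0495]   # spike times in seconds
print(phase_to_position(spikes))
```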


The GTP flows through the intelligent structure of the connectome, and the connectome is built from the state of the GTP. Curiosity partly comes from the sharing of common networks, cortex areas that encode similar qualities of reality. Self-awareness is the recognition of internal processes in the same terms in which the system understands external processes. Consciousness is a by-product of the complexity; harmonics piggyback on the logical patterns within the GTP. It can’t be avoided, and it is learnt and ingrained into the connectome along with everything else. If the GTP stalls/stops then that connectome is dead.


I’m ranting; I’ve been doing this project for 20-odd years and I’m still in awe of the complexity and beauty of the human condition.


> Which sounds like you want to replace phonemes with some kind of protocol based on "what it has heard from its peers".


The vocal output streams are not directly related to what the system has previously heard. They are an internal approximation built from the various cortex regions that have contributed to the output. They are generated by the system’s internal model/understanding of reality, gained through experience using its own senses.


Phoneme model


I was using the CMU dataset to test the audio clips of the phonemes I’d recorded from the phoneme video; I needed to be sure that the clips would, in fact, produce intelligible speech when combined.
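The test amounts to something like the sketch below: look a word up in the CMU pronouncing dictionary, then splice the recorded phoneme clips together and listen to the result. The clips/ folder and the "<ARPABET>.wav" naming are assumptions for illustration, not my actual file layout.

```python
from nltk.corpus import cmudict            # pip install nltk; nltk.download('cmudict')
from pydub import AudioSegment             # pip install pydub

# Hedged sketch: map a word to its ARPAbet phonemes via the CMU pronouncing
# dictionary, then concatenate my recorded phoneme clips to check the output
# is intelligible. File names/paths here are illustrative assumptions.
pron = cmudict.dict()

def speak(word, clip_dir="clips"):
    phones = pron[word.lower()][0]                  # first listed pronunciation
    stripped = [p.rstrip("012") for p in phones]    # drop stress markers
    audio = AudioSegment.silent(duration=50)
    for p in stripped:
        audio += AudioSegment.from_wav(f"{clip_dir}/{p}.wav")
    audio.export(f"{word}_test.wav", format="wav")
    return stripped

print(speak("hello"))    # e.g. ['HH', 'AH', 'L', 'OW']
```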


The actual vocal outputs from the connectome are parallel spatiotemporal spike trains that would normally drive the vocal cords/muscle inflections used to shape the sounds, so the pronunciation of a phoneme is actually a stream, not a specific trigger. You can feel your jaw and tongue move as you make sounds, and you can hear the feedback through the bone; these signals are all relevant to the connectome, as they drive/provide the resolution/accuracy required to produce speech. At this stage the phonemes will require a kind of wrapper: a program to interpret the connectome’s output and then trigger the phonemes. This is not an ideal solution, hence the botch, but I’m already using a similar schema to convert the bot’s head/arm movements.
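The wrapper would be something along these lines (a sketch only, not my actual code; the per-phoneme output channels, window size and threshold are assumptions for illustration):

```python
import numpy as np

# Illustrative wrapper: assume each phoneme has a dedicated output channel in
# the connectome's spike output; count spikes per channel over a short window
# and trigger the matching clip when a channel crosses a threshold.
PHONEMES = ["AA", "AE", "HH", "L", "OW"]             # illustrative subset

def decode_window(spike_counts, threshold=8):
    """spike_counts: spikes per channel observed in the last window."""
    winner = int(np.argmax(spike_counts))
    if spike_counts[winner] >= threshold:
        return PHONEMES[winner]
    return None                                      # no phoneme this window

def run_wrapper(windows, trigger):
    last = None
    for counts in windows:
        phoneme = decode_window(np.asarray(counts))
        if phoneme and phoneme != last:              # avoid re-triggering
            trigger(phoneme)                         # e.g. play clips/<p>.wav
        last = phoneme

# Fake connectome output: three 10 ms windows of per-channel spike counts.
fake_output = [[1, 0, 9, 2, 0], [0, 1, 10, 3, 1], [2, 0, 1, 12, 0]]
run_wrapper(fake_output, trigger=lambda p: print("trigger", p))
```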


To me it’s not a matter of writing a theory; we already know of an intelligent schema, we just have to figure out how it actually functions and build it.


There is more information @ the following...
 
https://www.youtube.com/user/korrelan


https://sites.google.com/view/korrtecx


 :)
