Planning, as in thinking about the solution but not doing it yet, is simply the 
AI predicting problem > completion plus a "do not do it right now" signal.

So the predicted limb movements in its sensory_envisioned_plan are what control 
the body, and the predicted "do" or "don't do" decides whether the motors 
linked to them get triggered or not. This signal is not text, vision, audio, 
touch, smell, or taste, nor is it motor nodes; it is a third type, I suspect 
now, a "sense>motor" cortex, and yes, that is one hell of a simple cortex, as 
it is just a sensory switch that lets simple motor leaves get done. All 
thinking in my design is done in sensory; there is no need for a motor 
cortex... Prove me wrong.
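
A minimal sketch of that switch in Python, under my own assumptions (the 
MotorLeaf class, the node names, and the step() function are all illustrative 
labels, not anything specified here): motor leaves only get done when a 
sensory plan node names them AND the sense>motor "do" node is active.

    # Hypothetical sketch: sensory plan nodes + a single "do" switch gate motor leaves.
    class MotorLeaf:
        def __init__(self, name):
            self.name = name

        def fire(self):
            # Stand-in for actually driving an actuator.
            print(f"motor leaf '{self.name}' executed")

    motor_leaves = {"finger_tap": MotorLeaf("finger_tap"),
                    "foot_tap": MotorLeaf("foot_tap")}

    def step(active_sensory_nodes, sense_motor_do):
        """Trigger any motor leaf named by the sensory plan, but only if the
        sense>motor switch (the predicted "do") is also active."""
        if not sense_motor_do:
            return  # pure imagination / planning: nothing moves
        for node in active_sensory_nodes:
            leaf = motor_leaves.get(node)
            if leaf is not None:
                leaf.fire()

    step({"finger_tap"}, sense_motor_do=False)  # planning only, nothing happens
    step({"finger_tap"}, sense_motor_do=True)   # motor leaf 'finger_tap' executed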

So, our cortexes are (in our to-be-AGI it has text too cuz we can lol), with a 
rough sketch after the list:


text
vision
audio
touch
smell
taste
DO_TRIGGER_MOTOR_LEAVES

motor leaf nodes
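
Written as plain data (the Enum framing and the example leaf names are just my 
own convenient representation of the list above):

    from enum import Enum, auto

    class Cortex(Enum):
        TEXT = auto()
        VISION = auto()
        AUDIO = auto()
        TOUCH = auto()
        SMELL = auto()
        TASTE = auto()
        DO_TRIGGER_MOTOR_LEAVES = auto()  # the sense>motor switch, still a sensory cortex

    # Motor leaf nodes sit below the cortexes: simple endpoints, no motor cortex above them.
    motor_leaf_nodes = ["finger_tap", "foot_tap"]

    print([c.name for c in Cortex])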


This is because, as said, the sensory predictions already say clearly which 
motor-associated leaf nodes among the motor options to trigger. But just 
imagining doing some things in some order does not make YOU "do" them, unless 
you "decide to do it". So that extra go-ahead decision to do it in real life is 
a single sensory node called the DoItCortex. If you predict it at the same time 
as the sensory plan, then the linked motors get triggered and done.
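
A tiny self-contained demo of that distinction, assuming the go-ahead is 
represented as one more predicted sensory node (the names DoItCortex, 
run_prediction, and the plan contents are mine, purely illustrative):

    def run_prediction(predicted_nodes, plan_order):
        """predicted_nodes: everything the sensory cortexes predict right now.
        plan_order: the imagined sequence of actions inside that prediction."""
        if "DoItCortex" not in predicted_nodes:
            print("imagined only:", plan_order)  # thinking about it, no movement
            return
        for action in plan_order:
            print("doing:", action)              # the linked motors get done

    plan = ["reach", "finger_tap", "retract"]
    run_prediction({"reach", "finger_tap", "retract"}, plan)                # imagined only
    run_prediction({"reach", "finger_tap", "retract", "DoItCortex"}, plan)  # doing: ...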

Proof there is no need for a motor cortex: the sensory cortexes already create 
(from a world model hierarchy) the full predicted plan to do; all you need 
after that is to link each action to the right motor. Let's say you imagine a 
finger tap video, or a picture of one, and that video or image also contains a 
foot tap action. The two nodes, finger tap and foot tap, are therefore 
activated, and can link to the motor actions to do them.
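
Roughly, in code (the imagined_scene set and the node_to_motor table are my own 
illustrative stand-ins): every activated sensory node that has a motor link 
just gets done, through a flat lookup rather than any motor hierarchy.

    imagined_scene = {"table", "finger_tap", "foot_tap"}  # what the imagined video contains

    node_to_motor = {
        "finger_tap": lambda: print("tap finger"),
        "foot_tap":   lambda: print("tap foot"),
    }

    # Each activated sensory node with a motor link gets executed directly.
    for node in imagined_scene:
        action = node_to_motor.get(node)
        if action:
            action()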

There is still a Cerebellum to make sure actions reach the predicted targets, 
e.g. it slows down slightly wrong actions so the finger DOES land on the table 
and not a bit off to the side where it would miss.
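
A toy version of that correction, assuming a simple proportional shrink of the 
commanded movement as the fingertip nears the predicted target (the post only 
says the Cerebellum slows/steers slightly wrong actions; the numbers and rule 
here are my guesses):

    target = 10.0     # predicted target position of the fingertip
    position = 0.0
    velocity = 3.0    # the raw action would overshoot if left alone

    for _ in range(20):
        error = target - position
        # Cerebellum-style correction: shrink the commanded movement as the
        # fingertip nears the predicted target, so it doesn't miss.
        move = min(velocity, 0.5 * abs(error)) * (1 if error > 0 else -1)
        position += move
        if abs(target - position) < 0.01:
            break

    print(f"fingertip stopped at {position:.2f}, target was {target}")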

Like in the Terminator movies, if you see a person's body shape and face, or a 
jet, you can morph your nanobot structure to resemble them and make others 
think you are that person. This will happen soon. It is the memory of them, 
their shape, that allows you to become them; all the nanobots have to do then 
is move until they match the predicted memory. You can also predict nanobots 
moving around and lifting things like this, using your mind to control tons of 
complex things, like extending a tentacle of nanobots from yourself that then 
splits into 20 fingers and starts encircling some ball on some table, then 
quickly moving that arm around a pole, things you've never done before, with 
no bones or constraints.
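
Taken very loosely, "move until they match the predicted memory" could look 
like this in code, where each nanobot just closes the gap to its slot in a 
remembered target shape (the 2D points and the update rule are entirely my own 
illustrative assumptions):

    remembered_shape = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # the predicted memory
    bots = [(5.0, 5.0), (-3.0, 2.0), (4.0, -1.0), (0.5, 6.0)]            # current positions

    for _ in range(100):
        bots = [(bx + 0.2 * (tx - bx), by + 0.2 * (ty - by))
                for (bx, by), (tx, ty) in zip(bots, remembered_shape)]

    print(bots)  # positions have converged onto the remembered shape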

:)