*Synergy, Reduction, and Saliency Are Paramount to General AI*

https://www.facebook.com/notes/juan-carlos-kuri-pinto/synergy-reduction-and-saliency-are-paramount-to-general-ai/10151442948752712
In my AI systems I never preprogram preexisting AI algorithms. Instead, I let
the machine learn the causal geometries of Reality:


Reduction is a proactive, unconscious exploration of the whole space of
mental resources, mind patterns, and hypotheses. It is not a
straightforward, preprogrammed recipe for solving a problem, nor a
reductionist system. It is rather an inverse problem in which the mind
holistically tries to find the recipe. If the mind cannot match known
patterns of thinking to solve a problem or explain a phenomenon, it
tries to learn or create the key elements and the missing pieces of the
puzzle. Thus, both the running time of the reduction algorithm and the
resulting recipes are unpredictable, imperfect, and not guaranteed.
Successful recipes are always stored. "Fully functional minds" therefore
seek to maximize utility functions that are themselves, recursively,
products of reductions. The evolution of sane minds always seeks
sophistication, welfare, and improvement. That is the basis of the
scientific method.
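The loop sketched in that paragraph — try stored recipes first, and on failure search for a new recipe and store it if it works — can be written as a toy program. Everything here (the function names, the tiny "proportional sequence" problem domain) is my own illustrative assumption; the text gives no implementation.

```python
# Toy sketch of the "reduction" loop: match stored mind patterns first;
# if none fits, try to learn a new recipe, and store it when it succeeds.
# Neither the running time nor success is guaranteed.

def reduce_problem(problem, recipes, learn, max_attempts=10):
    """Return a solution via a stored or newly learned recipe, else None."""
    for recipe in recipes:                  # match existing patterns
        result = recipe(problem)
        if result is not None:
            return result
    for _ in range(max_attempts):           # inverse problem: find a recipe
        candidate = learn(problem)          # create the missing piece
        if candidate is not None and candidate(problem) is not None:
            recipes.append(candidate)       # successful recipes are stored
            return candidate(problem)
    return None                             # unpredictable, non-guaranteed

# Example domain (an assumption): a "problem" is an (input, output) pair,
# and a recipe returns the output only if its rule explains the pair.
def double_recipe(p):
    x, y = p
    return y if x * 2 == y else None

def learn_from_example(p):
    """Guess a proportional rule x -> k*x from the single example."""
    x, y = p
    if x != 0:
        k = y // x
        return lambda q: q[1] if q[0] * k == q[1] else None
    return None

recipes = [double_recipe]
print(reduce_problem((3, 6), recipes, learn_from_example))   # matched: 6
print(reduce_problem((3, 9), recipes, learn_from_example))   # learned: 9
print(len(recipes))                                          # recipe stored: 2
```

The point of the sketch is only the control flow: solving and learning are interleaved, and the recipe store grows as a side effect of success.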


This conversation is also relevant:


Juan Carlos Kuri: "In case of pattern recognition, salient features are the
ones that are critical and crucial to recognize the pattern. Remove them
from your pattern representation, and your pattern recognizer will start to
fail. ... The same applies to model thinking: If you forgot to include a
crucial and critical feature, the behavior of your abstraction will
completely diverge from the real entity."
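The ablation idea in this quote — a feature is salient to the degree that removing it makes the recognizer fail — can be illustrated with a minimal sketch. The trivial dot-product "recognizer" and the data are illustrative assumptions, not anything from the conversation.

```python
# Minimal saliency-by-ablation: score each feature by how much the
# recognizer's output degrades when that feature is removed (zeroed out).
# The dot-product recognizer and the numbers are illustrative assumptions.

def recognizer(features, weights):
    """Score a pattern; a higher score means stronger recognition."""
    return sum(f * w for f, w in zip(features, weights))

def saliency_by_ablation(features, weights):
    """Drop each feature in turn; its saliency is the score it carried."""
    base = recognizer(features, weights)
    saliencies = []
    for i in range(len(features)):
        ablated = features[:i] + [0.0] + features[i + 1:]
        saliencies.append(base - recognizer(ablated, weights))
    return saliencies

pattern = [1.0, 0.1, 2.0]
weights = [3.0, 0.2, 0.5]
print(saliency_by_ablation(pattern, weights))  # the first feature dominates
```

Removing the highest-saliency feature costs the recognizer the most score, which is exactly the failure mode the quote describes.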


Monica Anderson: "Saliency is the key to AI. And to Models. One could say
that the goal of AI is to create a machine capable of Autonomous Reduction
- that Understands the World and creates useful Models for it and/or our
use. ... An AI is a Model Making Machine. It has to be implemented without
Models of the World. It has to experience, learn, abstract, and determine
saliency of its input data and it has to Understand the World. Only when
that Understanding exists and operates do we expect it to generate Models
for us."


Paraphrasing Dr. Joaquín Fuster: "Intelligence is within the brain network.
Trying to understand intelligence by studying neurotransmitters is like
trying to understand written language by studying the chemical composition
of the ink. It's simply not the right level of complexity. Language lies
within the relationships between words."



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now