“Sounds” good. But quite a number of AGI-ers – e.g. Pei with his 
“nonalgorithmic algorithm” – argue somewhat similarly – and in the end it’s 
always the same old algorithms with just a few tweaks, which never generate anything new.

Can you provide a hint of a practical, effective mechanism for a) “learning/creating 
new elements” and b) “holistic searching”, and c) an example of an applicable 
problem and the effect to be achieved?

From: Juan Carlos Kuri Pinto 
Sent: Friday, May 03, 2013 6:26 PM
To: AGI 
Subject: Re: [agi] Kurzweil irrelevant

In my AI systems I never preprogram preexisting AI algorithms; rather, I let the 
machine learn the causal geometries of Reality: 

Reduction is a proactive and unconscious exploration of the whole space of 
mental resources, mind patterns, and hypotheses. It is not a straightforward 
and preprogrammed recipe to solve a problem. It is not a reductionist system. 
It is rather an inverse problem in which the mind holistically tries to find 
the recipe. If the mind cannot match mental patterns of thinking to solve 
problems or to explain phenomena, the mind tries to learn or to create the key 
elements and the missing pieces in the mind puzzles. Thus, both the running time 
of the reduction algorithm and the resulting recipes are totally unpredictable, 
imperfect, and not guaranteed. Successful recipes are always stored. Therefore, 
"fully functional minds" always seek to maximize utility functions which are 
recursively products of reductions. The evolution of sane minds always seeks 
sophistication, welfare, and improvement. That's the basis of the scientific 
method.


This conversation is also relevant:


Juan Carlos Kuri: "In pattern recognition, salient features are the ones that 
are critical to recognizing the pattern. Remove them from your pattern 
representation, and your pattern recognizer will start to fail. ... The same 
applies to model thinking: if you forget to include a crucial feature, the 
behavior of your abstraction will completely diverge from the real entity."
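A minimal sketch of that saliency point, using a hypothetical two-feature nearest-centroid recognizer (all names and data invented for illustration, not from the thread): feature 0 separates the classes, feature 1 is uninformative, and dropping feature 0 from the representation makes the recognizer fail.

```python
def centroid(points):
    # Component-wise mean of a list of equal-length tuples.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(x, centroids):
    # Nearest-centroid rule: pick the label whose centroid is closest.
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

A = [(0.0, 5.0), (0.2, 5.1), (0.1, 4.9)]   # class A: salient feature ~0
B = [(5.0, 5.0), (5.1, 5.1), (5.2, 4.9)]   # class B: salient feature ~5

full = {"A": centroid(A), "B": centroid(B)}
print(classify((5.1, 5.0), full))           # "B" -- salient feature intact

# Remove the salient feature (index 0) from the representation:
reduced = {"A": centroid([(p[1],) for p in A]),
           "B": centroid([(p[1],) for p in B])}
# The two reduced centroids coincide on the uninformative feature, so the
# recognizer can no longer separate the classes: a class-B sample now lands
# on the tie-broken first label.
print(classify((5.0,), reduced))
```

With the salient feature removed, both reduced centroids collapse to the same value, so classification degenerates to a tie-break rather than a real decision.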

Monica Anderson: "Saliency is the key to AI. And to Models. One could say that 
the goal of AI is to create a machine capable of Autonomous Reduction - that 
Understands the World and creates useful Models for it and/or our use. ... An 
AI is a Model Making Machine. It has to be implemented without Models of the 
World. It has to experience, learn, abstract, and determine saliency of its 
input data and it has to Understand the World. Only when that Understanding 
exists and operates do we expect it to generate Models for us."




On Fri, May 3, 2013 at 11:58 AM, Mike Tintner <[email protected]> wrote:

  AGI’ers’ cognitive synergy is totally different from that of the brain.

  Cognitive synergy per Ben is *predefined*. If you use existing algos, you 
have predefined, narrow frames/networks of options – and you have no capacity 
for generating new options.

  The brain works on totally different principles. If you want a simple analogy 
think of how millions/billions of humans demonstrably solve problems everyday 
with the use of the world wide web. If you want to solve a problem – as you 
just did with your post – you go truly “searching” – perhaps we must use a 
better word like “questing” – on the net for options. You DO NOT KNOW in 
advance, like every algo, where to search for info. You continually find NEW 
sources of info/options to generate NEW solutions to problems.  Neither algos 
nor cognitive synergistic suites of algos can do this – can explore.

  A web search is truly synergistic because it entails questing for and finding 
altogether NEW sources of info and new skills.

  Solving creative problems in your head is essentially the same as solving 
them via the web – you’re just exploring the internal “worldwide web” inside 
your head. Again, you have no predefined sets or spaces of options for any 
creative, real-world problem. Rather, you search through disordered clusters 
of associations in your head – messy webs as opposed to ordered 
networks/spaces. These searches are adventures – explorations – the opposite of 
what algos do, which is really “checking” lists rather than truly “searching” 
through jungles of information.
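For contrast, here is a toy sketch (mine, not Tintner's, with an invented link graph) of a search whose candidate set grows as it runs, as opposed to checking a fixed, predefined list of options; whether such a crawl counts as true "questing" in his sense is exactly what is in dispute.

```python
from collections import deque

# Toy "web": each page links to pages the searcher does not know in advance.
links = {
    "start": ["forum", "wiki"],
    "forum": ["blog"],
    "wiki":  ["paper"],
    "blog":  ["answer"],
    "paper": [],
    "answer": [],
}

def quest(start, is_solution):
    """Exploratory search: the frontier of candidate sources grows as we go,
    unlike checking a predefined list of options."""
    seen, frontier = {start}, deque([start])
    while frontier:
        page = frontier.popleft()
        if is_solution(page):
            return page
        for nxt in links.get(page, []):
            if nxt not in seen:      # a NEW source, unknown at the outset
                seen.add(nxt)
                frontier.append(nxt)
    return None                      # the quest may simply fail

print(quest("start", lambda p: p == "answer"))   # "answer"
```

At the outset the searcher knows only "start"; "answer" enters its option set only because intermediate pages were discovered along the way.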

  So brain synergy – ad hoc patchworks of different clusters of information 
and skills – yes; Ben’s cognitive synergy – pre hoc, prepatterned searches of 
predefined options and skills – no.
  From: Juan Carlos Kuri Pinto 
  Sent: Friday, May 03, 2013 5:32 PM
  To: AGI 
  Subject: Re: [agi] Kurzweil irrelevant

  Synergy is paramount for General AI! 

  Paraphrasing Dr. Joaquín Fuster: 
  "Intelligence is in the brain network. Trying to understand intelligence by 
studying neurotransmitters is like trying to understand written language by 
studying the chemical composition of the ink. It's simply not the right level 
of complexity. Language lies within the relationships between words."

  Cortex and Mind: Unifying Cognition

  
http://www.amazon.com/Cortex-Mind-Unifying-Cognition-ebook/dp/B000VDK26I/ref=sr_1_1_bnp_1_kin?ie=UTF8&qid=1367598110&sr=8-1&keywords=cortex+and+mind


  El alma está en la red del cerebro (The soul is within the brain network)

  http://www.rtve.es/television/20111111/alma-esta-red-del-cerebro/474693.shtml



  On Fri, May 3, 2013 at 9:35 AM, Ben Goertzel <[email protected]> wrote:

    >> 
http://multiverseaccordingtoben.blogspot.hk/2011/06/why-is-evaluating-partial-progress.html
    >>
    >> You may not like this explanation, but that's your problem ;p
    >
    > Well, I don't like it either.


    Good -- that is a point of evidence in its favor ;)


    >There is no evidence for synergy as the
    > key to intelligence.


    There most certainly is.  Ever read a textbook of cognitive neuroscience?


    >All of the evidence is in the other direction.


    You  mean there is lots of evidence that narrow AI programs do not manifest
    significant internal synergies .... Yeah...

    -- Ben G



    -------------------------------------------
    AGI
    Archives: https://www.listbox.com/member/archive/303/=now
    RSS Feed: https://www.listbox.com/member/archive/rss/303/23601136-98835e3f

    Modify Your Subscription: https://www.listbox.com/member/?&; 

    Powered by Listbox: http://www.listbox.com


