PM: 
That's basically my process. It involves reading, and evaluating, 
collaborating, then developing. What's your process? 

You and others have difficulty understanding what I’m doing here. What I’m 
doing, first of all and broadly, is philosophy. Everyone here, AFAIK, has 
difficulty understanding that when it comes to AGI, philosophy comes *before* 
technology. This is what Deutsch – someone who understands the 
technology – was saying, and what others have said, but very few here realise it.

What that means, practically, is first of all having and developing a philosophy 
of intelligence and of what it involves – what kinds of problems a living mind 
must solve. If you don’t understand what kinds of problems you must solve, you 
can hardly build a machine to solve them. And no one here understands AGI 
problems.

AGI problems are simply creative problems, as opposed to rational, formulaic 
(narrow AI) problems. Logical systems are not creative.

If you want to read up here, read the extensive psychological literature on 
“divergent vs convergent”, “wicked/wild vs tame”, “fluid vs crystallised”, 
“programmed vs unprogrammed”, and “structured vs unstructured” reasoning – which is 
helpful but not yet a fully developed literature, because all these 
distinctions come down to creative vs rational thinking. That no one here but 
me refers to this literature and this fundamental division of intelligence – 
basically “higher” vs “lower” intelligence – is a sign of how totally out of 
touch with reality (and RWR) this field is.

What I’m also doing is elaborating creativity in terms of its various facets – 
conceptualisation, object recognition (vision/perception), real-world reasoning 
(vs logical/mathematical reasoning), graphic vs geometric reasoning, imaginative vs 
symbolic reasoning, and general vs specific reasoning.

And all this, though it may sound purely airy philosophy/psychology, actually 
translates into practical AGI programs (as in “space program”).

The current field of AGI is in many ways helpful for developing a philosophy 
here, because in every case – logic, complexity, prediction, patterns, 
semantic nets – it is doing the polar opposite of what needs to be done: 
trying to extend the same old narrow AI, rather than doing the radical and 
revolutionary thinking required to solve AGI/creative problems.

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com