> Could you give me a little more detail about your thoughts on this?
> Do you think the problem of increasing uncomputableness of complicated
> complexity is the common thread found in all of the interesting,
> useful but unscalable methods of AI?
> Jim Bromer

Well, I think that dealing with combinatorial explosions is, in
general, the great unsolved problem of AI. I think the OpenCogPrime
design can solve it, but this isn't proved yet...

Even relatively unambitious AI methods tend to get dumbed down further
when you try to scale them up, due to combinatorial explosion issues.
For instance, Bayes nets aren't that clever to begin with ... they
don't do that much ... but to make them scalable, one has to make them
even more limited, basically ignoring combinations of causes acting
jointly and looking only at pairwise causal links between one isolated
event-class and another...
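A toy illustration of the point (my example, not from the email): take two binary causes whose XOR determines the effect. A model that scores each cause in isolation, the way scalable pairwise approximations do, sees no signal at all, while the joint view determines the effect exactly.

```python
import random

random.seed(0)

# Two binary causes whose XOR drives the effect: a purely
# joint ("combinational") cause with no pairwise signal.
samples = []
for _ in range(20000):
    a, b = random.randint(0, 1), random.randint(0, 1)
    samples.append((a, b, a ^ b))

def p_effect_given_cause(index, value):
    """P(effect=1 | one cause alone) -- the isolated pairwise view."""
    rows = [s for s in samples if s[index] == value]
    return sum(s[2] for s in rows) / len(rows)

def p_effect_given_both(a, b):
    """P(effect=1 | both causes) -- the joint view."""
    rows = [s for s in samples if (s[0], s[1]) == (a, b)]
    return sum(s[2] for s in rows) / len(rows)

# Each cause in isolation looks useless (both values give ~0.5)...
print(p_effect_given_cause(0, 0), p_effect_given_cause(0, 1))
# ...but the pair determines the effect exactly (0.0 vs 1.0).
print(p_effect_given_both(0, 0), p_effect_given_both(1, 0))
```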

And of course, all theorem provers are unscalable due to having no
scalable methods of inference tree pruning...
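The arithmetic behind that claim is simple enough to sketch (the numbers below are illustrative, not from the email): with b applicable inference rules per step (the branching factor) and proof depth d, unpruned search must consider on the order of b**d nodes.

```python
# Hypothetical sketch: unpruned proof search over a tree with a fixed
# branching factor grows exponentially with proof depth.
def tree_size(branching, depth):
    """Total nodes in a full search tree of the given depth."""
    return sum(branching ** level for level in range(depth + 1))

# Even a modest branching factor of 10 is hopeless past shallow proofs.
for depth in (5, 10, 20):
    print(depth, tree_size(10, depth))
```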

Evolutionary methods can't handle complex fitness functions because
they'd require overly large population sizes...
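One back-of-envelope way to see the population-size problem (my framing, not Ben's): if the fitness function gives credit only when k interacting bits are all simultaneously correct, a random individual scores with probability 2**-k, so on the order of 2**k individuals are needed before selection has anything to act on.

```python
import random

random.seed(1)

def hit_rate(k, trials=200000):
    """Empirical chance a random k-bit individual gets all k bits right
    when the fitness function offers no partial credit."""
    hits = sum(all(random.randint(0, 1) for _ in range(k))
               for _ in range(trials))
    return hits / trials

# Empirical rate tracks 2**-k, so the population needed to even sample
# a fit individual grows exponentially in the number of interacting bits.
for k in (4, 8, 12):
    print(k, hit_rate(k), 2 ** -k)
```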

In general, the standard AI methods can't handle pattern recognition
problems requiring finding complex interdependencies among multiple
variables that are obscured among scads of other variables....
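To make the "obscured among scads of other variables" failure concrete, here is a hedged sketch (names and setup are mine): two variables jointly determine the label via XOR, buried among twenty noise variables. Any method that screens variables one at a time ranks the relevant pair no higher than the noise, and so discards it.

```python
import random

random.seed(2)
N_NOISE = 20

# Dataset: label = XOR of variables 0 and 1; variables 2..21 are noise.
rows = []
for _ in range(20000):
    x = [random.randint(0, 1) for _ in range(N_NOISE + 2)]
    rows.append((x, x[0] ^ x[1]))

def univariate_score(i):
    """|P(label=1 | x_i=1) - P(label=1 | x_i=0)|: a one-variable screen."""
    on  = [y for x, y in rows if x[i] == 1]
    off = [y for x, y in rows if x[i] == 0]
    return abs(sum(on) / len(on) - sum(off) / len(off))

# The two truly relevant variables score no better than the noise, so a
# univariate screen cannot find the interdependency.
scores = sorted(((univariate_score(i), i) for i in range(N_NOISE + 2)),
                reverse=True)
print(scores[:5])
```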

The human mind seems to do this by building up intuition through
drawing analogies among the many problems it confronts during its
history. Also, of course, the human mind builds internal simulations
of the world, probes these simulations, and draws analogies from
problems it solved in its inner sim world to problems it encounters in
the outer world...

etc. etc. etc.

ben


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/