> -----Original Message-----
> From: Mike Archbold [mailto:[email protected]]
>
> The problems start in strong AI, however, when you try to reconcile things
> like "beginning, cause, one vs. many, sameness/difference/likeness,
> complete vs. incomplete, possible, potential..." etc., etc. Considering any
> one of these alone is fine; one can usually make sense of it. The problem
> is that all of these concepts are taken up concurrently in anything in the
> world. How do you even begin to work all of that together? If the approach
> is emergence, nobody does; they just place their hope in a clever learning
> scheme that can determine those things. It might work -- I'm not knocking
> evolutionary learning algorithms. It might not, though, and then it's back
> to head scratching over these long-standing philosophy issues, like
> potential vs. actual, appearance in relation to existence... on and on
> like that.
I know. All these fuzzy concepts from the philosophers -- Kant is like that; I just can't read Kant. Picture loading them all into a knowledge graph. What is "essence" across the various philosophies, from antiquity until now, and how does it relate to "being"? They should just load into the system: aren't all these things just subgraphs with relative and changing definitions?

What we can do with AGI that the philosophers cannot is change the rules from the ground up. Modify the logic and see what happens. What if "up" really is "down", or outside is really inside? How does the system refactor itself? Some AGIs couldn't deal with that; they might have to re-emerge what the "essence of being" is. An AGI system really needs to be able to do that. The shining light of rationalism has to de-rationalize itself locally, in various ways, to see into the shadows of the unknown, so it can ingest new rules that were previously illogical... and some of those new rules might require total system refactoring.

Human brains struggle with total system refactoring. A 0=∞ conjecture is deflected rather than subsumed: there is too much logic against it, and full integration would yield unacceptable systemic risk.

John
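P.S. A toy sketch of the "concepts as subgraphs" idea, in plain Python. Every name here is invented for illustration (no real knowledge-graph library), and refactor() is just a naive edge rewrite, not a real logic revision:

from collections import defaultdict

class ConceptGraph:
    """A tiny knowledge graph: a concept's 'definition' is just its
    outgoing edges, and every edge is tagged with the context (the
    philosopher or era) that asserted it."""

    def __init__(self):
        # edges[concept][context] -> set of (relation, other_concept)
        self.edges = defaultdict(lambda: defaultdict(set))

    def assert_edge(self, concept, relation, other, context):
        self.edges[concept][context].add((relation, other))

    def definition(self, concept, context):
        # "Essence" is relative: it is whatever subgraph the chosen
        # context happens to assert.
        return self.edges[concept][context]

    def refactor(self, relation, inverse):
        # Change a rule from the ground up: rewrite every edge that uses
        # `relation` to use `inverse`, so every definition built on it
        # quietly "re-emerges".
        for by_context in self.edges.values():
            for context, rels in by_context.items():
                by_context[context] = {(inverse if r == relation else r, o)
                                       for r, o in rels}

g = ConceptGraph()
g.assert_edge("essence", "grounds", "being", context="Aristotle")
g.assert_edge("essence", "precedes", "existence", context="Scholastics")
g.assert_edge("existence", "precedes", "essence", context="Sartre")

print(g.definition("essence", "Aristotle"))    # {('grounds', 'being')}

# What if "precedes" really is "follows"? Flip it and see what happens:
g.refactor("precedes", "follows")
print(g.definition("essence", "Scholastics"))  # {('follows', 'existence')}

The point of the toy refactor() is only this: once an edge label flips, every "definition" downstream of it silently changes -- the cheap version of re-emerging the essence of being.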
