On Tue, Feb 17, 2015 at 11:52 PM, Matt Mahoney via AGI <[email protected]> wrote:
> On Tue, Feb 17, 2015 at 10:26 PM, Jim Bromer via AGI <[email protected]> wrote:
>> I started wondering about how a good Satisfiability model might be
>> used with AGI.
>
> It wouldn't because the hard problems in AI like vision and language
> are not NP-hard. The more useful application would be breaking nearly
> all forms of cryptography. (One time pad would still be secure).
> -- Matt Mahoney
I seriously doubt the premise that the hard problems in AI, like vision and language, are not NP-hard. My (admittedly limited) experience with visual AI ran up against NP-hard formulations of methods that I thought would work. It can be argued that I simply made a mistake, and that had I worked at the problem longer I would have found perfectly good alternatives. But that was not my experience at the time. The observation that the simplest methods a programmer can think of will fail because they are NP-hard is a serious one. And since language could itself be considered a form of cryptography, your conjunction of cases (not language, but cryptography) does not look very strong. (Visual processing might also be considered a form of cryptography, and indeed it is used as such in captchas.) A polynomial-time solution to SAT might not turn out to be that powerful for AGI, but it probably would be.

One coherentist model that might be used with AGI is to build overlapping, bounded cells of logical (or logic-based) reasoning. This overlapping model, which uses (virtual) relations between objects that may exist within different levels of (virtual) local logical relations, may be too weak at this time only because SAT is in NP: even the simplest theoretical SAT relations become intractable. In fact, this is a good explanation for the failure of AI to achieve even the most basic form of strong AI; the inability to produce even child-like AI is an example. We know that recursion is a fundamental method of computation. If we use a recursive logical model of meta-theories that rely on SAT, then we quickly run up against SAT barriers, and those barriers are what prevent traction even in the simple stages of knowledge acquisition. Yet this recursive model is exactly the kind of thing that might be needed to investigate relations between levels in a multi-level (semi-permeable) bounded model of logic.
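To make the "SAT barrier" concrete, here is a minimal brute-force sketch (not a practical solver, and the DIMACS-style signed-integer clause encoding is just my choice for the example): it enumerates all 2^n assignments over a small bounded cell of clauses, which is exactly the exponential wall described above once the cells grow.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Exhaustively search all 2**n_vars assignments for a CNF formula.

    Each clause is a list of nonzero ints: k means variable k is true,
    -k means variable k is false (variables are 1-indexed).
    Returns a satisfying assignment as a tuple of bools, or None.
    """
    for assignment in product([False, True], repeat=n_vars):
        def lit_true(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        # A CNF formula holds when every clause has at least one true literal.
        if all(any(lit_true(lit) for lit in clause) for clause in clauses):
            return assignment
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
print(brute_force_sat(clauses, 3))  # -> (False, True, False)
```

The loop body is cheap, but the 2^n outer loop is the whole problem: doubling the cell size squares the search space, which is why even "the simplest of theoretical SAT relations" stall a naive recursive reasoner.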
There are times when you need to check the consistency of multiple, related, (virtually) local models (of some ideas or other interrelated AI data), and using meta-theories seems like a reasonable method for doing so.
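As a sketch of that consistency check (assuming, purely for illustration, that each local model is represented as a CNF clause set over a shared variable numbering): two overlapping cells are jointly consistent exactly when the union of their clauses is satisfiable, so the meta-level test reduces back to SAT.

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """True iff the CNF clause set has a satisfying assignment (brute force)."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any((assignment[abs(l) - 1] if l > 0 else not assignment[abs(l) - 1])
                   for l in clause)
               for clause in clauses):
            return True
    return False

def jointly_consistent(cell_a, cell_b, n_vars):
    """Two local models agree iff the union of their clause sets is satisfiable."""
    return satisfiable(cell_a + cell_b, n_vars)

cell_a = [[1, 2]]      # x1 or x2
cell_b = [[-1], [-2]]  # not x1, and not x2 -- contradicts cell_a
print(jointly_consistent(cell_a, cell_b, 2))  # -> False
```

Note that the meta-theory does no work of its own here: every cross-cell question it asks bottoms out in another SAT instance, which is the traction problem described above.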
