Yes. SAT solvers act on sets of symbolic logical propositions. This can be effectively applied to logically closed systems as well (you know what I am getting at). Inductive systems are not logically closed, because new ideas may change the logical relationships of known theories. Any such system can also contain errors. My thought is that we human beings can, and do, create simple logical theories about our environment.
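To make "acting on sets of symbolic propositions" concrete, here is a toy sketch in Python (just an illustration of what a SAT solver's input looks like, not my proposed solver). The clause encoding and names are my own for this example: a formula in conjunctive normal form, checked by brute force.

```python
from itertools import product

# A formula in CNF: each clause is a set of literals. A positive integer k
# stands for variable x_k, a negative integer -k for "not x_k".
# Example formula: (x1 OR not x2) AND (x2 OR x3).
example = [{1, -2}, {2, 3}]

def brute_force_sat(clauses, n_vars):
    """Try every assignment; return a satisfying one, or None.
    Exponential in n_vars -- usable only for toy formulas."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {v: bits[v - 1] for v in range(1, n_vars + 1)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

print(brute_force_sat(example, 3))
```

The exponential loop is exactly why a polynomial-time solver, if one existed, would matter.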
I don't think that human beings are SAT solvers either, but I do think that they can hold a great deal of 'distributed' ideative data that can be associated with some group (or groups) of theories. These relations need to be learned, usually over a period of time, but they can somehow be called into play when one theory (or a variation of a theory) has some relevance to them. This kind of reaction may not be instantaneous, except for a well-learned associative relation, and it may not work as a purely logical analytical device. Even so, I believe that a SAT solver might be able to simulate such situations and enhance contemporary rational-based AI programs significantly.

I mentioned a limited general SAT solver that would be used with an inductive AI program, one that would use logic to create theories about the IO data environment and about its own 'thinking'. There would still be more complicated logical problems that my solver (if it works at all) would not be able to handle. What I have suggested here is that there may be a threshold where a better solver, even a limited one, might allow a rational-based AI program (even one working in an inductive, non-monotonic, non-taxonomic theory-building application) to advance significantly beyond contemporary AI programs. I feel that a symbolic approach would be easier to start with, and it could be feasible with better insight and some stronger methods. I also feel that (what I think is) a gated recurrent artificial neural network with n-space mapping (or bus-state mapping) could be made to work, but this would in essence be very similar to a hybrid approach.

Thank you for your politeness and your insightful comments. I am going to quit this group because I have found that it is a pretty bad sign when the moderator mocks an individual for his religious beliefs. However, I hope to talk to you again on some other forum.
Jim Bromer

On Mon, Mar 31, 2008 at 9:12 AM, Stephen Reed <[EMAIL PROTECTED]> wrote:
>
> Hi Jim,
> According to the Wikipedia article on SAT solvers, there are extensions for
> quantified formulas and first-order logic. Otherwise SAT solvers operate
> principally on sets of symbolic propositions. Agreed?
>
> I believe that SAT solvers are not cognitively plausible. More precisely, I
> believe that humans do not perform constraint-satisfaction problem solving
> in a similar manner. One might argue, however, that SAT solvers are already
> superhuman with respect to their performance on certain problems (e.g.
> digital circuit design verification).
>
> Where do you stand in the symbolic AI vs. connectionist AI dimension of our
> audience? On the symbolic side?
> -Steve
>
> Stephen L. Reed
> Artificial Intelligence Researcher
> http://texai.org/blog
> http://texai.org
> 3008 Oak Crest Ave.
> Austin, Texas, USA 78704
> 512.791.7860
>
> ----- Original Message ----
> From: Jim Bromer <[EMAIL PROTECTED]>
> To: [email protected]
> Sent: Monday, March 31, 2008 7:46:30 AM
> Subject: Re: [agi] Logical Satisfiability...Get used to it.
>
> I am going to try to summarize what I have said.
>
> With God's help, I may have discovered a path toward a method to
> achieve a polynomial-time solution to Logical Satisfiability, and so
> from this vantage point I have started to ask whether or not a
> feasible SAT solver would actually be useful in advancing general AI.
>
> I think that most knowledgeable people would assume that it would be.
> However, there has been some doubt about this, so I came up with a
> logical model that might show how such a situation could make a
> critical difference to general AI programming, or at least AI
> programming that emphasizes the use of rational methods.
>
> My feeling right now is that if my solver actually works it would
> take at least n^3 or n^4 steps.
> This means, for all practical purposes, that it would stretch the range
> of general solvers from logical formulas of 20 or so distinct variables
> to formulas hundreds or even thousands of characters long. I believe
> that this would be a major advancement, both in general computing and in
> AI programming.
>
> But would it make any difference in general AI programming, what this
> group calls AGI?
>
> Imagining a system that used logical or rational methods that might
> initially be expressed in fairly simple logical terms, but which could
> have hundreds of variants and hundreds of interconnections with the
> other logical formulas of the system, I have come up with a case where
> n^4 SAT might be critical. The formulas of the system that I have in
> mind would be speculative and derived from an inductive logical system
> that was designed to be capable of learning. I then pointed out that
> some of the formulas produced by an automated system might have only a
> few valid cases, meaning that a trial-and-error search for logical
> satisfiability would be very unlikely to work for those formulas. In
> this case the n^4 SAT solver would be very useful. But why would this
> be useful to AGI?
>
> Remember, our programs are supposed to be adaptive and capable of
> general learning. If a fully automated AI program was truly learning
> from the IO data environment, it would tend to create numerous
> conjectures about it. Such a program would tend to create insignificant
> conjectures founded on a great deal of trivial evidence, which could
> then be used to 'confirm' the conjecture. Even worse, it might (and
> will) produce meaningless garbage based on methods that mush together
> rational and non-rational responses or make extensive use of
> over-generalization. On the other hand, a few coincidences or
> over-generalizations could turn out to be very meaningful. So my theory
> is this.
> If the program produced logical theories of relations between events
> that occurred in the IO data environment, then those theories that had
> only a few valid solutions might be instrumental. A complicated theory
> that has only a few valid cases would, under certain conditions, be
> easier to prove or disprove than a theory that can be 'verified' by tens
> of thousands of combinations of trivial coincidences. This is similar to
> Popper's falsifiability theory in that it supposes that some theories
> have to be strongly testable in order to advance science. I do not mean
> to suggest that falsifiability is absolute in an inductive system, just
> that some key theories that are narrowly testable may be very
> significant in the advancement of learning. The rational-based
> conjectures that have only a few solutions would therefore be better for
> critical testing (as long as the solutions involved some kind of
> observable event that was likely to occur under some conditions). And a
> better general SAT solver would represent a major advancement in
> discovering the conditions under which confirmatory or disconfirmatory
> evidence for those kinds of theories could be found.
>
> Perhaps people have objected to my messages about this because I
> mentioned God, or perhaps they have objected to my question because they
> believe a polynomial-time solution to the SAT problem is impossible. On
> the other hand, there may be another objection to the question simply
> because the answer is so blatantly obvious: of course a polynomial-time
> solution to SAT would be significant in the advancement of AI
> programming.
>
> Jim Bromer
>
> On Sun, Mar 30, 2008 at 11:47 AM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> >
> > The issue that I am still trying to develop is whether or not a general
> > SAT solver would be useful for AGI. I believe it would be. So I am going
> > to go on with my theory about bounded logical networks.
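A back-of-the-envelope calculation shows why theories with only a few valid solutions are hopeless for trial-and-error search, and why a polynomial solver would change that. The figures below (100 variables, 4 solutions, a million guesses) are illustrative assumptions of mine, not numbers from the thread.

```python
# A theory (formula) over n variables with k satisfying assignments is hit
# by one uniformly random guess with probability k / 2**n.
n_vars, n_solutions = 100, 4
p_hit = n_solutions / 2 ** n_vars

# Even a million independent random guesses almost surely all miss.
p_all_miss = (1 - p_hit) ** 1_000_000

# By contrast, a hypothetical n^4 solver would take about 100**4 steps,
# which is trivially feasible on ordinary hardware.
n4_steps = n_vars ** 4

print(f"per-guess hit probability: {p_hit:.3e}")
print(f"P(10^6 guesses all miss): {p_all_miss}")
print(f"n^4 step count: {n4_steps:,}")
```

The point is the gap: 2^100 possible assignments versus roughly 10^8 solver steps.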
> >
> > A bounded logical network is a network where simple logical theories,
> > that is, logical speculations about the input-output environment and
> > about its own 'thinking', could be constructed with hundreds or
> > thousands of variants and interconnections to other bounded logical
> > theories. These theories would not be fully integrated by strong
> > taxonomic logic, so they could be used with inductive learning. Such a
> > system would produce some inconsistencies, but any inductive system can
> > produce inconsistencies. I believe that interconnected logically bounded
> > theories could show the intuition of network theories, the subtleties
> > and nuances of complex integrated theories, and the strong
> > logical-analytical potential of logical-rational programs. People should
> > also realize that the bounded interconnected logical model could be used
> > with a variety of rational reasoning methods, not just ideative logic.
> > But my personal theories do center around rational ideative reasoning
> > that would be capable (I believe) of using and learning general
> > reasoning.
> >
> > Now there is one criticism of my opinion about the usefulness of a
> > general SAT solver in this model. Since an interconnected bounded
> > logical network model is not a pure, fully integrated logical model,
> > approximations could be used effectively in the model. For instance, if
> > the system was capable of creating bounded logical theories with
> > thousands of interconnections, even if a solution wasn't known, the
> > program could try millions of guesses about the logical relations to see
> > if any worked. These guesses would be simplifications, which would tend
> > toward over-generalization, but what's the problem? The system I have in
> > mind is not purely logical; it is a bounded logical system which could
> > and would contain many inconsistencies anyway. My contention here is
> > that this is just the problem that we are faced with today in
> > rational-based AGI.
> > They can get so far, but only so far.
> >
> > A theory, with thousands of subtle variations and connections with
> > other theories, that had only one or a few correct solutions would be
> > useful in critical reasoning because these special theories would be
> > critically significant. They would exhibit strong correlations with
> > simple or constrained relations, more like experiments that isolate
> > significant factors that can be tested. And these related theories could
> > be examined more effectively using abstraction as well. (There could
> > still be problems with the critical theory since it could contain
> > inconsistencies, but you are going to have that problem with any
> > inductive system.) If you are going to be using a rational-based AGI
> > method, then you are going to want some theories that exhibit critical
> > reasoning. These kinds of theories might turn out to be the keystone in
> > developing more sophisticated models about the world and reevaluating
> > less sophisticated models.
> >
> > Jim Bromer
>
> -------------------------------------------
> agi
> Archives: http://www.listbox.com/member/archive/303/=now
> RSS Feed: http://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription: http://www.listbox.com/member/?&
> Powered by Listbox: http://www.listbox.com
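The "millions of guesses" strategy described in the quoted message amounts to stochastic local search. Here is a minimal WalkSAT-style sketch, purely illustrative and my own construction, not the solver discussed in the thread; note it carries no polynomial-time guarantee, which is exactly why the few-solution formulas remain hard for it.

```python
import random

def walksat(clauses, n_vars, max_flips=100_000, noise=0.5, seed=0):
    """Stochastic local search for a satisfying assignment.
    Clauses are sets of signed ints: 3 means x3, -3 means NOT x3.
    Returns an assignment dict, or None if max_flips is exhausted."""
    rng = random.Random(seed)
    assign = {v: rng.choice([False, True]) for v in range(1, n_vars + 1)}

    def satisfied(clause):
        return any(assign[abs(lit)] == (lit > 0) for lit in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign
        clause = rng.choice(unsat)
        if rng.random() < noise:
            # Random-walk step: flip a random variable of a broken clause.
            var = abs(rng.choice(sorted(clause)))
        else:
            # Greedy step: flip the variable leaving fewest clauses unsatisfied.
            def unsat_after_flip(v):
                assign[v] = not assign[v]
                count = sum(not satisfied(c) for c in clauses)
                assign[v] = not assign[v]
                return count
            var = min({abs(lit) for lit in clause}, key=unsat_after_flip)
        assign[var] = not assign[var]
    return None

print(walksat([{1, -2}, {2, 3}, {-1, -3}], 3))
```

On loosely constrained formulas this kind of guessing works well; on formulas with one or a few solutions it can stall indefinitely, which is the gap a genuine polynomial-time solver would close.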
