All this talk about the Lord and SAT solvers has me thinking up variations
to the Janis Joplin song

http://www.azlyrics.com/lyrics/janisjoplin/mercedesbenz.html

"
Oh Lord, won't you buy me
a polynomial-time SAT solution
I'm counting on you Lord
Don't leave me in destitution

Prove that you love me
And create a scientific revolution
Oh Lord, won't you buy me
a polynomial-time SAT solution
"

... oh, whatever ...  ;-O

More serious comments:

1)
Worst-case complexity doesn't matter much for AI.
What matters is the average case across the class of
relevant problem instances.

2)
O(n^4) doesn't matter much if the hidden constant is huge -- say, if
the actual cost is 9999999999999999999999 * n^4 steps.

3)
Agree with Stephen Reed that SAT solvers have little to do
with human cognition ... but that doesn't mean they can't
be extremely useful within AGI architectures.  And SMT
solvers, I would suspect, even more so.

4)
I don't presently see any way to place a SAT solver at
the heart of an AGI architecture.  I can see how to place
them in valuable subsidiary roles.

Put crudely, SAT solvers can solve constraint satisfaction
problems really fast.  This may be useful.  But the crux of
an AGI system that uses SAT will still arguably lie in the
module that **formulates** the constraint satisfaction problems
in a contextually appropriate way.
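To make that division of labor concrete, here is a toy sketch (plain Python, my own illustration -- not code from any system discussed here): the solving step is purely mechanical, while producing the clause list in the first place is exactly the formulation work the solver cannot do for you.

```python
# Toy DPLL-style SAT solver.  Clauses are lists of nonzero ints,
# DIMACS-style: positive int = variable, negative int = negated variable.
def solve(clauses, assignment=None):
    assignment = dict(assignment or {})
    # Simplify the clause set under the current partial assignment.
    simplified = []
    for clause in clauses:
        lits = []
        satisfied = False
        for lit in clause:
            var, want = abs(lit), lit > 0
            if var in assignment:
                if assignment[var] == want:
                    satisfied = True
                    break
            else:
                lits.append(lit)
        if satisfied:
            continue
        if not lits:
            return None          # empty clause under assignment: conflict
        simplified.append(lits)
    if not simplified:
        return assignment        # every clause satisfied
    # Branch on the first unassigned variable.
    var = abs(simplified[0][0])
    for value in (True, False):
        assignment[var] = value
        result = solve(simplified, assignment)
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = solve([[1, 2], [-1, 3], [-2, -3]])
# model is a dict mapping variable -> bool, or None if unsatisfiable
```

The clause list is handed to the solver fully formed; nothing in the code decides *which* constraints are worth posing, which is the point of (4).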

5)
If you can formulate the problem of **formulating a contextually
appropriate, computationally tractable constraint satisfaction
problem whose solution will allow a system to achieve its
general high-level goals in a particular context** as a computationally
tractable constraint satisfaction problem ... THEN, and only then, will
you have convinced me that a great SAT solver can
serve as the core of an AGI architecture.

-- Ben G



On Mon, Mar 31, 2008 at 9:12 AM, Stephen Reed <[EMAIL PROTECTED]> wrote:
>
> Hi Jim,
> According to the Wikipedia article on SAT Solvers, there are extensions for
> quantified formulas, and first order logic.  Otherwise SAT solvers operate
> principally on sets of symbolic propositions.  Agreed?
>
> I believe that SAT solvers are not cognitively plausible.  More precisely, I
> believe that humans do not perform constraint satisfaction problem solving
> in a similar manner.  One might argue, however, that SAT solvers are already
> super-human with respect to their performance on certain problems (e.g.,
> digital circuit design verification).
>
> Where do you stand in the symbolic AI vs. connectionist AI dimension of our
> audience?  On the symbolic side?
> -Steve
>  Stephen L. Reed
>
> Artificial Intelligence Researcher
> http://texai.org/blog
> http://texai.org
> 3008 Oak Crest Ave.
> Austin, Texas, USA 78704
> 512.791.7860
>
>
>
> ----- Original Message ----
> From: Jim Bromer <[EMAIL PROTECTED]>
> To: [email protected]
>
> Sent: Monday, March 31, 2008 7:46:30 AM
> Subject: Re: [agi] Logical Satisfiability...Get used to it.
>
>  I am going to try to summarize what I have said.
>
> With God's help, I may have discovered a path toward a method to
> achieve a polynomial time solution to Logical Satisfiability, and so
> from this vantage point I have started to ask the question of whether
> or not a feasible SAT solver would actually be useful in advancing
> general AI.
>
> I think that most knowledgeable people would assume that it would be.
> However, there has been some doubt about this so I came up with a
> logical model that might show how such a situation could make a
> critical difference to general AI programming.  Or at least AI
> programming that emphasizes the use of rational methods.
>
> My feeling right now is that if my solver actually works, it would
> take at least n^3 or n^4 steps.  This means, for all practical
> purposes, that it would stretch the range of general solvers from
> logical formulas of 20 or so distinct variables to formulas hundreds
> or even thousands of characters long.  I believe that this
> would be a major advancement, both in general computing and in AI
> programming.
>
> But would it make any difference in general AI programming, what this
> group calls AGI?
>
> Imagining a system that used logical or rational methods that might
> initially be expressed in fairly simple logical terms, but which could
> have hundreds of variants and hundreds of interconnections with the
> other logical formulas of the system, I have come up with a case where
> n^4 SAT might be critical.  The formulas of the system that I have in
> mind would be speculative and derived from an inductive logical system
> that was designed to be capable of learning.  I then pointed out that
> some of the formulas produced by an automated system might have only a
> few valid cases, meaning that a trial and error method of searching
> for logical satisfiability would be very unlikely to work for those
> formulas.  In this case the n^4 SAT solver would be very useful.  But
> why would this be useful to AGI?
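[As a side note on the "few valid cases" point: a quick brute-force model count (plain Python; the tiny formulas below are my own illustrative examples, not from Jim's system) shows why random trial and error fares so differently on tight versus loose formulas.]

```python
from itertools import product

# Count satisfying assignments of a small CNF formula by exhaustive
# enumeration.  Clauses use the DIMACS convention: positive int =
# variable, negative int = negated variable.
def count_models(clauses, n_vars):
    count = 0
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses):
            count += 1
    return count

# A formula that pins every variable: x1 and x2 and x3 -- one model in 8.
tight = [[1], [2], [3]]
# A loose formula: x1 or x2 or x3 -- seven models in 8.
loose = [[1, 2, 3]]
print(count_models(tight, 3), count_models(loose, 3))  # prints: 1 7
```

A uniformly random guess hits the tight formula with probability 1/8 but the loose one with probability 7/8, and the gap widens exponentially with the number of variables -- which is why a complete solver matters precisely for the formulas with few valid cases.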
>
> Remember, our programs are supposed to be adaptive and capable of
> general learning.  If a fully automated AI program was truly learning
> from the IO data environment, it would tend to create numerous
> conjectures about it. Such a program would tend to create
> insignificant conjectures that were founded on a great deal of trivial
> evidence, which could then be used to 'confirm' the conjecture.  Even
> worse, it might (and will) produce meaningless garbage that was based
> on methods like those that mush rational and non-rational responses or
> made extensive use of over-generalization.  On the other hand a few
> coincidences or over-generalizations could turn out to be very
> meaningful.  So my theory is this.  If the program produced logical
> theories of relations between events that occurred in the IO data
> environment, then those theories that had only a few valid solutions
> might be instrumental.  A complicated theory that only has a few valid
> cases would, under certain conditions, be easier to prove or disprove
> than a theory that can be 'verified' by tens of thousands of
> combinations of trivial coincidences.  This is similar to Popper's
> falsifiability theory in that it supposes that some theories have to
> be strongly testable in order to advance science.  I do not mean to
> suggest that falsifiability is absolute in an inductive system, just
> that some key theories that are narrowly testable may be very
> significant in the advancement of learning.  The rational-based
> conjectures that only have a few solutions would therefore be better
> for critical testing (as long as the solutions involved some kind of
> observable event that was likely to occur under some conditions.)  And
> a better general SAT solver would represent a major advancement in
> discovering the conditions under which confirmatory or disconfirmatory
> evidence for those kinds of theories could be found.
>
> Perhaps people have objected to my messages about this because I
> mentioned God, or perhaps they have objected to my question because
> they believe a polynomial time solution to the SAT problem is
> impossible.  On the other hand, there may be another objection to the
> question simply because the answer is so blatantly obvious.  Of
> course a polynomial time solution to SAT would be significant in the
> advancement of AI programming.
>
> Jim Bromer
>
> On Sun, Mar 30, 2008 at 11:47 AM, Jim Bromer <[EMAIL PROTECTED]> wrote:
>
> >
> > The issue that I am still trying to develop is whether or not a general
> SAT solver would be useful for AGI. I believe it would be. So I am going to
> go on with my theory about bounded logical networks.
> >
> > A bounded logical network is a network where simple logical theories, that
> is logical speculations about the input output environment and about its own
> 'thinking', could be constructed with hundreds or thousands of variants and
> interconnections to other bounded logical theories.  These theories would
> not be fully integrated by strong taxonomic logic, so they could be used
> with inductive learning.  Such a system would produce some inconsistencies,
> but any inductive system can produce inconsistencies.  I believe that
> interconnected logically bounded theories could show the intuition of
> network theories, the subtleties and nuances of complex integrated theories,
> and the strong logical-analytical potential of logical-rational programs.
> People should also realize that the bounded interconnected logical model
> could be used with a variety of rational reasoning methods, not just
> ideative logic.  But my personal theories do center around rational ideative
> reasoning that would be capable (I believe) of using and learning general
> reasoning.
> >
> Now there is one criticism of my opinion about the usefulness of a general
> SAT solver in this model.  That is, since an interconnected bounded logical
> network model is not a pure fully integrated logical model, then
> approximations could be used effectively in the model.  For instance, if the
> system was capable of creating bounded logical theories with thousands of
> interconnections, even if a solution wasn't known, the program could try
> millions of guesses about the logical relations to see if any worked.  These
> guesses would be simplifications, which would tend toward
> over-generalization, but what's the problem?  The system I have in mind is
> not purely logical; it is a bounded logical system which could and would
> contain many inconsistencies anyway.  My contention here is that this is just
> the problem that we are faced with today in rational-based AGI.  They can get
> so far, but only so far.
> >
> > A theory, with thousands of subtle variations and connections with other
> theories, that only had one or a few correct solutions would be useful in
> critical reasoning because these special theories would be critically
> significant.  They would exhibit strong correlations with simple or
> constrained relations that would be more like experiments that isolated
> significant factors that can be tested.  And these related theories could be
> examined more effectively using abstraction as well.  (There could still be
> problems with the critical theory since it could contain inconsistencies,
> but you are going to have that problem with any inductive system.)  If you
> are going to be using a rational-based AGI method, then you are going to
> want some theories that exhibit critical reasoning.  These kinds of theories
> might turn out to be the keystone in developing more sophisticated models
> about the world and reevaluating less sophisticated models.
> >
> > Jim Bromer
>
> -------------------------------------------
> agi
> Archives: http://www.listbox.com/member/archive/303/=now
> RSS Feed: http://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription: http://www.listbox.com/member/?&;
>
> Powered by Listbox: http://www.listbox.com
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller
