Since combinatorial search problems are so common in artificial
intelligence, such an algorithm would have obvious applications. If it
can be made, it seems like it could be used *everywhere* inside an AGI:
deduction (solving for cases consistent with constraints), induction
(searching for the best model), planning... particularly if there is a
generalization to soft constraint problems.
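For concreteness, the "solve for cases consistent with constraints" idea
can be sketched as a plain backtracking search over hard constraints.
This is only an illustrative toy, not anything proposed in this thread;
the map-coloring example and all the names in it are mine:

```python
# Toy backtracking search over hard constraints.  Constraints are
# (variables, predicate) pairs; a constraint is checked as soon as all
# of its variables have been assigned.

def consistent(assignment, constraints):
    """True if no fully-assigned constraint is violated."""
    for vars_, pred in constraints:
        if all(v in assignment for v in vars_):
            if not pred(*(assignment[v] for v in vars_)):
                return False
    return True

def backtrack(assignment, variables, domains, constraints):
    """Extend a partial assignment to a complete consistent one,
    or return None if none exists."""
    if len(assignment) == len(variables):
        return dict(assignment)
    var = variables[len(assignment)]          # next unassigned variable
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment, constraints):
            result = backtrack(assignment, variables, domains, constraints)
            if result is not None:
                return result
        del assignment[var]                   # undo and try the next value
    return None

# Example: 2-color a 3-node path graph A-B-C (adjacent nodes must differ).
constraints = [(("A", "B"), lambda a, b: a != b),
               (("B", "C"), lambda b, c: b != c)]
solution = backtrack({}, ["A", "B", "C"],
                     {v: ["red", "blue"] for v in "ABC"}, constraints)
```

The soft-constraint generalization would replace the boolean `consistent`
check with a cost to be minimized, turning the search into branch-and-bound.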

On 6/22/08, Jim Bromer <[EMAIL PROTECTED]> wrote:
> Abram,
> I did not group you with the "probability buffs".  One of the errors I feel
> writers make when their field is controversial is that they begin
> representing their own opinions from the vantage of countering critics.
> Unfortunately, I am one of those writers (or perhaps I am just projecting).
>  But my comment about the probability buffs wasn't directed toward you; I
> was just using it as an exemplar (of something or other).
>
> Your comments seem to make sense to me although I don't know where you are
> heading.  You said:
> "what should be hoped for is convergence to (nearly) correct models of
> (small parts of) the universe. So I suppose that rather than asking for
> "meaning" in a fuzzy logic, I should be asking for clear accounts of
> convergence properties..."
>
> When you have to find a way to tie components of knowledge together,
> you typically have to achieve another kind of convergence.  Even if these
> 'components' of knowledge are reliable, they usually cannot be made to
> converge easily, due to the complexity that their interrelations with other
> kinds of knowledge (other 'components' of knowledge) will cause.
>
> To follow up on what I previously said, if my logic program works it will
> mean that I can combine and test logical formulas of up to a few hundred
> distinct variables and find satisfiable values for these combinations in a
> relatively short period of time.  I think this will be an important method
> to test whether AI can be advanced by advancements in handling complexity
> even though some people do not feel that logical methods are appropriate to
> use on multiple-source complexity.  As you seem to appreciate, logic can
> still be brought to the field even though it is not a purely logical game
> that is to be played.
>
> When I begin to develop some simple theories about a subject matter, I will
> typically create hundreds of minor variations concerning those theories over
> a period of time.  I cannot hold all those variations of the conjecture in
> consciousness at any one moment, but I do feel that they can come to mind in
> response to the set of conditions for which that particular set of
> variations was created.  So while a simple logical theory (about some
> subject) may
> be expressible with only a few terms, when you examine all of the possible
> variations that can be brought into conscious consideration in response to a
> particular set of stimuli, I think you may find that the theories could be
> more accurately expressed using hundreds of distinct logical values.
>
> If this conjecture of mine turns out to be true, and if I can actually get
> my new logical methods to work, then I believe that this new range of
> logical methods may show whether advancements in complexity can make a
> difference to AI even if its application does not immediately result in
> human-level intelligence.
>
> Jim Bromer
>
>
> ----- Original Message ----
> From: Abram Demski <[EMAIL PROTECTED]>
> To: [email protected]
> Sent: Sunday, June 22, 2008 4:38:02 PM
> Subject: Re: [agi] Approximations of Knowledge
>
> Well, since you found my blog, you probably are grouping me somewhat
> with the "probability buffs". I have stated that I will not be
> interested in any other fuzzy logic unless it is accompanied by a
> careful account of the meaning of the numbers.
>
> You have stated that it is unrealistic to expect a logical model to
> reflect the world perfectly. The intuition behind this seems clear.
> Instead, what should be hoped for is convergence to (nearly) correct
> models of (small parts of) the universe. So I suppose that rather than
> asking for "meaning" in a fuzzy logic, I should be asking for clear
> accounts of convergence properties... but my intuition says that from
> clear meaning, everything else follows.
>
>
> -------------------------------------------
> agi
> Archives: http://www.listbox.com/member/archive/303/=now
> RSS Feed: http://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> http://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>
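To make the satisfiability talk above concrete: the standard way to "find
satisfiable values" for a propositional formula is a DPLL-style search.
The sketch below is just the textbook algorithm, not Jim's (undisclosed)
method, and all the names in it are my own:

```python
# Minimal DPLL satisfiability search.  A CNF formula is a list of
# clauses; each clause is a list of nonzero ints, where a positive int
# is a variable and a negative int is its negation.

def dpll(clauses, assignment=None):
    """Return a satisfying assignment {var: bool}, or None if UNSAT."""
    if assignment is None:
        assignment = {}
    # Simplify every clause under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                      # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None                   # clause falsified
        simplified.append(rest)
    if not simplified:
        return dict(assignment)           # every clause satisfied
    # Unit propagation: a one-literal clause forces its variable.
    for clause in simplified:
        if len(clause) == 1:
            lit = clause[0]
            assignment[abs(lit)] = lit > 0
            result = dpll(simplified, assignment)
            if result is None:
                del assignment[abs(lit)]  # clean up before failing
            return result
    # Branch on the first unassigned variable.
    var = abs(simplified[0][0])
    for value in (True, False):
        assignment[var] = value
        result = dpll(simplified, assignment)
        if result is not None:
            return result
        del assignment[var]               # undo and try the other value
    return None
```

A real solver for formulas with hundreds of distinct variables would add
clause learning and good branching heuristics on top of this skeleton,
but the basic shape is the same.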

