Whether an AI needs to explicitly manipulate declarative statements is
a deep question ... it may be that other dynamics, ones implicitly
equivalent to this sort of manipulation in some contexts, will
suffice.

But anyway, there is no contradiction between manipulating explicit
declarative statements and using probability theory.

Some of my colleagues and I have spent a bunch of time over the last
few years figuring out nice ways to combine probability theory and
formal logic.  In fact, there are "Progic" workshops every year
exploring these sorts of themes.

So, while the mainstream of probability-focused AI theorists isn't
doing hard-core probabilistic logic, some researchers certainly are...

I've been displeased with the wimpiness of the progic subfield, and
with its lack of contribution to areas like inference with nested
quantifiers and intensional inference ... and I've tried to remedy
these shortcomings with PLN (Probabilistic Logic Networks) ...
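
To give the flavor: here is a rough Python sketch of the kind of
deduction rule PLN uses, simplified to single-number strengths under
independence assumptions ... real PLN truth values also carry
confidence, and the names here are illustrative rather than taken
from any actual PLN codebase:

    # PLN-style deduction, simplified: infer the strength of A -> C
    # from A -> B, B -> C, and the term probabilities of B and C.
    def deduce(s_ab, s_bc, s_b, s_c):
        # Independence-based deduction formula:
        #   s_AC = s_AB*s_BC + (1 - s_AB)*(s_C - s_B*s_BC)/(1 - s_B)
        if s_b >= 1.0:
            return s_c  # B is certain, so A -> C reduces to P(C)
        return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)

    # e.g. raven -> bird at 0.9, bird -> flies at 0.8,
    # with P(bird) = 0.1 and P(flies) = 0.2:
    print(deduce(0.9, 0.8, 0.1, 0.2))  # ~0.73

The point is simply that logical inference rules and probabilistic
truth-value formulas can be run through the same machinery.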

So, I think it's correct to criticize the mainstream of
probability-focused AI theorists for not doing AGI ;-) ... but I don't
think they're overlooking basic issues like overfitting and such ... I
think they're just focusing on relatively easy problems where (unlike
if you want to do explicitly probability-theory-based AGI) you don't
need to merge probability theory with complex logical constructs...
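
(And just to make the overfitting point concrete, here's a toy numpy
sketch ... purely illustrative, not drawn from anyone's actual work: a
degree-9 polynomial nails ten noisy samples of a linear function but
generalizes badly, while a straight line does fine.

    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0.0, 1.0, 10)
    y_train = 2.0 * x_train + rng.normal(0.0, 0.1, 10)  # linear + noise
    x_test = np.linspace(0.0, 1.0, 100)
    y_test = 2.0 * x_test                               # noise-free truth

    for degree in (1, 9):
        coeffs = np.polyfit(x_train, y_train, degree)
        mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print("degree", degree, "held-out MSE:", mse)

The line wins on held-out data even though the high-degree polynomial
fits the training points almost exactly ... which is precisely the
"patterns that work for more than just the data at hand" issue Abram
mentions below.)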

ben

On Sat, Nov 29, 2008 at 9:15 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> In response to my message, where I said,
> "What is wrong with the AI-probability group mind-set is that very few
> of its proponents ever consider the problem of statistical ambiguity
> and its obvious consequences."
> Abram noted,
> "The "AI-probability group" definitely considers such problems.
> There is a large body of literature on avoiding overfitting, ie,
> finding patterns that work for more then just the data at hand."
>
> Suppose I responded with a remark like,
> 6341/6344 wrong Abram...
>
> A remark like this would be absurd because it lacks reference,
> explanation, and validity, while also dressing up its inherent
> meaninglessness in comically false numerical precision.
>
> Where does the ratio 6341/6344 come from?  I did a ListBox search of
> all references to the word "overfitting" made in 2008 and found that,
> out of 6344 messages, only 3 actually involved discussion of the word
> before Abram mentioned it today, leaving 6344 - 3 = 6341 messages that
> did not.  (I don't know how good ListBox is for this sort of thing.)
>
> So what is wrong with my conclusion that Abram was 6341/6344 wrong?
> Lots of things, and they can all be described using declarative
> statements.
>
> First of all, the idea that the conversations in this newsgroup
> represent an adequate sampling of all AI-probability enthusiasts is
> totally ridiculous.  Secondly, Abram's mention of overfitting was just
> one example of how the general AI-probability community is aware of
> the problem that I mentioned.  So while my statistical finding may be
> tangentially relevant to the discussion, the presumption that it can
> serve as a numerical evaluation of Abram's 'wrongness' in his response
> is so absurd that it does not merit serious consideration.  My
> skepticism, then, concerns this question: how would a fully automated
> AGI program that relied solely on probability methods avoid getting
> sucked into the vortex of such absurdly mushy reasoning if it were not
> also able to analyze the declarative inferences underlying its
> application of statistical methods?
>
> I believe that an AI program that is to be capable of advanced AGI has
> to be capable of declarative assessment that works alongside whatever
> other mathematical methods of reasoning it is programmed with.
>
> Reasoning about declarative knowledge does not necessarily have to be
> done in text or anything like that; that is not what I mean.  What I
> really mean is that an effective AI program is going to have to be
> capable of some kind of referential analysis of events in the IO data
> environment using methods other than probability.  And if it is to
> attain higher intellectual functions, that analysis has to be done in
> a creative and imaginative way.
>
> Just as human statisticians have to be able to express and analyze the
> application of their statistical methods using declarative statements
> that refer to the data subject fields and the methods used, an AI
> program designed to use automated probability reasoning to attain
> greater general success is going to have to be able to express and
> analyze its statistical assessments in terms of some kind of
> declarative methods as well.
>
> Jim Bromer



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"I intend to live forever, or die trying."
-- Groucho Marx

