Josh,

On 4/12/08, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
>
> On Friday 11 April 2008 03:17:21 pm, Steve Richfield wrote:
> > > Steve: If you're saying that your system builds a model of its world of
> > > discourse as a set of non-linear ODEs (which is what Systems Dynamics is
> > > about) then I (and presumably Richard) are much more likely to be
> > > interested...
> >
> > No it doesn't. Instead, my program is designed to work on systems that are
> > not nearly enough known to model. THAT is the state of the interesting (at
> > least to me) part of the real world.
>
> If the programmer builds the model of the world beforehand, and the system
> uses it, it's just standard narrow AI. If the system builds the model itself
> from unstructured inputs, it's AGI.


... and if the computer can work with a very incomplete portion of a model,
then it is USEFUL AGI.

> In some sense, we know how to do that: it's called the scientific method.
> However, as normally explained, it leaves a lot to intuition. "Form a theory"
> isn't too far from "and then a miracle occurs." In other words, we need to
> be a little more explicit in how our system will form a theory.
>
> Perhaps a good way to characterize any given AGI is to specify:
> (a) what form are its hypotheses in
> (b) how are they generated
> (c) how are they tested
> (d) how are they revised
>
> Would it be fair to say that Dr. Eliza tries to form a causal net /
> influence diagram type structure?


Sort of. What it DOES do is identify isolated cause-and-effect chain links
and assign probabilities to their existence in the present problem, WITHOUT
presently attempting to thread them all together. In actual operation, most
probabilities are either >90% or <10% after the first few questions. Yes,
there would be some benefit to such threading, and that is planned for the
future, but it is common to be able to solve problems with only a tiny number
of the links identified. Two must be identified (one near the root cause and
one in the self-sustaining loop in the cause-and-effect chain) to effect a
cure, and given that the chains leading to self-sustaining loops are
typically ~twice as long as the loops themselves, it typically takes ~4
identified links to effect a permanent cure, though this number can be as
low as 2, with no upper limit.
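The mechanism just described can be sketched in a few lines. This is only an illustrative reconstruction, not Dr. Eliza's actual code: the Link fields, the 90% threshold, and the example medical links are all assumptions made for the sketch.

```python
# Hypothetical sketch of the link-scoring scheme described above.
# All names, thresholds, and example data are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Link:
    """One isolated cause-and-effect chain link."""
    cause: str
    effect: str
    probability: float   # estimated probability the link is present in this problem
    in_loop: bool        # part of the self-sustaining loop?
    near_root: bool      # near the root cause of the chain?

def confirmed(links, threshold=0.9):
    """Links whose presence is considered established (>90% in the text above)."""
    return [l for l in links if l.probability > threshold]

def cure_possible(links):
    """A cure needs at least one confirmed link near the root cause
    and one confirmed link inside the self-sustaining loop."""
    found = confirmed(links)
    return any(l.near_root for l in found) and any(l.in_loop for l in found)

# Hypothetical problem: a stress/sleep feedback loop.
links = [
    Link("stress", "cortisol high", 0.95, in_loop=False, near_root=True),
    Link("cortisol high", "poor sleep", 0.97, in_loop=True, near_root=False),
    Link("poor sleep", "cortisol high", 0.92, in_loop=True, near_root=False),
    Link("diet", "poor sleep", 0.05, in_loop=False, near_root=False),
]
print(cure_possible(links))  # True: a root-cause link and loop links are confirmed
```

Note that, as in the text, only a handful of the links need to clear the threshold; the rest of the causal structure can remain unknown.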

In short, I proclaim your definition of AGI nearly useless, because it
requires FAR more information than is necessary to operate. Of course, if you
happen to have that much information, it would sure be nice to be able to
fully utilize it - something the Dr. Eliza approach was never intended to do.

Steve Richfield

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/