>  It could be done with a simple chain of word associations mined from a text
>  corpus: alert -> coffee -> caffeine -> theobromine -> chocolate.

That approach yields way, way, way too much noise.  Try it.
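The noise is easy to demonstrate. Here is a minimal sketch (toy corpus, hypothetical names, pure co-occurrence with no predicates) of the kind of association-chain mining described above:

```python
from collections import defaultdict
from itertools import combinations

# Toy corpus; a real text-mining run would use millions of sentences.
corpus = [
    "coffee makes you alert",
    "coffee contains caffeine",
    "caffeine resembles theobromine",
    "chocolate contains theobromine",
    "coffee contains water",
    "water is related to hydrogen sulfide",
    "rotten eggs produce hydrogen sulfide",
]

# Link any two words that co-occur in a sentence -- the "simple chain
# of word associations" under discussion.
graph = defaultdict(set)
for sentence in corpus:
    for a, b in combinations(sentence.split(), 2):
        graph[a].add(b)
        graph[b].add(a)

def chains(start, goal, max_len=5):
    """Enumerate association chains by depth-limited search."""
    stack = [[start]]
    while stack:
        path = stack.pop()
        if path[-1] == goal:
            yield path
            continue
        if len(path) == max_len:
            continue
        for nxt in graph[path[-1]]:
            if nxt not in path:
                stack.append(path + [nxt])

# 'alert' reaches 'chocolate' -- but it reaches 'eggs' just as easily,
# because bare co-occurrence ignores the predicates entirely.
print(any(p[-1] == "chocolate" for p in chains("alert", "chocolate")))
print(any(p[-1] == "eggs" for p in chains("alert", "eggs")))
```

Both prints come out True: the same mechanism that finds the alert/chocolate chain also "derives" the rotten-eggs chain, which is exactly the noise problem.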

>  But that is not the problem.  The problem is that the reasoning would be
>  faulty, even with a more sophisticated analysis.  By a similar analysis you
>  could reason:
>
>  - coffee makes you alert.
>  - coffee contains water.
>  - water (H2O) is related to hydrogen sulfide (H2S).
>  - rotten eggs produce hydrogen sulfide.
>  - therefore rotten eggs make you alert.

There is a "produce" predicate in this chain that throws the reasoning
off wildly.

And, nearly every food contains water, so applying Bayes' rule within
this inference chain of yours will yield a conclusion with essentially
zero confidence.  Since far fewer foods contain caffeine or
theobromine, the inference trail I suggested will not have this
problem.
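The dilution is just arithmetic. With made-up illustrative numbers (assumptions, not measurements), a single Bayes'-rule step P(A|X) = P(X|A)P(A)/P(X) shows why a near-universal feature like "contains water" carries almost no evidence while a rare one like "contains caffeine" does:

```python
# Prior: fraction of foods that promote alertness (assumed value).
p_alert = 0.05

# "contains water": true of nearly every food, so observing it moves
# the posterior barely above the prior.
p_water = 0.99             # P(contains water) over foods (assumed)
p_water_given_alert = 1.0  # alert-making foods all contain water
posterior_water = p_water_given_alert * p_alert / p_water

# "contains caffeine": rare overall, concentrated among alert-making
# foods, so observing it is strong evidence.
p_caffeine = 0.02              # assumed
p_caffeine_given_alert = 0.30  # assumed
posterior_caffeine = p_caffeine_given_alert * p_alert / p_caffeine

print(round(posterior_water, 3))     # 0.051 -- barely above the 0.05 prior
print(round(posterior_caffeine, 3))  # 0.75  -- a large update
```

The exact numbers are hypothetical; the point is structural: any chain link through a feature with P(X) near 1 contributes essentially no confidence, which is why the water/H2S chain collapses and the caffeine/theobromine chain does not.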

In short, I claim your "similar analysis" is only similar at a very
crude level of analysis, and is not similar when you look at the
actual probabilistic inference steps involved.

>  Long chains of logical reasoning are not very useful outside of mathematics.

But the inference chain I gave as an example is NOT very long. The
problem is actually that outside of math, chains of inference (long or
short) require contextualization...

>  > I think there would be a viable path to AGI via
>  >
>  > 1)
>  > Filling a KB up w/ commensense knowledge via text mining and simple
>  > inference,
>  > as I described above
>  >
>  > 2)
>  > Building an NL conversation system utilizing the KB created in 1
>  >
>  > 3)
>  > Teaching the AGI the "implicit knowledge" you suggest via conversing with it
>
>  I think adding common sense knowledge before language is the wrong approach.
>  It didn't work for Cyc.

I agree it's not the best approach.

I also think, though, that one unsuccessful attempt should not be taken to damn
the whole approach.

The failure of explicit knowledge encoding by humans does not
straightforwardly imply the failure of knowledge extraction via text
mining (as approaches to AGI).

>  Natural language evolves to the easiest form for humans to learn, because if a
>  language feature is hard to learn, people will stop using it because they
>  aren't understood.  We would be wise to study language learning in humans and
>  model the process.  The fact is that children learn language in spite of a
>  lack of common sense.

Actually, they seem to acquire language and common sense together.

Then again, "wild children" and apes learn common sense, yet never
learn language beyond the proto-language level.

But I agree, study of human dev psych is one thing that has inclined
me toward the embodied approach ...

yet I still feel you dismiss the text-mining approach too glibly...

-- Ben G

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/