For example, would CYC today at least better answer Vaughan Pratt's test questions from http://boole.stanford.edu/cyc.html? Has there been more progress toward developing a neutral source of questions to use to evaluate how performance improves with time and with implementation variations?
At 01:57 AM 11/30/2008, Stephen Reed wrote:
Hi Robin,
There are no Cyc critiques that I know of from the last few years. I was employed at Cycorp for seven years, until August 2006, and my non-compete agreement expired a year later.
An interesting competition was held by Project Halo, in which Cycorp participated along with two other research groups to demonstrate human-level competency at answering chemistry questions. Results are here. Although Cycorp performed principled deductive inference with detailed justifications, it was judged to have performed worse than the others because of the complexity of its justifications and its long running times. The other competitors used special-purpose problem-solving modules, whereas Cycorp used its general-purpose inference engine, extended for chemistry equations as needed.
My own interest is in natural language dialog systems for rapid knowledge formation. I was Cycorp's first project manager for its participation in the DARPA Rapid Knowledge Formation project, where it performed to DARPA's satisfaction. Subsequently, however, its RKF tools never lived up to Cycorp's expectation that subject matter experts could rapidly extend the Cyc KB without Cycorp ontological engineers having to intervene. A Cycorp paper describing its KRAKEN system is here.
I would be glad to answer questions about Cycorp and Cyc technology to the best of my knowledge, though that knowledge is growing somewhat stale at this point.
>What are the best available critiques of CYC as it exists now (vs. soon after project started)?
Research Associate, Future of Humanity Institute at Oxford University
Associate Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326 FAX: 703-993-2323
