You know these guys. How would YOU play this hand?
Any thoughts?
Steve Richfield
On 12/2/08, Stephen Reed [EMAIL PROTECTED] wrote:
Steve Richfield said:
If I understand you correctly, Cycorp's code should be public domain, and as
such, I should be able to simply mine
for.
It sounds like Cycorp doesn't have a useful product (yet) whereas it looks like
I do, so it is probably I who should be doing this, not Cycorp.
Any thoughts?
Who should I ask for code from?
Steve Richfield
==
On 12/1/08, Stephen Reed [EMAIL PROTECTED] wrote:
Steve Richfield said:
AM 11/30/2008, Stephen Reed wrote:
Hi Robin,
There are no Cyc critiques that I know of in the last few years. I was
employed seven years at Cycorp until August 2006 and my non-compete agreement
expired a year later.
An interesting competition was held by Project Halo in which Cycorp
participated along with two other
Ben,
Cycorp participated in the DARPA Transfer Learning project, as a subcontractor.
My project role was simply a team member and I did not attend any PI
meetings. But I did work on getting a Quake III Arena environment working at
Cycorp which was to be a transfer learning testbed. I also
Matt Taylor was also an intern at Cycorp, where he was on Cycorp's Transfer
Learning team with me.
-Steve
Stephen L. Reed
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860
on transfer learning is similar
to what Taylor presented to AGI-08?
Pei
Hi Russell,
Although I've already chosen an implementation language for my Texai project -
Java, I believe that my experience may interest you. As many here already
know, Cycorp's implementation language was a Lisp subset during the time I
worked there. At Cycorp, I explored creating an
- Original Message
From: Russell Wallace [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, October 24, 2008 10:28:39 AM
Subject: Re: [agi] On programming languages
On Fri, Oct 24, 2008 at 4:10 PM, Stephen Reed [EMAIL PROTECTED] wrote:
Hi
From: Russell Wallace [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, October 24, 2008 12:01:53 PM
Subject: Re: [agi] On programming languages
On Fri, Oct 24, 2008 at 5:55 PM, Stephen Reed [EMAIL PROTECTED] wrote:
Composed statements generate Java statements such as an assignment
statement
- Original Message -
From: StephenReed
To: agi@v2.listbox.com
Sent: Friday, October 24, 2008 12:55 PM
Subject: Re: [agi] On programming languages
Russell asked:
But if it can't read the syntax tree, how will it know what the main body
actually does?
My line
, Stephen Reed [EMAIL PROTECTED] wrote:
Not really. The distinguishing feature of a Lisp syntax tree is a nested
list, and the fact that my composition framework is also a tree does not
make that framework a Lisp-family language.
What do you see as the most important differences? I'll
I at least glance at all posts but prefer to read, write and otherwise
participate in those which:
discuss how to design or engineer AGI systems, using current
computers, according to designs that can feasibly be implemented by
moderately-sized groups of people
That's why I came over from
Hi YKY,
If your code will be open source lisp, then I have a few points learned from my
experience at Cycorp.
(1) Franz has a very good Common Lisp (Allegro) IDE for Windows and Linux, but
it is closed source.
(2) Steel Bank Common Lisp is open source, derived from CMU Common Lisp.
Recent SBCL
Ben gave the following examples that demonstrate the ambiguity of the
preposition "with":
People eat food with forks
People eat food with friend[s]
People eat food with ketchup
The Texai bootstrap English dialog system, whose grammar rule engine I'm
currently rewriting, uses elaboration and
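The three readings can be sketched as type-driven role assignment; a toy illustration with an invented lexicon and invented role names, not the Texai elaboration machinery:

```python
# Hypothetical sketch: resolve the ambiguous preposition "with" by the
# semantic type of its object.  The tiny lexicon and the role names are
# invented for illustration only.
TYPE_LEXICON = {
    "forks": "instrument",
    "friends": "agent",
    "ketchup": "foodstuff",
}

# Map the object's semantic type to the role the "with"-phrase plays.
ROLE_BY_TYPE = {
    "instrument": "instrument-of-eating",
    "agent": "co-participant",
    "foodstuff": "accompaniment",
}

def with_role(obj: str) -> str:
    """Return the thematic role of 'with <obj>' in 'People eat food with <obj>'."""
    semantic_type = TYPE_LEXICON.get(obj, "unknown")
    return ROLE_BY_TYPE.get(semantic_type, "unresolved")
```

An unknown object falls through to "unresolved", which is where clarification dialog would take over.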
Mike asked:
How does Stephen or YKY or anyone else propose to read between the lines?
And what are the basic world models, scripts, frames etc etc. that you
think sufficient to apply in understanding any set of texts, even a relatively
specialised set?
Interesting that this question
Mike said:
The way humans acquire language is precisely by starting not by reading
Wikipedia but by mastering fiction-like sentences with simple subjects and
simple actions and relationships - like John sit John eat Jack like Jill.
Me give Jill soap etc. -based primarily in the here and
Matt said:
The overview claims to be able to convert natural language sentences into Cycl
assertions, and to convert questions to Cycl queries. So I wonder why the
knowledge base is still not being built this way. And I wonder why there is no
public demo of the interface, and no papers giving
Hi Terren,
When I worked at Cycorp on the Cyc Knowledge Base, I think that they employed
over 20 PhD philosophers at the peak to edit Cyc concepts and relationships.
Every day I heard conversations such as "Would a geyser of Dr. Pepper still be
Dr. Pepper?" (or is it only a conveniently
Hi Richard,
Frankly, looking at recent posts, I think this list is already dead.
Dear Richard, be patient, or post more about your own results. I have, right
or wrong, somewhat modest expectations for the posts on this list (-aside from
my favorite authors :-) ). I, like perhaps some other
Hi Steve,
I'm thinking about the Texai bootstrap dialog system, and in particular about
adding grammar rules and vocabulary for the utterance Compile a class.
Cheers.
-Steve
Stephen L. Reed
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin,
similar to Cyc to me),
more so than to OpenCog Prime (my proposal for a Novamente-like system
built on OpenCog, not yet fully documented but I'm actively working on
the docs now).
I wonder why you don't join Stephen Reed on the texai project? Is it
because you don't like the open-source nature of his
To: agi@v2.listbox.com
Sent: Tuesday, June 3, 2008 12:20:19 PM
Subject: Re: [agi] OpenCog's logic compared to FOL?
On 6/3/08, Stephen Reed [EMAIL PROTECTED] wrote:
I believe that the crisp (i.e. certain or very near certain) KR for these
domains will facilitate the use of FOL inference (e.g
From: YKY (Yan King Yin) [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, June 3, 2008 5:29:07 PM
Subject: Re: [agi] OpenCog's logic compared to FOL?
On 6/4/08, Stephen Reed [EMAIL PROTECTED] wrote:
All of the work to date on program generation, macro processing, application
- Original Message
From: Steve Richfield [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, May 27, 2008 12:23:37 PM
Subject: Merging threads was Re: Code generation was Re: [agi] More Info Please
Steve,
On 5/26/08, Stephen Reed
Regarding the best language for AGI development, most here know that I'm using
Java in Texai. For skill acquisition, my strategy is to have Texai acquire a
skill by composing a Java program to perform the learned skill. I hope that
the algorithmic (e.g. Java statement operation) knowledge
Hi Lukasz,
Here is a typical Capability Description from my first set of bootstrap cases:
(capability
name: defineInstanceVariable
description: Defines an instance variable having the given name and object
type.
preconditions:
(rdf:type ?variable-name cyc:NonEmptyCharacterString)
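A minimal sketch of how such a precondition might be checked at runtime; the dict encoding and the fail-closed handling of unknown type constraints are my assumptions, not the actual Texai representation:

```python
# Sketch: a capability description, loosely following the example above,
# and a checker for its preconditions.  The encoding is invented.
capability = {
    "name": "defineInstanceVariable",
    "description": "Defines an instance variable having the given name and object type.",
    "preconditions": [
        ("rdf:type", "?variable-name", "cyc:NonEmptyCharacterString"),
    ],
}

def satisfies(bindings: dict, precondition: tuple) -> bool:
    """Check one (predicate, variable, type) precondition against bindings."""
    predicate, variable, required_type = precondition
    value = bindings.get(variable)
    if required_type == "cyc:NonEmptyCharacterString":
        return isinstance(value, str) and len(value) > 0
    return False  # unknown type constraints fail closed

def preconditions_met(bindings: dict) -> bool:
    """True when every precondition of the capability holds."""
    return all(satisfies(bindings, p) for p in capability["preconditions"])
```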
Hi Jey,
You said:
This list is being dominated by nonsense because the scientifically
grounded people on this list don't want to take the time to refute
every piece of fantastic drivel. I certainly don't blame them for
wanting to focus on their time on other more productive projects, but
it
/16/08, Stephen Reed [EMAIL PROTECTED] wrote:
I naively expect misunderstandings and ignorance on the part of the user to be
reconciled via clarification dialog.
What else is there besides misunderstandings and ignorance on the part of the
user? Is there something else for computers to address
Steve Richfield said:
Does anyone else here share my dream of a worldwide AI with all of the
knowledge of the human race to support it - built with EXISTING Wikipedia and
Dr. Eliza software and a little glue to hold it all together?
Hi Steve,
I share part of your dream, in that I am strongly
- Original Message
From: rooftop8000 [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, May 10, 2008 12:35:49 PM
Subject: Re: [agi] organising parallel processes, try2
Do you think a hierarchy structure could be too restrictive?
No, I have not yet found a use case that would
Hi Matt,
You asked:
What would be a good test for understanding an algorithm?
As I mentioned before, I want to create a system capable of being taught -
specifically capable of being taught skills. And I strongly share your
interest in answers to this question. A student should be able to
Hi,
The Texai system, as I envision its deployment, will have the following
characteristics:
* a lot of processes
* a lot of hosts
* message passing between processes, that are arranged in a
hierarchical control system
* higher level processes will be
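A minimal sketch of the message passing these bullets describe: each node receives a command and forwards it to its subordinates. The node names and the message shape are invented:

```python
# Toy hierarchical control system: commands flow down from a parent
# node to its children.  Names and messages are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    children: list = field(default_factory=list)
    received: list = field(default_factory=list)

    def send_down(self, message: str):
        """Deliver a command to this node, then propagate it to subordinates."""
        self.received.append(message)
        for child in self.children:
            child.send_down(message)

leaf_a = Node("skill-agent-a")
leaf_b = Node("skill-agent-b")
root = Node("mission-agency", children=[leaf_a, leaf_b])
root.send_down("acquire-grammar-rule")
```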
Hi Mike,
I've spent some time working with the CMU Sphinx automatic speech recognition
software, as well as the Festival text-to-speech software. From the Texai
SourceForge source code repository, anyone interested can inspect and download
an echo application that recognizes a spoken
YKY,
The Rus form is also a popular logical form; have you heard of it?
I think it is complete in the sense that all English (or NL) sentences
can be represented in it, but the drawback is that it's somewhat
indirect.
I have not heard about Rus form. Could you provide a link or reference?
Message
From: YKY (Yan King Yin) [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, May 6, 2008 10:36:16 AM
Subject: Re: [agi] organising parallel processes
On 5/4/08, Stephen Reed [EMAIL PROTECTED] wrote:
As perhaps you know, I want to organize Texai as a vast multitude of agents
Hi Lukasz ,
With regard to the Texai approach, I have subjected myself to these constraints:
* to author the bootstrap portion of the system by myself
* to write the least amount of code (e.g. not to write an ideal AI
language first)
* to reuse existing narrow AI
Subject: Re: [agi] organising parallel processes
Stephen Reed wrote:
At the time that the Texai bootstrap English dialog system is
available, I'll begin fleshing out the hundreds of agencies for
which I hope to recruit human mentors. Each agency I establish will
have paragraphs
- Original Message
From: Mike Dougherty [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, May 5, 2008 12:26:43 AM
Subject: Re: [agi] organising parallel processes
On Sun, May 4, 2008 at 11:28 PM, Stephen Reed [EMAIL PROTECTED] wrote:
be like Skype, the popular
Message
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, May 5, 2008 1:43:20 PM
Subject: Re: [agi] organising parallel processes
--- Stephen Reed [EMAIL PROTECTED] wrote:
Matt (or anyone else), have you gotten as far as thinking about NAT
hole punching or some other
Hi Bob,
You said:
The human mind actually doesn't scale all that well (just look at how
dysfunctional large corporations or government agencies can become)
I am relying on the contrary for ultimately deploying a vast multitude of
collaborating Texai instances. Like James Albus, I believe that
To: agi@v2.listbox.com
Sent: Sunday, May 4, 2008 12:32:03 PM
Subject: [agi] about Texai
@Stephen Reed and others:
I'm writing a prototype of my AGI in Lisp, with special emphasis on
the inductive learning algorithm. I'm looking for collaborators.
It seems that Texai is the closest to my AGI theory
Hi Matt,
As perhaps you know, I want to organize Texai as a vast multitude of agents
situated in a hierarchical control system, grouped as possibly redundant,
load-sharing, agents within an agency sharing a specific mission. I have given
some thought to the message content, and assuming that
, May 4, 2008 9:41:16 PM
Subject: Re: [agi] organising parallel processes
On Sun, May 4, 2008 at 10:00 PM, Stephen Reed [EMAIL PROTECTED] wrote:
Matt (or anyone else), have you gotten as far as thinking about NAT hole
punching or some other solution for peer-to-peer?
NAT hole punching has
Hi Josh,
I briefly looked at the ImageNet description at the Princeton WordNet site. It
does not reveal whether the images are open source to the extent that this new data
can be linked and distributed with WordNet, which has a very permissive
license.
-Steve
Hi all,
For those interested, I have posted on my blog here, the current Texai
bootstrap system status.
Note that the system now can only understand and generate a single utterance.
Progress towards a broader coverage of English awaits the completion of the
Grammar Acquisition Skill, and
Mike,
All you need to know about an apple is a set of letters A-P-P-L-E, and other
letters like F-R-U-I-T and R-E-D. (You also seem to be implying that
blind/deaf people get to know the world by language *without* senses!)
I concede that you are well informed about the learning
Hi Mike,
John Arne Riise stood doubled over in his tiny corner of football hell.
These sentences are great demonstrations of why I favor a construction grammar.
It's not necessary to process the imagery from first principles. These
sentences are full of idioms that can be simply treated as
Matt said:
General intelligences are going to have to compete with organizations of
specialized systems, each of which is optimized for a narrow task.
Interesting observation. I envision Texai as a multitude of specialized agents
arranged in hierarchical control system, and acting in
To: agi@v2.listbox.com
Sent: Monday, April 21, 2008 12:43:37 PM
Subject: RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent
input and responses
Stephen Reed writes:
Hey Texai, let's
Matt said:
People do not learn grammar by being given grammatical rules, because we still
don't know what they are. Grammar rules seem to have a Zipf distribution,
like vocabulary. About 200 words account for half of the tokens in text, and
then it gets complicated. Likewise, a small number of
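The quoted figure is easy to sanity-check with a back-of-the-envelope sketch under an idealized Zipf distribution; the 50,000-word vocabulary size is my assumption, not from the post:

```python
# Rough check: under a Zipf distribution (frequency proportional to
# 1/rank), what fraction of tokens do the top 200 word ranks cover?
# The 50,000-word vocabulary size is an invented assumption.
def zipf_coverage(top_k: int, vocab_size: int) -> float:
    """Fraction of tokens covered by the top_k most frequent words."""
    weights = [1.0 / rank for rank in range(1, vocab_size + 1)]
    return sum(weights[:top_k]) / sum(weights)

coverage = zipf_coverage(200, 50_000)
```

The result lands near one half, consistent with the claim about 200 words.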
Josh said:
[what's missing] In a single word: feedback.
At a very high level of abstraction, most of the AGI (and AI for that matter)
schemes I've seen can be caricatured as follows:
1. Receive data from sensors.
2. Interpret into higher-level concepts.
3. Then a miracle occurs.
4. Interpret
Bob,
I, perhaps naively, agree with your list of required resources - which I'm glad
to have.
But I believe that AGI will not be developed in isolation. It's not only that
AGI is a hard, unsolved problem; it's that, working alone in isolation, there is
such a great probability that the
Hi Mike,
The article is entirely available here. I believe that it is not appropriate
(i.e. illegal) to reproduce copyrighted material from web sites for which all
rights are reserved, without their permission. Fair use, as typically employed
on the web, would allow quotation of the first few
Zitgist has released UMBEL web services that provide a subject ontology, based
upon the OpenCyc ontology that is linked to other useful ontologies including
WordNet. A useful navigation page is here. This news is especially good for
Texai because I too have adopted the OpenCyc ontology as
Mike,
Thanks for the reference, which I will study further. As many know, the Texai
KB is currently crisp and symbolic, and will have to stay that way until after
the bootstrap English dialog system is developed. I want Texai to be
implemented in a cognitively plausible manner, and articles
Mike,
I've printed but not yet fully studied the Barsalou paper. But I am still very
comforted by your quotation from his work:
... he posits as primary something more like 2) an agent-dependent
instruction manual. According to this metaphor, knowledge of a category is not
a general
Hi Ed,
As most already know, the problem I am trying to solve involves knowledge and
skill acquisition to achieve AGI. The proposed solution is a bootstrap English
dialog system, backed by a knowledge base based upon OpenCyc and greatly
elaborated with lexical information from WordNet,
YKY
Here is what I learned from implementing the Texai knowledge base. It persists
symbolic statements about concepts.
I designed an SQL schema to persist OpenCyc in its full CycL form, in MySQL on
SuSE 64-bit Linux. My Java application driving MySQL dramatically slowed down
when the number
YKY said:
If the inference requires a rule outside the sub-KB, you'd have to do
a very expensive swap. I think this only works if you're sure the
entire inference is contained within a sub-KB.
Right. I envision Texai deployed as distributed agents operating within a
hierarchical control
YKY,
I agree with your side of the debate about whole KB not fitting into RAM. As a
solution, I propose to partition the whole KB into the tiniest possible cached
chunks, suitable for a single agent running on a host computer with RAM
resources of at least one GB. And I propose that AGI will
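The partitioning idea can be sketched as a bounded cache of KB chunks with least-recently-used eviction; the chunk names are invented, and a real store would load chunks from disk rather than fabricate them:

```python
# Sketch: keep only a bounded number of KB chunks in RAM, evicting the
# least recently used chunk when the cache is full.
from collections import OrderedDict

class ChunkCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.chunks = OrderedDict()

    def get(self, chunk_id: str) -> dict:
        """Return the chunk, loading (here: fabricating) it on a miss."""
        if chunk_id in self.chunks:
            self.chunks.move_to_end(chunk_id)    # mark as recently used
        else:
            if len(self.chunks) >= self.capacity:
                self.chunks.popitem(last=False)  # evict least recently used
            self.chunks[chunk_id] = {"id": chunk_id, "statements": []}
        return self.chunks[chunk_id]

cache = ChunkCache(capacity=2)
cache.get("geography")
cache.get("chemistry")
cache.get("geography")   # refresh geography
cache.get("grammar")     # evicts chemistry, the least recently used
```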
:03 AM, Stephen Reed [EMAIL PROTECTED] wrote:
I would be interested
in your comments on my adoption of Fluid Construction Grammar as a solution
to the NL to semantics mapping problem.
(1) Word Grammar (WG) is a construction-free version of your approach.
It is based solely on spreading
Publishing computer-generated books on demand, aggregating many small profits,
is an interesting illustration of The Long Tail.
Considering an AGI, I anticipate that knowledge and skill acquisition will be
facilitated by this principle. Obscure knowledge and skills can be acquired
from, and
To: agi@v2.listbox.com
Sent: Monday, April 14, 2008 2:11:31 PM
Subject: Re: [agi] Between logical semantics and linguistic semantics
On Mon, Apr 14, 2008 at 5:14 PM, Stephen Reed [EMAIL PROTECTED] wrote:
My first solution to this problem is to postpone it by employing a
controlled
Mark wrote:
I wonder if you could clarify why you insist upon GPL as opposed to a
Berkeley-type or Apache-type license? I believe very strongly that both
ends of the open source to commercial *spectrum* are entirely unreasonable
and that there is a reasonable middle ground where we
Hi Richard,
After reading your blog post I wonder if you think either that (1) a
hierarchical control system, such as proposed by James Albus and adopted by me
as the Texai cognitive architecture, is doomed to failure as an AGI due to
complexity, or whether that (2) a hierarchical control
- Original Message
From: Stephen Reed [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, April 11, 2008 2:43:26 PM
Subject: Re: [agi] Blog essay on the complex systems problem
Hi Richard,
After reading your blog post I wonder if you think either that (1) a
hierarchical
- Original Message
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, April 11, 2008 3:06:21 PM
Subject: Re: [agi] Blog essay on the complex systems problem
Richard Loosemore wrote:
[snip]
I would not say that your hierarchical control structure is doomed to
Richard Loosemore wrote:
[snip] but this is not an open source project at this stage.
Richard,
I am sad that your AGI project is not open source at this stage. Please
consider developing *something* open source that is related to your mission, at
least analogous to OpenCog -- Novamente.
MW/MT:... how do you test acquired knowledge?
I have given this problem some thought, regarding the testing of acquired
grammar facts, rules and skills. Here are some points, mostly from my
experience with Cyc.
Before the knowledge is acquired, the mentor (or ultimately the system
Everyone knows that perception is the result of a combination of pickup
(bottom-up processing) and expectation (top-down processing). There are many,
many ways to implement this idea.
Richard,
Thanks for describing perception in the same fashion that I believe is
explained by James Albus
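One of the "many ways" to implement the idea: a single Bayesian update in which the prior plays the role of top-down expectation and the likelihoods carry the bottom-up pickup. The numbers below are invented for illustration:

```python
# Combine top-down expectation (prior) with bottom-up evidence
# (likelihoods) via Bayes' rule.  All values are illustrative.
def posterior(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """P(hypothesis | evidence) from a prior and the two likelihoods."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / denominator

# Ambiguous pickup (near-equal likelihoods) leaves the expectation dominant;
# strong pickup overrides a weak expectation.
ambiguous = posterior(prior=0.9, likelihood_if_true=0.5, likelihood_if_false=0.4)
strong = posterior(prior=0.1, likelihood_if_true=0.9, likelihood_if_false=0.05)
```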
- Original Message
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, April 10, 2008 11:20:13 AM
Subject: Re: [agi] How Bodies of Knowledge Grow
... I agree that Albus is interesting. I am superficially familiar
with his approach.
From my point of view I
- Original Message
From: Steve Richfield [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, April 10, 2008 2:58:09 PM
Subject: [agi] Comments from a lurker...
[snip] BTW, the principles behind Dr. Eliza are rather unique. I'd be glad to
send some papers to anyone who is
Lukasz, I am very pleased with my implementation of the few Double R Grammar
rules required to incrementally parse "the book is on the table", which is an
example sentence from Jerry Ball's paper. Dr. Ball is a proponent of
cognitively plausible NLP architectures.
-Steve
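For readers unfamiliar with incremental parsing, here is a toy left-to-right sketch of that example sentence. It illustrates incrementality only (committing to a constituent as soon as it completes), and is not Double R Grammar; the lexicon and reduction rules are invented:

```python
# Toy incremental parser: consume words left to right, emitting each
# constituent as soon as it is complete.  Invented lexicon and rules.
LEXICON = {
    "the": "det",
    "book": "noun",
    "table": "noun",
    "is": "verb",
    "on": "prep",
}

def incremental_parse(sentence: str) -> list:
    """Return the constituents recognized left to right."""
    constituents, stack = [], []
    for word in sentence.split():
        stack.append((LEXICON[word], word))
        if stack[-1][0] == "noun" and len(stack) >= 2 and stack[-2][0] == "det":
            det, noun = stack.pop(-2), stack.pop()   # reduce det+noun to NP
            constituents.append(("NP", det[1], noun[1]))
        elif stack[-1][0] == "verb":
            constituents.append(("V", stack.pop()[1]))
        elif stack[-1][0] == "prep":
            constituents.append(("P", stack.pop()[1]))
    return constituents

parse = incremental_parse("the book is on the table")
```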
I've posted the status of the Texai bootstrap English dialog system on my blog.
Summary: parsing works for a single use case sentence, and I'm moving on to
generation.
I also created a page describing my approach to English utterance comprehension
here, that integrates the incremental version
of
natural language? (And NL-semantics' impact on logical semantics, as
opposed to letting the computer build the representation for itself,
out of some elementary thought mechanics.)
P.S. Thanks to Pei Wang for the interesting curriculum and to Stephen
Reed for the great work on Texai
Hi Jim,
According to the Wikipedia article on SAT solvers, there are extensions for
quantified formulas and first-order logic. Otherwise SAT solvers operate
principally on sets of symbolic propositions. Agreed?
I believe that SAT solvers are not cognitively plausible. More precisely, I
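To make "sets of symbolic propositions" concrete, here is a brute-force satisfiability check over a tiny invented CNF formula; real solvers (DPLL, CDCL) prune this search rather than enumerate it:

```python
# Brute-force SAT over an invented CNF formula:
# (a or b) and (not a or c) and (not b or not c)
from itertools import product

CLAUSES = [[("a", True), ("b", True)],
           [("a", False), ("c", True)],
           [("b", False), ("c", False)]]
VARIABLES = ["a", "b", "c"]

def satisfiable(clauses):
    """Try every assignment; return a satisfying one, or None."""
    for values in product([True, False], repeat=len(VARIABLES)):
        assignment = dict(zip(VARIABLES, values))
        if all(any(assignment[var] == wanted for var, wanted in clause)
               for clause in clauses):
            return assignment
    return None

model = satisfiable(CLAUSES)
```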
Derek: How could a symbolic engine ever reason about the real world *with*
access to such information?
I hope my work eventually demonstrates a solution to your satisfaction. In the
meantime there is evidence from robotics, specifically driverless cars, that
real world sensor input can be
Mike,
I have Lakoff &amp; Johnson's Metaphors We Live By. And I'll order the other
titles you recommend.
-Steve
- Original Message
From: Mike
I've added some content in the Computational Linguistics section of the AGIRI
Wiki, which Ben outlined:
Fluid_Construction_Grammar adapted from the Wikipedia article that I mostly
authored. Link Grammar adapted from Wikipedia. Language Generation adapted
from Wikipedia. Word Grammar
Mike,
An interesting paper on the meanings of words is "I don't believe in word
senses" by Adam Kilgarriff. He concludes:
Following a description of the conflict between WSD [Word Sense Disambiguation]
and lexicological research, I examined the concept, ‘word sense’. It was not
found to be
Ben,
I would agree with an even stronger version of your statement: Treating word
senses as fuzzy, cluster type categories in the context of usage-instances is
the only cognitively plausible method for AGI to comprehend and produce them.
-Steve
- Original Message
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 27, 2008 5:30:12 PM
Subject: Re: [agi] Microsoft Launches Singularity
Steve,
Some odd thoughts in reply. Thanks BTW for
article.
1. You don't seem to get what's
Ben,
Wikipedia has significant overlap with the topic list on the AGIRI Wiki. I
propose for discussion the notion that the AGIRI Wiki be content-compatible
with Wikipedia along two dimensions:
license - authors agree to the GNU Free Documentation License
editorial standards - Wikipedia says
Ben,
I just created an account on the wiki and created my user page derived from my
Wikipedia user page. Image uploads on the wiki work the same way as on
Wikipedia - Yay.
-Steve
Thanks Ben for leaving a placeholder for Fluid Construction Grammar. I've
copied over the Wikipedia article for which I wrote most of the content.
Cheers.
-Steve
While programming my bootstrap English dialog system, I needed a spreading
activation library for the purpose of enriching the discourse context with
conceptually related terms. For example given that there is a human-habitable
room that both speakers know of, then it is reasonable to assume
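A minimal spreading-activation sketch of the kind of enrichment described; the concept graph, decay rate, and threshold are invented for illustration:

```python
# Spreading activation over a tiny concept graph: activation decays by
# a fixed factor at each hop out from the source term.
GRAPH = {
    "room": ["door", "wall", "table"],
    "table": ["chair", "book"],
    "door": ["key"],
}

def spread(source: str, decay: float = 0.5, threshold: float = 0.1) -> dict:
    """Return activation levels of terms reachable from the source."""
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        term = frontier.pop(0)
        for neighbor in GRAPH.get(term, []):
            value = activation[term] * decay
            if value > activation.get(neighbor, 0.0) and value >= threshold:
                activation[neighbor] = value
                frontier.append(neighbor)
    return activation

context = spread("room")
```

Given "room", the discourse context is enriched with "door", "table", "chair", and so on, at decreasing strengths.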
I agree with Mark.
The reason the readers of this forum should seek to control AGI development is
to ensure friendly behavior, rather than leaving this responsibility to an Evil
Company or to some military organization.
With human labor removed as a constraint on our system's economic
Hi Mark,
I value your ideas about 'Friendliness as an attractor in state space'. Please
keep it up.
-Steve
- Original Message
From:
According to the in-house Cycorp jargon, deep inference begins at approximately
four backchain steps in a deductive inference. As most here know, there is an
exponential fanout in the number of separate inference paths with each
backchain step, given a large candidate rule set and a large set
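The fanout described here is easy to make concrete; a trivial sketch, where the branching factor of 10 is an invented round number, not a Cyc measurement:

```python
# If each backchain step can match about b candidate rules, the number
# of distinct inference paths after d steps grows as b ** d.
def inference_paths(branching_factor: int, depth: int) -> int:
    """Worst-case count of separate inference paths after `depth` steps."""
    return branching_factor ** depth

shallow = inference_paths(10, 2)   # after two backchain steps
deep = inference_paths(10, 4)      # around where "deep inference" begins
```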
Pei: Resolution-based FOL on a huge KB is intractable.
Agreed.
However Cycorp spent a great deal of programming effort (i.e. many man-years)
finding deep inference paths for common queries. The strategies were:
prune the rule set according to the context
substitute procedural code for
, and the
result is equivalent to the original knowledge in truth-value only.
It is hard to control the direction of the inference without semantic
information.
Pei
On Feb 18, 2008 11:13 AM, Stephen Reed [EMAIL PROTECTED] wrote:
Pei: Resolution-based FOL on a huge KB
Yes, I would be very glad to incorporate any content that I can then republish
using a Wikipedia-compatible license, e.g. the GNU Free Documentation License.
Any weaker license, such as Apache or BSD, would be OK too.
-Steve
Briefly, I think that Cyc indeed has solved the brittleness problem observed
with 1980's style narrow-domain expert systems. During the Halo project, Cyc
was merely extended in a principled fashion to answer a battery of word
questions in the chemistry domain. In my opinion the chief drawback
David said:
Most of the people on this list have quite different ideas about how an AGI
should be made BUT I think there are a few things that most, if not all,
agree on.
1. Intelligence can be created by using computers that exist today using
software.
Mike,
Cyc uses, and my own Texai project will also eventually employ, deductive
reasoning (i.e. modus ponens) as its main inference mechanism. In Cyc, most of
the fallacies that Shirkey points out are avoided by two means - nonmonotonic
(e.g. default) reasoning, and context.
Although I
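A toy sketch of the two mechanisms named above: a deductive step (modus ponens) with a crude nonmonotonic default that is blocked by an exception. The bird/penguin predicates are the stock textbook example, not Cyc content:

```python
# Default rule bird(x) => flies(x), defeated when penguin(x) is asserted.
facts = {("bird", "tweety"), ("bird", "opus"), ("penguin", "opus")}

def flies(x: str) -> bool:
    """Apply the default, checking the only exception modeled here."""
    is_bird = ("bird", x) in facts
    is_exceptional = ("penguin", x) in facts
    return is_bird and not is_exceptional
```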
Pei,
Given your description, I agree B2 is the way to go. At Cycorp, the inductive
(e.g. rule induction), abductive (e.g. hypothesis generation), and analogical
reasoning engines I observed were all supported by deductive inference. I was
also a member of a Cycorp team that collaborated with
on automatic programming?
Stephen Reed wrote:
Eli,
Same as Ben - Generative Programming: Methods, Tools, and Applications (2000)
by Krzysztof Czarnecki and Ulrich W. Eisenecker
I would chime in and say that this one also struck me as a very
stimulating book
Richard,
I entirely agree with your comments. I would like to eventually stop
programming in Java and have the system do that for me. I am strongly
motivated to build its dialog component first because that addresses the issue
of how to collaborate with the system when the rough seas are