On Feb 17, 2008 9:42 PM, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
So far I've been using resolution-based FOL, so there's only 1 inference
rule and this is not a big issue. If you're using nonstandard inference
rules, perhaps even approximate ones, I can see that this distinction is
All of these rules have exceptions or implicit conditions. If you
treat them as default rules, you run into the multiple extension
problem, which has no domain-independent solution in binary logic ---
read http://www.cogsci.indiana.edu/pub/wang.reference_classes.ps for
details.
Pei
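To make Pei's point concrete, here is a minimal Python sketch (my own illustration, not from the paper) of the classic Nixon-diamond case: two default rules apply to the same individual and licence contradictory conclusions, so each order of applying them yields a different extension, and binary logic gives no domain-independent reason to prefer one. All predicate and rule names are invented for the example.

# Minimal sketch of the multiple extension problem (Nixon diamond).
# Two default rules apply to the same individual and licence
# contradictory conclusions; each consistent way of applying the
# defaults yields a different "extension".

FACTS = {("quaker", "nixon"), ("republican", "nixon")}

# Default rules: (precondition, conclusion, blocked-by)
DEFAULTS = [
    ("quaker", "pacifist", "not_pacifist"),      # Quakers are typically pacifists
    ("republican", "not_pacifist", "pacifist"),  # Republicans are typically not
]

def extensions(facts, defaults):
    """Enumerate the consistent ways of applying the two defaults."""
    results = []
    for first, second in [(0, 1), (1, 0)]:
        derived = set(facts)
        for pre, concl, blocker in (defaults[first], defaults[second]):
            for pred, ind in facts:
                if pred == pre and (blocker, ind) not in derived:
                    derived.add((concl, ind))
        results.append(derived)
    return results

for ext in extensions(FACTS, DEFAULTS):
    print(sorted(ext))
# One extension contains ('pacifist', 'nixon'), the other
# ('not_pacifist', 'nixon'); neither is privileged.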
On Feb 17, 2008
On 18/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
Well, the idea is to ask lots of people to contribute to the KB, and pay
them with virtual credits. (I expect such people to have a little knowledge
of logic or Prolog, so they can enter complex rules. Also, they can be
assisted by
On Feb 18, 2008 1:37 AM, Bob Mottram [EMAIL PROTECTED] wrote:
In a closed loop system what you have is a
synchronisation between data streams. In part the brain is trying to
find the best model that it can and superimpose that onto the
available data (hence the perception of lines which don't
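Bob's picture of superimposing the best available model onto the data can be illustrated with a toy example (my construction, not from the thread): fit a straight line to sparse observations, then read the model off inside a gap where no data exists at all, much as the visual system perceives contours that are not in the stimulus.

# Toy illustration of "superimposing the best model onto the data":
# fit a straight line to sparse, noisy observations, then query the
# model in a gap where there is no observation at all.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.0, 1.0, 2.0, 7.0, 8.0, 9.0])   # note the gap in [2, 7]
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, x.size)

slope, intercept = np.polyfit(x, y, deg=1)       # least-squares line

# The model happily "perceives" values inside the gap,
# even though no data directly supports them.
for gap_x in (3.0, 4.5, 6.0):
    print(gap_x, slope * gap_x + intercept)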
All of these rules have exceptions or implicit conditions. If you
treat them as default rules, you run into the multiple extension
problem, which has no domain-independent solution in binary logic ---
read http://www.cogsci.indiana.edu/pub/wang.reference_classes.ps for
details.
Pei,
Do you have a
Just put one at http://nars.wang.googlepages.com/wang.reference_classes.pdf
On Feb 18, 2008 9:01 AM, Mark Waser [EMAIL PROTECTED] wrote:
All of these rules have exceptions or implicit conditions. If you
treat them as default rules, you run into the multiple extension
problem, which has no
I believe I offered the beginning of a v. useful way to conceive of this
whole area in an earlier post.
The key concept is inventory of the world.
First of all, what is actually being talked about here is only a
VERBAL/SYMBOLIC KB.
One of the grand illusions of a literate culture is that
I should add to the idea of our common sense knowledge inventory of the
world - because my talk of objects and movements may make it all sound v.
physical and external. That common sense inventory also includes a vast
amount of non-verbal knowledge, paradoxically, about how we think and
On 2/18/08, Mike Tintner [EMAIL PROTECTED] wrote:
I believe I offered the beginning of a v. useful way to conceive of this
whole area in an earlier post.
The key concept is inventory of the world.
First of all, what is actually being talked about here is only a
VERBAL/SYMBOLIC KB.
One of
This raises another v. interesting dimension of KBs and why they are limited.
The social dimension. You might, purely for argument's sake, be able to name a
vast number of unnamed parts of the world. But you would then have to secure
social agreement for them to become practically useful. Not
Pei: Resolution-based FOL on a huge KB is intractable.
Agreed.
However Cycorp spent a great deal of programming effort (i.e. many man-years)
finding deep inference paths for common queries. The strategies were: prune the
rule set according to the context; substitute procedural code for
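The first strategy, pruning the rule set according to the context, might look roughly like the following sketch; the rule and context representation here is invented for illustration and is not Cyc's actual machinery.

# Illustrative sketch of context-based rule pruning (not Cyc's actual
# machinery): each rule is tagged with the contexts in which it may
# fire, and the engine only considers rules whose tags intersect the
# contexts relevant to the current query.

RULES = [
    {"name": "birds-fly",        "contexts": {"naive-physics", "biology"}},
    {"name": "penguins-swim",    "contexts": {"biology"}},
    {"name": "stocks-fluctuate", "contexts": {"finance"}},
]

def prune(rules, active_contexts):
    """Keep only the rules relevant to the contexts of the query."""
    return [r for r in rules if r["contexts"] & active_contexts]

print([r["name"] for r in prune(RULES, {"biology"})])
# -> ['birds-fly', 'penguins-swim']; the finance rule never enters
# the search, shrinking the branching factor of inference.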
Harshad RJ wrote:
On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Harshad RJ wrote:
I read the conversation from the start and believe that Matt's
argument is correct.
Did you mean to send this only to me? It looks as though
Steve,
I also agree with what you said, and what Cyc uses is no longer pure
resolution-based FOL.
A purely resolution-based inference engine is mathematically elegant,
but completely impractical, because after all the knowledge is
transformed into the clause form required by resolution, most of
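For concreteness, here is a minimal propositional sketch (mine, not Pei's) of clause form and the single resolution rule; it shows why one rule suffices, as YKY noted earlier, and also how the original implications disappear into opaque literal sets once the knowledge is compiled.

# Minimal propositional resolution sketch. Knowledge must first be
# compiled into clause form (sets of literals, implicitly disjoined);
# after that, binary resolution is the only inference rule needed.
# A literal is a string; "~p" is the negation of "p".

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses (frozensets of literals)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

# "p -> q" becomes the clause {~p, q}; facts are unit clauses.
kb = [frozenset({"~p", "q"}), frozenset({"p"})]
print(resolve(kb[0], kb[1]))   # -> [frozenset({'q'})]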
--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
On 2/18/08, Matt Mahoney [EMAIL PROTECTED] wrote:
Heh... I think you could give away read-only access and charge people to
update it. Information has negative value, you know.
Well, the idea is to ask lots of people to contribute to the
Pei,
Another issue with a KB inference engine as contrasted with a FOL theorem
prover is that the former seeks answers to queries, and the latter often seeks
to disprove the negation of the theorem by finding a contradiction. Cycorp
therefore could not reuse much of the research from the
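Stephen's contrast can be made concrete with a toy propositional example (my own construction): the refutation prover adds the negated goal and searches for the empty clause, while a query engine simply chains facts through rules until the goal is derived.

# Toy contrast: proving "q" by refutation versus answering it as a
# query. Refutation adds the negated goal and searches for the empty
# clause; query answering chains through rules to the goal.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def refute(clauses, goal):
    """Refutation: derive the empty clause from KB + {~goal}."""
    clauses = [frozenset(c) for c in clauses] + [frozenset({negate(goal)})]
    changed = True
    while changed:
        changed = False
        for i in range(len(clauses)):
            for j in range(i + 1, len(clauses)):
                for lit in clauses[i]:
                    if negate(lit) in clauses[j]:
                        res = (clauses[i] - {lit}) | (clauses[j] - {negate(lit)})
                        if not res:
                            return True          # empty clause: goal proven
                        if res not in clauses:
                            clauses.append(res)
                            changed = True
    return False

def query(facts, rules, goal):
    """Query answering: forward-chain facts through rules to the goal."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, concl in rules:
            if pre in derived and concl not in derived:
                derived.add(concl)
                changed = True
    return goal in derived

print(refute([{"~p", "q"}, {"p"}], "q"))   # True, via contradiction
print(query({"p"}, [("p", "q")], "q"))     # True, via direct chaining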
On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
My argument was (at the beginning of the debate with Matt, I believe)
that, for a variety of reasons, the first AGI will be built with
peaceful motivations. Seems hard to believe, but for various technical
reasons I think we
On Feb 18, 2008 7:41 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
In other words you cannot have your cake and eat it too: you cannot
assume that this hypothetical AGI is (a) completely able to build its
own understanding of the world, right up to the human level and beyond,
while also
Matt Mahoney wrote:
On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
My argument was (at the beginning of the debate with Matt, I believe)
that, for a variety of reasons, the first AGI will be built with
peaceful motivations. Seems hard to believe, but for various technical
On Feb 18, 2008 12:37 PM, Stephen Reed [EMAIL PROTECTED] wrote:
Pei,
Another issue with a KB inference engine as contrasted with a FOL theorem
prover is that the former seeks answers to queries, and the latter often
seeks to disprove the negation of the theorem by finding a contradiction.
On Feb 18, 2008 10:11 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
You assume that the system does not go through a learning phase
(childhood) during which it acquires its knowledge by itself. Why do
you assume this? Because an AGI that was motivated only to seek
electricity and
On 18/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:
... might be true. Yes, a motivation of some form could be coded into
the system, but the paucity of expression at the level at which it is
coded may still allow unintended motivations to emerge.
It seems that in the AGI
Bob Mottram wrote:
On 18/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:
... might be true. Yes, a motivation of some form could be coded into
the system, but the paucity of expression at the level at which it is
coded may still allow unintended motivations to emerge.
It seems
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
My argument was (at the beginning of the debate with Matt, I believe)
that, for a variety of reasons, the first AGI will be built with
peaceful
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
On Feb 3, 2008 10:22 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
My argument was (at the beginning of the debate with Matt, I believe)
that, for a variety of reasons, the first AGI will be built with
Harshad RJ wrote:
On Feb 18, 2008 10:11 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
You assume that the system does not go through a learning phase
(childhood) during which it acquires its knowledge by itself. Why do
you assume this? Because an AGI
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Perhaps worm is the wrong word. Unlike today's computer worms, it would be
intelligent, it would evolve, and it would not necessarily be controlled by or
serve the interests of its creator. Whether or not it is
Only robots above a certain level of sophistication may receive
a mind-implant via MindForth. The computerized robot needs to have
an operating system that will support Forth and sufficient memory
to hold both the AI program code and a reasonably large knowledge
base (KB) of experience. A