Oops!
The William Blake poem recited in the Dangerous Knowledge BBC program was
not Infinity (that's what Cantor was so concerned about). It was
Auguries of Innocence. The passage used in the program (and the one
borrowed by Sting) was:
To see a world in a grain of sand
And a heaven in a wild flower,
Hold infinity in the palm of your hand,
And eternity in an hour.
On Mon, Sep 29, 2008 at 4:10 AM, Abram Demski [EMAIL PROTECTED] wrote:
How much will you focus on natural language? It sounds like you want
that to be fairly minimal at first. My opinion is that chatbot-type
programs are not such a bad place to start-- if only because it is
good publicity.
I
On Mon, Sep 29, 2008 at 9:38 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
It seems to me the main limitation is that the language model has to be
described formally in Cycl, as a lexicon and rules for parsing and
disambiguation. There seems to be no mechanism for learning natural language
by
Thank you. The detailed info is appreciated.
--Abram
On Sun, Sep 28, 2008 at 11:54 PM, Stephen Reed [EMAIL PROTECTED] wrote:
Matt said:
The overview claims to be able to convert natural language sentences into
Cycl assertions, and to convert questions to Cycl queries. So I wonder why
the
On Sun, Sep 28, 2008 at 5:23 PM, David Hart [EMAIL PROTECTED] wrote:
Actually, it's been my hunch for some time that the richness and importance
of Helen Keller's sensory environment is frequently grossly
underestimated. The sensations of a deaf/blind person still include
proprioception,
On Mon, Sep 29, 2008 at 4:23 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
On Mon, Sep 29, 2008 at 4:10 AM, Abram Demski [EMAIL PROTECTED]
wrote:
How much will you focus on natural language? It sounds like you want
that to be fairly minimal at first. My opinion is that chatbot-type
Ben gave the following examples that demonstrate the ambiguity of the
preposition "with":
People eat food with forks
People eat food with friend[s]
People eat food with ketchup
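One crude way to picture the disambiguation task in these three examples is to key the reading of "with" off the semantic category of its object. The category labels and tiny lexicon below are purely illustrative assumptions of this note, not Texai's (or anyone's) actual mechanism:

```python
# Illustrative sketch: choose a reading of "with" from the semantic
# category of its object. The lexicon and labels are invented here.
LEXICON = {
    "forks": "instrument",       # People eat food with forks
    "friends": "accompaniment",  # People eat food with friends
    "ketchup": "ingredient",     # People eat food with ketchup
}

def interpret_with(sentence: str) -> str:
    """Return an assumed semantic role for the 'with'-phrase."""
    words = sentence.lower().rstrip(".").split()
    if "with" not in words or words.index("with") + 1 >= len(words):
        return "no with-phrase"
    return LEXICON.get(words[words.index("with") + 1], "unknown")

for s in ("People eat food with forks",
          "People eat food with friends",
          "People eat food with ketchup"):
    print(s, "->", interpret_with(s))
```

A real system would of course need the categories to come from a knowledge base, and would have to handle objects that fit several categories at once.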
The Texai bootstrap English dialog system, whose grammar rule engine I'm
currently rewriting, uses elaboration and
Stephen,
Yes, I think your spreading-activation approach makes sense and has plenty
of potential.
Our approach in OpenCog is actually pretty similar, given that our
importance-updating dynamics can be viewed as a nonstandard sort of
spreading activation...
I think this kind of approach can
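For readers unfamiliar with the term, spreading activation can be sketched in a few lines. The concept graph, edge weights, and decay constant below are toy assumptions for illustration, not the Texai or OpenCog dynamics:

```python
import collections

def spread(graph, seeds, decay=0.5, steps=2):
    """Propagate activation from seed nodes along weighted edges,
    attenuated by a decay factor at each hop."""
    activation = collections.defaultdict(float, seeds)
    for _ in range(steps):
        new = collections.defaultdict(float, activation)
        for node, act in activation.items():
            for neighbour, weight in graph.get(node, ()):
                new[neighbour] += act * weight * decay
        activation = new
    return dict(activation)

# Toy concept graph: mentioning "fork" biases the instrument reading.
graph = {
    "fork": [("instrument", 1.0)],
    "ketchup": [("ingredient", 1.0)],
    "eat": [("instrument", 0.3), ("ingredient", 0.3)],
}
print(spread(graph, {"fork": 1.0, "eat": 1.0}))
```

After two steps the "instrument" node ends up far more active than "ingredient", which is the kind of bias a disambiguator can exploit.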
Interestingly, Helen Keller's story provides a compelling example of what it
means for a symbol to go from ungrounded to grounded. Specifically, the moment
at the water pump when she realized that the word "water" being spelled into
her hand corresponded with her experience of water - that
Ben and Stephen,
AFAIK your focus - and the universal focus - in this debate on how and whether
language can be symbolically/logically interpreted - is on *individual words
and sentences.* A natural place to start. But you can't stop there - because
the problems, I suggest, (hard as they
On Tue, Sep 30, 2008 at 5:23 AM, Mike Tintner [EMAIL PROTECTED] wrote:
How does Stephen or YKY or anyone else propose to read between the lines?
And what are the basic world models, scripts, frames etc etc. that you
think sufficient to apply in understanding any set of texts, even a
On Mon, Sep 29, 2008 at 9:18 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Parsing English sentences into sets of formal-logic relationships is not
extremely hard given current technology.
But the only feasible way to do it, without making AGI breakthroughs
first, is to accept that these
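To make "parsing into formal-logic relationships" concrete, here is a deliberately naive sketch that maps a subject-verb-object sentence to a predicate-argument triple. Real pipelines use a full parser (OpenCog, for instance, has used Link Grammar based relation extraction); the regex below is an assumption of this note and handles only the simplest word-word-word pattern:

```python
import re

def to_logic(sentence: str):
    """Map an 'X verb Y' sentence to a (predicate, subject, object)
    triple; return None for anything more complicated."""
    m = re.fullmatch(r"(\w+)\s+(\w+)\s+(\w+)\.?", sentence.strip())
    if not m:
        return None
    subj, verb, obj = (w.lower() for w in m.groups())
    return (verb, subj, obj)

print(to_logic("John eats apples"))
```

Anything beyond three words falls through to None, which is exactly where the ambiguity problems discussed in this thread begin.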
On Tue, Sep 30, 2008 at 1:51 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
My point for YKY was (as you know) not that this is an impossible problem
but that it's a fairly deep AI problem which is not provided out-of-the-box
in any existing NLP toolkit. Solving disambiguation thoroughly is
On Mon, Sep 29, 2008 at 6:28 PM, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
On Mon, Sep 29, 2008 at 11:33 PM, Eric Burton [EMAIL PROTECTED] wrote:
It uses something called MontyLingua. Does anyone know anything about
this? There's a site at
Thanks! Fascinating
On 9/29/08, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
On Mon, Sep 29, 2008 at 11:33 PM, Eric Burton [EMAIL PROTECTED] wrote:
It uses something called MontyLingua. Does anyone know anything about
this? There's a site at http://web.media.mit.edu/~hugo/montylingua/
and it is
On Mon, Sep 29, 2008 at 6:03 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
On Mon, Sep 29, 2008 at 9:18 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Parsing English sentences into sets of formal-logic relationships is not
extremely hard given current technology.
But the only feasible
I mean that a more productive approach would be to try to understand why
the problem is so hard.
IMO Richard Loosemore is half-right ... the reason AGI is so hard has to do
with Santa Fe Institute style
complexity ...
Intelligence is not fundamentally grounded in any particular mechanism
Mike asked:
How does Stephen or YKY or anyone else propose to read between the lines?
And what are the basic world models, scripts, frames etc etc. that you
think sufficient to apply in understanding any set of texts, even a relatively
specialised set?
Interesting that this question
David,
Thanks for reply. Like so many other things, though, working out how we
understand texts is central to understanding GI - and something to be done
*now*. I've just started looking at it, but immediately I can see that what the
mind does - how it jumps around in time and space and POV
http://video.google.ca/videoplay?docid=-7933698775159827395&ei=Z1rhSJz7CIvw-QHQyNkC&q=nltk&vt=lf
NLTK video ;O
On 9/29/08, Mike Tintner [EMAIL PROTECTED] wrote:
David,
Thanks for reply. Like so many other things, though, working out how we
understand texts is central to understanding GI - and
Eric,
Thanks for link. Flipping through quickly, it still seemed sentence-based.
Here's an example of time flipping - fast-forwarding text - and the kind
of jumps that the mind can make
AGI Year One. AGI is one of the great technological challenges. We believe
we have the basic technology -
Extracting meaning from text requires context-sensitivity to do
correctly. Natural language parsers necessarily don't reason about
things. An AGI whose natural-language interface was abstracted via
some good parser could make suppositions about the constructs it
returned by interpreting them
On 9/29/08, Eric Burton [EMAIL PROTECTED] wrote:
Extracting meaning from text requires context-sensitivity to do
correctly. Natural language parsers necessarily don't reason about
things. An AGI whose natural-language interface was abstracted via
some good parser could make
Cognitive linguistics also lacks a true developmental model of language
acquisition that goes beyond the first few years of life, and can embrace
all those several - and, I'm quite sure, absolutely necessary - stages of
mastering language and building a world picture.
Tomasello's theory of
Mike,
If your question is directed toward the general AI community (rather
than the people on this list), the answer is a definite YES. It was
some time ago, and as far as I know the line of research has been
dropped, yet the results are to this day quite surprisingly good (I
think). The
Ben,
Er, you seem to be confirming my point. Tomasello from Wiki is an early child
development psychologist. I want a model that keeps going to show the stages of
language acquisition from say 7-13, on through teens, and into the twenties -
that shows at what stages we understand
As I recall Tomasello's Constructing a Language deals with all the phases
of grammar learning including complex recursive phrase structure grammar...
But it doesn't trace language learning from the teens into the twenties,
no...
From a psychological point of view, that is an interesting topic,
I take it back, the field is still alive. Interesting.
http://xenia.media.mit.edu/~mueller/storyund/storyres.html
--Abram Demski
On Mon, Sep 29, 2008 at 9:51 PM, Abram Demski [EMAIL PROTECTED] wrote:
Mike,
If your question is directed toward the general AI community (rather
than the people
Mike said:
The way humans acquire language is precisely by starting not by reading
Wikipedia but by mastering fiction-like sentences with simple subjects and
simple actions and relationships - like "John sit", "John eat", "Jack like
Jill", "Me give Jill soap", etc. - based primarily in the here and
Abram,
Yes, I'm aware of Schank - and failed to reference him. I think though that
that approach signally failed. And you give a good reason - it requires too
much knowledge entry. And that is part of my point. On the surface,
language passages can appear to be relatively simple, but
My guess is that Schank and AI generally start from a technological POV,
conceiving of *particular* approaches to texts that they can implement,
rather than first attempting a *general* overview.
I can't speak for Schank, who was however working a long time ago when
cognitive science was
2008/9/29 YKY (Yan King Yin) [EMAIL PROTECTED]:
I'm planning to make the project opensource, but I want to have a web
site that keeps a record of contributors' contributions. So that's
taking some extra time.
Most wikis automatically keep track of who made
what changes, when.
*All* source
2008/9/29 Stephen Reed [EMAIL PROTECTED]:
Ben gave the following examples that demonstrate the ambiguity of the
preposition "with":
People eat food with forks
People eat food with friend[s]
People eat food with ketchup
[...]
how Texai would process Ben's examples. According to
2008/9/29 Ben Goertzel [EMAIL PROTECTED]:
Stephen,
Yes, I think your spreading-activation approach makes sense and has plenty
of potential.
Our approach in OpenCog is actually pretty similar, given that our
importance-updating dynamics can be viewed as a nonstandard sort of
spreading