I'll answer you point by point; others who find this tedious can just scroll
down to conclusions at the bottom.
On 4/27/07, Mark Waser [EMAIL PROTECTED] wrote:
I am NOT suggesting a rule-based system at this level. First I figure out
a good representation for the minimal Basic English grammar
On 4/27/07, Matt Mahoney [EMAIL PROTECTED] wrote:
I think learning in layers (A) is the correct approach, but also that it can
be done from a corpus of adult-level language, at least if you are training a
pure, ungrounded language model. When parents use baby talk, they are
actually using
I think YKY is right on this one. There was a Dave Barry column about going to
the movies with kids in which a 40-foot image of a handgun appears on the
screen, at which point every mother in the theater turns to her kid and says,
"Oh look, he's got a GUN!"
Communication in natural language is
I agree about developmental language learning combined with automated
learning of grammar rules being the right approach to NLP.
In fact, my first wife did her PhD work on this topic in 1994, at Waikato
University in Hamilton, New Zealand. She got frustrated and quit before
finishing her
:-) In bold, blue below
BTW, I am color-blind (the standard male red-green version) so non-bold red is
a bad choice for replying to me (as in, I missed a couple of your replies
initially and still may have missed some . . . . :-)
- Original Message -
From: YKY (Yan King Yin)
I agree about developmental language learning combined with automated
learning of grammar rules being the right approach to NLP.
I think that the fundamentals of grammar rules are hard-coded into humans
and that the specific non-determined details (i.e. language-specific
differences)
Disagree. The brain ALWAYS tries to make sense of language - convert it into
images and graphics. I see no area of language comprehension where this
doesn't apply.
I was just reading a thread re the symbol grounding problem on another group - I think
what's fooling people into thinking purely
On 4/28/07, Mike Tintner [EMAIL PROTECTED] wrote:
Disagree. The brain ALWAYS tries to make sense of language - convert it into
images and graphics. I see no area of language comprehension where this
doesn't apply.
I think that a *solution to NLP* is not a *solution to AGI*, so your
argument
Are you saying then that blind people cannot make sense of language because
they lack the capacity to imagine images, having never seen them before?
Or that blind people could not understand these, or would not view them as
equally strange as a sighted person would?
The man climbed the penny
The mat
I think that a *solution to NLP* is not a *solution to AGI*, so your
argument does not apply.
I think that this depends upon your definition of intelligence and also
assumes that a solution to NLP is not enough to bootstrap the rest. I could
argue the point either way. I think that NLP is
I strongly suspect that he is a very visual person and that, for him,
convert into a complete mental model feels like seeing so he used that as a
shorthand for what he really meant -- assuming (unconsciously/without thinking)
that you would be willing to accept the shorthand rather than
On 4/28/07, Mark Waser [EMAIL PROTECTED] wrote:
I think that a *solution to NLP* is not a *solution to AGI*, so your
argument does not apply.
I think that this depends upon your definition of intelligence and also
assumes that a solution to NLP is not enough to bootstrap the rest. I could
I disagree with this in two ways. First, it's fairly well accepted among
mainstream AI researchers that full NL competence is AI-complete, i.e. that
human-level intelligence is a prerequisite for NL. Secondly, even the parsing
part of NLP is part of a more general recursive sequence
Classic objection.
The answer is that blind people can draw - reasonably faithful outlines of objects.
Experimentally tested.
Their brains, like all our brains, form graphic outlines of objects and fit them
together to create scenes to test the sense of sentences.
Worms do it too. They are
Mike,
1) It seems to assume that intelligence is based on a rational,
deterministic program - is that right? Adaptive intelligence, I would argue,
definitely isn't. There isn't a rational, right way to approach the problems
adaptive intelligence has to deal with.
I'm not sure what you mean
On Saturday 28 April 2007 09:02, Benjamin Goertzel wrote:
In other words: I became convinced that in the developmental approach, if
you want to take the human child language learning metaphor at all
seriously, you need to go beyond pure language learning and take an
experientially grounded
In case anyone is interested, some folks at IBM Almaden have run a
one-hemisphere mouse-brain simulation at the neuron level on a Blue Gene (at
0.1x real time):
http://news.bbc.co.uk/2/hi/technology/6600965.stm
http://ieet.org/index.php/IEET/more/cascio20070425/
On 4/28/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
I disagree with this in two ways. First, it's fairly well accepted among
mainstream AI researchers that full NL competence is AI-complete, i.e. that
human-level intelligence is a prerequisite for NL.
I don't think this is the operational
On 4/28/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
On 4/28/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
I disagree with this in two ways. First, it's fairly well accepted among
I was writing in the context of Mark Waser's language-specific solutions (as
I understand them), which, if wished, could
On Sat, Apr 28, 2007 at 01:15:13PM -0400, J. Storrs Hall, PhD. wrote:
In case anyone is interested, some folks at IBM Almaden have run a
one-hemisphere mouse-brain simulation at the neuron level on a Blue Gene (in
What they did was run a simplified, unrealistic model. It's still
a great
On 4/28/07, Eugen Leitl [EMAIL PROTECTED] wrote:
On Sat, Apr 28, 2007 at 01:15:13PM -0400, J. Storrs Hall, PhD. wrote:
In case anyone is interested, some folks at IBM Almaden have run a
one-hemisphere mouse-brain simulation at the neuron level on a Blue Gene (in
What they did was run a
Shane,
A little bit confusing here - perhaps too general and unfocused to pursue,
really.
But interestingly, while you deny that the given conception of intelligence is
rational and deterministic... you then proceed to argue rationally and
deterministically. First, that there IS a right way to
I thought that you implied that the solution to NLP does not need to
be general in its cognitive capacity.
Not deliberately. I suspect that it's going to require most of what
general cognition includes *at a specific level* (i.e. somewhere between but
not including the low/perceptual
I don't think this is the operational sense of NLP as pursued by
applying linguistic theories in a narrow-AI setting (e.g. Dynamic
Syntax, DRT, HPSG, ...)
but we want to apply NLP generally (i.e. not just in a narrow AI setting)
I was writing in the context of Mark Waser's language-specific
On 4/28/07, Mark Waser [EMAIL PROTECTED] wrote:
I don't think this is the operational sense of NLP as pursued by
applying linguistic theories in a narrow-AI setting (e.g. Dynamic
Syntax, DRT, HPSG, ...)
but we want to apply NLP generally (i.e. not just in a narrow AI setting)
(For what
No, I mean applying it to another modality, so to say - to some other kind
of problem solving, not to another language
Ah. And this is the basis for my repeated clarification about NLP requiring
general cognition of a specific level (or type). Path-finding cognition
certainly isn't required for
On 4/28/07, Mark Waser [EMAIL PROTECTED] wrote:
No, I mean applying it to another modality, so to say - to some other kind
of problem solving, not to another language
Ah. And this is the basis for my repeated clarification about NLP requiring
general cognition of a specific level (or type).
On 4/28/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
So you mean that NLP can/must understand the Whorfian, Barthesian,
philosophical broad language using the tools of computational
linguistics' narrow language?
Then NLP=AGI (holistic, non-modular view)
Not that structuralists, tracing back
Note that instructions regarding how to unsubscribe are given
at the end of every post to the list.
Therefore, if folks want to unsubscribe, they can just do
so themselves without requesting someone else to do it
for them ;-)
-- Ben G
On 4/27/07, Li, Jiang (NIH/CC/DRD) [F] [EMAIL PROTECTED]
The main issue is, we still have basically no idea of the patterns
according to which the neurons in the mouse brain are really interconnected,
except in some particular regions ... so semi-randomly hooking up 8 million
(well-simulated individually) neurons is not really simulating half a mouse
So you mean that NLP can/must understand the Whorfian, Barthesian,
philosophical broad language using the tools of computational
linguistics' narrow language?
Then NLP=AGI (holistic, non-modular view)
I mean that I believe that the Sapir-Whorf hypothesis is true and that this
means that NLP
When I first saw this on the BBC web site I thought it looked exciting -
maybe the first upload. But on closer inspection it seems to be less
impressive. There is an extremely brief report on what they did, which
looks like merely simulating a large number of neurons on a supercomputer,
without
You are right that NLP implies the processing of world-view; I would just
note that general world-view management should be outsourced to the
AGI core.
I agree. My mental separation is that the NLP module simply consists of
the parser and the generator but that they absolutely require the
On 4/28/07, Mark Waser [EMAIL PROTECTED] wrote:
You are right that NLP implies the processing of world-view; I would just
note that general world-view management should be outsourced to the
AGI core.
I agree. My mental separation is that the NLP module simply consists of
the parser and the
So, in the real context of an AGI, you make her responsible for
talking to you in this simplified language, which just pushes language
understanding under her carpet ;-)
(joking here)
ASSERT(Lukasz Stafiniak, Evil)
So . . . . if I get NLP working, does this mean that I have AGI or just a
On 4/28/07, Mark Waser [EMAIL PROTECTED] wrote:
So, in the real context of an AGI, you make her responsible for
talking to you in this simplified language, which just pushes language
understanding under her carpet ;-)
(joking here)
ASSERT(Lukasz Stafiniak, Evil)
So . . . . if I get NLP
--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
Headline: Employees of a new plan to get Dell back on the road to growth,
including streamlining management and looking at new methods of distribution
beyond the computer company's direct-selling model.
Can a baby really learn from THIS ^^^ ?
I'll have to say my objection stands.
Because the point is that blind people learn about an object and infer its
shape from words and descriptions of the object without ever seeing it.
An intelligent AI will do so in the same way.
After the blind person learns about an object by reading in
Helen Keller must have had a tough time existing without words. According to
you she didn't know the shape of the chairs she sat on. She had no words.
What are these commonsense rules in words that you learned? That apply to the
sentences I gave? Or to elephants and chairs? Where did you get
--- Gary Miller [EMAIL PROTECTED] wrote:
I'll have to say my objection stands.
Because the point is that blind people learn about an object and infer its
shape from words and description of the object without ever seeing them.
I think the blind form a 3-D model of the world through
Does anyone know if the number of synapses per neuron (8000) for mouse
cortical cells also applies to humans? This is the first time I have seen an
estimate of this number. I believe the researchers based their mouse
simulation on anatomical studies.
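Taking the figures quoted in this thread at face value (8 million simulated neurons for the half-mouse-brain run, and the 8000-synapses-per-neuron number Matt asks about), a back-of-the-envelope sketch of the connection count - the inputs are reported figures, not verified values:

```python
# Back-of-the-envelope arithmetic using the numbers quoted in this
# thread (both are reported figures, not independently verified).
neurons = 8_000_000          # half-mouse-brain simulation size
synapses_per_neuron = 8_000  # figure from the IBM Almaden report
total_synapses = neurons * synapses_per_neuron
print(total_synapses)        # 64000000000, i.e. 6.4e10 connections
```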
--- J. Storrs Hall, PhD. [EMAIL PROTECTED]
On 4/28/07, Mike Tintner [EMAIL PROTECTED] wrote:
And what if I say to you: sorry but the elephant did sit on the chair -
how would you know that I could be right?
I could assign a probability of truthfulness to this statement that is
dependent on how many other assertions you have made and
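The credibility weighting described above could be modeled, purely for illustration, as a Beta-distribution estimate over a speaker's track record - the model and all numbers here are hypothetical, not Mark's actual proposal:

```python
# Toy credibility model (illustrative only): estimate how likely a
# speaker's new claim is truthful from how many of their past
# assertions checked out, via the mean of Beta(1 + true, 1 + false).
def credibility(true_count, false_count, prior_a=1, prior_b=1):
    a = prior_a + true_count
    b = prior_b + false_count
    return a / (a + b)

# A speaker with 9 verified and 1 falsified past assertion:
print(round(credibility(9, 1), 3))  # 0.833
```

With no track record at all, the model falls back to the uniform prior (0.5), which matches the intuition that a stranger's implausible claim gets no benefit of the doubt.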
Mark,
I need to know a bit more about your approach. What do you mean when you
say grammar is embedded in your KR? For an example rule like NP → det
noun, how is it represented or embedded in your scheme?
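For illustration, here is one minimal way a rule like NP → det noun could be represented as data rather than code - a hypothetical sketch only, not Mark Waser's actual KR scheme:

```python
# Hypothetical sketch: a phrase-structure rule such as NP -> det noun
# stored as data in a rule table, plus a trivial matcher. All names
# are illustrative, not anyone's actual representation.
RULES = {
    "NP": [["det", "noun"], ["noun"]],  # NP = det + noun, or a bare noun
}

def parse_np(tokens, lexicon):
    """Return True if the token list matches some NP expansion."""
    for expansion in RULES["NP"]:
        if len(tokens) == len(expansion) and all(
            lexicon.get(tok) == cat for tok, cat in zip(tokens, expansion)
        ):
            return True
    return False

lexicon = {"the": "det", "man": "noun", "penny": "noun"}
print(parse_np(["the", "man"], lexicon))  # True
print(parse_np(["man", "the"], lexicon))  # False
```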
Your approach may have these problems:
1. you cannot learn a new NL; English is