On 11/10/06, Ben Goertzel [EMAIL PROTECTED] wrote: The word agent is famously polysemous in computer science. In my prior post, I used it in the sense of software agent, not autonomous
mental agent. These Novamente MindAgents are just software objects with certain functionalities, that get
YKY says:
The Novamente design is modular, in two senses:
1) there is a high-level architecture consisting of a network of
functionally specialized lobes -- a lobe for language processing, a
lobe for visual perception, a lobe for general cognition, etc.
2) each lobe contains a set of MindAgents
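A minimal sketch of what such a lobe/MindAgent decomposition could look like in Python. This illustrates the modularity idea only; all names here (Lobe, MindAgent, step) are invented for the example, not Novamente's actual classes:

    class MindAgent:
        """One narrowly scoped process, repeatedly invoked by its lobe."""
        def act(self, memory):
            raise NotImplementedError

    class Lobe:
        """A functionally specialized module holding a set of MindAgents."""
        def __init__(self, name, agents):
            self.name = name
            self.agents = agents
            self.memory = {}          # the lobe's local knowledge store

        def step(self):
            # Cooperative scheduling: each agent gets a slice, in turn.
            for agent in self.agents:
                agent.act(self.memory)

    class ParseAgent(MindAgent):
        def act(self, memory):
            memory.setdefault("parses", []).append("(S (NP I) (VP ate pizza))")

    # A network of specialized lobes, as in sense 1 of the design.
    lobes = [Lobe("language", [ParseAgent()]),
             Lobe("cognition", []),
             Lobe("perception", [])]
    for lobe in lobes:
        lobe.step()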
BTW I was wrong, I hadn't seen it; it looks cool.
On Nov 9, 2006, at 8:01 PM, YKY (Yan King Yin) wrote:
On 11/10/06, Ben Goertzel [EMAIL PROTECTED] wrote:
> > 2. Ben raised the issue of learning. I think we should divide learning
> > into 3 parts:
> >
> > (1) linguistic, e.g. grammar
> >
Matt, expand upon the first part as you said there, please. James

Matt Mahoney [EMAIL PROTECTED] wrote: James, Many of the solutions you describe can use information gathered from statistical models, which are opaque. I need to elaborate on this, because I think opaque models will be fundamental to
The use of agent here was definitely confusing in terms of AI, as it is much more frequently used for the autonomous-agent type. Otherwise the structure is similar in fashion to mine and many others', though the wording is different. Terminology is a major stand-in-the-way point around here. James

Ben
James Ratcliff [EMAIL PROTECTED] wrote: Matt, expand upon the first part as you said there, please.

I argued earlier that a natural language model has a complexity of about 10^9 bits. To be precise, let p(s) be a function that outputs an estimate of the probability that string s will appear as a
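Matt's definition of p(s) is cut off above. For readers following along, here is one minimal stand-in: a word-bigram model with add-one smoothing. This is a toy illustration of "a function that outputs an estimate of the probability of string s," not Matt's actual model:

    from collections import Counter

    corpus = "cats chase mice . mice fear cats . cats eat fish .".split()

    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    V = len(unigrams)   # vocabulary size, for smoothing

    def p(s):
        """Estimate the probability of word string s under a smoothed bigram model."""
        words = s.split()
        prob = unigrams[words[0]] / len(corpus)          # first-word probability
        for prev, cur in zip(words, words[1:]):
            # Add-one smoothing so unseen bigrams get nonzero probability.
            prob *= (bigrams[(prev, cur)] + 1) / (unigrams[prev] + V)
        return prob

    print(p("cats chase mice"))   # relatively high: seen in the corpus
    print(p("mice chase cats"))   # lower: those bigrams are unseen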
This is an interesting thread; I'll add some comments:
1. For KR purposes, I think first-order predicate logic is a good choice. Geniform 2.0 can be expressed in FOL entirely. ANNs are simply not advanced enough to represent complex knowledge (e.g. things that are close to NL). I
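Geniform 2.0's concrete syntax is not shown in this thread, so purely as a generic illustration of FOL-style encoding (all predicate names invented for the example):

    from dataclasses import dataclass

    # A tiny FOL atom representation.
    @dataclass(frozen=True)
    class Atom:
        predicate: str
        args: tuple

    # "Cats chase mice" would be a universally quantified implication:
    #   forall x, y: Cat(x) & Mouse(y) -> Chases(x, y)
    # Here we store just the ground atoms of one instance.
    facts = {
        Atom("Cat", ("felix",)),
        Atom("Mouse", ("mickey",)),
        Atom("Chases", ("felix", "mickey")),
    }

    def holds(atom):
        """Check a ground atom against the knowledge base."""
        return atom in facts

    print(holds(Atom("Chases", ("felix", "mickey"))))   # True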
2. Ben raised the issue of learning. I think we should divide learning
into 3 parts:
(1) linguistic, e.g. grammar
(2) semantic / concepts
(3) generic / factual.
This leaves out a lot, for instance procedure learning and
metalearning... and also perceptual learning (e.g. object recognition)
On 11/10/06, Ben Goertzel [EMAIL PROTECTED] wrote: 2. Ben raised the issue of learning. I think we should divide learning into 3 parts:
(1) linguistic, e.g. grammar (2) semantic / concepts (3) generic / factual. This leaves out a lot, for instance procedure learning and metalearning... and also
In Novamente, the synthesis of probabilistic logical inference and
probabilistic evolutionary learning is to be used to carry out all of
the above kinds of learning you mention, and more.
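Novamente's actual PLN machinery is far richer than anything that fits in a list post; purely as a flavor of "probabilistic logical inference," here is a toy deduction rule that combines link strengths under independence assumptions. The formula is a textbook independence-based construction, not necessarily what Novamente uses:

    def deduce(sAB, sBC, sB, sC):
        """Toy probabilistic deduction: estimate P(C|A) from P(B|A), P(C|B),
        P(B), and P(C), assuming independence outside the B pathway."""
        if sB >= 1.0:
            return sBC
        # A reaches C either via B, or via not-B.
        return sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)

    # P(mammal|cat)=0.99, P(animal|mammal)=0.98, P(mammal)=0.1, P(animal)=0.2
    print(deduce(0.99, 0.98, 0.1, 0.2))   # ~0.97: cats are very likely animals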
Well, then your architecture would be monolithic and not modular. I think
it's a good choice to
Yes. All of the above. We have already heard the statement from all around, I believe, and seen results showing that one single algorithm is just not going to work, and it's unreasonable to think it would. So then it's really down to breaking up the parts, defining them precisely, and
Matt: To parse English you have to know that pizzas have pepperoni, that demonstrators advocate violence, that cats chase mice, and so on. There is no neat, tidy algorithm that will generate all of this knowledge. You can't do any better than to just write down all of these facts. The data is not
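To make Matt's point concrete: a parser deciding how a phrase attaches needs exactly this kind of written-down world knowledge. A minimal sketch, with relation names and facts invented for illustration:

    # A tiny hand-written commonsense store: (subject, relation, object).
    facts = {
        ("pizza", "can_have_topping", "pepperoni"),
        ("demonstrator", "can_advocate", "violence"),
        ("cat", "chases", "mouse"),
    }

    def plausible(subject, relation, obj):
        """Check a candidate semantic reading against the fact store."""
        return (subject, relation, obj) in facts

    # Disambiguating "I ate a pizza with pepperoni" vs. "... with Bob":
    print(plausible("pizza", "can_have_topping", "pepperoni"))  # True -> topping
    print(plausible("pizza", "can_have_topping", "bob"))        # False -> companion

Matt's point is that there is no shortcut for the facts themselves: a store like this has to contain millions of entries before parsing becomes reliable.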
Hi,
About:

> But a simple example is
> ate a pepperoni pizza
> ate a tuna pizza
> ate a VEGAN SUPREME pizza
> ate a Mexican pizza
> ate a pineapple pizza

I feel this discussion of sentence parsing and interpretation is
taking a somewhat misleading direction, by focusing on examples that
are in fact very
To: agi@v2.listbox.com
Subject: Re: Re: [agi] The crux of the problem
My plan has both A with B and D examples, and Ben: So, I feel much of the present discussion on NLP interpretation is bypassing the hard problem, which is enabling an AGI system to learn the millions or billions of commonsense (probabilistic) rules relating to basic relationships like with_tool, which
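Ben's with_tool example suggests what such rules might look like once acquired. A toy sketch: with_tool is Ben's name; with_topping and with_companion, and all the probabilities, are made-up stand-ins:

    # Learned (probabilistic) preferences for how "VERB a NOUN with X" resolves.
    # Each entry: (head noun, category of X) -> {relation: probability}
    rules = {
        ("pizza", "food"):   {"with_topping": 0.95, "with_companion": 0.05},
        ("pizza", "person"): {"with_companion": 0.90, "with_topping": 0.10},
        ("steak", "tool"):   {"with_tool": 0.97, "with_companion": 0.03},
    }

    def interpret(head, modifier_category):
        """Pick the most probable relation for an ambiguous 'with' phrase."""
        dist = rules.get((head, modifier_category), {})
        return max(dist, key=dist.get) if dist else "unknown"

    print(interpret("pizza", "food"))     # with_topping   ("pizza with tuna")
    print(interpret("pizza", "person"))   # with_companion ("pizza with Bob")
    print(interpret("steak", "tool"))     # with_tool      ("steak with a fork")

The hard problem Ben points at is filling in millions of such entries automatically, not applying them once they exist.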
Kevin wrote:
http://www.physorg.com/news82190531.html
Rabinovich and his colleague at the Institute for Nonlinear Science at the
University of California, San Diego, Ramon Huerta, along with Valentin
Afraimovich at the Institute for the Investigation of Optical Communication
at the
Back in 1987, during my M.Sc., I invented the term 'dynamic relaxation'
to describe a quasi-neural system whose dynamics were governed by
multiple relaxation targets that are changing all the time. So the idea
of having a multi-lobe attractor, or structured, time-varying
attractors, is not new.
Richard wrote:
What Rabinovich et al appear to do is to buy some mathematical
tractability by applying their idea to a trivially simple neural model.
That means they know a lot of detail about a model that, if used for
anything realistic (like building an intelligence), would *then* beg so
many questions
James, Many of the solutions you describe can use information gathered from statistical models, which are opaque. I need to elaborate on this, because I think opaque models will be fundamental to solving AGI. We need to build models in a way that doesn't require access to the internals. This
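One way to read Matt's "doesn't require access to the internals": fix an interface, and let everything downstream treat the model as a black box. A sketch, with the interface invented for illustration:

    from abc import ABC, abstractmethod

    class OpaqueModel(ABC):
        """A statistical model exposed only through queries, never internals."""
        @abstractmethod
        def prob(self, s: str) -> float:
            """Estimated probability that string s occurs."""

    def more_plausible(model: OpaqueModel, a: str, b: str) -> str:
        # Downstream code compares readings using only the query interface;
        # it works the same whether the model is an n-gram table or a neural net.
        return a if model.prob(a) >= model.prob(b) else b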
The crux of the problem is this: what should
be the fundamental elements used for knowledge representation? Should they
be statements in predicate or term logic, maybe with the addition of
probabilities and confidence? Should they be neural-net-type learned
functional mappings? Or should
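To make the "probabilities and confidence" option concrete, here is what such a fundamental element might look like as a data structure. The field names are invented for the example, not taken from any particular system:

    from dataclasses import dataclass

    @dataclass
    class TruthValue:
        strength: float     # estimated probability the statement holds
        confidence: float   # how much evidence backs that estimate (0..1)

    @dataclass
    class Statement:
        relation: str       # e.g. "Inheritance" in a term logic
        subject: str
        object: str
        tv: TruthValue

    # "Cats are animals", strongly believed on substantial evidence:
    s = Statement("Inheritance", "cat", "animal", TruthValue(0.97, 0.9))
    print(s)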
James Ratcliff [EMAIL PROTECTED] wrote: Many of these examples actually aren't hard, if you use some statistical information and a common-sense knowledge base. The problem is not that these examples are hard, but that there are millions of them. To parse English you have to know that pizzas have