Re: Re: Re: Re: [agi] The crux of the problem

2006-11-11 Thread YKY (Yan King Yin)
On 11/10/06, Ben Goertzel [EMAIL PROTECTED] wrote: The word "agent" is famously polysemous in computer science. In my prior post, I used it in the sense of "software agent", not "autonomous mental agent". These Novamente MindAgents are just software objects with certain functionalities, that get

Re: Re: Re: Re: [agi] The crux of the problem

2006-11-10 Thread Ben Goertzel
YKY says: The Novamente design is modular, in two senses: 1) there is a high-level architecture consisting of a network of functionally specialized lobes -- a lobe for language processing, a lobe for visual perception, a lobe for general cognition etc. 2) each lobe contains a set of
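A hypothetical sketch of sense (1) of that modularity, a network of functionally specialized lobes behind one routing layer, might look like the Python below. The class and method names are invented for illustration only and are not Novamente's actual design or API.

class Lobe:
    """One functionally specialized module (e.g. language, vision)."""
    def __init__(self, name):
        self.name = name

    def process(self, item):
        return f"{self.name} lobe processed {item!r}"

class LobeNetwork:
    """A high-level architecture that routes work to specialized lobes."""
    def __init__(self):
        self.lobes = {name: Lobe(name) for name in ("language", "vision", "cognition")}

    def handle(self, kind, item):
        return self.lobes[kind].process(item)

net = LobeNetwork()
print(net.handle("language", "parse this sentence"))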

Re: [agi] The crux of the problem

2006-11-10 Thread Josh Cowan
BTW I was wrong, I hadn't seen it, it looks cool. On Nov 9, 2006, at 8:01 PM, YKY (Yan King Yin) wrote: On 11/10/06, Ben Goertzel [EMAIL PROTECTED] wrote: > > 2. Ben raised the issue of learning. I think we should divide learning > > into 3 parts: > > (1) linguistic, e.g. grammar

Re: [agi] The crux of the problem

2006-11-10 Thread James Ratcliff
Matt, expand upon the first part as you said there, please. James. Matt Mahoney [EMAIL PROTECTED] wrote: James, Many of the solutions you describe can use information gathered from statistical models, which are opaque. I need to elaborate on this, because I think opaque models will be fundamental to

Re: Re: Re: Re: [agi] The crux of the problem

2006-11-10 Thread James Ratcliff
The use of "agent" here was definitely confusing in AI terms, as the word is much more frequently used for the autonomous agent type. Otherwise the structure is similar in fashion to mine and many others', though the wording is different. Terminology is a major sticking point around here. James. Ben

Re: [agi] The crux of the problem

2006-11-10 Thread Matt Mahoney
James Ratcliff [EMAIL PROTECTED] wrote: Matt, expand upon the first part as you said there, please. I argued earlier that a natural language model has a complexity of about 10^9 bits. To be precise, let p(s) be a function that outputs an estimate of the probability that string s will appear as a
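As a purely illustrative sketch of what a p(s) estimator could look like (not Matt's actual model, which is not given here), a bigram model with add-one smoothing over a toy corpus assigns a probability to any token string:

from collections import Counter

def train_bigram(corpus_tokens):
    """Count unigrams and bigrams from a token list."""
    unigrams = Counter(corpus_tokens)
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    return unigrams, bigrams

def p(s, unigrams, bigrams, vocab_size):
    """Estimate the probability of token sequence s under a bigram
    model with add-one smoothing (illustration only)."""
    prob = 1.0
    for prev, word in zip(s, s[1:]):
        prob *= (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)
    return prob

corpus = "the cat chased the mouse and the cat ate the mouse".split()
uni, bi = train_bigram(corpus)
print(p("the cat chased the mouse".split(), uni, bi, vocab_size=len(uni)))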

Re: [agi] The crux of the problem

2006-11-09 Thread YKY (Yan King Yin)
This is an interesting thread; I'll add some comments: 1. For KR purposes, I think first-order predicate logic is a good choice. Geniform 2.0 can be expressed entirely in FOL. ANNs are simply not advanced enough to represent complex knowledge (e.g. things that are close to NL). I
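As a generic illustration of FOL-style KR (these predicates and the toy inference step are invented; they are not Geniform 2.0's actual syntax), knowledge can be stored as ground atoms and new atoms derived by a simple rule:

# Facts as predicate(arg, arg) ground atoms.
facts = {
    ("has_topping", "pizza_1", "pepperoni"),
    ("isa", "pizza_1", "pizza"),
    ("isa", "pepperoni", "meat"),
}

# Rule: isa(X, pizza) & has_topping(X, T) & isa(T, meat) -> isa(X, meat_pizza)
def infer_meat_pizzas(facts):
    derived = set()
    for (p1, x, cls) in facts:
        if p1 == "isa" and cls == "pizza":
            for (p2, x2, t) in facts:
                if p2 == "has_topping" and x2 == x and ("isa", t, "meat") in facts:
                    derived.add(("isa", x, "meat_pizza"))
    return derived

print(infer_meat_pizzas(facts))  # {('isa', 'pizza_1', 'meat_pizza')}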

Re: Re: [agi] The crux of the problem

2006-11-09 Thread Ben Goertzel
2. Ben raised the issue of learning. I think we should divide learning into 3 parts: (1) linguistic, e.g. grammar; (2) semantic / concepts; (3) generic / factual. This leaves out a lot, for instance procedure learning and metalearning... and also perceptual learning (e.g. object

Re: Re: [agi] The crux of the problem

2006-11-09 Thread YKY (Yan King Yin)
On 11/10/06, Ben Goertzel [EMAIL PROTECTED] wrote: 2. Ben raised the issue of learning. I think we should divide learning into 3 parts: (1) linguistic, e.g. grammar; (2) semantic / concepts; (3) generic / factual. This leaves out a lot, for instance procedure learning and metalearning... and also

Re: Re: Re: [agi] The crux of the problem

2006-11-09 Thread Ben Goertzel
In Novamente, the synthesis of probabilistic logical inference and probabilistic evolutionary learning is to be used to carry out all of the above kinds of learning you mention, and more... Well, then your architecture would be monolithic and not modular. I think it's a good choice to

Re: [agi] The crux of the problem

2006-11-08 Thread James Ratcliff
Yes. All of the above. We have already heard the statement from all around, I believe, and seen results showing that one single algorithm is just not going to work, and it's unreasonable to think it would. So then it's really down to breaking up the parts, defining them precisely, and

Re: [agi] The crux of the problem

2006-11-08 Thread James Ratcliff
Matt: To parse English you have to know that pizzas have pepperoni, that demonstrators advocate violence, that cats chase mice, and so on. There is no neat, tidy algorithm that will generate all of this knowledge. You can't do any better than to just write down all of these facts. The data is not
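A toy sketch of "just writing down the facts" and using them during parsing (the fact base and predicates here are invented, not anyone's proposed system): a handful of hand-written facts is enough to decide whether "with X" names a topping or an instrument, the with_tool relation that comes up later in the thread.

# Hypothetical hand-written commonsense facts.
FACTS = {
    ("is_topping", "pepperoni"),
    ("is_topping", "pineapple"),
    ("is_utensil", "fork"),
}

def attach_with(noun):
    """Decide what a 'with <noun>' phrase attaches to in 'ate a pizza with <noun>'."""
    if ("is_topping", noun) in FACTS:
        return "modifies 'pizza'"
    if ("is_utensil", noun) in FACTS:
        return "modifies 'ate' (with_tool relation)"
    return "unknown; more facts needed"

print(attach_with("pepperoni"))  # modifies 'pizza'
print(attach_with("fork"))       # modifies 'ate' (with_tool relation)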

Re: Re: [agi] The crux of the problem

2006-11-08 Thread Ben Goertzel
Hi, About "But a simple example is: 'ate a pepperoni pizza', 'ate a tuna pizza', 'ate a VEGAN SUPREME pizza', 'ate a Mexican pizza', 'ate a pineapple pizza'" -- I feel this discussion of sentence parsing and interpretation is taking a somewhat misleading direction, by focusing on examples that are in fact very

RE: Re: [agi] The crux of the problem

2006-11-08 Thread Kevin
To: agi@v2.listbox.com Subject: Re: Re: [agi] The crux of the problem. Hi, About "But a simple example is: 'ate a pepperoni pizza', 'ate a tuna pizza', 'ate a VEGAN SUPREME pizza', 'ate a Mexican pizza', 'ate a pineapple pizza'" -- I feel this discussion of sentence parsing and interpretation is taking

Re: Re: [agi] The crux of the problem

2006-11-08 Thread James Ratcliff
My plan has both A with B and D examples, and Ben: So, I feel much of the present discussion on NLP interpretation is bypassing the hard problem, which is enabling an AGI system to learn the millions or billions of commonsense (probabilistic) rules relating to basic relationships like with_tool, which

Re[3]: [agi] The crux of the problem

2006-11-08 Thread Mark Waser
[agi] The crux of the problem. Hi, About "But a simple example is: 'ate a pepperoni pizza', 'ate a tuna pizza', 'ate a VEGAN SUPREME pizza', 'ate a Mexican pizza', 'ate a pineapple pizza'" -- I feel this discussion of sentence parsing and interpretation is taking a somewhat misleading direction, by focusing

Re: [agi] The crux of the problem

2006-11-08 Thread Richard Loosemore
Kevin wrote: http://www.physorg.com/news82190531.html Rabinovich and his colleague at the Institute for Nonlinear Science at the University of California, San Diego, Ramon Huerta, along with Valentin Afraimovich at the Institute for the Investigation of Optical Communication at the

Re: Re: [agi] The crux of the problem

2006-11-08 Thread Ben Goertzel
About http://www.physorg.com/news82190531.html Rabinovich and his colleague at the Institute for Nonlinear Science at the University of California, San Diego, Ramon Huerta, along with Valentin Afraimovich at the Institute for the Investigation of Optical Communication at the University of

Re: [agi] The crux of the problem

2006-11-08 Thread Richard Loosemore
Back in 1987, during my M.Sc., I invented the term 'dynamic relaxation' to describe a quasi-neural system whose dynamics were governed by multiple relaxation targets that are changing all the time. So the idea of having a multi-lobe attractor, or structured, time-varying attractors, is not
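The preview gives no detail of the 1987 system, so the following is purely an illustrative sketch of the general idea of dynamic relaxation: a state vector repeatedly relaxes toward targets that keep changing, so it tracks rather than settles.

import math

def relax(state, targets, rate=0.1, steps=200):
    """Nudge each state component toward its (time-varying) target."""
    for t in range(steps):
        current_targets = [f(t) for f in targets]
        state = [x + rate * (goal - x) for x, goal in zip(state, current_targets)]
    return state

# Two targets: one fixed, one oscillating. The state never fully settles on
# the oscillating one, which is the point of a time-varying attractor.
targets = [lambda t: 1.0, lambda t: math.sin(t / 10.0)]
print(relax([0.0, 0.0], targets))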

Re: Re: [agi] The crux of the problem

2006-11-08 Thread Ben Goertzel
Richard wrote: What Rabinovich et al appear to do is to buy some mathematical tractability by applying their idea to a trivially simple neural model. That means they know a lot of detail about a model that, if used for anything realistic (like building an intelligence) would *then* beg so many

Re: [agi] The crux of the problem

2006-11-08 Thread Matt Mahoney
James, Many of the solutions you describe can use information gathered from statistical models, which are opaque. I need to elaborate on this, because I think opaque models will be fundamental to solving AGI. We need to build models in a way that doesn't require access to the internals. This
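One way to read "doesn't require access to the internals" is that the rest of the system talks to the statistical model only through a query interface. The interface and classes below are hypothetical, just a sketch of that black-box style:

from typing import Protocol

class OpaqueLanguageModel(Protocol):
    def score(self, sentence: str) -> float:
        """Return an estimated (log-)probability of the sentence."""
        ...

def pick_reading(model: OpaqueLanguageModel, readings: list) -> str:
    """Choose among candidate interpretations using only black-box scores;
    nothing here ever inspects how the probabilities are produced."""
    return max(readings, key=model.score)

class WordCountModel:
    """A trivial stand-in model: shorter sentences get higher 'probability'."""
    def score(self, sentence: str) -> float:
        return 1.0 / (1 + len(sentence.split()))

print(pick_reading(WordCountModel(),
                   ["the cat chased the mouse",
                    "the cat the mouse chased by was"]))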

[agi] The crux of the problem

2006-11-07 Thread John Scanlon
The crux of the problem is this: what should be the fundamental elements used for knowledge representation? Should they be statements in predicate or term logic, maybe with the addition of probabilities and confidence? Should they be neural-net-type learned functional mappings? Or should
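A minimal sketch of the first option John lists, logic statements carrying probability and confidence annotations (the field names are illustrative and not any particular system's syntax):

from dataclasses import dataclass

@dataclass
class Statement:
    predicate: str
    args: tuple
    probability: float   # how likely the statement is to hold
    confidence: float    # how much evidence backs that probability

kb = [
    Statement("chases", ("cat", "mouse"), probability=0.9, confidence=0.8),
    Statement("has_topping", ("pizza", "pepperoni"), probability=0.7, confidence=0.5),
]

# The alternative John mentions, a learned functional mapping, would instead
# be something like f(situation) -> response with no explicit statements.
for s in kb:
    print(s)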

Re: [agi] The crux of the problem

2006-11-07 Thread Matt Mahoney
James Ratcliff [EMAIL PROTECTED] wrote: Many of these examples actually aren't hard, if you use some statistical information and a common-sense knowledge base. The problem is not that these examples are hard, but that there are millions of them. To parse English you have to know that pizzas have