Re: [agi] Understanding Natural Language

2006-11-29 Thread J. Storrs Hall, PhD.
On Tuesday 28 November 2006 17:50, Philip Goetz wrote: I see that a raster is a vector. I see that you can have rasters at different resolutions. I don't see what you mean by "map the regions that represent the same face between higher and lower-dimensional spaces," or what you are taking the
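
One concrete reading of that mapping -- not necessarily the one Josh intends -- is dimensionality reduction: treat each raster as a point in a high-dimensional vector space and project the set of face images onto a low-dimensional subspace with PCA. A minimal numpy sketch (image size and sample count are illustrative assumptions):

    import numpy as np

    # Each row is one 128x128 raster flattened to a 16384-dimensional vector.
    # The shapes and sample count here are illustrative assumptions.
    faces = np.random.rand(200, 128 * 128)

    # Center the data and take the top-k principal directions ("face space").
    mean = faces.mean(axis=0)
    centered = faces - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:20]

    low = centered @ basis.T     # map high-dim rasters into the 20-dim space
    back = low @ basis + mean    # approximate map back into raster space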

Re: [agi] Understanding Natural Language

2006-11-29 Thread Philip Goetz
On 11/28/06, Matt Mahoney [EMAIL PROTECTED] wrote: First-order logic (FOL) is good for expressing simple facts like "all birds have wings" or "no bird has hair," but not for statements like "most birds can fly." To do that you have to at least extend it with fuzzy logic (probability and
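
One way to make the "extend FOL with probability" idea concrete: attach a number to each quantified rule, so that 1.0 and 0.0 recover the facts FOL can express, while 0.9 captures "most," which plain FOL cannot. A toy sketch (the rule base and names are illustrative, not a real fuzzy-logic engine):

    # A toy probabilistic rule base: each rule is (category, predicate) -> probability.
    rules = {
        ("bird", "has_wings"): 1.0,   # all birds have wings
        ("bird", "has_hair"):  0.0,   # no bird has hair
        ("bird", "can_fly"):   0.9,   # most birds can fly
    }

    def p(category, predicate):
        """Probability that a member of `category` satisfies `predicate`."""
        return rules.get((category, predicate), 0.5)  # 0.5 = no knowledge

    print(p("bird", "can_fly"))   # 0.9 -- the "most" that plain FOL cannot state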

Re: [agi] Understanding Natural Language

2006-11-29 Thread Philip Goetz
Oops: looking back at my earlier post, I said that English sentences translate neatly into predicate logic statements. I should have left out "logic." I like using predicates to organize sentences. I made that post because Josh was pointing out some of the problems with logic, but then making
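
The organizing scheme being described -- predicates without full predicate-logic inference -- can be shown in a few lines. A minimal sketch (the sentences and helper function are illustrative assumptions):

    # Sentences represented as predicate tuples: organization, not deduction.
    sentences = [
        ("gave", "john", "mary", "book"),   # "John gave Mary a book."
        ("can_fly", "tweety"),              # "Tweety can fly."
    ]

    def facts_about(pred, kb):
        """Retrieve every stored sentence with the given predicate symbol."""
        return [f for f in kb if f[0] == pred]

    print(facts_about("gave", sentences))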

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Philip Goetz
On 11/14/06, Mark Waser [EMAIL PROTECTED] wrote: Matt Mahoney wrote: "Models that are simple enough to debug are too simple to scale. The contents of a knowledge base for AGI will be beyond our ability to comprehend." Given sufficient time, anything should be able to be understood and

Re: [agi] Understanding Natural Language

2006-11-29 Thread J. Storrs Hall, PhD.
On Wednesday 29 November 2006 13:56, Matt Mahoney wrote: "How is a raster scan (16K vector) of an image useful?" The difference between two images of faces is the RMS of the pixel-by-pixel differences between the images. Given an image of Tom, how do you compute the set of all
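
That RMS distance is simple to write down. A minimal sketch, assuming equal-size grayscale rasters (the 128x128 shape is an illustrative assumption):

    import numpy as np

    def rms_difference(img_a, img_b):
        """RMS of the pixel-wise difference between two equal-size rasters,
        the distance measure described in the thread."""
        diff = img_a.astype(float) - img_b.astype(float)
        return np.sqrt(np.mean(diff ** 2))

    tom1 = np.random.rand(128, 128)
    tom2 = tom1 + 0.05 * np.random.randn(128, 128)   # slightly different image
    print(rms_difference(tom1, tom2))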

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Mark Waser
Matt was not arguing over whether what an AI does should be called "understanding" or "statistics." Matt was discussing the right way to design an AI. And Matt made a number of statements that I took issue with -- the current one being that an AI's reasoning wouldn't be

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Mark Waser
"AI is about solving problems that you can't solve yourself. You can program a computer to beat you at chess. You understand the search algorithm, but can't execute it in your head. If you could, then you could beat the computer, and your program would have failed." I disagree. AI is
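
The search in question is ordinary minimax: you can understand every line of it and still be unable to run it in your head to any useful depth. A minimal sketch, assuming an abstract game interface (moves, apply, evaluate, over are placeholder names, not a real chess engine):

    def minimax(state, depth, maximizing, game):
        """Plain minimax search. `game` is an assumed interface exposing
        moves(state), apply(state, move), evaluate(state), and over(state).
        Assumes any non-terminal state has at least one legal move."""
        if depth == 0 or game.over(state):
            return game.evaluate(state)
        values = (minimax(game.apply(state, m), depth - 1, not maximizing, game)
                  for m in game.moves(state))
        return max(values) if maximizing else min(values)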

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Philip Goetz
On 11/29/06, Mark Waser [EMAIL PROTECTED] wrote: I defy you to show me *any* black-box method that has predictive power outside the bounds of its training set. All that the black-box methods are doing is curve-fitting. If you give them enough variables they can brute-force solutions through
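
The curve-fitting point can be demonstrated with the simplest black box, a polynomial fit: it tracks the target inside the training interval and falls apart outside it. A minimal sketch (the target function and polynomial degree are arbitrary illustrative choices):

    import numpy as np

    # Fit sin(x) on [0, 6] with a high-degree polynomial (a stand-in for
    # any black-box curve fitter), then test outside the training interval.
    x_train = np.linspace(0, 6, 50)
    coeffs = np.polyfit(x_train, np.sin(x_train), deg=9)

    inside = np.polyval(coeffs, 3.0)    # within the training range: close to sin(3)
    outside = np.polyval(coeffs, 12.0)  # far outside: typically wildly wrong
    print(inside - np.sin(3.0), outside - np.sin(12.0))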

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Mark Waser
"If you look into the literature of the past 20 years, you will easily find several thousand examples." I'm sorry, but either you didn't understand my point or you don't know what you are talking about (and the constant terseness of your replies gives me absolutely no traction on assisting

Re: [agi] Understanding Natural Language

2006-11-29 Thread J. Storrs Hall, PhD.
On Wednesday 29 November 2006 16:04, Philip Goetz wrote: On 11/29/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: There will be many occurrences of the smaller subregions, corresponding to all different sizes and positions of Tom's face in the raster. In other words, the "Tom's face" region
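
The "all different sizes and positions" argument amounts to a sliding-window scan over the raster. A single-scale sketch using the RMS distance from the parallel subthread (a multi-scale version would also rescale the template; all shapes here are illustrative):

    import numpy as np

    def scan_for_template(image, template, stride=4):
        """Slide `template` over `image` at one scale and return the RMS
        distance at each position."""
        th, tw = template.shape
        ih, iw = image.shape
        scores = {}
        for y in range(0, ih - th + 1, stride):
            for x in range(0, iw - tw + 1, stride):
                patch = image[y:y + th, x:x + tw]
                scores[(y, x)] = np.sqrt(np.mean((patch - template) ** 2))
        return scores

    img = np.random.rand(64, 64)
    tpl = img[10:26, 20:36].copy()
    scores = scan_for_template(img, tpl)
    print(min(scores, key=scores.get))   # best-matching position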

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Philip Goetz
On 11/29/06, Mark Waser [EMAIL PROTECTED] wrote: "If you look into the literature of the past 20 years, you will easily find several thousand examples." I'm sorry, but either you didn't understand my point or you don't know what you are talking about (and the constant terseness of your

Re: [agi] Understanding Natural Language

2006-11-29 Thread Philip Goetz
On 11/29/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: On Wednesday 29 November 2006 16:04, Philip Goetz wrote: On 11/29/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: There will be many occurrences of the smaller subregions, corresponding to all different sizes and positions of Tom's

Re: [agi] Funky Intel hardware, a few years off...

2006-11-29 Thread Philip Goetz
On 10/31/06, Ben Goertzel [EMAIL PROTECTED] wrote: This looks exciting... http://www.pcper.com/article.php?aid=302&type=expert&pid=1 A system Intel is envisioning, with 100 tightly connected cores on a chip, each with 32MB of local SRAM ... If you want to go in that direction, you can start
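
A chip like that, where each core owns its local SRAM, would presumably be programmed in a message-passing style rather than with shared memory. A toy sketch using Python processes as stand-ins for cores (purely illustrative; no real API for the chip is implied):

    from multiprocessing import Process, Queue

    def core(worker_id, inbox, outbox):
        """Each 'core' owns its local state (standing in for local SRAM)
        and communicates only by messages, as a tiled chip would."""
        local_state = 0
        for item in iter(inbox.get, None):   # None is the shutdown signal
            local_state += item
        outbox.put((worker_id, local_state))

    if __name__ == "__main__":
        inbox, outbox = Queue(), Queue()
        workers = [Process(target=core, args=(i, inbox, outbox)) for i in range(4)]
        for w in workers:
            w.start()
        for n in range(100):
            inbox.put(n)
        for _ in workers:
            inbox.put(None)
        print(sorted(outbox.get() for _ in workers))
        for w in workers:
            w.join()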

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-29 Thread Philip Goetz
On 11/19/06, Richard Loosemore [EMAIL PROTECTED] wrote: The goal-stack AI might very well turn out simply not to be a workable design at all! I really do mean that: it won't become intelligent enough to be a threat. Specifically, we may find that the kind of system that drives itself using
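
For reference, the "goal-stack AI" under discussion is roughly a driver loop that pops a goal and either executes it or pushes its subgoals. A toy sketch (all names are illustrative):

    def run_goal_stack(top_goal, decompose, execute, is_primitive):
        """Toy goal-stack driver of the kind being criticized: pop a goal,
        act on it if primitive, otherwise push its subgoals."""
        stack = [top_goal]
        while stack:
            goal = stack.pop()
            if is_primitive(goal):
                execute(goal)
            else:
                # Push subgoals in reverse so they run in their stated order.
                stack.extend(reversed(decompose(goal)))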

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Philip Goetz
On 11/17/06, Richard Loosemore [EMAIL PROTECTED] wrote: I was saying that *because* (for independent reasons) these people's usage of terms like "intelligence" is so disconnected from commonsense usage (they idealize so extremely that the sense of the word no longer bears a reasonable connection

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Matt Mahoney
So what is your definition of "understanding"? -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Philip Goetz [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Wednesday, November 29, 2006 5:36:39 PM Subject: Re: [agi] A question on the symbol-system hypothesis On 11/19/06, Matt

Re: [agi] Understanding Natural Language

2006-11-29 Thread Russell Wallace
On 11/28/06, Philip Goetz [EMAIL PROTECTED] wrote: I see evidence of dimensionality reduction by humans in the fact that adopting a viewpoint has such a strong effect on the kind of information a person is able to absorb. In conversations about politics or religion, I often find ideas that to

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Ben Goertzel
On 11/29/06, Philip Goetz [EMAIL PROTECTED] wrote: On 11/29/06, Mark Waser [EMAIL PROTECTED] wrote: I defy you to show me *any* black-box method that has predictive power outside the bounds of its training set. All that the black-box methods are doing is curve-fitting. If you give them

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-29 Thread Ben Goertzel
Richard, This is certainly true, and is why in Novamente we use a goal stack only as one aspect of cognitive control... ben On 11/29/06, Philip Goetz [EMAIL PROTECTED] wrote: On 11/19/06, Richard Loosemore [EMAIL PROTECTED] wrote: The goal-stack AI might very well turn out simply not to be

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Richard Loosemore
Philip Goetz wrote: On 11/17/06, Richard Loosemore [EMAIL PROTECTED] wrote: I was saying that *because* (for independent reasons) these people's usage of terms like "intelligence" is so disconnected from commonsense usage (they idealize so extremely that the sense of the word no longer bears a

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-29 Thread David Hart
On 11/30/06, Ben Goertzel [EMAIL PROTECTED] wrote: Richard, This is certainly true, and is why in Novamente we use a goal stack only as one aspect of cognitive control... Ben, Could you elaborate for the list on some of the nuances between [explicit] cognitive control and [implicit]