On Tuesday 28 November 2006 17:50, Philip Goetz wrote:
I see that a raster is a vector. I see that you can have rasters at
different resolutions. I don't see what you mean by mapping the regions
that represent the same face between higher- and lower-dimensional
spaces, or what you are taking the
On 11/28/06, Matt Mahoney [EMAIL PROTECTED] wrote:
First order logic (FOL) is good for expressing simple facts like "all birds
have wings" or "no bird has hair", but not for statements like "most birds
can fly." To do that you have to at least extend it with fuzzy logic (probability and
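(As a minimal sketch of the kind of extension meant here -- predicates
carrying probabilities rather than plain truth values -- with every
predicate name and number invented purely for illustration:)

    # Hypothetical sketch: predicates carry probabilities, so
    # "most birds can fly" becomes P(flies | bird) = 0.9.
    # All predicates and numbers are illustrative assumptions.
    rules = {
        ("bird", "has_wings"): 1.0,   # all birds have wings
        ("bird", "has_hair"):  0.0,   # no bird has hair
        ("bird", "flies"):     0.9,   # most birds can fly
    }

    def probability(kind, predicate):
        """P(predicate | kind), or None if no rule is known."""
        return rules.get((kind, predicate))

    print(probability("bird", "flies"))  # 0.9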
Oops - looking back at my earlier post, I said that English sentences
translate neatly into predicate logic statements. I should have left
out "logic." I like using predicates to organize sentences. I made
that post because Josh was pointing out some of the problems with
logic, but then making
On 11/14/06, Mark Waser [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Models that are simple enough to debug are too simple to scale.
The contents of a knowledge base for AGI will be beyond our ability to
comprehend.
Given sufficient time, it should be possible for anything to be understood and
On Wednesday 29 November 2006 13:56, Matt Mahoney wrote:
How is a raster scan (16K vector) of an image useful? The difference
between two images of faces is the RMS of the image obtained by
subtracting them pixel by pixel. Given an image of Tom, how do you compute
the set of all
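(Reading that distance literally, a minimal sketch in numpy; the 128x128
shape is an assumption chosen to match the 16K figure, and the random
arrays stand in for real face rasters:)

    import numpy as np

    # RMS distance between two images treated as flat vectors.
    # 128*128 = 16384, matching the "16K vector" above.
    def rms_difference(a, b):
        diff = a.astype(float) - b.astype(float)
        return np.sqrt(np.mean(diff ** 2))

    img1 = np.random.randint(0, 256, (128, 128))  # stand-in rasters
    img2 = np.random.randint(0, 256, (128, 128))
    print(rms_difference(img1, img2))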
Matt was not arguing over whether what an AI does should be called
understanding or statistics. Matt was discussing what the right
way to design an AI is.
And Matt made a number of statements that I took issue with -- the current
one being that an AI's reasoning wouldn't be
AI is about solving problems that you can't solve yourself. You can
program a computer to beat you at chess. You understand the search
algorithm, but can't execute it in your head. If you could, then you
could beat the computer, and your program would have failed.
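(The chess example in one illustrative sketch: plain minimax fits on a
screen yet is hopeless to run mentally at real depths. The moves,
apply_move, and evaluate functions are assumed, game-specific stubs, not
part of any actual program discussed here:)

    # Hypothetical sketch: a search you can state in ten lines but
    # cannot execute in your head at real depths. `moves`, `apply_move`,
    # and `evaluate` are assumed game-specific functions.
    def minimax(state, depth, maximizing, moves, apply_move, evaluate):
        legal = moves(state)
        if depth == 0 or not legal:
            return evaluate(state)
        scores = [minimax(apply_move(state, m), depth - 1,
                          not maximizing, moves, apply_move, evaluate)
                  for m in legal]
        return max(scores) if maximizing else min(scores)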
I disagree. AI is
On 11/29/06, Mark Waser [EMAIL PROTECTED] wrote:
I defy you to show me *any* black-box method that has predictive power
outside the bounds of its training set. All that the black-box methods are
doing is curve-fitting. If you give them enough variables they can brute
force solutions through
If you look into the literature of the past 20 years, you will easily
find several thousand examples.
I'm sorry but either you didn't understand my point or you don't know
what you are talking about (and the constant terseness of your replies gives
me absolutely no traction on assisting
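(Mark's curve-fitting point in miniature -- a sketch only, with the sine
target and polynomial degree as arbitrary assumptions:)

    import numpy as np

    # Fit a polynomial to sin(x) on [0, 3], then query it inside and
    # outside that range. Target and degree are arbitrary choices.
    x_train = np.linspace(0, 3, 50)
    coeffs = np.polyfit(x_train, np.sin(x_train), deg=9)

    for x in (1.5, 9.0):                      # inside, then outside
        print(x, np.polyval(coeffs, x), np.sin(x))
    # Inside the training range the fit is excellent; at x = 9 the
    # polynomial diverges wildly from the true function.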
On Wednesday 29 November 2006 16:04, Philip Goetz wrote:
On 11/29/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
There will be many occurrences of the smaller subregions, corresponding to
all different sizes and positions of Tom's face in the raster. In other
words, the "Tom's face" region
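(To make "all different sizes and positions" concrete, a sketch; the
raster size, window sizes, and stride are all assumptions:)

    # Enumerate square windows of a raster at several sizes and
    # positions -- the many occurrences a "Tom's face" region would
    # have to cover. All dimensions are illustrative assumptions.
    def subregions(width=128, height=128, sizes=(16, 32, 64), stride=8):
        for size in sizes:
            for top in range(0, height - size + 1, stride):
                for left in range(0, width - size + 1, stride):
                    yield (top, left, size)

    print(sum(1 for _ in subregions()))  # 475 windows, even this coarsely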
On 11/29/06, Mark Waser [EMAIL PROTECTED] wrote:
If you look into the literature of the past 20 years, you will easily
find several thousand examples.
I'm sorry but either you didn't understand my point or you don't know
what you are talking about (and the constant terseness of your
On 11/29/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
On Wednesday 29 November 2006 16:04, Philip Goetz wrote:
On 11/29/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
There will be many occurrences of the smaller subregions, corresponding to
all different sizes and positions of Tom's
On 10/31/06, Ben Goertzel [EMAIL PROTECTED] wrote:
This looks exciting...
http://www.pcper.com/article.php?aid=302&type=expert&pid=1
A system Intel is envisioning, with 100 tightly connected cores on a
chip, each with 32MB of local SRAM ...
If you want to go in that direction, you can start
On 11/19/06, Richard Loosemore [EMAIL PROTECTED] wrote:
The goal-stack AI might very well turn out simply not to be a workable
design at all! I really do mean that: it won't become intelligent
enough to be a threat. Specifically, we may find that the kind of
system that drives itself using
On 11/17/06, Richard Loosemore [EMAIL PROTECTED] wrote:
I was saying that *because* (for independent reasons) these people's
usage of terms like intelligence is so disconnected from commonsense
usage (they idealize so extremely that the sense of the word no longer
bears a reasonable connection
So what is your definition of understanding?
-- Matt Mahoney, [EMAIL PROTECTED]
----- Original Message -----
From: Philip Goetz [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 29, 2006 5:36:39 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
On 11/19/06, Matt
On 11/28/06, Philip Goetz [EMAIL PROTECTED] wrote:
I see evidence of dimensionality reduction by humans in the fact that
adopting a viewpoint has such a strong effect on the kind of
information a person is able to absorb. In conversations about
politics or religion, I often find ideas that to
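(For concreteness, dimensionality reduction in the sense this thread has
been using it -- projecting high-dimensional vectors onto a few principal
components. The data here is random and the shapes are assumptions; it
only illustrates the operation, not the psychological claim:)

    import numpy as np

    # Project 16K-dimensional "rasters" onto their top 20 principal
    # components via SVD. Random data; shapes are assumptions.
    data = np.random.rand(100, 16384)
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    reduced = centered @ vt[:20].T
    print(reduced.shape)  # (100, 20)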
On 11/29/06, Philip Goetz [EMAIL PROTECTED] wrote:
On 11/29/06, Mark Waser [EMAIL PROTECTED] wrote:
I defy you to show me *any* black-box method that has predictive power
outside the bounds of its training set. All that the black-box methods are
doing is curve-fitting. If you give them
Richard,
This is certainly true, and is why in Novamente we use a goal stack
only as one aspect of cognitive control...
ben
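(For anyone following along, a bare-bones sketch of the goal-stack
control structure under discussion. This is purely illustrative and not
taken from Novamente; the achieved and subgoals_of functions are assumed
stubs:)

    # Hypothetical sketch of a pure goal-stack controller, the design
    # Richard is questioning. Not Novamente code; illustrative only.
    class GoalStackAgent:
        def __init__(self, top_goal):
            self.goals = [top_goal]         # top of stack = current focus

        def step(self, achieved, subgoals_of):
            """Pop the top goal if achieved; otherwise push subgoals."""
            if not self.goals:
                return None                 # nothing left to pursue
            top = self.goals[-1]
            if achieved(top):
                return self.goals.pop()
            self.goals.extend(subgoals_of(top))
            return top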
On 11/29/06, Philip Goetz [EMAIL PROTECTED] wrote:
On 11/19/06, Richard Loosemore [EMAIL PROTECTED] wrote:
The goal-stack AI might very well turn out simply not to be
Philip Goetz wrote:
On 11/17/06, Richard Loosemore [EMAIL PROTECTED] wrote:
I was saying that *because* (for independent reasons) these people's
usage of terms like intelligence is so disconnected from commonsense
usage (they idealize so extremely that the sense of the word no longer
bears a
On 11/30/06, Ben Goertzel [EMAIL PROTECTED] wrote:
Richard,
This is certainly true, and is why in Novamente we use a goal stack
only as one aspect of cognitive control...
Ben,
Could you elaborate for the list some of the nuances between [explicit]
cognitive control and [implicit]