Abram,
On 7/23/08, Abram Demski [EMAIL PROTECTED] wrote:
The Wikipedia article on PCA cites papers that show K-means clustering
and PCA to be in a certain sense equivalent-- from what I read so far,
the idea is that clustering is simply extracting discrete versions of
the continuous variables that PCA extracts.
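The papers behind that Wikipedia claim (e.g. Ding and He, "K-means Clustering via Principal Component Analysis") show, roughly, that the relaxed solution of the k-means cluster-indicator problem is spanned by the leading principal components. A minimal numpy sketch of that flavor on toy data; the blob positions, dimensions, and thresholding PC1 at zero are my own illustrative choices, not anything stated in the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian blobs in 5 dimensions.
X = np.vstack([
    rng.normal(loc=-3.0, scale=1.0, size=(100, 5)),
    rng.normal(loc=+3.0, scale=1.0, size=(100, 5)),
])

# PCA: project the centered data onto the first principal component.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]                      # continuous scores along PC1
pca_labels = (pc1 > 0).astype(int)    # a "discretized" version of the PCA variable

# Plain 2-means (Lloyd's algorithm) on the same data.
centers = X[rng.choice(len(X), size=2, replace=False)]
for _ in range(50):
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centers = np.array([
        X[labels == k].mean(axis=0) if np.any(labels == k) else centers[k]
        for k in range(2)
    ])

# Up to a label swap, thresholding PC1 reproduces the k-means clustering.
agreement = max(np.mean(labels == pca_labels), np.mean(labels != pca_labels))
```

On separated blobs like these the two labelings coincide; the formal result concerns the relaxed indicator vectors, so this is only the cartoon version.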
Ben,
On 7/22/08, Benjamin Johnston [EMAIL PROTECTED] wrote:
/Restating (not copying) my original posting, the challenge of effective
unstructured learning is to utilize every clue and NOT just go with static
clusters, etc. This includes temporal as well as positional clues,
information content, etc. PCA does some but certainly not all of this, but
Replying in reverse order.
Story: I once viewed being able to invert the Airy Disk transform (what
makes a blur from a point of light in a microscope or telescope) as an
EXTREMELY valuable way to greatly increase their power, so I set
about finding a transform function. Then, I
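Inverting a point-spread blur like the Airy disk is a deconvolution problem: naive inversion divides by the transfer function and explodes wherever it is near zero. A 1-D numpy sketch with Wiener-style regularization; the Gaussian stand-in for the Airy kernel and the constant eps are arbitrary choices of mine:

```python
import numpy as np

# Toy 1-D scene: two "point sources" to be blurred and then recovered.
n = 256
x = np.zeros(n)
x[100] = 1.0
x[140] = 0.7

# Gaussian stand-in for the Airy point-spread function.
k = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 4.0)
k /= k.sum()
K = np.fft.fft(np.fft.ifftshift(k))   # transfer function, kernel centered at index 0

blurred = np.real(np.fft.ifft(np.fft.fft(x) * K))

# Naive inversion divides by K and blows up wherever |K| is tiny;
# Wiener-style regularization damps those frequencies instead.
eps = 1e-3                            # arbitrary regularization constant
W = np.conj(K) / (np.abs(K) ** 2 + eps)
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) * W))
# The brighter source reappears as a sharp peak near index 100.
```

The regularizer is the standard dodge: frequencies the blur destroyed cannot be recovered by any inverse transform, only damped, which is why a clean closed-form inverse of the Airy transform is elusive.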
This is getting long in embedded-reply format, but oh well
On Wed, Jul 23, 2008 at 12:24 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
Abram,
On 7/23/08, Abram Demski [EMAIL PROTECTED] wrote:
Replying in reverse order
Story: I once viewed being able to invert the Airy Disk
The Wikipedia article on PCA cites papers that show K-means clustering
and PCA to be in a certain sense equivalent-- from what I read so far,
the idea is that clustering is simply extracting discrete versions of
the continuous variables that PCA extracts.
Abram,
On 7/22/08, Abram Demski [EMAIL PROTECTED] wrote:
Problem Statement: What are the optimal functions, derived from
real-world observations of past events, the timings of their comings
and goings, and perhaps their physical association, to extract each
successive parameter containing
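Classical PCA answers a linear special case of this problem statement: each successive parameter is the direction capturing the most variance left over after the previous ones are removed. A numpy sketch of that successive extraction via power iteration with deflation; the toy data and iteration counts are my own choices:

```python
import numpy as np

def successive_components(X, k, iters=200):
    """Extract k directions one at a time: power iteration finds the
    direction of maximum remaining variance, then deflation removes it."""
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / (len(Xc) - 1)     # sample covariance
    comps = []
    v = np.random.default_rng(0).normal(size=C.shape[1])
    for _ in range(k):
        w = v.copy()
        for _ in range(iters):
            w = C @ w
            w /= np.linalg.norm(w)
        comps.append(w)
        C = C - (w @ C @ w) * np.outer(w, w)  # deflate: remove captured variance
    return np.array(comps)

# Toy data stretched 3x along the first axis, so PC1 should be ~(1, 0).
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2)) @ np.diag([3.0, 1.0])
V = successive_components(X, 2)
```

The generalized version asked about here would replace "variance" with some richer score (timing, association, information content), but the extract-then-deflate loop is the common skeleton.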
Steve Richfield wrote:
Richard,
Good - you hit this one on its head! Continuing...
On 7/22/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Steve Richfield wrote:
THIS is a big question. Remembering that absolutely ANY function
can be
Abram,
On 7/22/08, Abram Demski [EMAIL PROTECTED] wrote:
Judging from the paper you posted and from Wikipedia articles, the current
meaning of PCA is very different from your generalized version. I
doubt the current algorithms would even metaphorically apply...
Just more input points that are
On Tue, Jul 22, 2008 at 4:29 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
Abram,
On 7/22/08, Abram Demski [EMAIL PROTECTED] wrote:
Judging from the paper you posted and from Wikipedia articles, the current
meaning of PCA is very different from your generalized version. I
doubt the current
Abram,
All good points. Detailed comments follow. First I must take a LONG drag,
because I must now blow a lot of smoke...
On 7/22/08, Abram Demski [EMAIL PROTECTED] wrote:
Derek,
On 7/22/08, Derek Zahn [EMAIL PROTECTED] wrote:
Remembering that absolutely ANY function can be performed by
passing the inputs through a suitable non-linearity, adding them
up, and running the results through another suitable non-linearity,
it isn't clear what the limitations
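That description -- inputs through a non-linearity, a weighted sum, then another non-linearity -- is a one-hidden-layer network, and the universal approximation theorems are the usual formal version of the claim. A numpy sketch with fixed random tanh features and a least-squares readout; the width, scales, and target function are my own illustrative choices, and the readout here is linear rather than a second non-linearity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target to approximate: sin(x) on [-pi, pi].
x = np.linspace(-np.pi, np.pi, 200)[:, None]
y = np.sin(x).ravel()

# Fixed random non-linearity: inputs -> tanh(W x + b) hidden features.
W = rng.normal(scale=2.0, size=(1, 50))   # width and scale are arbitrary choices
b = rng.normal(scale=2.0, size=50)
H = np.tanh(x @ W + b)

# "Adding them up": fit only the output weights, by least squares.
w, *_ = np.linalg.lstsq(H, y, rcond=None)
err = np.max(np.abs(H @ w - y))           # worst-case approximation error
```

The theorems say such a sum can get err arbitrarily small for any continuous target as the width grows; what they do NOT say is how to learn the weights efficiently, which is where the practical limitations live.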
/Restating (not copying) my original posting, the challenge of
effective unstructured learning is to utilize every clue and NOT just
go with static clusters, etc. This includes temporal as well as
positional clues, information content, etc. PCA does some but
certainly not all of this, but
Steve,
Principal component analysis is not new; it has a long history, and so
far it is a very long way from being the basis for a complete AGI, let
alone a theory of everything in computer science.
Is there any concrete reason to believe that this particular PCA paper
is doing something
On Mon, Jul 21, 2008 at 10:32 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Steve,
Principal component analysis is not new; it has a long history, and so far
it is a very long way from being the basis for a complete AGI, let alone a
theory of everything in computer science.
Is there any