Richard,

You are confusing what PCA is now with what it might become. I am more
interested in the dream than in the present reality. Detailed comments
follow...

On 7/21/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> Steve Richfield wrote:
>
>>  Maybe not "complete" AGI, but a good chunk of one.
>>
>
> Mercy me!  It is not even a gleam in the eye of something that would be
> half adequate.


Who knows what the building blocks of the first successful AGI will be?
Remember that YOU are made of wet neurons, and who knows, maybe they work by
some as-yet-unidentified mathematics that will be uncovered in the quest
for a better PCA.

  Do you have any favorites?
>>
>
> No.  The ones I have seen are not worth a second look.


I had the same opinion.

 I have attached an earlier 2006 paper with *_pictures_* of the learned
>> transfer functions, which look a LOT like what is seen in a cat's and a
>> monkey's visual processing.
>>
>
> ... which is so low-level that it counts as peripheral wiring.


Agreed, but there is little difference between GOOD compression and
understanding, so if these guys are truly able to (eventually) perform good
compression, then maybe we are on the way to understanding.
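To make the compression/understanding link concrete, here is a minimal
sketch (my own illustration, not anything from their paper) of PCA as lossy
compression: keep only the top-k principal directions and measure what is
lost on reconstruction.

```python
import numpy as np

# Hypothetical illustration: PCA as lossy compression. Keeping only the
# top-k components stores n_features*k + n_samples*k numbers instead of
# n_samples*n_features, at the cost of some reconstruction error.

rng = np.random.default_rng(0)
# Synthetic data that genuinely lives near a 3-D subspace of R^20.
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 20))
X = latent @ mixing + 0.01 * rng.normal(size=(500, 20))

mean = X.mean(axis=0)
Xc = X - mean
# SVD gives the principal directions (rows of Vt), sorted by variance.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 3
codes = Xc @ Vt[:k].T          # compressed representation, 500 x 3
X_hat = codes @ Vt[:k] + mean  # reconstruction from the code alone

err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error with k={k}: {err:.4f}")
```

If the data really is low-dimensional, the error stays tiny even at heavy
compression ratios - which is the whole bet.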

 Note that in the last section, where they consider multi-layer applications,
>> they apparently suggest using *_only one_* PCA layer!
>>
>
> Of course they do:  that is what all these magic bullet people say. They
> can't figure out how to do things in more than one layer, and they do not
> really understand that it is *necessary* to do things in more than one
> layer, so guess what? They suggest that we do not *need* more than one layer.
>
> Sigh.  Programmer Error.


I noted this comment because it didn't ring true for me either. However, my
take on this is that a real/future/good PCA will work for many layers, and
not just the first.
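As for what a multi-layer PCA might even look like, here is a hypothetical
sketch (the function names and the rectifying nonlinearity are my own
choices, not anything from the papers). One detail worth noting: two purely
*linear* PCA layers compose into a single linear map, so a stack only buys
you something if a nonlinearity sits between the layers - which may be part
of why single-layer use keeps getting suggested.

```python
import numpy as np

def pca_layer(X, k):
    """Fit a k-component PCA layer on X; return (transform, projected X)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    W = Vt[:k]  # top-k principal directions
    def transform(Y):
        # Project onto the principal directions, then rectify. Without the
        # rectification, stacked layers would collapse to one linear map.
        return np.maximum((Y - mean) @ W.T, 0.0)
    return transform, transform(X)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 32))

f1, H1 = pca_layer(X, 16)   # layer 1: 32 -> 16 features
f2, H2 = pca_layer(H1, 8)   # layer 2: 16 -> 8 features
print(H2.shape)
```

A real/future/good PCA would presumably do something far smarter between
layers, but even this toy version shows the stacking is mechanically trivial.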

Note that the extensive training was LESS than what a baby sees during its
first hour in the real world.

    To give you an idea of what I am looking for, does the algorithm go
>>    beyond single-level encoding patterns?
>>
>>  Many of the articles, including the one above, make it clear that they
>> are up against a computing "brick wall". It seems that algorithmic honing is
>> necessary to prove whether the algorithms are any good. Hence, no one has
>> shown any practical application (yet), though they note that JPEG encoding
>> is a sort of grossly degenerate example of their approach.
>>  Of course, the present computational difficulties are NO indication that
>> this isn't the right and best way to go, though I agree that this is yet to
>> be proven.
>>
>
> Hmm... you did not really answer the question here.


To be blunt: how are they supposed to test multiple-layer methods when they
have to run their computers for days just to test a single layer? PCs just
don't last that long, and Microsoft has provided no checkpoint capability to
support year-long executions.
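To be fair, checkpointing doesn't have to come from the OS: a long-running
job can checkpoint itself. A minimal sketch (the file name and state layout
are made up for illustration):

```python
import os
import pickle

CKPT = "job.ckpt"  # hypothetical checkpoint file name

def run(total_steps, checkpoint_every=1000):
    """Run (or resume) a long computation, checkpointing periodically."""
    # Resume from the last checkpoint if one exists, else start fresh.
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            state = pickle.load(f)
    else:
        state = {"step": 0, "accum": 0.0}

    while state["step"] < total_steps:
        state["accum"] += state["step"]   # stand-in for the real work
        state["step"] += 1
        if state["step"] % checkpoint_every == 0:
            # Write to a temp file, then rename: the rename is atomic, so a
            # crash mid-save cannot corrupt the last good checkpoint.
            with open(CKPT + ".tmp", "wb") as f:
                pickle.dump(state, f)
            os.replace(CKPT + ".tmp", CKPT)
    return state["accum"]

print(run(5000))  # sum of 0..4999 = 12497500
```

Kill the process at any point and rerun it, and it picks up from the last
multiple of `checkpoint_every` rather than from zero.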

  Does your response indicate that you are willing to take a shot at
>> explaining some of the math murk in more recent articles? I could certainly
>> use any help that I can get. So far, it appears that a PCA and matrix
>> algebra glossary of terms and abbreviations would go a LONG way to
>> understanding these articles. I wonder if one already exists?
>>
>
> I'd like to help (and I could), but do you realise how pointless it is?


Not yet. I agree that it hasn't gone anywhere yet. Please make your case that
this will never go anywhere.

 All this brings up another question to consider: Suppose that a magical
>> processing method were discovered that did everything that AGIs needed, but
>> took WAY more computing power than is presently available. What would people
>> here do?
>> 1.  Go work on better hardware.
>> 2.  Work on faster/crummier approximations.
>> 3.  Ignore it completely and look for some other breakthrough.
>>
>
> Steve, you raise a deeply interesting question, at one level, because of
> the answer that it provokes:  if you did not have the computing power to
> prove that the "magical processing method" actually was capable of solving
> the problems of AGI, then you would not be in any position to *know* that it
> was capable of solving the problems of AGI.


This all depends on the underlying theoretical case. Early applications of
Game Theory were also limited by compute power, but because practitioners
held a proof that the method was as good as could be done, they pushed for
more compute power rather than walking away and looking for some other
approach. I remember when the RAND Corp required 5 hours just to solve a
5x5 non-zero-sum game.

> Your question answers itself, in other words.


Only in the absence of theoretical support or a proof of optimality. PCA
looked like such a proof might be in its future.

Steve Richfield


>> ================
>> Steve Richfield wrote:
>>
>>        Y'all,
>>         I have long predicted a coming "Theory of Everything" (TOE) in
>>        CS that would, among other things, be the "secret sauce" that
>>        AGI so desperately needs. This year at WORLDCOMP I saw two
>>        presentations that seem to be running in the right direction. An
>>        earlier IEEE article by one of the authors seems to be right on
>>        target. Here is my own take on this...
>>         Form:  The TOE would provide a way for unsupervised learning to
>>        rapidly form productive NNs, would provide a subroutine that AGI
>>        programs could throw observations into and have SIGNIFICANT
>>        patterns identified, would be the key to excellent video
>>        compression, and, indirectly, would yield the "perfect"
>>        encryption that nearly perfect compression implies.
>>         Some video compression folks in Germany have come up with
>>        "Principal Component Analysis" that works a little like
>>        clustering, only it also includes temporal consideration, so
>>        that things that come and go together are presumed to be
>>        related, thereby eliminating the "superstitious clustering"
>>        problem of static cluster analysis. There is just one "catch":
>>        This is buried in array transforms and compression jargon that
>>        baffles even me, a former in-house numerical analysis consultant
>>        to the physics and astronomy departments of a major university.
>>        Further, it is computationally intensive.
>>         Teaser: Their article is entitled "A new method for Principal
>>        Component Analysis of high-dimensional data using Compressive
>>        Sensing" and applies methods that *_benefit_* from having many
>>        dimensions, rather than being plagued by them (e.g. as in
>>        cluster analysis).
>>         Enter a retired math professor who has come up with some clever
>>        "simplifications" (to the computer, but certainly not to me) to
>>        make these sorts of computations tractable for real-world use.
>>        It looks like this could be quickly put to use, if only someone
>>        could translate this stuff from linear algebra to English for us
>>        mere mortals. He also authored a textbook that Amazon provides
>>        peeks into, but in addition to its 3-digit price tag, it was
>>        also rather opaque.
>>         It's been ~40 years since I last had my head in matrix
>>        transforms, so I have ordered up some books to hopefully help me
>>        through it. Is there someone here who is fresh in this area who
>>        would like to take a shot at "translating" some abstruse
>>        mathematical articles into English - or at least providing a few
>>        pages of prosaic footnotes to explain their terminology?
>>         I will gladly forward the articles that seem to be relevant to
>>        anyone who wants to take a shot at this.
>>         Any takers?
>>         Steve Richfield
>>
>> ------------------------------------------------------------------------
>> *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
>> RSS Feed <https://www.listbox.com/member/archive/rss/303/>
>> Modify Your Subscription <https://www.listbox.com/member/?&;>
>> [Powered by Listbox] <http://www.listbox.com>