On Sat, Aug 18, 2012 at 10:06 AM, Boris Kazachenko <[email protected]> wrote:

> Jim: I have been hoping that Sergio would give up on the endless sales
> pitch and explain the kernel of his idea,
> Boris: sorry to interject, but I think the reason for his "endless sales
> pitch" is that there isn't much of a kernel. All his talk about physics,
> causality, emergence, & so on, is a delusion. The real question is "what do
> you do with the data?", & about the only thing he does is exhaustive
> "permutations" within a matrix, plus some basic matrix scope adjustment.
> That's a brute-force search, which is dumber than evolution & won't
> discover anything interesting in a trillion years.
>

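For scale, the brute-force objection above is easy to quantify. A back-of-envelope sketch (the matrix size and the rate of a billion permutations per second are illustrative assumptions of mine, not figures from the thread): enumerating all orderings of the cells of an n-by-n matrix means (n^2)! permutations.

```python
# Rough check on the "won't discover anything interesting in a trillion
# years" claim. Assumes (hypothetically) a method that enumerates all
# orderings of the cells of an n x n matrix, i.e. (n*n)! permutations.
import math

def years_to_enumerate(n, perms_per_second=1e9):
    """Years needed to enumerate all (n*n)! cell orderings at a given rate."""
    total = math.factorial(n * n)
    seconds_per_year = 3600 * 24 * 365
    return total / perms_per_second / seconds_per_year

# Even a tiny 5x5 matrix (25! orderings) takes hundreds of millions of
# years at a billion permutations per second.
print(f"{years_to_enumerate(5):.2e} years")
```

So exhaustive permutation search is intractable long before the matrices reach any interesting size, which is the substance of Boris's objection.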

I believe that good compression methods are almost certainly necessary for
AGI.  That is why I made the comment that addition and multiplication are
powerful: they can operate on effectively compressed data without first
decompressing it.  However, that does not mean I believe that massive
entropy reduction is likely to be the basis of AGI, or that compression
methods should be taken as that basis.
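One way to read that point about addition (a sketch of my own, not anything Jim spelled out): positional notation is itself a compressed encoding of a number, roughly log n digits instead of n tally marks, and schoolbook addition operates on that compressed form directly, never "decompressing" to unary.

```python
# Schoolbook addition on positional (compressed) representations.
# The digit lists are little-endian: [5, 7, 4] encodes 475.

def add_digits(a, b):
    """Add two numbers given as little-endian base-10 digit lists."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        da = a[i] if i < len(a) else 0
        db = b[i] if i < len(b) else 0
        carry, digit = divmod(da + db + carry, 10)
        result.append(digit)
    if carry:
        result.append(carry)
    return result

# 475 + 38 = 513, computed entirely on the digit encodings:
print(add_digits([5, 7, 4], [8, 3]))  # [3, 1, 5]
```

The work done is proportional to the number of digits, not to the magnitude of the numbers, which is what makes operating on the compressed form powerful.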

Sergio was talking about some kind of entropy reduction but he never really
got around to explaining why he thought this method would be effective or
why he thought it would be powerful enough to enable an AGI program to
learn new things.

Jim Bromer



> From: Jim Bromer <[email protected]>
> Sent: Saturday, August 18, 2012 8:01 AM
> To: AGI <[email protected]>
> Subject: Re: [agi] Uncertainty, causality, entropy, self-organization,
> and Schroedinger's cat.
>
> I really am not trying to be disruptive.  I think the conversation about
> Sergio's theory is interesting.  However, I don't see hubris as the avenue
> of science.
>
> Right now there are good models of simple neural connections but there
> aren't any that explain how intelligence actually works.
>
> I have been hoping that Sergio would give up on the endless sales pitch
> and explain the kernel of his idea, but I guess I will have to study posets
> and try to figure it out for myself.
>
> The problem with the simplistic solutions is that they fail to deal with
> the complications.  So, ok, information theory might be used to analyze
> signals and it might be used effectively in neural science, but it doesn't
> explain general intelligence and it is not adequate for every kind of
> measurement you might want to make in neural science.  This should be so
> obvious that it should not need to be said.
>
> Similarly, Friston's ideas may be interesting, but they have not been used
> effectively to explain general intelligence.  The problem is that, like
> most of the other conjectures made so far, one can use the theory to model
> simple problems (or to imagine simple problems being so modeled), but once
> you try to turn that into a model of general intelligence the program will
> fail.
>
> You can reduce the complications and complexity of the problem by any
> number of methods, but most of them won't work.  There may be something in
> AI similar to a just-in-time method, which might be called
> when-it's-needed, but so far no one has demonstrated how anything like
> that could work.  A when-it's-needed computation or projection won't be
> based on global or a priori general entropy reduction.  Assuming that the
> rapidity of the development of thought and of habit depends on the
> richness of the detail available and the extent of the hierarchical
> cross-indexing available, I would say that massive general entropy
> reduction would be an obstacle to insightful guessing, projection, and
> learning.
>
> Jim Bromer
>
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
