Jim: I have been hoping that Sergio would give up on the endless sales pitch and explain the kernel of his idea …

Boris: Sorry to interject, but I think the reason for his "endless sales pitch" 
is that there isn't much of a kernel. All his talk about physics, causality, 
emergence, & so on, is a delusion. The real question is "what do you do with 
the data?", & about the only thing he does is exhaustive "permutations" within 
a matrix, plus some basic matrix scope adjustment. That's a brute-force search, 
which is dumber than evolution & won't discover anything interesting in a 
trillion years.
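For scale, the combinatorial arithmetic behind the "trillion years" remark can be sketched out. This is a back-of-the-envelope illustration only: the checking rate and matrix sizes are assumptions for the sake of the example, not details of Sergio's actual system.

```python
import math

# The number of distinct orderings of n items is n!, so an exhaustive
# permutation search over even a modestly sized matrix blows up fast.
def orderings(n):
    return math.factorial(n)

# Assume a generous rate of one billion permutations checked per second.
CHECKS_PER_SECOND = 1e9
SECONDS_PER_YEAR = 3.156e7

def years_to_enumerate(n):
    """Years needed to enumerate all n! permutations at the assumed rate."""
    return orderings(n) / (CHECKS_PER_SECOND * SECONDS_PER_YEAR)

# At n = 20 the search already takes on the order of 77 years, and by
# n = 28 it exceeds a trillion years -- before the matrix gets large
# enough to represent anything interesting.
```

Even with hardware many orders of magnitude faster, the factorial growth wins; that is the sense in which a blind permutation search is "dumber than evolution," which at least prunes its search with selection.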



From: Jim Bromer 
Sent: Saturday, August 18, 2012 8:01 AM
To: AGI 
Subject: Re: [agi] Uncertainty, causality, entropy, self-organization, and 
Schroedinger's cat.


I really am not trying to be disruptive.  I think the conversation about 
Sergio's theory is interesting.  However, I don't see hubris as the avenue of 
science.

Right now there are good models of simple neural connections but there aren't 
any that explain how intelligence actually works.

I have been hoping that Sergio would give up on the endless sales pitch and 
explain the kernel of his idea, but I guess I will have to study posets and try 
to figure it out for myself.

The problem with simplistic solutions is that they fail to deal with the 
complications.  So, ok, information theory might be used to analyze signals, 
and it might be used effectively in neural science, but it doesn't explain 
general intelligence, and it is not adequate for every kind of measurement you 
might want to make in neural science.  This should be obvious enough not to 
need saying.

Similarly, Friston's ideas may be interesting, but they haven't been used 
effectively to explain general intelligence.  The problem is that, like most of 
the other conjectures made so far, one can use the theory to model simple 
problems (or to imagine simple problems being so modeled), but once you try to 
turn that into a model of general intelligence the program will fail.

You can reduce the complications and complexity of the problem by any number of 
methods, but most of them won't work.  There may be something in AI similar to 
a just-in-time method, which might be called when-it's-needed, but so far no 
one has demonstrated how anything like that could work.  A when-it's-needed 
computation or projection won't be based on global or a priori general entropy 
reduction.  Assuming that the rapidity with which thought and habit develop 
depends on the richness of the detail available and the extent of the 
hierarchical cross-indexing available, I would say that massive general entropy 
reduction would be an obstacle to insightful guessing, projection, and learning.

Jim Bromer





-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
