Boris, 

 

You portray yourself as a professional GI'er and suggest that I am a
newcomer. Perhaps, but I bring physics to the table, which is something you
need and don't even know you need. So here is my contribution. The
accumulation of entropy, not a lack of theoretical integrity, is the reason
all AI/AGI attempts to date have failed to scale. It is necessary to deal
with the entropy, or else there will be no AGI. I know how to do it, and
that's what the "dumb" permutations do. They are not so dumb: they remove
energy from the information, which also removes entropy and causes the
information to self-organize. The result, in your own words, is a
hierarchical pattern discovery process. The permutations start from the
beginning, cross-compare sensory input, and discover initial patterns (my
block systems), which are selectively forwarded to the next level for
expanded search. Recursive comparison and elevation of input patterns on
successive levels of search discover increasingly general patterns or
concepts. Higher levels generate downward feedback, ultimately a motor
action, to focus on additively predictive lower-level sources. The scope
and complexity of discoverable patterns grow in a strictly incremental way.
You said it, you wanted it, and it's all there. By the way, you are using
obsolete information: I dropped matrices long ago in favor of causal sets,
and I observe self-organization in causal sets.
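The recursive cross-compare-and-elevate loop described above can be sketched in code. This is only a minimal illustration, assuming a toy match criterion (relative difference between adjacent numeric inputs) and a fixed forwarding threshold; none of the names, numbers, or criteria come from my actual design.

```python
# Toy sketch of hierarchical pattern discovery by cross-comparison.
# The match measure and the pair summary are illustrative assumptions.

def cross_compare(inputs, threshold):
    """Compare adjacent inputs; keep pairs that match above threshold."""
    patterns = []
    for a, b in zip(inputs, inputs[1:]):
        match = 1.0 - abs(a - b) / (abs(a) + abs(b) + 1e-9)
        if match >= threshold:
            patterns.append((a + b) / 2.0)  # summarize the matching pair
    return patterns

def hierarchical_discovery(sensory, levels=3, threshold=0.8):
    """Recursively elevate selected patterns to successive search levels."""
    hierarchy = [sensory]
    current = sensory
    for _ in range(levels):
        current = cross_compare(current, threshold)
        if not current:
            break
        hierarchy.append(current)  # increasingly general patterns
    return hierarchy

print(hierarchical_discovery([1.0, 1.1, 1.05, 5.0, 5.1], levels=2))
```

Each entry of the returned hierarchy holds the patterns discovered at that level; higher entries summarize progressively wider spans of the input.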

 

This is the entire AGI machine I propose; there is nothing else. Everything
else goes in the data, which is what gives people the false impression that
I neglect the data. In the entropy approach, the AGI machine is
problem-independent: the machine is always the same, no matter what problem
you are dealing with (people are like that). The problem with Alan Grimes is
that he couldn't understand that, and he believed (as you do) that the
theory ought to be problem-specific (people are not; they learn how to be).

 

Here is an example. Alan said: "you will need to extend your theory to deal
with orthogonal sets of orderings." I could have answered (but didn't):
"There is no need to extend the theory. The machine will deal with them the
same way you and everybody else did: by acquiring all the causal relations
implied in that notion. You went to school and learned arithmetic and
algebra, then went to college and learned higher math, etc. One day, you
learned about orthogonal orderings. The AGI, too, will have to "go to
school," quite literally. Some day, somebody will convert an entire book
of arithmetic into a causal set and feed it to the AGI. The AGI will
self-organize the arithmetic so it can be used for thinking (for
self-organizing other things), just like you did when you were a child. But
with machines, this has to be done only once. After that, copies can be
made, and all future machines will "know" arithmetic. Then algebra, etc.
You get the point."

 

He didn't. Do you? 
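To make the causal-set example concrete: a causal set is a locally finite partially ordered set, and feeding one a fragment of "a book of arithmetic" might look like the toy below. The encoding (a dict of direct precedence relations) and the reading of "self-organization" as transitive closure are my own illustrative assumptions, not my actual representation.

```python
# Toy causal set: each element maps to the elements it directly precedes.
# "Self-organizing" is illustrated as computing the transitive closure,
# so every implied causal relation becomes directly usable for inference.

def transitive_closure(causes):
    """Derive all implied precedence relations from the direct ones."""
    closure = {x: set(succ) for x, succ in causes.items()}
    changed = True
    while changed:
        changed = False
        for x, succ in closure.items():
            reachable = set()
            for y in succ:
                reachable |= closure.get(y, set())
            if not reachable <= succ:
                succ |= reachable
                changed = True
    return closure

# Fragment of "a book of arithmetic": counting precedes addition,
# addition precedes multiplication, multiplication precedes exponents.
arithmetic = {
    "counting": {"addition"},
    "addition": {"multiplication"},
    "multiplication": {"exponentiation"},
}
organized = transitive_closure(arithmetic)
print(organized["counting"])
```

After closure, asking what "counting" ultimately enables returns every downstream topic, not just its direct successor.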

 

And no, you didn't answer any of my questions. And I did explain the data.
It just doesn't help. 

 

Sergio

 

 

From: Boris Kazachenko [mailto:[email protected]] 
Sent: Monday, August 20, 2012 4:43 PM
To: AGI
Subject: Re: [agi] Uncertainty, causality, entropy, self-organization, and
Schroedinger's cat.

 

Sorry Sergio, this is not against you personally, I know you're a much
nicer guy than I am. It's just that... there is no there there in your
theory. "Entropy", "self-organization", "causality", "emergence" are
meaningless buzzwords in this context, & I am sure you will pick up more of
them. I answered all your questions, even before you asked them; it just
doesn't help.

You, on the other hand, refuse to explain what you actually suggest doing
with the data, other than those dumb matrix permutations & scope
adjustment. What is your contribution to all that "causality", "emergence",
& so on?

 

 

 


AGI |  <https://www.listbox.com/member/archive/303/=now> Archives
<https://www.listbox.com/member/archive/rss/303/18407320-d9907b69> |
<https://www.listbox.com/member/?&;> Modify Your Subscription 

 <http://www.listbox.com> 

