My understanding is that self-organizing systems are routinely created and
hosted within software environments. An example would be a multi-agent host
in which the agents congregate and spontaneously develop relatively
coordinated behavior that solves a goal unrelated to any individual agent.
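That multi-agent picture can be made concrete with a toy "gossip" sketch (my own illustration, not taken from any particular framework): each agent holds only a private value and averages with a randomly met peer, yet the population converges on agreement even though no agent is pursuing agreement.

```python
import random

random.seed(0)

# Fifty agents, each holding only a private scalar state.  No agent
# knows the group-level goal (agreement); each one only averages with
# a randomly encountered peer.
agents = [random.uniform(0.0, 100.0) for _ in range(50)]

def spread(values):
    """Disagreement across the population (max - min)."""
    return max(values) - min(values)

initial_spread = spread(agents)

for _ in range(2000):
    # A random pairwise encounter: both agents move to their midpoint.
    i, j = random.sample(range(len(agents)), 2)
    agents[i] = agents[j] = (agents[i] + agents[j]) / 2.0

final_spread = spread(agents)
print(f"spread: {initial_spread:.2f} -> {final_spread:.2f}")
```

The coordinated outcome (consensus) is a property of the interaction pattern, not of any agent's program, which is the sense of "spontaneously developed" behavior above.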

 

One problem is taming the self-organization across complexity barriers in
such a way as to reach higher thresholds of sophistication, perhaps passing
through multiple stages of self-organization to optimize problem-solving
ability via the computational-expense minimizations afforded by coordinating
the self-organizing superstructure. That superstructure would be a system of
automata that imposes feedback through itself, across internal complexity
regions, while learning symbiotically with its informational environment.
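A minimal two-level sketch of that feedback idea (again my own hypothetical illustration, with invented parameter names): a coordinator watches a single global signal from a lower-level self-organizing process and feeds back one control parameter, without micromanaging the individual agents.

```python
import random

random.seed(1)

# Lower level: agents self-organize by pairwise mixing, as before.
agents = [random.uniform(0.0, 100.0) for _ in range(50)]

def spread(values):
    """Global disagreement signal visible to the superstructure."""
    return max(values) - min(values)

# The "superstructure": it observes only spread() and adjusts one
# parameter (the mixing weight alpha) that shapes the lower level.
alpha = 0.1  # how strongly a pair of agents mixes per encounter

history = []
for epoch in range(20):
    for _ in range(200):
        i, j = random.sample(range(len(agents)), 2)
        delta = agents[j] - agents[i]
        agents[i] += alpha * delta
        agents[j] -= alpha * delta
    s = spread(agents)
    history.append(s)
    # Feedback across levels: while disagreement stays large,
    # strengthen the mixing for the next epoch (capped at 0.5).
    if s > 1.0:
        alpha = min(0.5, alpha * 1.5)

print(f"spread fell from {history[0]:.2f} to {history[-1]:.2f}")
```

The point of the sketch is the separation of roles: the agents never see the global signal, and the coordinator never touches individual agents, yet the coupled system tunes itself.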

 

In this particular construct I described, perhaps the overall superstructure
behavior and coordination is analog-like, while the individual
self-organizing components are more digitally constructed. Analog control
would itself be a self-organized result, a fine tuner.

 

Just some thoughts.

 

John

 

 

From: Steve Richfield [mailto:[email protected]] 



Sergio,

Your posting here is a bit bizarre. You are saying exactly what I thought
that I was saying, namely, that intelligent systems must self-organize.
Where did you get the idea that I thought that people should be involved in
this? This forum is living proof that people are WAY too stupid to
contribute to this sort of work.

Of course, for this to ever happen, people must first do what a couple of
hundred million years of evolution has already done, namely, figure out how
to make self organization work. Until that happens, AGI is a complete
non-starter.

Hence, tricks for utilizing equation balancing to guide self-organization
must evolve from the self-organizing process. There are tens of megabytes of
DNA code wrapped up in creating our brains, and self-organization has
remained elusive, so I suspect that there will be some complexity to the
self-organization process. Once we understand THAT, then we can have a
productive discussion as to how to build practical AGIs.

Steve
==================

On Wed, Jul 11, 2012 at 11:41 AM, Sergio Pissanetzky
<[email protected]> wrote:

Steve,

 

this is precisely the critical point where we part. You follow the crowd by
thinking that *you yourself* are going to "self-organize" something and then
tell the computer how to do it. 

 

If you do that, the computer will remain as unable to self-organize as it
was before. 

 

Sorry, but you need to find out how *nature* self-organizes things. 

 

Sergio

 

 




-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
