On Sat, Jul 25, 2015 at 10:41 PM, Aaron Hosford <[email protected]> wrote:
> I think maybe I didn't communicate my thoughts clearly. There is also information stored in the form of the population of agents itself. By replacing those agents with new ones, you could implement population-level learning which does not take place in the environment. In the extreme case, this can be reduced to individual-level learning if we simply replace an agent with an otherwise identical one that would result if the agent itself had learned, but it could also operate in less obviously equivalent ways. Thus evolution of the population represents a generalization of learning at the individual agent level. It is the evolution of the population itself that represents a learning algorithm, independent of the information stored in the environment.

For some reason I did not respond to your message correctly. That is a good point, but I am not sure it fits the proper definition of stigmergy. However, you could point out that it does behave like stigmergy in that the abstract process of selecting and recombining substrings is unchanging. (I have forgotten the details of how a genetic algorithm works.) To fit the example to the definition of stigmergy, I think you would have to say that the resulting strings used in the next round of trials are more like the 'marks in the environment' than the agents are. That is a good example of what I am talking about. As you tried to improve on the process, the unchanging parts would tend to become a little more abstract: first the programming would be the unchanging part, and then the meta-programming would have to become the unchanging part to make the method more powerful. Genetic programming lacks higher-level insight about near misses, and planning could be used to create a more insightful variation of the method.

Jim Bromer
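The point about the genetic algorithm can be made concrete: the select-and-recombine procedure is the fixed, unchanging part, while the population of strings itself is the only thing carried from round to round, playing the role of the 'marks in the environment'. A minimal toy sketch (the target string, fitness measure, and parameters here are invented purely for illustration):

```python
import random

TARGET = "stigmergy"  # invented toy goal for the example
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):
    # Count positions where the string matches the target.
    return sum(a == b for a, b in zip(s, TARGET))

def crossover(a, b):
    # Recombine two parent strings at a random cut point.
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

def mutate(s, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def step(population):
    # The procedure itself never changes; only the strings do.
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[: len(population) // 2]
    return [mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in population]

random.seed(0)
pop = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for generation in range(100):
    pop = step(pop)
best = max(pop, key=fitness)
```

No individual string "learns" anything; all the accumulated information lives in which strings survive into the next round, which is what makes the analogy to marks left in an environment tempting.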
> On Sat, Jul 25, 2015 at 8:53 PM, Jim Bromer <[email protected]> wrote:
>
>> On Sat, Jul 25, 2015 at 9:25 PM, Aaron Hosford <[email protected]> wrote:
>> > Reading your response, Jim, brings genetic programming to mind. If the population, taken as a whole, learns, but the individuals within that population do not, is it still stigmergy?
>>
>> I thought that was part of the definition of stigmergy. The objects that are learned are based on basic reactions. The marks in the environment constitute the knowledge. I doubt that it is that simple in insects, and it is not very likely that birds that flock do not learn from the experience (since social birds are considered more likely to be intelligent than non-social birds).
>> Jim Bromer
>> >> On Jul 25, 2015, at 7:49 PM, Jim Bromer <[email protected]> wrote:
>> >>
>> >> Stigmergy refers to individuals who leave and react to marks (or mark-like objects) in the environment, producing group behavior rather than forming memories and using reasoning. Their reactions are relatively simplistic, but the group is able to produce a range of variations which may seem surprising. If you are going to use stigmergy as a term to describe an AI program, or part of one, then you have to constrain the agents (or parts) so that they are not learning but are just leaving and reacting to marks in the pseudo-environment (the blackboard). This suggests that your AI program is going to be based on agents capable of simple reactions to marks on the blackboard. I would want my AI program to be able to learn, and I see no reason why shielding the agents from being able to learn would make the project more likely to succeed. However, it is interesting to think about how much could be done this way, and it is a worthwhile experiment. But suppose you want to work from this stigmergy toward some more powerful model without entirely giving up the philosophical model of stigmergy. You are going to try to give the agents some ability to learn, but you still want to limit their memory store. What I am saying is that you might take that step by saying the agents are endowed with some 'abstractions' (or programming) which can then specialize as needed. Some of the original programming (or the potential range of the programming) is going to be filtered out as the agent specializes. But once you take this step, I am saying, it is difficult to justify limiting the agents' range to learn new abstractions (or new programming).
>> >>
>> >> The specialization is itself equivalent to a kind of reprogramming, so why stop there? Why not explore other ways that the agents can be reprogrammed to deal with the data environment? So the 'agents' are not only specializing by filtering out programming steps; they could also (for example) try combining programming steps in creative ways, or even modify the programming steps in some more dramatic (but still well-managed) way. So then the simple programming of the agents is not a first-level abstraction but a meta-level abstraction. (The word "abstraction" originally referred to an insight derived from learning, but Aristotle's redefinition of 'the form' meant the concept of abstraction could also be used as a form, a formula, or a program.)
>> >>
>> >> If you wanted to simulate the behavior of an ant colony you could call it stigmergy, since current thinking presumably sees the behaviors of ants as stigmergic. Now you could take this simulation to something more abstract so that it is no longer a simulation. Since it was derived from a simulation of stigmergy, you can still call it a stigmergic model. Then, even though you might further change your model, you can still say that it was derived from stigmergy. But you need to keep some kind of reality check on your use of the terminology so you don't completely lose track of what the program is doing.
>> >> Jim Bromer
>> >>
>> >>> On Fri, Jul 24, 2015 at 3:55 PM, Mike Archbold <[email protected]> wrote:
>> >>>> On 7/24/15, Jim Bromer <[email protected]> wrote:
>> >>>> You can try to use stigmergy as if it were an abstraction that can be seen as part of a human-like intelligence, but then you would, for example, be forced to declare that the more abstract parts of the programming were the primitives that were not changing due to the memories of events and the integration of those event-memories. But since you would want a secondary abstraction-generation system to be something that could be learned, you would have to reach further into the abstractions of the abstractions of the programming to find the truly stigmergic part. It is an interesting philosophical exercise, but can it be used to lead to something new?
>> >>>> Jim Bromer
>> >>>
>> >>> Jim, I really like the paragraph above; I don't know exactly what it means, but I have a kind of feel for it...
>> >>> PM, I don't recall whether you had ideas in your design (apologies if I forgot). How do you define "idea" in a non-formal way?
>> >>>
>> >>> Mike A
>> >>>
>> >>>> On Wed, Jul 22, 2015 at 10:31 AM, Jim Bromer <[email protected]> wrote:
>> >>>>> The definition of stigmergy in Wikipedia is that it "produces complex, seemingly intelligent structures, without need for any planning, control, or even direct communication between the agents. As such it supports efficient collaboration between extremely simple agents, who lack any memory, intelligence or even individual awareness of each other."
>> >>>>> So while Facebook, for example, is designed to work based on human responses, it does also retain 'marks' which are used to determine a range of actions that can subsequently be taken in response. However, communication between the human agents, who have stores of memories, is the whole reason Facebook has succeeded. Can we look at part of a distributed active system, even one that relies on human IO, and say that part of it is stigmergic? OK, but the next question is: why? What can you do with that point of view? I think it is obvious that human beings sometimes react without fully realizing what is going on, and instead base their responses on prevailing commonalities of insight (like prevailing memes). This kind of reaction might be likened to a stigmergic reaction. Subsequent interactions can then be used to refine the first attempts to understand what is going on (or what someone else is trying to say). So perhaps by looking at foundational or simple methods that combine stigmergy with more traditional AI methods, so that stigmergic reactions can be integrated with previous reactions (for example, successive statements), someone might be able to gain a little more insight into AGI. However, this implies that the simple reactions must be context-sensitive to different combinations of events, and sensitive to hidden parts that need to be inferred and discovered in order to appreciate special meanings (or invoke special reactions) related to the individuation of the agents. So I can see one way that this extension of the definition of stigmergy might be used to yield some novel experimental results. If I only had the time...
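The stigmergic setup described in the quoted messages, memoryless agents that only deposit and react to marks on a shared blackboard, can be sketched in a few lines. This is a toy illustration only, not anyone's proposed design; the track size, evaporation rate, and deposit amount are invented for the example:

```python
import random

class Blackboard:
    """Shared environment: agents read and write marks here; they keep no memory."""
    def __init__(self, size):
        self.pheromone = [0.0] * size

    def evaporate(self, rate=0.1):
        # Old marks fade, so only recently reinforced trails persist.
        self.pheromone = [p * (1 - rate) for p in self.pheromone]

def step_agent(pos, board, deposit=1.0):
    # A fixed, memoryless reaction: look at the marks on the two neighboring
    # cells, move toward the stronger one (random tie-break), and leave a mark.
    left = board.pheromone[pos - 1] if pos > 0 else -1.0
    right = board.pheromone[pos + 1] if pos < len(board.pheromone) - 1 else -1.0
    if left == right:
        new = pos + random.choice([-1, 1]) if 0 < pos < len(board.pheromone) - 1 else pos
    else:
        new = pos - 1 if left > right else pos + 1
    board.pheromone[new] += deposit
    return new

random.seed(1)
board = Blackboard(20)
agents = [random.randrange(20) for _ in range(30)]
for _ in range(50):
    board.evaporate()
    agents = [step_agent(a, board) for a in agents]
```

All of the accumulated state lives on the board; each agent's "decision" is the same fixed rule every step, yet repeated deposits plus evaporation make trails self-reinforce, which is the population-level behavior the quoted definition describes.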
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
