Russ,
What the hell is your problem? I said explicitly that you may be looking for a
specific way to do things, rather than a way to solve a particular problem. I
also indicated that even if I understood what you were asking for perfectly, my
solution was non-ideal in many ways. I gave no indication that there was
anything "best" about what I was offering; quite the opposite. I also indicated
repeatedly that I was open to other, more experienced people on the list
telling me I was on the wrong track. For Christ's sake, I'm a psychologist who
dabbles in simulation, enjoys brainstorming, and likes to make small
contributions to other people's projects. MY emails were generated directly by 
YOUR emails. For example, the system I propose would simulate one in which
rules are able:

  to refer to agents,
  to create and destroy agents,
  to create new rules for newly created agents,
  to disable rules for existing agents, and
  to modify existing rules for existing agents.
That is a direct quote from your email. I assure you I am not acting out
of arrogance, but out of an intention to help.

In the future, please don't ask me (or anyone else) to elaborate on something
if you know in advance that the elaboration will not be what you are looking
for. My last email was composed specifically in response to your indicated
desire for further information. If I misread your request as sincere when it
was intended as sarcastic, I apologize. That said, I am not a quick writer; I
take these emails seriously, and I recheck them several times before I send
them (this one has taken over an hour in itself). It is a waste of my time to
fulfill such requests only to be insulted afterward. My classes start tomorrow,
my syllabi are not prepared, and yet I have dedicated several hours of my
weekend to this. If what I said isn't useful, you can say that, but you don't
need to be rude about it. Seriously, what the hell is your problem?

Sincerely,

Eric


On Mon, Aug 24, 2009 12:37 AM, Russ Abbott <[email protected]> wrote:
>
>
>Eric,  
>
>You said, "Not knowing what you want to
do ...".
>
>It's clear from the rest of your message that you're absolutely right. You
have no idea what I want to do.
>
>What amazes me is that you nevertheless seem to think that you can tell me the
best way for me to do it. How can you be so arrogant?
>
>Perhaps that's also what went wrong in our discussion of consciousness a while
ago.
>
>-- Russ
>
>
>
>


>On Mon, Aug 24, 2009 at 1:58 PM, ERIC P. CHARLES <<#>> wrote:
>

>Russ (and everyone else),
>Just because it's what I know, I would do it
in NetLogo. I'm not suggesting that NetLogo will do what you want, just that it
can simulate doing what you want. Not knowing what you want to do, let's keep it
general:
>
>You start by making an "agent" with a list of things it can do,
let's label them 1-1000, and a list of things it can sense, let's label them
A-ZZZ. But there is a catch: the agent has no commands connecting the sensory
list to the behaviors list; a different object must do that. The agent must
query all the rules until it finds one that accepts its current input, and then
the rule sends it a behavior code. (Note that any combination of inputs can be
represented as a single signal or as several separate ones; it doesn't matter
for these purposes.)
>
>You then make several "rules", each of which
receives a signal from an agent and outputs a behavior command. One rule might
be "If input WFB, then behavior 134." Note that it doesn't matter how complicated
the rule is; this general formula will still work. Any countable infinity of
options can be represented using the natural numbers, so it is a useful
simplification. Alternatively, imagine that each digit provides independent
information and make the strings as long as you wish. 
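For concreteness, here is a minimal Python sketch of the agent/rule separation described above. NetLogo would be the natural medium for this; the sketch only shows the structure, and every name in it (Rule, Agent, the codes "WFB" and 134) is illustrative rather than taken from any library.

```python
class Rule:
    """A rule maps a sensory code (e.g. "WFB") to a behavior code (e.g. 134)."""
    def __init__(self, accepts, behavior):
        self.accepts = accepts      # the sensory input this rule responds to
        self.behavior = behavior    # the behavior code it sends back

    def query(self, sensation):
        # Return a behavior code if this rule accepts the input, else None.
        return self.behavior if sensation == self.accepts else None


class Agent:
    """An agent has sensations and behaviors but no wiring between them;
    it must query the rules to find out what to do."""
    def __init__(self, sensation):
        self.sensation = sensation

    def act(self, rules):
        # Query all the rules until one accepts the current input.
        for rule in rules:
            behavior = rule.query(self.sensation)
            if behavior is not None:
                return behavior
        return None  # no rule accepted this input


rules = [Rule("WFB", 134), Rule("QX", 12)]
agent = Agent("WFB")
print(agent.act(rules))  # -> 134
```

The point of the separation is that the agent stores no sensation-to-behavior contingencies of its own; all of that lives in the rule objects.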
>
>Now, to implement
one of my suggestions you could use:
>1) The "system level" solution: On an
iterative basis, assess the benefit gained by individuals who accessed a given
rule (e.g., turtles who accessed rule 4 gained 140 points on average, while
turtles who accessed rule 5 only gained 2 points on average). This master
assessor then removes or modifies rules that aren't up to snuff. 
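A hypothetical sketch of that master assessor, with made-up payoff numbers and an arbitrary threshold, just to show the culling step:

```python
from statistics import mean

# payoffs[rule_id] = point gains of the agents that accessed that rule
payoffs = {4: [150, 130, 140], 5: [1, 3, 2]}
rules = {4: "rule-4", 5: "rule-5"}

THRESHOLD = 10  # minimum acceptable average gain (arbitrary for illustration)

def cull(rules, payoffs):
    # Keep only the rules whose users gained enough on average.
    return {rid: r for rid, r in rules.items()
            if mean(payoffs.get(rid, [0])) >= THRESHOLD}

surviving = cull(rules, payoffs)
print(sorted(surviving))  # -> [4]
```

In a real run the assessor would fire every N ticks and could mutate a weak rule instead of deleting it.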
>
>2) The
"rule modified by agents" solution: Agents could have a third set of
attributes, in addition to behaviors and sensations they might have "rule
changers". Let's label them from ! to ^%*. For example, command $% could tell
the rule to select another behavior at random, while command *# could tell the
rule to simply add 1 to the current behavior. 
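A sketch of that "rule modified by agents" idea, with the symbolic commands ($%, *#) spelled out as words; the class and command names are illustrative:

```python
import random

class Rule:
    def __init__(self, behavior):
        self.behavior = behavior

    def receive_changer(self, command):
        # "Rule changer" commands sent by agents, per the suggestion above.
        if command == "randomize":      # the "$%"-style command
            self.behavior = random.randint(1, 1000)
        elif command == "increment":    # the "*#"-style command
            self.behavior += 1


rule = Rule(134)
rule.receive_changer("increment")
print(rule.behavior)  # -> 135
```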
>
>3) The "agents disobey"
solution: Agents could in the presence of certain sensations modify their
reactions to the behavior a given rule calls up in a permanent manner. This
would require an attribute that kept track of which rules had been previously
followed and what the agent had decided from that experience. For example, a
given sensation may indicate that doing certain behaviors is impossible or
unwise (you can't walk through a wall, you don't want to walk over a cliff);
under these circumstances, if a rule said "go forward" the agent could
permanently decide that if rule 89 ever says "go forward" I'm gonna "turn
right" instead.... where "go forward" = "54" and "turn right" = "834". In this
case the object labeled "rule" is still the same; it is only the effect
of the rule that has been altered within the agent, which for metaphorical
purposes should be sufficient. 
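A sketch of that "agents disobey" solution: the agent keeps a table of permanent overrides, so rule 89's "go forward" (54) becomes "turn right" (834) once the agent has learned better. The codes follow the example above; everything else is illustrative:

```python
GO_FORWARD, TURN_RIGHT = 54, 834

class Agent:
    def __init__(self):
        # (rule_id, commanded_behavior) -> replacement behavior,
        # i.e. what the agent decided from past experience.
        self.overrides = {}

    def learn_override(self, rule_id, commanded, replacement):
        # Permanently change what this rule's command means to this agent.
        self.overrides[(rule_id, commanded)] = replacement

    def respond(self, rule_id, commanded):
        return self.overrides.get((rule_id, commanded), commanded)


agent = Agent()
agent.learn_override(89, GO_FORWARD, TURN_RIGHT)
print(agent.respond(89, GO_FORWARD))  # -> 834
print(agent.respond(12, GO_FORWARD))  # -> 54 (other rules unaffected)
```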
>
>Because of the countable-infinity thing, I'm not
sure what kinds of things a system like this couldn't simulate. Any combination
of inputs and outputs that a rule might give can be simulated in this way. If
you want to have 200 "sensory channels" and 200 "limbs" that can do the various
behaviors in the most subtle ways imaginable, it would still work in
essentially the same way.

>
>Other complications are easy to incorporate: For example, you could
have a rule that responded to a large set of inputs, and have those inputs
change... or you could have rules link themselves together to change
simultaneously... or you could have the agent send several inputs to the same
rule by making it less accurate in detection. You could have rules that delay
sending the behavior command... or you could just have a delay built into
certain behavior commands. 
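One of those complications, a rule that delays sending its behavior command by some number of ticks, can be sketched like this (again entirely illustrative):

```python
class DelayedRule:
    def __init__(self, behavior, delay):
        self.behavior = behavior
        self.delay = delay
        self.pending = []  # ticks remaining for each queued query

    def query(self, sensation):
        # Instead of replying immediately, queue the response.
        self.pending.append(self.delay)

    def tick(self):
        # Count down; emit the behavior for any query whose delay has elapsed.
        self.pending = [t - 1 for t in self.pending]
        fired = [self.behavior for t in self.pending if t <= 0]
        self.pending = [t for t in self.pending if t > 0]
        return fired


rule = DelayedRule(behavior=134, delay=2)
rule.query("WFB")
print(rule.tick())  # tick 1: -> [] (still waiting)
print(rule.tick())  # tick 2: -> [134]
```

The alternative mentioned (a delay built into certain behavior commands) would put the same countdown inside the agent instead of the rule.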
>
>
>Eric
>
>P.S. I'm sorry for the
bandwidth all, but I am continuing to communicate through the list because I am
hoping someone far more experienced than I will chime in if I am giving poor
advice. 
>

>
>
>
>On Sun, Aug 23, 2009 10:32 PM, Russ Abbott
<<#>> wrote:
>



>

>
>My original request
was for an ABM system in which rules were first class objects and could be
constructed and modified dynamically. Although your discussion casually
suggests that rules can be treated the same way as agents, you haven't
mentioned a system in which that was the case. Which system would you use to
implement your example? How, for example, can a rule alter itself over time?
I'm not talking about systems in which a rule modifies a field in a fixed
template. I'm talking about modifications that are more flexible.

>
>Certainly there are many examples in which rule modifications occur
within very limited domains. The various Prisoner's Dilemma systems in which the
rules combine with each other come to mind. But the domain of PD rules is very
limited.  
>
>Suppose you really wanted to do something along the
lines that your example suggests.  What sort of ABM system would you use?
How could a rule "randomly (or
non-randomly) generate a new contingency" in some way other than simply
plugging new values into a fixed template? As I've said, that's not what I want
to do.
>
>If you know of an ABM system that has a built-in Genetic
Programming capability for generating rules, that would be a good start. Do you
know of any such system?
>
>-- Russ
>
>
>
>



>
>
>
>On Mon, Aug 24, 2009 at 11:10 AM, ERIC P. CHARLES
<<#1234a8fe9728124b_>> wrote:
>

>Well, there are some ways of playing fast and loose with the metaphor.
There are almost always easy, but computationally non-elegant, ways to simulate
things like this. Remember, we have quotes because "rules" and "agents" are
just two classes of agents with different structures. 
>
>Some
options:
>1) The "rules" can alter themselves over time, as they can be
agents in a Darwinian algorithm or any other source of system level change you
want to impose. 
>2) The "rules" could accept instructions from the "agents"
telling them how to change. 
>3) The "agents" could adjust their responses to
commands given by the "rules" which effectively changes what the rule (now not
in quotes) does. 
>
>To get some examples, let's start with a "rule" that
says "when in a red patch, turn left". That is, in the starting conditions the
"agent" tells the rule it is in a red patch, the "rule" replies back "turn
left":
>1) Over time that particular "rule" could be deemed not-useful and
therefore done away with in some master way. It could either be replaced by a
different "rule", or there could just no longer be a "rule" about what to do in
red patches. 
>2) An "agent" in a red patch could for some reason no longer
be able to turn left. When this happens, it could send a command to the "rule"
telling the "rule" it needs to change, and the "rule" could randomly (or
non-randomly) generate a new contingency. 
>3) In the same situation, an
"agent" could simply modify itself to turn right instead; that is, when the
command "turn left" is received through that "rule" (or perhaps from any
"rule"), the "agent" now turns right. This is analogous to what happens at some
point for children when "don't touch that" becomes "touch that". The parents
persist in issuing the same command, but the rule (now not in quotes) has
clearly changed. 
>
>Either way, if you are trying to answer a question, I
think something like one of the above options is bound to work. If there is
some higher reason you are trying to do something in a particular way, or you
have reason to be worried about processor time, then it might not be exactly
what you are after. 
>
>Eric

>
>
>On Sun, Aug 23, 2009 05:18 PM,
Russ Abbott <<#1234a8fe9728124b_>>
wrote:
>



>

>
>Thanks Eric. It doesn't sound like your suggestion will do
what I want. I want to be able to create new rules dynamically as in rule
evolution. As I understand your scheme, the set of rule-agents is fixed in
advance.
>
>-- Russ 
>
>
>
>



>
>On Sun, Aug 23, 2009 at 8:30 AM, ERIC P. CHARLES
<<#1234a8fe9728124b_12349f6e507ad945_>> wrote:
>

>
>
>




>Russ, 
>I'm probably just saying this out of ignorance,
but... If you want to "really" do that, I'm not sure how to do so.... However,
given that you are simulating anyway... If you want to simulate doing that, it
seems
straightforward. Pick any agent-based simulation program, create two classes of
agents, call one class "rules" and the other "agents". Let individuals in the
"rules" class do all sorts of things to individuals in the "agents" class
(including controlling which other "rules" they accept commands from and how
they respond to those commands). 
>
>Not the most elegant solution in the
world, but it would likely be able to answer whatever question you want to
answer (assuming it is a question answering task you wish to engage in), with
minimum time spent banging your head against the wall programming it. My biases
(and lack of programming brilliance) typically lead me to find the simplest way
to simulate what I want, even if that means the computers need to run a little
longer. I assume there is some reason this would not be satisfactory?

>
>Eric
>
>

>
>
>
>On Sat, Aug 22,
2009 11:13 PM, Russ Abbott <<#1234a8fe9728124b_12349f6e507ad945_>>
wrote:
>



>
>
>

>
>Hi,
>
>I'm interested in developing a model that uses
rule-driven agents. I would like the agent rules to be condition-action rules,
i.e., similar to the sorts of rules one finds in forward chaining blackboard
systems. In addition, I would like both the agents and the rules themselves to
be first class objects. In other words, the rules should be able: 
>
>  to refer to agents,
>  to create and destroy agents,
>  to create new rules for newly created agents,
>  to disable rules for existing agents, and
>  to modify existing rules for existing agents.
>
>Does anyone know of a system like that?
>
>-- Russ 
>



>

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
