From the article: "Artificial society modeling allows us to 'grow' social structures /in silico/, demonstrating that certain sets of microspecifications are /sufficient to generate/ the macrophenomena of interest."
The issue hinges on what "sufficient to generate" means for a particular model in terms of that model's explanatory power. I have come to suspect that it does not mean as much as is sometimes thought. There are questions about whether a model's description of the domain is unique and/or salient, whether the local dynamics are stationary, how we characterize the influence of the experimental design, what it means to validate the model, and so on. Assumptions about the answers to these questions can be as influential (and as hidden) as assumptions about rational economic actors.

To me, growing the model is a fine methodology, since it explicitly recognizes that we create models relative to a specific epistemological context. But we should recognize that for a large class of these models, one specification can generate many models (a kind of pleiotropy), and many different specifications may generate very similar models under certain conditions. A given ABM is less for explanation or prediction than for exploration and understanding; it helps (or fails to help) clarify the issues and concepts under consideration, relative to some space of such ABMs. That we can build a particular model that generates some expected social behavior does not necessarily mean that the model constitutes a complete explanation.

As we are coming to understand in developmental biology, knowing that a gene microspecification is associated with some macrophenomenal trait has minimal explanatory power on its own; it is also important to understand how the RNA works. In the same way, it would not be at all surprising to find that "rules" were not sufficient microspecifications for spaces of models of social behavior. Modeling complexity is itself a complex activity.

Carl

Pamela McCorduck wrote:
> What kind of explanation of social behavior would satisfy you?
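To make the "pleiotropy" point above concrete, here is a minimal, purely illustrative sketch. Everything in it (the 1-D Schelling-style model, the `run_schelling` function, its parameters, the adjacency-based segregation statistic) is my own invention for this list, not anything from the article: one microspecification, run under different random seeds, yields many distinct micro-histories while every run reports the same kind of macro-statistic.

```python
import random

def run_schelling(seed, n=60, steps=4000, threshold=0.5):
    """Toy 1-D Schelling-style model on a ring (illustrative only).
    Two agent types; an agent is unhappy when fewer than `threshold`
    of its two neighbors share its type, and an unhappy agent swaps
    places with a randomly chosen agent."""
    rng = random.Random(seed)
    grid = [i % 2 for i in range(n)]       # 50/50 mix of the two types
    rng.shuffle(grid)

    def happy(i):
        same = sum(grid[(i + d) % n] == grid[i] for d in (-1, 1))
        return same / 2 >= threshold

    for _ in range(steps):
        i = rng.randrange(n)
        if not happy(i):
            j = rng.randrange(n)
            grid[i], grid[j] = grid[j], grid[i]

    # One macro-statistic: fraction of adjacent pairs with matching types.
    return sum(grid[i] == grid[(i + 1) % n] for i in range(n)) / n

# Same microspecification, different seeds: many distinct runs ("models"),
# one family of macro-outcomes.
outcomes = [run_schelling(seed) for seed in range(5)]
print(outcomes)
```

The sketch is deterministic given a seed, which is the point: the "model" one grows is not the specification alone but the specification plus the experimental design (seeds, run length, statistic chosen), and nothing in the macro-number by itself tells you which micro-rule produced it.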
> On Jun 26, 2007, at 8:31 AM, Robert Holmes wrote:
>
>> Epstein has a new book and MIT Tech Review are running an article on artificial societies on the back of it:
>>
>> http://www.technologyreview.com/Infotech/18880/page1/
>>
>> And again, there's that old chestnut: these models explain, not predict. Do we still believe this? I agree - they do not predict, but do they even explain? I'm getting increasingly troubled about this whole notion that the rules the researcher puts in the agents actually have some sort of analog in actual people. Even when conclusions are presented as "this is AN explanation" not "this is THE explanation", I suspect that the ABM researcher is being somewhat optimistic.
>>
>> So what is the relationship between the rules in the artificial agents and the rules in real people?
>>
>> Robert
>>
>> ============================================================
>> FRIAM Applied Complexity Group listserv
>> Meets Fridays 9a-11:30 at cafe at St. John's College
>> lectures, archives, unsubscribe, maps at http://www.friam.org
>
> “Good judgment comes from experience, and experience – well, that comes from poor judgment.” A.A. Milne
