I think I know what you are talking about, but I'm not sure what the best way to model it would be, or what we would gain from the modeling exercise. Are you talking about something like this?
Institutional review boards (IRBs) oversee research that involves human participants. These bodies were formed in response to laxness/nastiness on the part of biomedical researchers, and their reach was later extended due to (perceived) laxness/nastiness on the part of social science researchers. At first, all they did was declare studies ethically alright, or not. Later, they were taken over by a number of outside forces, including universities' "risk-management" departments. Their main function is now to avoid lawsuits, with secondary functions of enforcing arbitrary bureaucratic rules and the arbitrary whims of committee members. Giving a "pass or fail" on ethics is, at best, a tertiary goal. To make things worse, the lawyers and the bureaucracy have done a lot to undermine the very semblance of ethical stricture they produce.

If this is the type of thing you are talking about, it seems an oddly complex thing to try to model, mostly because it is extremely open-ended. You need 1) agents with different agendas, 2) the ability to assess and usurp rules created by other agents, and 3) the ability to force other agents to adopt your rules. Note, also, that in this particular case the corruption is accomplished by stacking contradictory rules on top of each other. Thus you also need 4) an ability to implement contradictory rules, or at least to choose between so-called rules.

The bigger challenge seems to be figuring out how to build such a model without, in some essential way, pre-programming the outcome (for example, in the way you set agent agendas and allow agents to form new rules). What variables would be manipulated in the modeling space? What is to be discovered beyond "agents programmed to be self-interested act in their own best interest"? I'm also not sure what this has to do with agents that "actively obfuscate the participatory nature of the democratic decision." So... maybe I'm completely off base. Can you give a concrete example?
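For concreteness, the four ingredients listed above could be sketched very roughly in code. Everything here is an invented illustration, not a proposed implementation: the class names, the agenda functions, and the "most recent rule wins" tiebreak are all assumptions.

```python
class Rule:
    def __init__(self, author, predicate, verdict):
        self.author = author        # which agent created the rule
        self.predicate = predicate  # when the rule applies (case -> bool)
        self.verdict = verdict      # what it demands: "allow" or "forbid"

class Agent:
    def __init__(self, name, agenda):
        self.name = name
        self.agenda = agenda        # (1) each agent has its own agenda

    def likes(self, rule):
        # (2) agents assess rules created by other agents
        return self.agenda(rule)

    def impose(self, rulebook, rule):
        # (3) an agent forces its rule into the shared rulebook
        rulebook.append(rule)

def decide(rulebook, case):
    # (4) contradictory rules coexist in the stack; the decider simply
    # chooses among the applicable ones -- here, the most recently
    # imposed rule wins, which is itself an arbitrary (pre-programmed)
    # choice, illustrating the open-endedness problem above.
    applicable = [r for r in rulebook if r.predicate(case)]
    return applicable[-1].verdict if applicable else "no rule applies"

# Toy run: two agents stack contradictory rules over the same case.
rulebook = []
ethics = Agent("ethics", agenda=lambda r: r.verdict == "forbid")
legal = Agent("legal", agenda=lambda r: r.verdict == "allow")
ethics.impose(rulebook, Rule("ethics", lambda case: True, "forbid"))
legal.impose(rulebook, Rule("legal", lambda case: True, "allow"))
print(decide(rulebook, "risky study"))  # prints "allow": the later rule wins
```

Note how the outcome is baked in by the choice of tiebreak rule in `decide` — exactly the pre-programming worry raised above.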
Eric

On Sun, May 8, 2011 06:56 AM, Mohammed El-Beltagy <moham...@computer.org> wrote:

> Eric,
>
> That's an interesting way of looking at it. A complex game of information hiding.
>
> I was thinking along the lines of having a schema for rule creation. The schema here is like a constitution, and players can generate new rules based on that schema to promote their self-interest. For rules to become "laws" they have to be the choice of the majority (or subject to some other social choice mechanism). This system allows for group formation and coalition building to get the new rules passed into laws. The interesting bit is how the drive for self-interest among some of those groups and their coalitions can give rise to rules that render the original schema and/or the social choice mechanism ineffective. By "ineffective", I mean that they yield results and behavior that run counter to the purpose for which they were originally designed.
>
> What do you think?
>
> Cheers,
>
> Mohammed
>
> On Sun, May 8, 2011 at 2:44 AM, ERIC P. CHARLES <#> wrote:
>
>> I can't see that this posted, sorry if it is a duplicate
>>
>> Mohammed,
>> Being totally unqualified to help you with this problem... it seems interesting to me because most models I know of this sort (social systems models) are about information acquisition and deployment. That is, the modeled critters try to find out stuff, and then they do actions dependent upon what they find. If we are modeling active obfuscation, then we would be doing the opposite - we would be modeling an information-hiding game. Of course, there is lots of game theory work on information hiding in two-critter encounters (I'm thinking evolutionary-game-theory-looking-at-deception). I haven't seen anything, though, looking at distributed information hiding.
>> The idea that you could create a system full of autonomous agents in which information ends up hidden, but no particular individuals have done the hiding, is kind of cool. Seems like the type of thing encryption guys could get into (or already are into, or have already moved past).
>>
>> Eric
>>
>> On Fri, May 6, 2011 10:05 PM, Mohammed El-Beltagy <#> wrote:
>>
>>> I have a question I would like to pose to the group in that regard:
>>>
>>> Can we model/simulate how, in a democracy that is inherently open (as stated in the constitution: for the people, by the people, etc.), there emerge "decision masking structures" that actively obfuscate the participatory nature of democratic decision making for their own ends?
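The rule-creation schema Mohammed describes in the quoted message — rules proposed under a fixed constitution-like schema, becoming "laws" only by majority vote — could be sketched minimally as follows. The agent names, payoffs, and the strict-majority voting rule are all assumptions made for illustration, not part of anyone's actual proposal.

```python
def majority_vote(agents, proposal):
    """A proposal becomes law if a strict majority of agents favor it."""
    yes = sum(1 for a in agents if a["favors"](proposal))
    return yes * 2 > len(agents)

# Each agent favors proposals that raise its own payoff (pure self-interest).
agents = [
    {"name": "a1", "favors": lambda p: p["payoff"]["a1"] > 0},
    {"name": "a2", "favors": lambda p: p["payoff"]["a2"] > 0},
    {"name": "a3", "favors": lambda p: p["payoff"]["a3"] > 0},
]

# A coalition of a1 and a2 proposes a rule that benefits them at a3's
# expense. Even though the group as a whole is worse off (net payoff -1),
# the social choice mechanism passes it -- the "ineffectiveness" Mohammed
# describes, where legal outcomes defeat the schema's original purpose.
proposal = {"rule": "divert funds", "payoff": {"a1": 1, "a2": 1, "a3": -3}}
laws = []
if majority_vote(agents, proposal):
    laws.append(proposal["rule"])
print(laws)  # prints ['divert funds']
```

A richer version would let passed laws rewrite the voting mechanism itself, which is where the self-undermining dynamics would actually emerge.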
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org