Russ,

"agent" is an overloaded word in our work. While there's overlap, I don't
think there will ever be a single definition to cover them all. I break
our use into two classes: software architecture design and discussions
around Agency (ie acting on its own or others behalf)

*Software Design and Architecture*
I use the term "agent" when in software design less about "agency" and is
more about communicating the software architecture pattern of minimal
centralized control through actors with simulated or actual concurrency.
While we are often interested in issues around agency, I think it's
important to preserve this use of "agent" in software without bringing in
 a second word like agency. Both are suitcase words
<https://alexvermeer.com/unpacking-suitcase-words/> ala Minsky.  Simulated
concurrency might have a scheduler issuing "step" or "go" events to these
"agents" but we try to minimize any global centralized coordinator of logic
and we expect coordination to emerge from the interaction of the agents (eg
flocking, ising or ant foraging model). The term agent is used to
distinguish from other approaches like object-oriented, procedural and
functional. While agents are certainly implemented with objects, procedural
and functional patterns we tend to mean the agents are semi-autonomous in
their actions. Pattie Maes in the 90s described agents as objects that can
say "no" :-) Relatedly, Uri Wilensky stresses the use of "ask" to request
the action of another agent without the ability do directly do so. This use
of "ask" was locked into the api in later versions of Netlogo.

   1. agents in agent-based modeling, which in NetLogo are turtles, links,
   and patches, or in other frameworks might be Lagrangian particles,
   Eulerian cells, and links/edges. I call these lowercase-"a" agents. Often
   we focus on the interaction behaviors between many lightweight agents
   and less on their internal logic; I often say ABM might be better termed
   Interaction-Based Modeling. Interactions are often hybrid between
   turtles, links, and patches (a minimal sketch of such a hybrid
   interaction follows after this list).
   2. agents in multi-agent systems and distributed AI. It's a rough
   distinction, but here the agents tend to be heavier on internal processes
   and less focused on the interactions. It's less a technical distinction
   and more about the communities of researchers and developers.
   3. agent-oriented programming: similar to 1 and 2, but the agents are
   deployed, sensing and acting in the world (e.g., pan-tilt-zoom cameras on
   mountain tops watching for wildfire and coordinating with a network of
   other cameras and tracked resources). Here, we use agent-oriented
   programming to distinguish it from
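
To make the hybrid turtle/patch interaction in 1 concrete, here is a
made-up minimal example (not NetLogo's actual API; the grid size, field
name, and evaporation rate are just illustrative): an ant-foraging-style
step in which turtles deposit onto, and then climb, a pheromone field held
by the patches.

    import random

    GRID = 10
    pheromone = [[0.0] * GRID for _ in range(GRID)]      # the patch field
    turtles = [{"x": random.randrange(GRID), "y": random.randrange(GRID)}
               for _ in range(20)]

    def step():
        # Turtle-patch interaction: deposit, then climb the local gradient.
        for t in turtles:
            pheromone[t["x"]][t["y"]] += 1.0             # deposit on my patch
            neighbors = [((t["x"] + dx) % GRID, (t["y"] + dy) % GRID)
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
            t["x"], t["y"] = max(neighbors,
                                 key=lambda p: pheromone[p[0]][p[1]])
        # Patch-patch interaction: a simple Eulerian evaporation update.
        for x in range(GRID):
            for y in range(GRID):
                pheromone[x][y] *= 0.9

    for _ in range(100):
        step()

Even in this toy, the trail-following "logic" lives in the turtle-patch
interaction, not inside any single agent.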

*Agency / Teleologic / Teleonomic*

   1. Autonomous Agents - when speaking in this context with collaborators,
   I often say capital-"A" Agents. Here we're in the realm of emergent
   Agency à la Stu Kauffman's autonomous agents from his 2000 book
   Investigations. A short summary article:
   <https://redfish.com/papers/Kauffman_autonomous-agents.pdf>. Stu's
   autonomous agent was his stab at defining a living system.
   2. Personal Software Agents - these are related to agent-oriented
   programming above but also take on Agency in the sense of acting on your
   behalf; e.g., camera agents and location agents that monitor your private
   cameras and GPS and coordinate with other agents, sharing information but
   not the raw data, for collective intelligence and collective action.
   3. Structure-Agency: the bidirectional feedback in sociology and social
   theory concerning the degree to which individuals' independent actions
   (agency) are influenced or constrained by societal patterns and
   structures, and how those structures are in turn created by the agents.
   4. Principal-Agent: in economics and contract theory, where one party
   (the agent) is expected to act in the best interest of another party
   (the principal); e.g., divorce lawyers or sports agents negotiating on
   behalf of their clients, where they can expose private preferences to
   the other agent to find the best terms under rules of nondisclosure and
   professional conduct, without revealing the private data to either
   client. This also relates to the principal-agent problem, where there is
   the potential or incentive for the agent to act in its own self-interest
   instead; e.g., a real estate agent representing the buyer but wanting to
   maximize the sales price and commission, or a corporate executive
   maximizing salary or stability of employment versus the goals of the
   shareholders. There is an obvious need here to expand from shareholders
   to stakeholders (employees, customers, community).
   5. Agents as ecological emergents, in relation to extremum principles
   like the Principle of Stationary Action (the textbook statements are
   sketched after this list). I will often talk about the emergent
   cognition of the ant-foraging system as a whole as an uppercase-"A"
   Agent. As mentioned on the list before, when we look at multiple
   interacting fields with derivatives of action, with concentrations in
   one field driving symmetry breaking and structure formation in a second
   field, we can use teleonomic language, like saying the purpose of a
   Bénard convection cell is to dissipate the temperature gradient through
   mass structure faster than unorganized flow would. Here, the full system
   is the Agent, not the individual oil molecules. Note that the stationary
   action principle still needs to be extended to non-equilibrium symmetry
   breaking of coupled fields, but if you take the full system into
   account, stationary action still governs across the full system. I hope
   for a revised Noether's theorem in which broken symmetries with respect
   to dual Lagrangians of action imply non-conserved quantities
   (dissipation).
   6. Teleonomic Material: the latest usage, by David Krakauer in
   summarizing Complexity on Sean Carroll's recent podcast
   <https://www.preposterousuniverse.com/podcast/2023/07/10/242-david-krakauer-on-complexity-agency-and-information/>.
   Hurricanes, flocks, and Bénard cells, according to David, are not
   Complex, BTW. I find the move a little frustrating and disappointing,
   but I always respect his perspective.
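
For reference on 5, the textbook statements I'm hoping to extend (standard
classical mechanics, not my proposed non-equilibrium version), in the
simplest case where the Lagrangian itself is invariant under the symmetry:

    % Stationary action: the realized path q(t) makes the action stationary
    \delta S = \delta \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt = 0

    % Noether: a continuous symmetry q -> q + \epsilon \delta q that leaves
    % L invariant implies a conserved quantity
    Q = \frac{\partial L}{\partial \dot{q}}\, \delta q ,
    \qquad \frac{dQ}{dt} = 0

The revision I'm hoping for would relate the breaking of such a symmetry
across coupled fields to a definite, nonzero dQ/dt, i.e., dissipation,
rather than to conservation.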


-Stephen


On Fri, Jul 14, 2023 at 1:01 PM Russ Abbott <russ.abb...@gmail.com> wrote:

> I was recently wondering about the informal distinction we make between
> things that are agents and things that aren't.
>
> For example, I would consider most living things to be agents. I would
> also consider many computer programs when in operation as agents. The most
> obvious examples (for me) are programs that play games like chess.
>
> I would not consider a rock an agent -- mainly because it doesn't do
> anything, especially on its own. But a boulder crashing down a hill and
> destroying something at the bottom is reasonably called "an agent of
> destruction." Perhaps this is just playing with words: "agent" can have
> multiple meanings.  A writer's agent represents the writer in
> negotiations with publishers. Perhaps that's just another meaning.
>
> My tentative definition is that an agent must have access to energy, and
> it must use that energy to interact with the world. It must also have some
> internal logic that determines how it interacts with the world. This final
> condition rules out boulders rolling down a hill.
>
> But I doubt that I would call a flashlight (with an on-off switch) an
> agent even though it satisfies my definition.  Does this suggest that an
> agent must manifest a certain minimal level of complexity in its
> interactions? If so, I don't have a suggestion about what that minimal
> level of complexity might be.
>
> I'm writing all this because in my search for a characterization of agents
> I looked at the article on Agency
> <https://plato.stanford.edu/archives/win2019/entries/agency/> in the *Stanford
> Encyclopedia of Philosophy.* I found that article almost a parody of the
> "armchair philosopher." Here are the first few sentences from the article
> overview.
>
> In very general terms, an agent is a being with the capacity to act, and
> ‘agency’ denotes the exercise or manifestation of this capacity. The
> philosophy of action provides us with a standard conception and a standard
> theory of action. The former construes action in terms of intentionality,
> the latter explains the intentionality of action in terms of causation by
> the agent’s mental states and events.
>
>
> That seems to me to raise more questions than it answers. At the same
> time, it seems to limit the notion of *agent* to things that can have
> intentions and mental models.  (To be fair, the article does consider the
> possibility that there can be agents without these properties. But those
> discussions seem relatively tangential.)
>
> Apologies for going on so long. Thanks, Frank, for opening this can of
> worms. And thanks to the others who replied so far.
>
> -- Russ Abbott
> Professor Emeritus, Computer Science
> California State University, Los Angeles
>
>
>
> On Fri, Jul 14, 2023 at 8:33 AM Frank Wimberly <wimber...@gmail.com>
> wrote:
>
>> Joe Ramsey, who took over my job in the Philosophy Department at
>> Carnegie Mellon, posted the following on Facebook:
>>
>> I like Neil DeGrasse Tyson a lot, but I saw him give a spirited defense
>> of science in which he oddly gave no credit to philosophers at all. His
>> straw man philosopher is a dedicated *armchair* philosopher who spins
>> theories without paying attention to scientific practice and contributes
>> nothing to scientific understanding. He misses that scientists themselves
>> are constantly raising obviously philosophical questions and are often
>> ill-equipped to think about them clearly. What is the correct
>> interpretation of quantum mechanics? What is the right way to think about
>> reductionism? Is reductionism the right way to think about science? What is
>> the nature of consciousness? Can you explain consciousness in terms of
>> neuroscience? Are biological kinds real? What does it even mean to be real?
>> Or is realism a red herring; should we be pragmatists instead? Scientists
>> raise all kinds of philosophical questions and have ill-informed opinions
>> about them. But *philosophers* try to answer them, and scientists do pay
>> attention to the controversies. At least the smart ones do.
>>
>> ---
>> Frank C. Wimberly
>> 140 Calle Ojo Feliz,
>> Santa Fe, NM 87505
>>
>> 505 670-9918
>> Santa Fe, NM
>>
>
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
