Re: [FRIAM] What is an agent [was: Philosophy and Science}

2023-07-17 Thread Roger Critchlow
On Mon, Jul 17, 2023 at 2:35 PM David Eric Smith 
wrote:

> [...] [Yoshi Oono's The Nonlinear World]
> in which he argues that the phenomena you mention are only
> “pseudo-complex”.  Yoshi, like David but with less of the predictable
> “Darwin-was-better; now what subject are we discussing today?” vibe, argues
> that there is a threshold to “true complexity” that is only crossed in
> systems that obey what Yoshi calls a “Pasteur principle”; they are of a
> kind that effectively can’t emerge spontaneously, but can evolve from
> ancestors once they exist.  He says (translating slightly from his words to
> mine) that such systems split the notion of “boundary conditions” into two
> sub-kinds that differ qualitatively.  There are the “fundamental
> conditions” (in biology, the contents of genomes with indefinitely deep
> ancestry), that mine an indefinite past sparsely and selectively, versus
> ordinary “boundary conditions”, which are the dense here-and-now.  The
> fundamental conditions often provide criteria that allow the complex thing
> to respond to parts of the here-and-now, and ignore other parts, feeding
> back onto the update of the fundamental conditions.
>
> I don’t know when I will get time to listen to David’s appearance with
> Sean, so with apologies cannot know whether his argument is similar in its
> logic.  But Yoshi’s framing appeals to me a lot, because it is like a kind
> of spontaneous symmetry breaking or ergodicity breaking in the
> representations of information and how they modulate the networks of
> connection to the space-time continuum.  That seems to me a very fertile
> idea.  I am still looking for some concrete model that makes it compelling
> and useful for something I want to solve.  (I probably have written this on
> the list before, in which case apologies for being repetitive.  But this
> mention is framed specifically to your question whether one should be
> disappointed in the demotion of the complexity in phenomena.)
> [...]
>
> On Jul 18, 2023, at 4:37 AM, Stephen Guerin 
> wrote:
>
> [...]
>
>1. Teleonomic Material: the latest use by David Krakauer on Sean
>Carroll's recent podcast
>in summarizing Complexity. Hurricanes, flocks and Benard Cells according to
>David are not Complex, BTW. I find the move a little frustrating
>and disappointing but I always respect his perspective.
>
> Okay, I listened to the podcast.

DK says that real complexity starts with teleonomic matter, also known as
particles that think.  He says that such agents carry around some
representation of the external world.  And then the discussion gets
distracted to other topics, at one point getting to "large language model
paper clip nightmares".

My response to Eric's description of Oono's  "Pasteur principle" was that
it sounds a lot like "Attention Is All You Need" (
https://arxiv.org/pdf/1706.03762.pdf), the founding paper of the
Transformer class of neural network models.

The "fundamental conditions" in a Transformer would be the trained neural
net which specifies the patterns of attention and responses learned during
training.  The "ordinary conditions" would be the input sequence given to
the Transformer.  The Transformer breaks up the input sequence into
attention patterns, evaluates the response to the current set of input
values selected by the attention patterns,  emits an element to the output
sequence, and advances the input cursor.
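That loop can be made concrete with a toy numeric sketch (pure Python, hand-picked scalar weights standing in for a trained net; a real Transformer uses learned vector embeddings and multi-head attention, so treat every name and number here as illustrative only):

```python
import math

# Toy sketch of the analogy: the fixed parameters play the role of Oono's
# "fundamental conditions" (what training baked in), while the input
# sequence supplied at run time plays the "ordinary conditions".

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(query, keys, values):
    """Dot-product attention reduced to scalars: a weighted average of
    values, with weights set by how well each key matches the query."""
    weights = softmax([query * k for k in keys])
    return sum(w * v for w, v in zip(weights, values))

def generate(seq, params, n_new):
    """Autoregressive loop: attend over the context, emit an element,
    advance the cursor; repeat."""
    seq = list(seq)
    for _ in range(n_new):
        q = seq[-1] * params["w_q"]            # query from current position
        ks = [x * params["w_k"] for x in seq]  # keys from the whole context
        vs = [x * params["w_v"] for x in seq]  # values likewise
        seq.append(attend(q, ks, vs))          # emit the next element
    return seq

params = {"w_q": 1.0, "w_k": 1.0, "w_v": 1.0}  # the "fundamental conditions"
out = generate([0.1, 0.5, 0.9], params, 2)     # input = "ordinary conditions"
print(len(out))  # 5
```

Each emitted element is a convex combination of the values the attention pattern selects, which is the sense in which the trained parameters decide what in the here-and-now to respond to and what to ignore.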

Anyone else see the family resemblance here?

-- rec --
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] What is an agent [was: Philosophy and Science}

2023-07-17 Thread Russ Abbott
Hi all,

I asked what I thought, naively, was a fairly simple question, namely
something like Nick's question (2): "*What are the conditions that require
us to identify something [as] an agent?*" I wasn't intending to be
prescriptive and wouldn't have used "require us." I was more musing to
myself: What properties/conditions would lead me to label something an
agent? If I/we came up with something that felt satisfying, I would then
want to know whether others would be similarly inclined to label things
that satisfied those properties, and only those things, as agents.

I now feel like I'm being offered enormous pools of sophisticated
scholarship. As Eric says in a message that arrived as I was writing this,
there is so much good stuff in your replies that I feel overwhelmed by the
amount of work I would have to do to reply intelligently. I almost wish we
were carrying on this conversation via Twitter (or Threads). I could
probably handle posts of 256 characters!  With that as a preliminary
disclaimer, here are some Tweet-like comments.

*Glen,* I deliberately didn't include a finger/hand/arm to press the
flashlight's button. The button press in my view serves as an external
event that the flashlight "senses" and to which it responds by turning on
its light. Also, I wouldn't insist that agents have the means to replenish
their energy supplies. All agents (in my view) expend energy, which must
be renewed if they are to continue to act. Since there are so many ways
energy supplies can be recharged, the particular way that serves a
particular agent is, in my view, a necessary but relatively unimportant
detail.

*Nick,* as I said, I intended something like your question 2. I'm not
asking an agent to explain itself or to explain how it acquired the means
to act as an agent.

*Dave,* I wouldn't require the properties I'm seeking as defining an agent
to reflect only externally observable behaviors. In fact, I would expect
any collection of agent-defining properties to include something about what
goes on in the agent. I'm not sure why you want to exclude such properties.

*Stephen,* My interest is not in things that are agents because they act on
behalf of something else. I would categorize agents in agent-based models
as simulated agents. Most agent-based models ignore the energy agents
expend and how it is renewed. Software agents other than ABM agents seem to
me to be objects (as in object-oriented programming) with internal
threads--which enable them to act on their own. That seems like an
important distinction. Objects can act in response to external triggers,
i.e., calls to their interfaces, but only objects with internal threads
have the means to initiate actions autonomously. I would consider the
agents (e.g., turtles, etc.) in NetLogo valid agents. They have the
equivalent of threads that are run every clock tick. They don't require
that other agents interact with them in order to act.
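The object/agent distinction drawn above can be sketched in a few lines of Python (class names are hypothetical; a real ABM runtime would be more elaborate):

```python
import threading
import time

class PlainObject:
    """Acts only in response to external triggers (calls to its interface)."""
    def __init__(self):
        self.actions = 0
    def poke(self):              # an external caller must invoke this
        self.actions += 1

class ThreadedAgent:
    """An object with an internal thread: it initiates actions on its own,
    roughly like a NetLogo turtle stepped every clock tick."""
    def __init__(self):
        self.actions = 0
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()
    def _run(self):
        while not self._stop.is_set():
            self.actions += 1    # acts without anyone asking
            time.sleep(0.01)
    def stop(self):
        self._stop.set()
        self._thread.join()

obj, agent = PlainObject(), ThreadedAgent()
obj.poke()                       # the object acts once, because we made it
time.sleep(0.05)                 # the agent keeps acting on its own
agent.stop()
print(obj.actions, agent.actions >= 1)  # 1 True
```

The object's action count only moves when an outside caller pokes it; the agent's count moves because of its own internal thread.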

-- Russ


On Mon, Jul 17, 2023 at 1:00 PM Nicholas Thompson 
wrote:

> By the way, not all designers are individuals.  Foxes design the behavior
> of rabbits and rabbits design the behavior of foxes, but I wouldn't be
> quick to call foxes an individual or rabbits an individual.  Natural
> selection designs but it is not itself designed to do so.
>
> On Mon, Jul 17, 2023 at 2:05 PM Nicholas Thompson 
> wrote:
>
>> Hi, Russ,
>>
>> I have a non-scientist friend to whom I sometimes show my posts here for
>> guidance.  I showed him some recent posts and he wrote back, "Wow, Nick!
>> You are really swinging for the fences, here!"  He and I know that one who
>> swings for the fences, rarely hits the ball, let alone the fences.
>>
>> So please can we proceed in little tiny steps.
>>
>> You raise the question: *what makes an agent?*
>>
>> This expression is ambiguous in just the way I was trying to highlight in
>> my response:
>>
>> It could mean, *(1) What are the conditions that bring an agent into
>> being? *
>>
>> Or it could mean, *(2) What are the conditions that require us to
>> identify something as an agent?*
>>
>> The first (I think) is the explanatory question; the second, the
>> descriptive question.   Wittgenstein was said to have said that something
>> cannot be its own explanation, and I believed him.  Whatever else might be
>> said about the relation between explanations and descriptions is that
>> descriptions are states of affairs taken for granted by explanations.  If
>> you ask me why the chicken crossed the road, my answering your question
>> commits me to the premise that the chicken did indeed cross the road.
>>
>> A definition is *explanatory* when it describes a process which
>> explains something else and which, itself, is in need of explanation.
>>
>> So:  Can I come back to you with a question?   Which of the two meanings
>> did you intend?  And if you were looking to define agents in terms of the
>> internal mechanism that makes agency possible, what precisely is the state
>> of affairs, behavior, 

Re: [FRIAM] What is an agent [was: Philosophy and Science}

2023-07-17 Thread David Eric Smith
Stephen, 

Too much good here for me almost even to be able to read in scarce time, but on 
your final point 6, about whether various dissipative structures are complex 
or not, and by what measure:

Do you know Yoshi Oono’s wonderful idiosyncratic book The Nonlinear World?
https://link.springer.com/book/10.1007/978-4-431-54029-8
The Nonlinear World
link.springer.com

I believe it’s the final chapter (Toward complexity), which apparently one can 
just download:
https://link.springer.com/content/pdf/10.1007/978-4-431-54029-8_5?pdf=chapter%20toc
Toward Complexity
link.springer.com

in which he argues that the phenomena you mention are only “pseudo-complex”.  
Yoshi, like David but with less of the predictable “Darwin-was-better; now what 
subject are we discussing today?” vibe, argues that there is a threshold to 
“true complexity” that is only crossed in systems that obey what Yoshi calls a 
“Pasteur principle”; they are of a kind that effectively can’t emerge 
spontaneously, but can evolve from ancestors once they exist.  He says 
(translating slightly from his words to mine) that such systems split the 
notion of “boundary conditions” into two sub-kinds that differ qualitatively.  
There are the “fundamental conditions” (in biology, the contents of genomes 
with indefinitely deep ancestry), that mine an indefinite past sparsely and 
selectively, versus ordinary “boundary conditions”, which are the dense 
here-and-now.  The fundamental conditions often provide criteria that allow the 
complex thing to respond to parts of the here-and-now, and ignore other parts, 
feeding back onto the update of the fundamental conditions.  

I don’t know when I will get time to listen to David’s appearance with Sean, so 
with apologies cannot know whether his argument is similar in its logic.  But 
Yoshi’s framing appeals to me a lot, because it is like a kind of spontaneous 
symmetry breaking or ergodicity breaking in the representations of information 
and how they modulate the networks of connection to the space-time continuum.  
That seems to me a very fertile idea.  I am still looking for some concrete 
model that makes it compelling and useful for something I want to solve.  (I 
probably have written this on the list before, in which case apologies for 
being repetitive.  But this mention is framed specifically to your question 
whether one should be disappointed in the demotion of the complexity in 
phenomena.)


Sorry for such a long email.  I thought this one would be short.  I haven’t 
tried to answer Russ yet because I expected that one to be long, and cannot 
yet….

Eric

> On Jul 18, 2023, at 4:37 AM, Stephen Guerin  
> wrote:
> 
> Russ,
> 
> "agent" is an overloaded word in our work. While there's overlap, I don't 
> think there will ever be a single definition to cover them all. I break our 
> use into two classes: software architecture design and discussions around 
> Agency (i.e., acting on its own or another's behalf)
> 
> Software Design and Architecture
> My use of the term "agent" in software design is less about "agency" and 
> more about communicating the software architecture pattern of minimal 
> centralized control through actors with simulated or actual concurrency. 
> While we are often interested in issues around agency, I think it's important 
> to preserve this use of "agent" in software without bringing in  a second 
> word like agency. Both are suitcase words 
>  ala Minsky.  Simulated 
> concurrency might have a scheduler issuing "step" or "go" events to these 
> "agents" but we try to minimize any global centralized coordinator of logic 
> and we expect coordination to emerge from the interaction of the agents (e.g., 
> flocking, Ising, or ant-foraging models). The term agent is used to distinguish 
> from other approaches like object-oriented, procedural and functional. While 
> agents are certainly implemented with objects, procedural and functional 
> patterns we tend to mean the agents are semi-autonomous in their actions. 
> Pattie Maes in the 90s described agents as objects that can say "no" :-) 
> Relatedly, Uri Wilensky stresses the use of "ask" to request the action of 
> another agent without the ability to do so directly. This use of "ask" was 
> locked into the API in later versions of NetLogo.
> agents in agent-based modeling which in Netlogo are turtles, links and 
> patches. Or in other frameworks might be lagrangian particles and eulerian 
> cells and links/edges. I call these lowercase "a" agents. Often we focus on 
> the interaction behaviors between many lightweight agents and less on 
> internal logic. I often say ABM might be better termed Interaction Based 
> Modeling. Interactions are often hybrid between turtles, links and patches.
> agents in multi-agent systems and distributed AI. It's a rough distinction 
> but here the agents tend to be heavier on internal processes and less focused 
> on the interactions. It's less 

Re: [FRIAM] What is an agent [was: Philosophy and Science}

2023-07-17 Thread Nicholas Thompson
By the way, not all designers are individuals.  Foxes design the behavior
of rabbits and rabbits design the behavior of foxes, but I wouldn't be
quick to call foxes an individual or rabbits an individual.  Natural
selection designs but it is not itself designed to do so.

On Mon, Jul 17, 2023 at 2:05 PM Nicholas Thompson 
wrote:

> Hi, Russ,
>
> I have a non-scientist friend to whom I sometimes show my posts here for
> guidance.  I showed him some recent posts and he wrote back, "Wow, Nick!
> You are really swinging for the fences, here!"  He and I know that one who
> swings for the fences, rarely hits the ball, let alone the fences.
>
> So please can we proceed in little tiny steps.
>
> You raise the question: *what makes an agent?*
>
> This expression is ambiguous in just the way I was trying to highlight in
> my response:
>
> It could mean, *(1) What are the conditions that bring an agent into
> being? *
>
> Or it could mean, *(2) What are the conditions that require us to
> identify something as an agent?*
>
> The first (I think) is the explanatory question; the second, the
> descriptive question.   Wittgenstein was said to have said that something
> cannot be its own explanation, and I believed him.  Whatever else might be
> said about the relation between explanations and descriptions is that
> descriptions are states of affairs taken for granted by explanations.  If
> you ask me why the chicken crossed the road, my answering your question
> commits me to the premise that the chicken did indeed cross the road.
>
> A definition is *explanatory* when it describes a process which explains
> something else and which, itself, is in need of explanation.
>
> So:  Can I come back to you with a question?   Which of the two meanings
> did you intend?  And if you were looking to define agents in terms of the
> internal mechanism that makes agency possible, what precisely is the state
> of affairs, behavior, what-have-you, that such agents are called upon to
> explain?
>
> For me agency is design in behavior, and an agent is an individual whose
> behavior is designed.  All of this has to be worked out before your
> explanatory question becomes relevant, What is the neural mechanism by
> which such designs come about?
>
> nick
>
>
>
> On Sun, Jul 16, 2023 at 3:18 PM Russ Abbott  wrote:
>
>> Nick,
>>
>> I just asked Eric for examples. Your examples confuse me because I don't
>> see how you relate them to agenthood. Are you really suggesting that you
>> think of waves and puddles as agents? My suggestion was that you need some
>> sort of internal decision-making mechanism to qualify as an agent.
>>
>> I don't know anything about the carotid sinus.
>>
>> Your thermostat example strikes me as similar to my flashlight example. I
>> might put it as: a thermostat senses the temperature and twiddles the controls
>> of the heating/AC units in response.
>>
>> I'm not sure where you are going by labeling my discussion explanatory. I
>> wasn't thinking that I was explaining anything, other, perhaps, than my
>> intuition of what makes an agent.
>>
>> -- Russ
>>
>>
>> On Fri, Jul 14, 2023 at 8:06 PM Nicholas Thompson <
>> thompnicks...@gmail.com> wrote:
>>
>>> Some examples I like to think about:
>>>
>>> Waves arrange pebbles on a beach from small to large
>>>
>>> A puddle maintains its temperature at 32 degrees as long as it has ice
>>> in it.
>>>
>>> The carotid sinus maintains the acidity of the blood by causing us to
>>> breathe more oxygen when it gets too acid.  (I hope I have that right.)
>>>
>>> An old-fashioned thermostat maintains the temperature of a house by
>>> maintaining the level of a vial of mercury attached to a bi-metallic coil.
>>>
>>> Russ, the objection I would have with your definition is that it is
>>> explanatory.   An explanatory definition identifies a phenomenon with its
>>> causes, bypassing the phenomenon that raises the need for an explanation
>>> in the first place.   What is the relation between agents and their
>>> surroundings that makes them seem agentish?  Having answered that question,
>>> your explanation now comes into play.
>>>
>>> The thing about the above examples that makes them all seem agenty is
>>> that they keep bringing the system back to the same place.  The thing about
>>> them that makes them seem less agenty is that they have only one means to
>>> do so. Give that thermostat a solar panel, and a heat pump, and an oil
>>> furnace and have it switch from one to the other as circumstances vary, now
>>> the thermostat becomes much more agenty.
>>>
>>> Does that make any sense?  I think the nastiest problems here are (1)
>>> keeping the levels of organization straight and (2) teasing out the
>>> individual that is the agent.
>>>
>>> Nick
>>>
>>> On Fri, Jul 14, 2023 at 7:29 PM Russ Abbott 
>>> wrote:
>>>
 I'm not sure what "closure to efficient cause" means. I considered
 using as an example an outdoor light that charges itself (and stays off)
 during the day and goes on 

Re: [FRIAM] What is an agent [was: Philosophy and Science}

2023-07-17 Thread Stephen Guerin
Russ,

"agent" is an overloaded word in our work. While there's overlap, I don't
think there will ever be a single definition to cover them all. I break
our use into two classes: software architecture design and discussions
around Agency (i.e., acting on its own or another's behalf)

*Software Design and Architecture*
My use of the term "agent" in software design is less about "agency" and
more about communicating the software architecture pattern of minimal
centralized control through actors with simulated or actual concurrency.
While we are often interested in issues around agency, I think it's
important to preserve this use of "agent" in software without bringing in
a second word like agency. Both are suitcase words à la Minsky.  Simulated
concurrency might have a scheduler issuing "step" or "go" events to these
"agents", but we try to minimize any global centralized coordinator of logic
and we expect coordination to emerge from the interaction of the agents (e.g.,
flocking, Ising, or ant-foraging models). The term agent is used to
distinguish from other approaches like object-oriented, procedural and
functional. While agents are certainly implemented with objects, procedural,
and functional patterns, we tend to mean that the agents are semi-autonomous in
their actions. Pattie Maes in the 90s described agents as objects that can
say "no" :-) Relatedly, Uri Wilensky stresses the use of "ask" to request
the action of another agent without the ability to do so directly. This use
of "ask" was locked into the API in later versions of NetLogo.
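A minimal sketch of that pattern (class and method names are hypothetical, and NetLogo's actual scheduler also randomizes agent order each tick): a scheduler that only issues ticks, agents that carry their own behavior, and an `ask` the asked agent is free to refuse:

```python
class Turtle:
    """Lowercase-"a" agent: lightweight state plus per-tick behavior."""
    def __init__(self, name, energy):
        self.name, self.energy = name, energy

    def step(self):
        self.energy -= 1                 # autonomous per-tick action

    def ask(self, action):
        """Request an action; the agent may say "no" (Maes's phrase)."""
        if action == "share" and self.energy > 5:
            self.energy -= 5
            return True                  # the agent agrees
        return False                     # the agent refuses

class Scheduler:
    """The only centralized piece: it issues "go" ticks and nothing else;
    all logic lives in the agents."""
    def __init__(self, agents):
        self.agents = agents

    def go(self, ticks):
        for _ in range(ticks):
            for a in self.agents:        # NetLogo would shuffle this order
                a.step()

turtles = [Turtle("t0", 10), Turtle("t1", 3)]
Scheduler(turtles).go(2)                 # two ticks: each turtle loses 2
print([t.energy for t in turtles])       # [8, 1]
print(turtles[0].ask("share"), turtles[1].ask("share"))  # True False
```

Coordination (who ends up with energy, who shares) emerges from the agents' local rules; the scheduler never decides anything for them.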

   1. agents in agent-based modeling, which in NetLogo are turtles, links,
   and patches, or in other frameworks might be Lagrangian particles,
   Eulerian cells, and links/edges. I call these lowercase-"a" agents. Often we
   focus on the interaction behaviors between many lightweight agents and less
   on internal logic. I often say ABM might be better termed Interaction-Based
   Modeling. Interactions are often hybrid between turtles, links, and patches.
   2. agents in multi-agent systems and distributed AI. It's a rough
   distinction but here the agents tend to be heavier on internal processes
   and less focused on the interactions. It's less a technical distinction and
   more about the communities of researchers and developers.
   3. agent-oriented programming: similar to 1 and 2, but the agents are
   deployed, sensing and acting in the world (e.g., Pan-Tilt-Zoom cameras on
   mountaintops watching for wildfire and coordinating with a network of
   other cameras and tracked resources). Here, we use agent-oriented
   programming to distinguish it from

*Agency / Telelogic / Teleonomic*

   1. Autonomous Agents - when speaking in this context I often say
   capital-"A" Agents with collaborators. Here we're in the realm of emergent
   Agency à la Stu's Autonomous Agents from his 2000 Investigations (short
   summary article). Stu's autonomous agent was his stab at defining a living
   system.
   2. Personal Software Agents - these are related to agent-oriented
   programming above but also take on Agency as acting on your behalf: e.g.,
   your camera agents and location agents that monitor your private cameras
   and GPS and coordinate with other agents, sharing information but not the
   raw data, for collective intelligence and collective action.
   3. Structure-Agency: the bidirectional feedback in sociology and social
   theory concerning the degree to which individuals' independent actions
   (agency) are influenced or constrained by societal patterns and structures,
   and how those structures are in turn created by the Agents.
   4. Principal-Agent: in economics and contract theory, where one party
   (the agent) is expected to act in the best interest of another party (the
   principal), e.g., divorce lawyers or sports agents negotiating on behalf of
   their clients, where they can expose private preferences to the other agent
   to find the best terms under rules of nondisclosure and professional conduct
   without revealing private data to either of the clients. This can also
   relate to the Principal-Agent problem, where there is the potential or
   incentive for the agent to act in its own self-interest instead: e.g., a
   real-estate agent representing the buyer but wanting to maximize sale price
   and commission, or a corporate executive maximizing salary or stability of
   employment vs. the goals of the shareholders. There is an obvious need here
   to expand to stakeholders (employees, customers, community), not just
   shareholders.
   5. Agents as ecological emergents in relation to extremum principles
   like the Principle of Stationary Action. I will often talk about the
   emergent cognition of the ant-foraging system as a whole as an
   uppercase-"A" Agent. As mentioned on the list before, when we look at
   multiple interacting fields with derivatives of action with concentrations
   in one field driving

Re: [FRIAM] What is an agent [was: Philosophy and Science}

2023-07-17 Thread Prof David West
Where angels fear to tread, dave rushes in.

Question 1) seems, to me, to be nonsensical; or hopelessly anthropocentric; or, 
unanswerable in any generalized or abstract form.

Paraphrasing question 2) — what set of observables (behaviors) must be present 
before We/I can assert, "*that thing is an agent*." Assuming such a set 
exists: a) it tells me nothing about the "internals" of the thing; and b) it 
tells me little about anything except how We (assuming some consistency among 
all human beings) go about naming / categorizing things. I would also bet money 
that any such set is culturally grounded and that it is unlikely that any 
"universal" set exists. Certainly no "universal" set shared by humans and our 
elusive alien neighbors.

If we were to examine the inhabitants of any set of things that came about via 
question 2), why would we expect any commonality among the "conditions" 
(state?? characteristics?? patterns of same???) internal to each member? 
Granted, there might be subsets of the set (e.g. all instances of a human 
being, or a dog, or, for some, an AI) where we would expect and find some kind 
of, at least, statistical commonality. I say statistical because there are 
always outliers and exceptions.

Another issue, implied by the way question 1) is phrased, concerns the 
possibility of knowing the train of events, steps in an evolutionary process, 
engaging the "internals" of an entity as it proceeds from non-agent to 
proto-agent to agent. How can this be anything other than idiosyncratic?

As to explanation vs. description: given any "description," the number of 
"explanations" is infinite—or, at least, co-extensive with the number of 
"explainers." No matter what Peirce might hope, consensus is unlikely.

davew


On Mon, Jul 17, 2023, at 12:05 PM, Nicholas Thompson wrote:
> Hi, Russ, 
> 
> I have a non-scientist friend to whom I sometimes show my posts here for 
> guidance.  I showed him some recent posts and he wrote back, "Wow, Nick!  You 
> are really swinging for the fences, here!"  He and I know that one who swings 
> for the fences, rarely hits the ball, let alone the fences.
> 
> So please can we proceed in little tiny steps.
> 
> You raise the question: **what makes an agent?**
> 
> This expression is ambiguous in just the way I was trying to highlight in my 
> response:
> 
> It could mean, **(1) What are the conditions that bring an agent into being?**
> 
> Or it could mean, **(2) What are the conditions that require us to identify 
> something as an agent?**
> 
> The first (I think) is the explanatory question; the second, the descriptive 
> question.   Wittgenstein was said to have said that something cannot be its 
> own explanation, and I believed him.  Whatever else might be said about the 
> relation between explanations and descriptions is that descriptions are 
> states of affairs taken for granted by explanations.  If you ask me why the 
> chicken crossed the road, my answering your question commits me to the premise 
> that the chicken did indeed cross the road. 
> 
> A definition is **explanatory** when it describes a process which explains 
> something else and which, itself, is in need of explanation. 
> 
> So:  Can I come back to you with a question?   Which of the two meanings did 
> you intend?  And if you were looking to define agents in terms of the 
> internal mechanism that makes agency possible, what precisely is the state of 
> affairs, behavior, what-have-you, that such agents are called upon to 
> explain?
> 
> For me agency is design in behavior, and an agent is an individual whose 
> behavior is designed.  All of this has to be worked out before your 
> explanatory question becomes relevant, What is the neural mechanism by which 
> such designs come about?  
> 
> nick
> 
> 
> 
> On Sun, Jul 16, 2023 at 3:18 PM Russ Abbott  wrote:
>> Nick,
>> 
>> I just asked Eric for examples. Your examples confuse me because I don't see 
>> how you relate them to agenthood. Are you really suggesting that you think 
>> of waves and puddles as agents? My suggestion was that you need some sort of 
>> internal decision-making mechanism to qualify as an agent.
>> 
>> I don't know anything about the carotid sinus.
>> 
>> Your thermostat example strikes me as similar to my flashlight example. I 
>> might put it as: a thermostat senses the temperature and twiddles the controls 
>> of the heating/AC units in response.
>> 
>> I'm not sure where you are going by labeling my discussion explanatory. I 
>> wasn't thinking that I was explaining anything, other, perhaps, than my 
>> intuition of what makes an agent. 
>> -- Russ 
>> 
>> 
>> On Fri, Jul 14, 2023 at 8:06 PM Nicholas Thompson  
>> wrote:
>>> Some examples I like to think about:
>>> 
>>> Waves arrange pebbles on a beach from small to large
>>> 
>>> A puddle maintains its temperature at 32 degrees as long as it has ice in 
>>> it.
>>> 
>>> The carotid sinus maintains the acidity of the blood by causing us to 

Re: [FRIAM] What is an agent [was: Philosophy and Science}

2023-07-17 Thread Nicholas Thompson
Hi, Russ,

I have a non-scientist friend to whom I sometimes show my posts here for
guidance.  I showed him some recent posts and he wrote back, "Wow, Nick!
You are really swinging for the fences, here!"  He and I know that one who
swings for the fences, rarely hits the ball, let alone the fences.

So please can we proceed in little tiny steps.

You raise the question: *what makes an agent?*

This expression is ambiguous in just the way I was trying to highlight in
my response:

It could mean, *(1) What are the conditions that bring an agent into being?
*

Or it could mean, *(2) What are the conditions that require us to identify
something as an agent?*

The first (I think) is the explanatory question; the second, the
descriptive question.   Wittgenstein was said to have said that something
cannot be its own explanation, and I believed him.  Whatever else might be
said about the relation between explanations and descriptions is that
descriptions are states of affairs taken for granted by explanations.  If
you ask me why the chicken crossed the road, my answering your question
commits me to the premise that the chicken did indeed cross the road.

A definition is *explanatory* when it describes a process which explains
something else and which, itself, is in need of explanation.

So:  Can I come back to you with a question?   Which of the two meanings
did you intend?  And if you were looking to define agents in terms of the
internal mechanism that makes agency possible, what precisely is the state
of affairs, behavior, what-have-you, that such agents are called upon to
explain?

For me agency is design in behavior, and an agent is an individual whose
behavior is designed.  All of this has to be worked out before your
explanatory question becomes relevant, What is the neural mechanism by
which such designs come about?

nick



On Sun, Jul 16, 2023 at 3:18 PM Russ Abbott  wrote:

> Nick,
>
> I just asked Eric for examples. Your examples confuse me because I don't
> see how you relate them to agenthood. Are you really suggesting that you
> think of waves and puddles as agents? My suggestion was that you need some
> sort of internal decision-making mechanism to qualify as an agent.
>
> I don't know anything about the carotid sinus.
>
> Your thermostat example strikes me as similar to my flashlight example. I
> might put it as: a thermostat senses the temperature and twiddles the controls
> of the heating/AC units in response.
>
> I'm not sure where you are going by labeling my discussion explanatory. I
> wasn't thinking that I was explaining anything, other, perhaps, than my
> intuition of what makes an agent.
>
> -- Russ
>
>
> On Fri, Jul 14, 2023 at 8:06 PM Nicholas Thompson 
> wrote:
>
>> Some examples I like to think about:
>>
>> Waves arrange pebbles on a beach from small to large
>>
>> A puddle maintains its temperature at 32 degrees as long as it has ice in
>> it.
>>
>> The carotid sinus maintains the acidity of the blood by causing us to
>> breathe more oxygen when it gets too acid.  (I hope I have that right.)
>>
>> An old-fashioned thermostat maintains the temperature of a house by
>> maintaining the level of a vial of mercury attached to a bi-metallic coil.
>>
>> Russ, the objection I would have with your definition is that it is
>> explanatory.  An explanatory definition identifies a phenomenon with its
>> causes, bypassing the phenomenon that raises the need for an explanation
>> in the first place.  What is the relation between agents and their
>> surroundings that makes them seem agentish?  Having answered that question,
>> your explanation now comes into play.
>>
>> The thing about the above examples that makes them all seem agenty is
>> that they keep bringing the system back to the same place.  The thing about
>> them that makes them seem less agenty is that they have only one means to
>> do so. Give that thermostat a solar panel, and a heat pump, and an oil
>> furnace and have it switch from one to the other as circumstances vary, now
>> the thermostat becomes much more agenty.
>>
>> Does that make any sense?  I think the nastiest problems here are (1)
>> keeping the levels of organization straight and (2) teasing out the
>> individual that is the agent.
>>
>> Nick
>>
>> On Fri, Jul 14, 2023 at 7:29 PM Russ Abbott 
>> wrote:
>>
>>> I'm not sure what "closure to efficient cause" means. I considered using
>>> as an example an outdoor light that charges itself (and stays off) during
>>> the day and goes on at night. In what important way is that different from
>>> a flashlight? They both have energy storage systems (batteries). Does it
>>> really matter that the garden light "recharges itself" rather than relying
>>> on a more direct outside force to change its batteries? And they both have
>>> on-off switches. The flashlight's is more conventional whereas the garden
>>> light's is a light sensor. Does that really matter? They are both tripped
>>> by outside forces.
>>>
>>> BTW, congratulations on 

Re: [FRIAM] What is an agent [was: Philosophy and Science}

2023-07-17 Thread glen

EricS gives what looks a bit like a derivation of "closure to efficient cause" 
from first principles. 8^D And Dave's reference to autopoiesis is perfectly apt. (There's 
a lot of hemming and hawing about whether Rosen's M-R Systems are a particular instance 
of autopoiesis.) But Eric's more traditional build-up from control systems and 
information theory is probably better, less prone to woo/mysticism.

No, I see no *essential* [⛧] difference between the solar-battery-powered garden light 
versus the flashlight equipped with a sensor and a robotic arm (presumably with a battery 
that powers the arm and the light ... a battery that could be charged with a solar 
panel). But it is slightly different. To see how, forget the flashlight and compare the 
garden light to something like a mercury mechanism thermostat. The "inner life" 
of the garden light lies in the circuit architecture and the battery. Cf Eric's 
discussion of simulation, the circuitry of the garden light is (just a tiny bit) 
virtualized/simulated. The mercury mechanism thermostat is a mechanical computer, whereas 
the circuitry in the garden light is an electrical computer. Were we alien 
anthropologists, from which do we think it would be easier to agnostically *infer* the 
purpose/intention of the computer?
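
To make the comparison concrete, here is a minimal sketch of the garden light's control loop (names and thresholds are hypothetical, not from any real product). The point is that the "inner life" described above is explicit, inspectable logic rather than geometry:

```python
# Hypothetical sketch of a solar garden light's control logic.
# Thresholds and names are illustrative only.

class GardenLight:
    def __init__(self, dark_threshold_lux=10.0):
        self.dark_threshold_lux = dark_threshold_lux
        self.battery_charge = 0.0  # arbitrary units
        self.lamp_on = False

    def step(self, ambient_lux):
        """One tick of the control loop: charge by day, light by night."""
        if ambient_lux > self.dark_threshold_lux:
            # Daytime: harvest solar energy, keep the lamp off.
            self.battery_charge += 1.0
            self.lamp_on = False
        elif self.battery_charge > 0:
            # Night with stored energy: spend it to run the lamp.
            self.battery_charge -= 1.0
            self.lamp_on = True
        else:
            # Night, battery exhausted: nothing to spend.
            self.lamp_on = False
        return self.lamp_on
```

An alien anthropologist reading this loop can infer the artifact's purpose directly from the branch structure; the mercury thermostat's "program," by contrast, is frozen into the shape of its coil and vial.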

I argue it would be easier to infer the purpose of the electrical computer than the 
mechanical one because of the virtualization. Virtualization is directly proportional to 
expressibility. Hence, again cf Conant & Ashby (or Shannon), if the controller is more 
expressive than the system being controlled, then given *one* purpose/intention, it's more 
reasonable that the maker of the artifact intended it to do that one thing. The 
anthropologist might think to herself "Of all the things I might do with this 
controller, *this* is what they chose to do with it?"

Personally, I think the antikythera is an excellent foil for resolving 
one's thoughts on agency (both passthrough/open and sticky/closed).


[⛧] I use "essential" as a slur. Details are not merely important. They're 
crucial. But I realize most people are essentialist. So I have to talk this way a lot and 
might give the impression I like talking this way.


On 7/14/23 16:28, Russ Abbott wrote:

I'm not sure what "closure to efficient cause" means. I considered using as an example an 
outdoor light that charges itself (and stays off) during the day and goes on at night. In what 
important way is that different from a flashlight? They both have energy storage systems 
(batteries). Does it really matter that the garden light "recharges itself" rather than 
relying on a more direct outside force to change its batteries? And they both have on-off switches. 
The flashlight's is more conventional whereas the garden light's is a light sensor. Does that 
really matter? They are both tripped by outside forces.

BTW, congratulations on your phrase /epistemological trespassing/!
-- Russ

On Fri, Jul 14, 2023 at 1:47 PM glen  wrote:

I'm still attracted to Rosen's closure to efficient cause. Your flashlight 
example is classified as non-agent (or non-living ... tomayto tomahto) because 
the efficient cause is open. Now, attach sensor and effector to the flashlight 
so that it can flick it*self* on when it gets dark and off when it gets bright, 
then that (partially) closes it. Maybe we merely kicked the can down the road a 
bit. But then we can talk about decoupling and hierarchies of scale. From the 
armchair, there is no such thing as a (pure) agent just like there is no such 
thing as free will. But for practical purposes, you can draw the boundary 
somewhere and call it a day.
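
The "partial closure" move above (attaching a sensor and effector so the flashlight flicks it*self* on and off) can be sketched roughly as follows. The thresholds are hypothetical, and a little hysteresis keeps it from chattering at dusk:

```python
# Rough sketch of the self-switching flashlight described above.
# Thresholds are hypothetical; the hysteresis band prevents rapid
# on/off toggling when ambient light hovers near a single cutoff.

class SelfSwitchingFlashlight:
    def __init__(self, on_below_lux=5.0, off_above_lux=15.0):
        self.on_below_lux = on_below_lux
        self.off_above_lux = off_above_lux
        self.is_on = False

    def sense_and_act(self, ambient_lux):
        """The sensor-effector loop that (partially) closes the device
        to efficient cause: it switches itself, not a hand on a button."""
        if not self.is_on and ambient_lux < self.on_below_lux:
            self.is_on = True
        elif self.is_on and ambient_lux > self.off_above_lux:
            self.is_on = False
        return self.is_on
```

The boundary-drawing problem remains, of course: the battery, the photodiode, and the threshold values all still arrive from outside, which is one way of seeing why the closure is only partial.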

On 7/14/23 12:01, Russ Abbott wrote:
 > I was recently wondering about the informal distinction we make between 
things that are agents and things that aren't.
 >
 > For example, I would consider most living things to be agents. I would 
also consider many computer programs when in operation as agents. The most obvious 
examples (for me) are programs that play games like chess.
 >
 > I would not consider a rock an agent -- mainly because it doesn't do anything, especially on 
its own. But a boulder crashing down a hill and destroying something at the bottom is reasonably 
called "an agent of destruction." Perhaps this is just playing with words: "agent" 
can have multiple meanings.  A writer's agent represents the writer in negotiations with publishers. 
Perhaps that's just another meaning.
 >
 > My tentative definition is that an agent must have access to energy, and 
it must use that energy to interact with the world. It must also have some 
internal logic that determines how it interacts with the world. This final 
condition rules out boulders rolling down a hill.
 >
 > But I doubt that I would call a flashlight (with an on-off switch) an 
agent even though it satisfies my