Re: [FRIAM] What is an agent [was: Philosophy and Science]

2023-07-18 Thread glen

Hm. It seems to me that self-attention allows a kind of circularity, where dependence on ancestors (or a built 
environment, or whatever "deep" structure exists in one's current context) does not. Nick's diatribe against 
teleonomy back in '87 focuses on circularity and, I suspect, would appeal to those of us who think circularity is a 
bug. I suppose it's reasonable to think of the sequential versus parallel inputs into such a beast and make the analogy 
of the parts that depend (crucially) on iteration as "fundamental" and those that depend (crucially) on 
synchrony/now/instantaneous as "ordinary". And I guess that's why Eric brought it up. A weakly stable thing 
like a hurricane seems to depend more on now-ness and less on any kind of deep structure memorized in the weather. But 
we might claim the structure, size, geology, of the earth as a fundamental driver for any *particular* hurricane 
(though not the abstract class of hurricane-like things). So, the "fundamental" conditions serve as a 
long-term memory structure or a canalization, where structured here-and-now objects serve as a proxy for distant past, 
high order Markovian things.

This might imply that the registration of an agent is simply a forgetting, a 
way to truncate the impossibly long/big details we would have to yap about in 
order to treat it as non-agent. I.e. agency is a convenient fiction, a 
shortcut, similar to free will.

But unlike some, I'm not allergic to circularity and don't tend to believe that unrolled 
iteration is identical to rolled iteration. Even if self-attention in transformers is 
ultimately unrollable (which I think it is), it still allows for a kind of circularity upon 
which one might be able to found agency (or "anti-found" anyway >8^D).
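
To make the rolled/unrolled contrast concrete, here is a toy numpy sketch 
(invented sizes; none of a real transformer's projections, heads, or masking):

    # "Rolled" iteration: a recurrence that can only run in order.
    import numpy as np
    rng = np.random.default_rng(0)
    T, d = 5, 4                  # sequence length, feature width
    x = rng.normal(size=(T, d))  # the dense here-and-now inputs
    W = rng.normal(size=(d, d)) * 0.1
    h = np.zeros(d)
    for t in range(T):           # each state depends on the previous one
        h = np.tanh(h @ W + x[t])

    # Self-attention: every position looks at every other in one pass.
    scores = x @ x.T / np.sqrt(d)                 # all pairwise comparisons
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)             # softmax over positions
    attended = w @ x                              # each row mixes the whole sequence

The unrolled form computes all positions at once; the rolled form cannot skip a step.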

On 7/17/23 21:27, Roger Critchlow wrote:

> [...]
Re: [FRIAM] What is an agent [was: Philosophy and Science]

2023-07-18 Thread Prof David West
teleonomic matter (particles that think) - totally consistent with Vedic / 
Buddhist cosmology. Even elementary particles are subject to the Law of Karma 
if they "misbehave"—something very unlikely, but not impossible.

davew


On Mon, Jul 17, 2023, at 10:27 PM, Roger Critchlow wrote:
> [...]
> DK says that real complexity starts with teleonomic matter, also known as 
> particles that think. [...]


Re: [FRIAM] What is an agent [was: Philosophy and Science]

2023-07-17 Thread Roger Critchlow
On Mon, Jul 17, 2023 at 2:35 PM David Eric Smith wrote:

> [...] [Yoshi Oono's The Nonlinear World]
> [...]
>
> On Jul 18, 2023, at 4:37 AM, Stephen Guerin wrote:
>
> [...]
>
>    1. Teleonomic Material: the latest use by David Krakauer on Sean
>    Carroll's recent podcast in summarizing Complexity. Hurricanes, flocks
>    and Benard Cells according to David are not Complex, BTW. I find the
>    move a little frustrating and disappointing but I always respect his
>    perspective.
>
Okay, I listened to the podcast.

DK says that real complexity starts with teleonomic matter, also known as
particles that think.  He says that such agents carry around some
representation of the external world.  And then the discussion gets
distracted to other topics, at one point getting to "large language model
paper clip nightmares".

My response to Eric's description of Oono's  "Pasteur principle" was that
it sounds a lot like "Attention Is All You Need" (
https://arxiv.org/pdf/1706.03762.pdf), the founding paper of the
Transformer class of neural network models.

The "fundamental conditions" in a Transformer would be the trained neural
net which specifies the patterns of attention and responses learned during
training.  The "ordinary conditions" would be the input sequence given to
the Transformer.  The Transformer breaks up the input sequence into
attention patterns, evaluates the response to the current set of input
values selected by the attention patterns,  emits an element to the output
sequence, and advances the input cursor.
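
Roger's loop, rendered as a runnable toy. The weights are random stand-ins for 
a trained net and the tiny vocabulary is invented; this is a sketch of the 
shape of the thing, not a real model:

    import numpy as np
    rng = np.random.default_rng(1)
    vocab, d = 8, 4
    E = rng.normal(size=(vocab, d))   # embeddings, frozen after "training"
    Wq = rng.normal(size=(d, d))      # "trained" attention weights

    def step(tokens):
        x = E[tokens]                             # embed the here-and-now
        scores = (x @ Wq) @ x.T / np.sqrt(d)      # the attention pattern
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        summary = (w @ x)[-1]                     # response at the cursor
        return int(np.argmax(E @ summary))        # emit the next element

    tokens = [3, 1, 4]                # the given input sequence
    for _ in range(5):                # emit, advance the cursor, repeat
        tokens.append(step(tokens))
    print(tokens)

The frozen weights play the role of the fundamental conditions; the growing 
token list is the dense here-and-now.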

Anyone else see the family resemblance here?

-- rec --


Re: [FRIAM] What is an agent [was: Philosophy and Science]

2023-07-17 Thread Russ Abbott
Hi all,

I asked what I thought, naively, was a fairly simple question, namely
something like Nick's question (2): "*What are the conditions that require
us to identify something [as] an agent?*" I wasn't intending to be
prescriptive and wouldn't have used "require us." I was more musing to
myself: What properties/conditions would lead me to label something an
agent? If I/we came up with something that felt satisfying, I would then
want to know whether others would be similarly inclined to label things
that satisfied those properties, and only those things, as agents.

I now feel like I'm being offered enormous pools of sophisticated
scholarship. As Eric says in a message that arrived as I was writing this,
there is so much good stuff in your replies that I feel overwhelmed by the
amount of work I would have to do to reply intelligently. I almost wish we
were carrying on this conversation via Twitter (or Threads). I could
probably handle posts of 256 characters!  With that as a preliminary
disclaimer, here are some Tweet-like comments.

*Glen,* I deliberately didn't include a finger/hand/arm to press the
flashlight's button. The button press in my view serves as an external
event that the flashlight "senses" and to which it responds by turning on
its light. Also, I wouldn't insist that agents have the means to replenish
their energy supplies. All agents (in my view) expend energy, which must
be renewed if they are to continue to act. Since there are so many ways
energy supplies can be recharged, the particular way that serves a
particular agent is, in my view, a necessary but relatively unimportant
detail.

*Nick,* as I said, I intended something like your question 2. I'm not
asking an agent to explain itself or to explain how it acquired the means
to act as an agent.

*Dave,* I wouldn't require the properties I'm seeking as defining an agent
to reflect only externally observable behaviors. In fact, I would expect
any collection of agent-defining properties to include something about what
goes on in the agent. I'm not sure why you want to exclude such properties.

*Stephen,* My interest is not in things that are agents because they act on
behalf of something else. I would categorize agents in agent-based models
as simulated agents. Most agent-based models ignore the energy agents
expend and how it is renewed. Software agents other than ABM agents seem to
me to be objects (as in object-oriented programming) with internal
threads--which enable them to act on their own. That seems like an
important distinction. Objects can act in response to external triggers,
i.e., calls to their interfaces, but only objects with internal threads
have the means to initiate actions autonomously. I would consider the
agents (e.g., turtles, etc.) in NetLogo valid agents. They have the
equivalent of threads that are run every clock tick. They don't require
that other agents interact with them in order to act.
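
That distinction sketches easily. A hypothetical toy, with invented class 
names and Python's threading standing in for NetLogo's per-tick scheduling:

    import threading, time

    class PassiveObject:
        def poke(self):                       # acts only on external triggers
            print("object: responding to a call")

    class ThreadedAgent:
        def __init__(self, tick=0.1):
            self.alive = True
            threading.Thread(target=self._run, args=(tick,), daemon=True).start()

        def _run(self, tick):                 # the agent's own clock-tick loop
            while self.alive:
                print("agent: acting on my own")
                time.sleep(tick)

    obj, agent = PassiveObject(), ThreadedAgent()
    obj.poke()                                # the object must be asked
    time.sleep(0.35)                          # the agent acts meanwhile
    agent.alive = False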

-- Russ


On Mon, Jul 17, 2023 at 1:00 PM Nicholas Thompson wrote:

> [...]
Re: [FRIAM] What is an agent [was: Philosophy and Science]

2023-07-17 Thread David Eric Smith
Stephen, 

Too much good here for me almost-even to be able to read in scarce time, but on 
your final point 6, about whether various dissipative structures are complex, 
or not by what measure:

Do you know Yoshi Oono’s wonderful idiosyncratic book The Nonlinear World?
https://link.springer.com/book/10.1007/978-4-431-54029-8

I believe it’s the final chapter (Toward complexity), which apparently one can 
just download:
https://link.springer.com/content/pdf/10.1007/978-4-431-54029-8_5?pdf=chapter%20toc

in which he argues that the phenomena you mention are only “pseudo-complex”.  
Yoshi, like David but with less of the predictable “Darwin-was-better; now what 
subject are we discussing today?” vibe, argues that there is a threshold to 
“true complexity” that is only crossed in systems that obey what Yoshi calls a 
“Pasteur principle”; they are of a kind that effectively can’t emerge 
spontaneously, but can evolve from ancestors once they exist.  He says 
(translating slightly from his words to mine) that such systems split the 
notion of “boundary conditions” into two sub-kinds that differ qualitatively.  
There are the “fundamental conditions” (in biology, the contents of genomes 
with indefinitely deep ancestry), that mine an indefinite past sparsely and 
selectively, versus ordinary “boundary conditions”, which are the dense 
here-and-now.  The fundamental conditions often provide criteria that allow the 
complex thing to respond to parts of the here-and-now, and ignore other parts, 
feeding back onto the update of the fundamental conditions.  
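
For what it is worth, the split can be cartooned in a few lines of Python, 
though this toy (with an invented payoff rule) is nothing like the concrete 
model Eric is looking for: a sparse inherited "genome" (fundamental 
conditions) decides which channels of a dense random "environment" (boundary 
conditions) get a response, and selection feeds back onto the genome.

    import random
    random.seed(0)

    def fitness(bits, env):                   # invented: value minus upkeep cost
        return sum(v for v, g in zip(env, bits) if g) - 0.6 * sum(bits)

    genome = [random.random() < 0.2 for _ in range(16)]   # fundamental conditions
    for generation in range(200):
        env = [random.random() for _ in range(16)]        # boundary conditions
        trial = genome[:]
        trial[random.randrange(16)] ^= True               # rare, selective change
        if fitness(trial, env) > fitness(genome, env):
            genome = trial                # feedback onto the fundamental conditions
    print(sum(genome), "of 16 channels attended")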

I don’t know when I will get time to listen to David’s appearance with Sean, so 
with apologies cannot know whether his argument is similar in its logic.  But 
Yoshi’s framing appeals to me a lot, because it is like a kind of spontaneous 
symmetry breaking or ergodicity breaking in the representations of information 
and how they modulate the networks of connection to the space-time continuum.  
That seems to me a very fertile idea.  I am still looking for some concrete 
model that makes it compelling and useful for something I want to solve.  (I 
probably have written this on the list before, in which case apologies for 
being repetitive.  But this mention is framed specifically to your question 
whether one should be disappointed in the demotion of the complexity in 
phenomena.)


Sorry for such a long email.  I thought this one would be short.  I haven’t 
tried to answer Russ yet because I expected that one to be long, and cannot 
yet….

Eric






> On Jul 18, 2023, at 4:37 AM, Stephen Guerin wrote:
> 
> [...]

Re: [FRIAM] What is an agent [was: Philosophy and Science]

2023-07-17 Thread Nicholas Thompson
By the way, not all designers are individuals.  Foxes design the behavior
of rabbits and rabbits design the behavior of foxes, but I wouldn't be
quick to call foxes an individual or rabbits an individual.  Natural
selection designs but it is not itself designed to do so.

On Mon, Jul 17, 2023 at 2:05 PM Nicholas Thompson wrote:

> [...]

Re: [FRIAM] What is an agent [was: Philosophy and Science]

2023-07-17 Thread Stephen Guerin
Russ,

"agent" is an overloaded word in our work. While there's overlap, I don't
think there will ever be a single definition to cover them all. I break
our use into two classes: software architecture design and discussions
around Agency (ie acting on its own or others' behalf)

*Software Design and Architecture*
My use of the term "agent" in software design is less about "agency" and
more about communicating the software architecture pattern of minimal
centralized control through actors with simulated or actual concurrency.
While we are often interested in issues around agency, I think it's
important to preserve this use of "agent" in software without bringing in
a second word like agency. Both are suitcase words ala Minsky. Simulated
concurrency might have a scheduler issuing "step" or "go" events to these
"agents" but we try to minimize any global centralized coordinator of logic
and we expect coordination to emerge from the interaction of the agents (eg
flocking, Ising or ant foraging models). The term agent is used to
distinguish from other approaches like object-oriented, procedural and
functional. While agents are certainly implemented with objects, procedural
and functional patterns we tend to mean the agents are semi-autonomous in
their actions. Pattie Maes in the 90s described agents as objects that can
say "no" :-) Relatedly, Uri Wilensky stresses the use of "ask" to request
the action of another agent without the ability to do so directly. This use
of "ask" was locked into the API in later versions of NetLogo.
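
A toy sketch of that pattern (hypothetical names, not NetLogo's actual API): 
a "go" loop issues step events in shuffled order, coordination is left to the 
agents, and "ask" is a request the receiver may refuse.

    import random
    random.seed(0)

    class Agent:
        def __init__(self, name):
            self.name = name

        def step(self):                        # called by the scheduler each tick
            neighbor = random.choice([a for a in agents if a is not self])
            neighbor.ask("move")               # a request, not a command

        def ask(self, action):
            if random.random() < 0.5:          # the agent may say "no"
                print(self.name + ": no")
                return False
            print(self.name + ": " + action)
            return True

    agents = [Agent("turtle-" + str(i)) for i in range(3)]
    for tick in range(2):                      # minimal "go" loop
        for a in random.sample(agents, len(agents)):   # no central coordinator
            a.step()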

   1. agents in agent-based modeling, which in NetLogo are turtles, links
   and patches. Or in other frameworks might be Lagrangian particles and
   Eulerian cells and links/edges. I call these lowercase "a" agents. Often we
   focus on the interaction behaviors between many lightweight agents and less
   on internal logic. I often say ABM might be better termed Interaction Based
   Modeling. Interactions are often hybrid between turtles, links and patches.
   2. agents in multi-agent systems and distributed AI. It's a rough
   distinction but here the agents tend to be heavier on internal processes
   and less focused on the interactions. It's less a technical distinction and
   more about the communities of researchers and developers.
   3. agent-oriented programming: similar to 1 and 2 but the agents are
   deployed sensing and acting in the world (eg Pan-Tilt-Zoom cameras on
   mountain tops watching for wildfire and coordinating with a network of
   other cameras and tracked resources). Here, we use agent-oriented
   programming to distinguish it from

*Agency / Telelogic / Teleonomic*

   1. Autonomous Agents - when speaking in this context I often say capital
   "A" agents with collaborators. Here we're in the realm of emergent Agency
   ala Stu's Autonomous Agents from 2000 Investigations. Short summary
   article.
   Stu's autonomous agents was his stab at defining a living system.
   2. Personal Software Agents - these are related to agent-oriented
   programming above but also take on Agency as acting on your behalf. eg,
   your camera agents and location agents that monitor your private cameras
   and GPS and coordinate with other agents, sharing information but not the
   raw data, for collective intelligence and collective action.
   3. Structure-Agency: the bidirectional feedback in sociology and social
   theory pertains to the degree to which individuals' independent actions
   (agency) are influenced or constrained by societal patterns and structures
   and how the structures are created by the Agents.
   4. Principal-Agent: in economics and contract theory where one party
   (the agent) is expected to act in the best interest of another party (the
   principal)   eg divorce lawyers or sports agents negotiating on behalf of
   their clients where they can expose private preferences to the other agent
   to find best terms under rules of nondisclosure and professional conduct
   without revealing private data to either of the clients. This can also
   relate to the Principal-Agent problem where there is the potential or
   incentive to act in their own self-interest instead, eg a real estate agent
   representing the buyer who might want to maximize sales price and
   commission or a corporate executive maximizing salary or stability of
   employment vs the goals of the shareholders. obvious need here to expand to
   stakeholders (employees, customers, community) and not just shareholders.
   5. Agents as ecological emergents with relation to extremum principles
   like the Principle of Stationary Action. I will often talk about the emergent
   cognition of the ant foraging system as a whole as an uppercase "A" Agent.
   As mentioned on the list before, when we look at multiple interacting
   fields with derivatives of action with concentrations in one field driving
   

Re: [FRIAM] What is an agent [was: Philosophy and Science]

2023-07-17 Thread Prof David West
Where angels fear to tread, dave rushes in.

Question 1) seems, to me, to be nonsensical; or hopelessly anthropocentric; or, 
unanswerable in any generalized or abstract form.

Paraphrasing question 2) — what set of observables (behaviors) must be present 
before We/I can assert, "*that thing is an agent*." Assuming such a set 
exists: a) it tells me nothing about the "internals" of the thing; and b) it 
tells me little about anything except how We (assuming some consistency among 
all human beings) go about naming / categorizing things. I would also bet money 
that any such set is culturally grounded and that it is unlikely that any 
"universal" set exists. Certainly no "universal" set shared by humans and our 
elusive alien neighbors.

If we were to examine the inhabitants of any set of things that came about via 
question 2), why would we expect any commonality among the "conditions" 
(state?? characteristics?? patterns of same???) internal to each member? 
Granted, there might be subsets of the set (e.g. all instances of a human 
being, or a dog, or, for some, an AI) where we would expect and find some kind 
of, at least, statistical commonality. I say statistical because there are 
always outliers and exceptions.

Another issue, implied by the way question 1) is phrased, concerns the 
possibility of knowing the train of events, steps in an evolutionary process, 
engaging the "internals" of an entity as it proceeds from non-agent to 
proto-agent to agent. How can this be anything other than idiosyncratic?

As to explanation vs. description: given any "description," the number of 
"explanations" is infinite—or, at least co-extensive to the number of 
"explainers." No matter what Pierce might hope, consensus is unlikely.

davew


On Mon, Jul 17, 2023, at 12:05 PM, Nicholas Thompson wrote:
> [...]

Re: [FRIAM] What is an agent [was: Philosophy and Science]

2023-07-17 Thread Nicholas Thompson
Hi, Russ,

I have a non-scientist friend to whom I sometimes show my posts here for
guidance.  I showed him some recent posts and he wrote back, "Wow, Nick!
You are really swinging for the fences, here!"  He and I know that one who
swings for the fences, rarely hits the ball, let alone the fences.

So please can we proceed in little tiny steps.

You raise the question, *what makes an agent?*

This expression is ambiguous in just the way I was trying to highlight in
my response:

It could mean, *(1) What are the conditions that bring an agent into being?*

Or it could mean, *(2) What are the conditions that require us to identify
something as an agent?*

The first (I think) is the explanatory question; the second, the
descriptive question.   Wittgenstein was said to have said that something
cannot be its own explanation, and I believed him.  Whatever else might be
said about the relation between explanations and descriptions, descriptions
are states of affairs taken for granted by explanations.  If
you ask me why the chicken crossed the road, my answering your question
commits me to the premise that the chicken did indeed cross the road.

A definition is *explanatory* when it describes a process which explains
something else and which, itself, is in need of explanation.

So:  Can I come back to you with a question?   Which of the two meanings
did you intend?  And if you were looking to define agents in terms of the
internal mechanism that makes agency possible, what precisely is the state
of affairs, behavior, what-have-you, that such agents are called upon to
explain?

For me agency is design in behavior, and an agent is an individual whose
behavior is designed.  All of this has to be worked out before your
explanatory question becomes relevant, What is the neural mechanism by
which such designs come about?

nick



On Sun, Jul 16, 2023 at 3:18 PM Russ Abbott wrote:

> [...]

Re: [FRIAM] What is an agent [was: Philosophy and Science]

2023-07-17 Thread glen

EricS gives what looks a bit like a derivation of "closure to efficient cause" 
from first principles. 8^D And Dave's reference to autopoiesis is perfectly apt. (There's 
a lot of hemming and hawing about whether Rosen's M-R Systems are a particular instance 
of autopoiesis.) But Eric's more traditional build-up from control systems and 
information theory is probably better, less prone to woo/mysticism.

No, I see no *essential* [⛧] difference between the solar-battery-powered garden light 
versus the flashlight equipped with a sensor and a robotic arm (presumably with a battery 
that powers the arm and the light ... a battery that could be charged with a solar 
panel). But it is slightly different. To see how, forget the flashlight and compare the 
garden light to something like a mercury mechanism thermostat. The "inner life" 
of the garden light lies in the circuit architecture and the battery. Cf Eric's 
discussion of simulation, the circuitry of the garden light is (just a tiny bit) 
virtualized/simulated. The mercury mechanism thermostat is a mechanical computer, whereas 
the circuitry in the garden light is an electrical computer. Were we alien 
anthropologists, from which do we think it would be easier to agnostically *infer* the 
purpose/intention of the computer?

I argue it would be easier to infer the purpose of the electrical computer than the 
mechanical one because of the virtualization. Virtualization is directly proportional to 
expressibility. Hence, again cf Conant & Ashby (or Shannon), if the controller is more 
expressive than the system being controlled, then given *one* purpose/intention, it's more 
reasonable that the maker of the artifact intended it to do that one thing. The 
anthropologist might think to herself "Of all the things I might do with this 
controller, *this* is what they chose to do with it?"

Personally, I think the antikythera is an excellent foil for resolving one's 
thoughts on agency (both passthrough/open and sticky/closed).


[⛧] I use "essential" as a slur. Details are not merely important. They're 
crucial. But I realize most people are essentialist. So I have to talk this way a lot and 
might give the impression I like talking this way.


On 7/14/23 16:28, Russ Abbott wrote:

I'm not sure what "closure to efficient cause" means. I considered using as an example an 
outdoor light that charges itself (and stays off) during the day and goes on at night. In what 
important way is that different from a flashlight? They both have energy storage systems 
(batteries). Does it really matter that the garden light "recharges itself" rather than 
relying on a more direct outside force to change its batteries? And they both have on-off switches. 
The flashlight's is more conventional whereas the garden light's is a light sensor. Does that 
really matter? They are both tripped by outside forces.

BTW, congratulations on your phrase /epistemological trespassing/!

-- Russ

On Fri, Jul 14, 2023 at 1:47 PM glen <geprope...@gmail.com> wrote:

I'm still attracted to Rosen's closure to efficient cause. Your flashlight 
example is classified as non-agent (or non-living ... tomayto tomahto) because 
the efficient cause is open. Now, attach sensor and effector to the flashlight 
so that it can flick it*self* on when it gets dark and off when it gets bright, 
then that (partially) closes it. Maybe we merely kicked the can down the road a 
bit. But then we can talk about decoupling and hierarchies of scale. From the 
armchair, there is no such thing as a (pure) agent just like there is no such 
thing as free will. But for practical purposes, you can draw the boundary 
somewhere and call it a day.

On 7/14/23 12:01, Russ Abbott wrote:
 > I was recently wondering about the informal distinction we make between 
things that are agents and things that aren't.
 >
 > For example, I would consider most living things to be agents. I would 
also consider many computer programs when in operation as agents. The most obvious 
examples (for me) are programs that play games like chess.
 >
 > I would not consider a rock an agent -- mainly because it doesn't do anything, especially on 
its own. But a boulder crashing down a hill and destroying something at the bottom is reasonably 
called "an agent of destruction." Perhaps this is just playing with words: "agent" 
can have multiple meanings.  A writer's agent represents the writer in negotiations with publishers. 
Perhaps that's just another meaning.
 >
 > My tentative definition is that an agent must have access to energy, and 
it must use that energy to interact with the world. It must also have some 
internal logic that determines how it interacts with the world. This final 
condition rules out boulders rolling down a hill.
 >
 > But I doubt that I would call a flashlight (with an on-off switch) an 
agent even though it satisfies my 

Re: [FRIAM] What is an agent [was: Philosophy and Science]

2023-07-16 Thread Russ Abbott
Nick,

I just asked Eric for examples. Your examples confuse me because I don't
see how you relate them to agenthood. Are you really suggesting that you
think of waves and puddles as agents? My suggestion was that you need some
sort of internal decision-making mechanism to qualify as an agent.

I don't know anything about the carotid sinus.

Your thermostat example strikes me as similar to my flashlight example. I
might put it as: a thermostat senses the temperature and twiddles the controls
of the heating/AC units in response.

I'm not sure where you are going by labeling my discussion explanatory. I
wasn't thinking that I was explaining anything, other, perhaps, than my
intuition of what makes an agent.

-- Russ


On Fri, Jul 14, 2023 at 8:06 PM Nicholas Thompson wrote:

> Some examples I like to think about:
>
> Waves arrange pebbles on a beach from small to large
>
> A puddle maintains its temperature at 32 degrees as long as it has ice in
> it.
>
> The carotid sinus maintains the acidity of the blood by causing us to
> breathe more oxygen when it gets too acid.  (I hope I have that right.)
>
> An old-fashioned thermostat maintains the temperature of a house by
> maintaining the level of a vial of mercury attached to a bi-metallic coil.
>
> Russ, the objection I would have with your definition is that it is
> explanatory.   An explanatory definition identifies a phenomenon with its
> causes, bypassing the phenomenon that raises the need for an explanation
> in the first place.   What is the relation between agents and their
> surroundings that makes them seem agentish?  Having answered that question,
> your explanation now comes into play.
>
> The thing about the above examples that makes them all seem agenty is that
> they keep bringing the system back to the same place.  The thing about them
> that makes them seem less agenty is that they have only one means to do so.
> Give that thermostat a solar panel, and a heat pump, and an oil furnace and
> have it switch from one to the other as circumstances vary, now the
> thermostat becomes much more agenty.
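
Nick's many-means thermostat, as a toy sketch (the thresholds and sources are 
invented):

    def pick_source(temp, target, sun, oil):
        if temp >= target:
            return "off"
        if sun > 0.5:
            return "solar panel"        # free energy when available
        if oil > 0.1:
            return "oil furnace"
        return "heat pump"              # another means to the same end

    for sun, oil, temp in [(0.8, 0.5, 15), (0.1, 0.5, 15), (0.1, 0.0, 15), (0.1, 0.0, 22)]:
        print(pick_source(temp, target=20, sun=sun, oil=oil))
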
>
> Does that make any sense?  I think the nastiest problems here are (1)
> keeping the levels of organization straight and (2) teasing out the
> individual that is the agent.
>
> Nick
>
> On Fri, Jul 14, 2023 at 7:29 PM Russ Abbott wrote:
>
>> [...]

Re: [FRIAM] What is an agent [was: Philosophy and Science]

2023-07-16 Thread Russ Abbott
Eric,

Thanks for your thoughtful additional thoughts. To make it easier for me to
understand where you are going, would it be possible to include a
prototypical example for each of your categories?

Thanks.

-- Russ


On Fri, Jul 14, 2023 at 5:30 PM David Eric Smith wrote:

> [...]

Re: [FRIAM] What is an agent [was: Philosophy and Science}

2023-07-15 Thread David Eric Smith
Thank you Dave,

Yes, one of the fourteeners I should climb, and before I get too old to do it.

Eric



> On Jul 16, 2023, at 3:51 AM, Prof David West  wrote:
> 
> If you have not read it — I highly recommend The Tree of Knowledge by 
> Humberto Maturana and Francisco Varela. Self organization from simple to 
> complex via a single mechanism. 
> 
> On Fri, Jul 14, 2023, at 7:30 PM, David Eric Smith wrote:
>> [...]

Re: [FRIAM] What is an agent [was: Philosophy and Science}

2023-07-15 Thread Frank Wimberly
Eric,

I hadn't seen your mail until David quoted it.

What you say reminds me of a project I worked on for a couple of years in
the Robotics Institute at Carnegie Mellon.  Under the global title of
Factory of the Future, I coordinated a project to automate and optimize a
fluorescent lamp factory.  There was a sequence of processes that a lamp
(bulb) went through, from sand to melted glass to cutting into five-foot
tubes.  Then white "paint" flowed through the tube, after which it was
baked.  To make a long story short, electrodes were added to the ends, and
the tube was cured and tested by running a current through it.  Our
approach was to saturate the sequence of machines with sensors: visual,
chemical, viscosity, electrical, etc.  One goal was to reduce "shrinkage",
the rejection of bulbs.  The existing rate was about 10% as I recall.  It
was known that there were interactions among the elements of the sequence
of processes.  We therefore had hopes that the processors, actuators, and
sensors could take advantage of an understanding of those interactions.
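
A minimal sketch, in Python, of the kind of analysis that setup invites.
Everything here is invented for illustration (per-bulb records with one
reading per station plus a reject flag); it is not the project's actual
design:

    # Rank sensor channels by how strongly they track downstream rejection;
    # upstream channels that score high are candidates for the cross-process
    # interactions Frank describes.
    from statistics import correlation  # Python 3.10+

    def interaction_candidates(bulbs):
        # bulbs: e.g. [{"viscosity": 3.1, "bake_temp": 612.0, "rejected": 0}, ...]
        rejects = [float(b["rejected"]) for b in bulbs]
        channels = [k for k in bulbs[0] if k != "rejected"]
        scored = [(k, correlation([float(b[k]) for b in bulbs], rejects))
                  for k in channels]
        return sorted(scored, key=lambda kv: abs(kv[1]), reverse=True)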

The Dutch firm Philips bought all of Westinghouse's lamp manufacturing
operations, and they cancelled our work during the very early design stage.

Disclaimer: my memory is less than perfect about events from the early
eighties.

Frank

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

On Sat, Jul 15, 2023, 12:52 PM Prof David West  wrote:

> If you have not read it — I highly recommend The Tree of Knowledge by
> Humberto Maturana and Francisco Varela. Self organization from simple to
> complex via a single mechanism.
>
> On Fri, Jul 14, 2023, at 7:30 PM, David Eric Smith wrote:
>
> [...]

Re: [FRIAM] What is an agent [was: Philosophy and Science}

2023-07-15 Thread Prof David West
If you have not read it — I highly recommend The Tree of Knowledge by Humberto 
Maturana and Francisco Varela. Self organization from simple to complex via a 
single mechanism. 

On Fri, Jul 14, 2023, at 7:30 PM, David Eric Smith wrote:
> [...]

Re: [FRIAM] What is an agent [was: Philosophy and Science}

2023-07-14 Thread Nicholas Thompson
Some examples I like to think about:

Waves arrange pebbles on a beach from small to large.

A puddle maintains its temperature at 32 degrees as long as it has ice in
it.

The carotid sinus maintains the acidity of the blood by causing us to
breathe more oxygen when it gets too acidic.  (I hope I have that right.)

An old-fashioned thermostat maintains the temperature of a house by
maintaining the level of a vial of mercury attached to a bi-metallic coil.

Russ, the objection I would have with your definition is that it is
explanatory.  An explanatory definition identifies a phenomenon with its
causes, bypassing the phenomenon that raises the need for an explanation
in the first place.  What is the relation between agents and their
surroundings that makes them seem agentish?  Having answered that
question, your explanation now comes into play.

The thing about the above examples that makes them all seem agenty is that
they keep bringing the system back to the same place.  The thing about
them that makes them seem less agenty is that they have only one means to
do so.  Give that thermostat a solar panel, a heat pump, and an oil
furnace, and have it switch from one to another as circumstances vary, and
the thermostat becomes much more agenty.
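
A minimal sketch of that multi-means thermostat, in Python; the sources
and the selection rule are invented here, not Nick's:

    def thermostat_step(temp_f, setpoint_f, sun_is_out, oil_level):
        # One goal (hold the setpoint), several interchangeable means.
        if temp_f >= setpoint_f:
            return "off"
        if sun_is_out:           # prefer the solar panel when it can deliver
            return "solar"
        if oil_level > 0.1:      # otherwise burn oil
            return "oil_furnace"
        return "heat_pump"       # last resort

The single-means mercury-vial version is this function with every branch
but one deleted.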

Does that make any sense?  I think the nastiest problems here are (1)
keeping the levels of organization straight and (2) teasing out the
individual that is the agent.

Nick

On Fri, Jul 14, 2023 at 7:29 PM Russ Abbott  wrote:

> [...]
>
> On Fri, Jul 14, 2023 at 1:47 PM glen  wrote:
>
>> [...]
>>
>> On 7/14/23 12:01, Russ Abbott wrote:
>> > [...]

Re: [FRIAM] What is an agent [was: Philosophy and Science}

2023-07-14 Thread David Eric Smith
I have had a version of this problem for several years, because I want to start 
with small-molecule chemistry on early planets, and eventually talk about 
biospheres full of evolving actors.  I have wanted to have a rough category 
system for how many qualitative kinds of transitions I should need to account 
for, and to explain within ordinary materials by the action of random 
processes.  Just because I am not a(n analytical) philosopher, I have no 
ambition to shoehorn the universe into a system or suppose that my categories 
subsume all questions even I might someday care about, or that they are sure to 
have unambiguous boundaries.  I just want a kind of sketch that seems like it 
will carry some weight.  For now.

Autonomy: One early division to me would be between matter that responds 
“passively” to its environment moment-by-moment, and as a result takes on an 
internal state that is an effectively given function of the surroundings at the 
time, versus one that has some protection for some internal variables from the 
constant outside harassment, and a source of autonomous dynamics for those 
internal variables.  One could bring in words like “energy”, but I would rather 
not for a variety of reasons.  Often, though, when others do, I will understand 
why and be willing to go along with the choice.
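
A toy contrast between the two kinds of matter, in Python with invented
dynamics (a sketch of the division, not Eric's formalization):

    def passive_state(environment):
        # internal state is an effectively given function of the surroundings
        return 2.0 * environment

    def autonomous_step(state, environment, coupling=0.05):
        # internal dynamics run on their own; the surroundings only nudge
        internal_drift = -0.5 * state
        nudge = coupling * (environment - state)
        return state + internal_drift + nudge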

Control: The category of things with autonomous internal degrees of freedom 
that have some immunity from the slings and arrows of the immediate 
surroundings is extremely broad.  Within it there could be very many different 
kinds of organizations that, if we lack a better word, we might call 
“architectures”.  One family of architectures that I recognize is that of 
control systems.  Major components include whatever is controlled (in chem-eng 
used to be called “the plant”), a “model” in the sense of Conant and Ashby, 
“sensors” to respond to the plant and signal the model, and “effectors” to get 
an output from the model and somehow influence the plant.  One could ask when 
the organization of some material system is well described by this control-loop 
architecture.  I think the control-loop architecture entails some degree of 
autonomy, else the whole system is adequately described by passive response to 
the environment.  But probably a sophist could find counterexamples.
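
A minimal control-loop sketch in Python, with an invented plant (a tank
level) and arbitrary constants: the sensor reads the plant, the model
compares the reading to its setpoint, and the effector feeds the command
back to the plant:

    def run_loop(level=0.0, setpoint=5.0, gain=0.3, steps=50):
        for _ in range(steps):
            reading = level                        # sensor: observe the plant
            command = gain * (setpoint - reading)  # model: compare to the goal
            level = level + command                # effector: act on the plant
        return level

    print(run_loop())  # converges toward 5.0 under these made-up constants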

One could ask whether having the control-loop architecture counts as having 
agency.  By discriminating among states of the world according to their 
relation to states indexed in the model, and then acting on the world (even by 
so little as acting on one’s own position in the world), one could be said to 
express some sort of “goal”, and in that sense to have “had” such a goal.  

Is that enough for agency?  Maybe.  Or maybe not.

Reflection: The controller’s model could, in the previous level, be anything.  
So again very broad.  Presumably a subset of control systems have models that 
incorporate some notion of a “self”, so they could not only specifically 
model the conditions of the world, but also the condition of the self and of 
the self relative to the world, and then all of these variables become eligible 
targets for control actions.  
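
A sketch of what the self-model adds, with invented variables; the point
is only that the self's condition becomes a legal target of control:

    def choose_action(model):
        # the model indexes states of the world AND of the self
        if model["battery"] < 0.2:                   # condition of the self
            return "recharge"                        # act on the self
        if model["room_temp"] < model["goal_temp"]:  # condition of the world
            return "heat"                            # act on the world
        return "idle"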

Counterfactuals and simulation: autonomy need not be limited to the receiving 
of signals and responding to them with control commands.  It could include 
producing values for counterfactual states within the controller’s model, 
playing out representations of the consequences of control signals (another 
level of reflection, this time on the dynamics of the command loop), and then 
choosing according to a meta-criterion.  Here I have in mind something like the 
simulation that goes on in the tactical look-ahead in combinatorial games.  We 
now have a couple levels of representation between wherever the criteria are 
hard-coded and wherever the control signal (the “choice”) acts.  They are all 
still control loops, but it seems likely that control loops can have different 
enough major categories of design that there is a place for names for such 
intermediate layers of abstraction to distinguish some kinds as having them, 
from others that don’t.
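
A sketch of that tactical look-ahead in Python: play out counterfactual
consequences of each command inside the model, then choose by a
meta-criterion (plain minimax here; moves, play, and score are
placeholders for whatever game one has in mind):

    def best_move(state, moves, play, score, depth=3):
        # value of a counterfactual state, alternating our ply and theirs
        def value(s, d, ours):
            ms = moves(s)
            if d == 0 or not ms:
                return score(s)
            vals = [value(play(s, m), d - 1, not ours) for m in ms]
            return max(vals) if ours else min(vals)
        # meta-criterion: the command whose simulated future scores best
        return max(moves(state),
                   key=lambda m: value(play(state, m), depth - 1, False))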

How much internal reflective representation does one want to require to satisfy 
one or another concept of agency?  None of them, in particular?  A particular 
subset?

For different purposes I can see arguing for different answers, and I am not 
sure how many categories it will be broadly useful to recognize.

Eric


> On Jul 15, 2023, at 8:28 AM, Russ Abbott  wrote:
> [...]

Re: [FRIAM] What is an agent [was: Philosophy and Science}

2023-07-14 Thread Russ Abbott
I'm not sure what "closure to efficient cause" means. I considered using as
an example an outdoor light that charges itself (and stays off) during the
day and goes on at night. In what important way is that different from a
flashlight? They both have energy storage systems (batteries). Does it
really matter that the garden light "recharges itself" rather than relying
on a more direct outside force to change its batteries? And they both have
on-off switches. The flashlight's is more conventional whereas the garden
light's is a light sensor. Does that really matter? They are both tripped
by outside forces.
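
Reduced to a sketch (Python, everything invented), the comparison comes
out as: stored energy plus a trigger, differing only in which outside
force trips the trigger:

    def flashlight_on(battery_ok, thumb_on_switch):
        return battery_ok and thumb_on_switch       # tripped by a thumb

    def garden_light_on(battery_ok, ambient_lux):
        return battery_ok and ambient_lux < 10.0    # tripped by nightfall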

BTW, congratulations on your phrase *epistemological trespassing*!

-- Russ

On Fri, Jul 14, 2023 at 1:47 PM glen  wrote:

> [...]
>
> On 7/14/23 12:01, Russ Abbott wrote:
> > [...]

Re: [FRIAM] What is an agent [was: Philosophy and Science}

2023-07-14 Thread glen

I'm still attracted to Rosen's closure to efficient cause. Your flashlight 
example is classified as non-agent (or non-living ... tomayto tomahto) because 
the efficient cause is open. Now, attach sensor and effector to the flashlight 
so that it can flick it*self* on when it gets dark and off when it gets bright, 
then that (partially) closes it. Maybe we merely kicked the can down the road a 
bit. But then we can talk about decoupling and hierarchies of scale. From the 
armchair, there is no such thing as a (pure) agent just like there is no such 
thing as free will. But for practical purposes, you can draw the boundary 
somewhere and call it a day.
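
A sketch of that partial closure, in Python with an invented threshold:
bolt a sensor and an effector onto the flashlight so that its own state is
set by its own reading rather than by an outside thumb:

    class SelfSwitchingLight:
        def __init__(self, threshold_lux=100.0):
            self.threshold = threshold_lux  # still set from outside -- the
            self.on = False                 # can kicked down the road
        def step(self, ambient_lux):
            self.on = ambient_lux < self.threshold  # it flicks it*self*
            return self.on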

On 7/14/23 12:01, Russ Abbott wrote:

I was recently wondering about the informal distinction we make between things 
that are agents and things that aren't.

For example, I would consider most living things to be agents. I would also 
consider many computer programs when in operation as agents. The most obvious 
examples (for me) are programs that play games like chess.

I would not consider a rock an agent -- mainly because it doesn't do anything, especially on its 
own. But a boulder crashing down a hill and destroying something at the bottom is reasonably called 
"an agent of destruction." Perhaps this is just playing with words: "agent" can 
have multiple meanings.  A writer's agent represents the writer in negotiations with publishers. 
Perhaps that's just another meaning.

My tentative definition is that an agent must have access to energy, and it 
must use that energy to interact with the world. It must also have some 
internal logic that determines how it interacts with the world. This final 
condition rules out boulders rolling down a hill.
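
Cast as a minimal interface (a sketch with invented method names, not a
proposed standard), the definition reads:

    from typing import Protocol

    class TentativeAgent(Protocol):
        def energy_available(self) -> float: ...    # has access to energy
        def act(self, world) -> None: ...           # uses it to interact
        def decide(self, observation) -> str: ...   # internal logic; this is
                                                    # what rules out boulders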

But I doubt that I would call a flashlight (with an on-off switch) an agent 
even though it satisfies my definition.  Does this suggest that an agent must 
manifest a certain minimal level of complexity in its interactions? If so, I 
don't have a suggestion about what that minimal level of complexity might be.

I'm writing all this because in my search for a characterization of agents I looked at the 
article on Agency <https://plato.stanford.edu/archives/win2019/entries/agency/> in the 
/Stanford Encyclopedia of Philosophy./ I found that article almost a parody of the 
"armchair philosopher." Here are the first few sentences from the article overview.

In very general terms, an agent is a being with the capacity to act, and 
‘agency’ denotes the exercise or manifestation of this capacity. The philosophy 
of action provides us with a standard conception and a standard theory of 
action. The former construes action in terms of intentionality, the latter 
explains the intentionality of action in terms of causation by the agent’s 
mental states and events.

That seems to me to raise more questions than it answers. At the same time, it 
seems to limit the notion of /agent/ to things that can have intentions and 
mental models.  (To be fair, the article does consider the possibility that 
there can be agents without these properties. But those discussions seem 
relatively tangential.)

Apologies for going on so long. Thanks, Frank, for opening this can of worms. 
And thanks to the others who replied so far.

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles



On Fri, Jul 14, 2023 at 8:33 AM Frank Wimberly <wimber...@gmail.com> wrote:

Joe Ramsey, who took over my job in the Philosophy
Department at Carnegie Mellon, posted the following on Facebook:

I like Neil DeGrasse Tyson a lot, but I saw him give a spirited defense of 
science in which he oddly gave no credit to philosophers at all. His straw man 
philosopher is a dedicated *armchair* philosopher who spins theories without 
paying attention to scientific practice and contributes nothing to scientific 
understanding. He misses that scientists themselves are constantly raising 
obviously philosophical questions and are often ill-equipped to think about 
them clearly. What is the correct interpretation of quantum mechanics? What is 
the right way to think about reductionism? Is reductionism the right way to 
think about science? What is the nature of consciousness? Can you explain 
consciousness in terms of neuroscience? Are biological kinds real? What does it 
even mean to be real? Or is realism a red herring; should we be pragmatists 
instead? Scientists raise all kinds of philosophical questions and have 
ill-informed opinions about them. But *philosophers* try to answer
them, and scientists do pay attention to the controversies. At least the 
smart ones do.



--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
