With this example, I understand why Glen prefers triggers and actions to be distinct; if and when I use this language for agent modeling, I would prefer the same.
In my naive version of pop psych, I am used to "triggers" and "reactions" being somewhat convolved. The firearm metaphor is quite apt here... with a "hair trigger" and a gun whose "safety is off" or that is "cocked" (double-action revolver/rifle). I take Glen's use of "installed triggers" to be something more like "quantized" self-awareness... and pre-existing/organic/unselfaware triggers to be *reactions*.

As for "heuristics", I agree that they are more about responses than the conditions for those responses, but I do have a hard time separating them entirely. I think of a heuristic as a "simplified rule of thumb used to respond quickly to an event where a more elaborately considered response may not be timely enough, or simply not worth the effort".

1) in Marcus's model seems to be the way thoughtful/considerate but possibly not terribly sophisticated people operate all of the time... they perk along 'reacting' to the world as it impinges on their bubble, but may recognize when they are being oversensitive or causing undue harm with their reactions and re-evaluate one or both. 2) is what a more overtly self-aware person does, and in fact may do this kind of self-reflection and planning under the guidance of a friend, partner, mentor, therapist, or spiritual leader.

In any case, the word "trigger" used to be a "trigger" for me, and made me withdraw from the conversation. When used to describe *my* (re)actions, it felt like a weapon being wielded to try to modify my behaviour, and when used to describe the speaker's actions, it felt like a shield being used to excuse/rationalize their own action. Like many pop-psych terms, I've "metaprogrammed" myself to recognize many of my "triggers", apply a "simple heuristic" in response, log the event for later review, and consider installing a more sophisticated heuristic.

- Steve

On 2/22/20 4:11 PM, Marcus Daniels wrote:
> I think different views may arise like this.
>
> 1) Agent runs all the time and has an active set of triggers and actions
> associated with these triggers. When triggers are too sensitive or actions
> too consequential, they are changed. Triggers and actions are being added
> and changed all the time. I'll call this the interpreter model.
>
> 2) Agent has an offline planning mode, compiles the set of triggers and
> actions, and periodically puts them into production. The planning mode is
> reflective and may use abstracted/specialized language to do the planning.
> Ethical thinking occurs in planning mode. I'll call this the
> metaprogramming model. The advantage of the metaprogramming model is that
> the triggers and actions can operate at high speed. Don't think about
> dancing, dance. The metaprogrammer observes the consequences of
> reprogramming its reptile brain, but only after a period of letting the
> reptile brain operate in the wild.
>
> Marcus
>
> On 2/22/20, 1:41 PM, "Friam on behalf of uǝlƃ ☣" <[email protected]
> on behalf of [email protected]> wrote:
>
> On 2/22/20 7:45 AM, Marcus Daniels wrote:
> > Glen writes:
> >
> > < By asking for more examples, it seems the original one (Ellison's
> > Trump support) isn't meaningful for you? Another example might be learning
> > that your organization accepted money from a convicted sex offender like
> > Epstein. These are triggers for some people. They'd trigger me, too.
> >
> > A reason I can see for avoiding a term like EI is because others might
> > not have a binding for it, or there are too many different bindings observed
> > for it. And, specifically, that it is "pompous" to use the term if it is
> > expected there is no binding -- a way to bully the conversation in some
> > direction, putting the other party at a disadvantage. But it is hypocritical
> > if one turns around and assumes there are shared values and that we should or
> > do all have them.
> > This is arguing in bad faith because some values are
> > assumed to be mandatory and others optional, rather than all things being
> > optional.
>
> Well, a) I didn't assume any shared values. I explicitly stated that such
> things are triggers for *some* people. I didn't say *all* people should be
> triggered by getting money from Epstein. And, given the popular culture at
> the moment, I said I would *advise* Pinker to install a trigger, not that he
> must or even *should*. So, b) if you're accusing me of arguing in bad faith
> for rejecting the need for a sophisticated concept like EI, I think it's a
> false accusation.
>
> Even in my first post, I think I made the explicit comment that it
> doesn't matter whether the Oracle employee likes or dislikes that Ellison
> supports Trump. What matters is that the employee knows that Ellison =
> Oracle, hence Oracle supports Trump. And the question was whether that's a
> good trigger to have, regardless of how you react to the trigger.
>
> So, there are no shared values here, only a rejection that we need
> sophisticated rhetoric like EI.
>
> --
> ☣ uǝlƃ
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> archives back to 2003: http://friam.471366.n2.nabble.com/
> FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
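[Editor's note: Marcus's two models are concrete enough to sketch as code. Below is a minimal Python sketch, not anything from the thread itself; all names (`Rule`, `InterpreterAgent`, `MetaprogrammingAgent`, `plan`, `react`) are hypothetical. Model 1 is an agent whose trigger/action rules can be swapped while it runs; model 2 adds an offline `plan()` pass that "compiles" a candidate rule set against rehearsal events, pruning over-sensitive triggers before putting the survivors into production.]

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch of the two models described in the thread.

@dataclass
class Rule:
    trigger: Callable[[str], bool]  # condition tested against an incoming event
    action: Callable[[str], str]    # response produced when the trigger fires

class InterpreterAgent:
    """Model 1 (interpreter): runs all the time; rules may be
    added or changed live while the agent keeps reacting."""
    def __init__(self) -> None:
        self.rules: List[Rule] = []
        self.log: List[str] = []

    def react(self, event: str) -> None:
        # Fast path: no reflection, just fire every matching rule.
        for rule in self.rules:
            if rule.trigger(event):
                self.log.append(rule.action(event))

class MetaprogrammingAgent(InterpreterAgent):
    """Model 2 (metaprogramming): a reflective offline pass compiles
    the rule set, then the compiled rules run at full speed."""
    def plan(self, candidates: List[Rule], rehearsal: List[str]) -> None:
        # Offline planning mode: keep only rules that fire on some
        # rehearsal events but not all of them -- a crude stand-in
        # for "trigger is useful but not too sensitive".
        kept = []
        for rule in candidates:
            hits = sum(rule.trigger(e) for e in rehearsal)
            if 0 < hits < len(rehearsal):
                kept.append(rule)
        self.rules = kept  # put the compiled set into production

# Usage: the always-true trigger is "too sensitive" and gets pruned
# during planning; the specific one survives into production.
agent = MetaprogrammingAgent()
candidates = [
    Rule(lambda e: "epstein" in e, lambda e: f"flag: {e}"),
    Rule(lambda e: True, lambda e: "overreact"),
]
agent.plan(candidates, rehearsal=["donation from epstein", "routine email"])
agent.react("donation from epstein")
```

The design choice mirrors the "don't think about dancing, dance" point: `react()` does no reflection at all, while all ethical/evaluative work is confined to `plan()`, which only runs after observing a batch of events from the rules operating "in the wild".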
