Nick, ... come on man....

1) You can't infinite regress until you know what a single iteration entails. It does no good to try to convince me it is "turtles all the way down" before we have at least some agreement about what the heck a turtle is.

2) Infinite regress is not always the right way to go. Sometimes you see a turtle on the side of the road and, after excitedly lifting it up, are startled to find there is not even a single other turtle underneath it!

3) To the direct question... "To me, you are a model, right?"... the answer is a flat "No." Stop trying to do a bait and switch. YOU don't get to do that (at least not in this context).

3a) Serving as bait, we have a distinction between dealing with a thing and dealing with a model of a thing. When you play with the toy train, you are playing with a model (to use your example from the paper)... but the train driver who is driving an actual train is playing with the actual train. Sometimes, in some conversations, the actual train might serve as a model for something else, but it isn't a model for the thing it is, it IS the thing that it is.

3b) For the switch, we have a way of talking about "mental models" where we become dualists, and then eventually idealists, by asserting that all anyone ever knows is their mental "idea"/"model"/"representation"/"sign". There is some sense in which that is true, which is what makes it tempting to go to the extreme, but it is a very, very different topic of discussion from the toy train issue.

3c) You would not, for example, in a model-train store, seriously explain to a child playing with a model train that actual train drivers work equally with model trains, because both the train driver and the child are ultimately working with their personal mental model of what a train is.**

3d) So, we need to keep our conversations straight, and not be so eager to find infinite regress at every turn. When you interact with me, you interact with me. If you have something that is not-me, which you use as a model of me (a voodoo doll, perhaps?), then when you interact with THAT, you are interacting with a model of me.

4) You also ask...
"So, when you intend something by a model, it is a case of a model intending a model, right? So, models intend, right? So why not just say so, in the first instance."

Sticking with the above distinction.... There is a great episode of House M.D., where the great doctor is stuck on a plane where passengers start getting sick. He assembles a team of people with a vague physical resemblance to his team, and starts instructing them how to act. The younger white guy is instructed to agree with him, the woman to be morally outraged by the whole thing, etc.**** At that point House is intending models, who might themselves intend models. So now we have the multiple layers you wanted, but, now that we have two layers constructed properly, we find that it doesn't enhance the discussion at all. Once we have avoided the bait-and-switch, we find that "a model intending a model" is still *just* a person with a model.

** Full disclosure... I would totally say something like that to a kid in a model-train store... but I like intellectually messing with people, and would enjoy watching them struggle with the dilemma.

**** Link to the House scene (it is shorter than I remember... but I think he goes back and talks to them again later in the episode): https://www.youtube.com/watch?v=bA6T5L196pg

On Wed, Jan 15, 2020, 3:44 PM <thompnicks...@gmail.com> wrote:

> Eric,
>
> I apologize for what may seem sophomoric smarminess but…..
>
> To me, you are a model, right? Whatever you are, it is my model of you
> with which I am dealing. So, when you intend something by a model, it is
> a case of a model intending a model, right? So, models intend, right? So
> why not just say so, in the first instance.
> Nick
>
> Nicholas Thompson
> Emeritus Professor of Ethology and Psychology
> Clark University
> thompnicks...@gmail.com
> https://wordpress.clarku.edu/nthompson/
>
> *From:* Friam <friam-boun...@redfish.com> *On Behalf Of* Eric Charles
> *Sent:* Wednesday, January 15, 2020 1:27 PM
> *To:* The Friday Morning Applied Complexity Coffee Group <email@example.com>
> *Subject:* Re: [FRIAM] description - explanation - metaphor - model - and reply
>
> There is an interesting issue that often comes up in these contexts, in
> which someone asserts that the models mean something all on their own. If
> it is someone who has picked up our language, they might, for example,
> ask "What does the model intend? The model, itself?"
>
> Glen does this by saying "there's good reason to believe you will *never*
> actually understand how your model works."
>
> I have seen Nick oscillate in those discussions, towards and away from
> thinking he needs to rewrite everything.
>
> I insist that is not the direction we should be going in. The model doesn't
> intend anything. A person, who is offering a model, intends something by
> it, and does not intend other things. Because THAT is what we are
> talking about.... There IS a chance (though no guarantee) that the person
> offering a model (fully) understands what they do or do not intend to match
> between the model and the situation that is modeled.
>
> We aren't talking about anything other than people doing things. X is "a
> model" if/when someone thinks an aspect of X matches something happening
> somewhere else, and all models contain both intended and unintended
> implications. This makes a question of whether or not someone "fully
> understands their model" a question primarily about the understanding, not
> primarily about "the model itself".
>
> On Wed, Jan 15, 2020, 1:13 PM uǝlƃ ☣ <geprope...@gmail.com> wrote:
>
> Did Epstein ever respond to your criticism?
> For what little it's worth, I disagree with your lesson. Obtuse models can
> be very useful. In fact, there's good reason to believe you will *never*
> actually understand how your model works, any more than you'll ever
> understand how that model's referent(s) work. I may even be able to use
> Peirce to argue that to you. 8^)
>
> On 1/15/20 9:23 AM, thompnicks...@gmail.com wrote:
> > The lesson is, if you don't understand how your model works, you aren't
> > doing yourself any favors by inventing it. This led to my war with
> > Epstein in the pages of JSSS about the relation between explanation and
> > prediction.
>
> --
> ☣ uǝlƃ
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> archives back to 2003: http://friam.471366.n2.nabble.com/
> FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove