Glen - Your diatribe reminds me of the way I used to frame my (rare) pitches in DC back during my time working in the "Decision Support Systems" division at LANL. I started out with "I'm here to help you NOT make a decision". This appalled them, because "by golly, by gosh, they were Decision Makers". About that time, in fact, the "Decider in Chief" was coined by his own claim: "I am the Decider!".
My point was that they were, by the nature of their roles and the self-selection of choosing to work in those roles, *decisive people*. They were men and women of *action*! But to act (intentionally) they had to Decide! and to Decide! they had to have data and facts and models to back up their decision. But frankly, as often as not, I saw them use our work to *justify* the decision they had already made or were leaning heavily toward, *apparently* based on larger strategic biases. Often these "larger strategic biases" are what you and I would call "political agendas". The military folks were less "political" in the usual sense, but they seemed to have *much* larger biases (or maybe the consequences of their decisions were MUCH more acute and direct?). Nobody seemed to truly be interested in "making a better decision", and as a developer of such tools, I was acutely aware of the risk that some tool I helped deliver *might* help them make a *bad decision* with the wrong perspective/filter/lens on the facts available. Maybe it was my own sense of (wanting to avoid) responsibility that had me judging that they "weren't really using our tools to inform or make their decision, but rather using it to justify the one they were already set to make."

Maybe I am Pollyanna, but the work SimTable is doing (and perhaps many others in this space) is being used by people "closer to the ground". Perhaps my problem at LANL was that our "customers" were Agency/Department program managers and their high-level decision makers (e.g. Cabinet-level, or at least their staff).

As for your gut-level (and often well articulated) mistrust of "metaphorical thinking", you may conflate a belief (such as mine) that language is metaphorical at its base with being a "metaphorical thinker". Metaphor gets a bad rap/rep, perhaps because of the "metaphorical license" often taken in creative arts (albeit for a different and possibly higher purpose). I know we've argued this back and forth (what...
like tossing a ball... or fencing with swords?) here and offline (off line? what "line"?), so we might be beating a dead horse (what? there is no horse, there is no whip, no stick, no beating going on!). I will agree that substituting a clever or familiar metaphor for more strict analysis is always risky, and if what you mean by "metaphorical thinking" is retreating to trite and over-used metaphors when something much tighter is called for, then I agree with your dismissal (dismiss? Can an argument be dismissed like an unruly subordinate?).

Life is like a Simile,

- Steve

On 4/17/20 5:50 PM, uǝlƃ ☣ wrote:
> So, if you're serious about *your* attempt to model Nate Silver, then you
> would find something in your experience that *means* something similar to
> what Nate means. And jargonal "expected value" <=> vernacular "I expect"
> isn't that thing.
>
> Your last paragraph comes closer. But you chose to frame it as something you
> would prefer him to say, as opposed to using your own words to restate what
> he's actually saying.
>
> To me, I think what he's actually saying is "It's my job to collect and clean
> some data, often based on heuristics, then run that data through some
> (admittedly biased) algorithms, present the result to you, and engage in some
> light-handed (also biased) interpretation of that data." Then he might go on
> to say something like "What you infer from that output data is your own
> business. But don't tell me what I implied simply based on your (mistaken)
> inference."
>
> That's *my* rewording because it's analogous to experiences I have every
> single day building and running models for (often computationally
> incompetent) people. It has nothing to do with prediction and *everything* to
> do with putting computational power into the hands of people who, without me,
> wouldn't ordinarily have that power. Nate's a (horizontal) technologist. It's
> regrettable that he's being thought of as some sort of oracle.
> (Even if he
> ends up getting off on the attention.)
>
> Technologists, like scientists, struggle a LOT with packaging what they do
> and how their produce can be used. And *always* ... always always always,
> there's some non-tech person somewhere imputing things that are not there (or
> ignoring things that are there). It would help a lot if you "soft skilled"
> people would actually use your soft skills and make a real effort to
> understand what's being said without imputing what you want to hear. (To be
> clear, I'm not making accusations against you or anyone here, right now...
> just venting a little. ... Just this morning a fellow technologist was
> telling me how his executives renamed a relatively straightforward machine
> learning tool with some high-falutin' misleading references to "virtuality"
> and AI. Arg. You metaphor people make our lives so difficult.)
>
> On 4/17/20 4:08 PM, [email protected] wrote:
>> I think an obsessively metaphorical thinker is one who has the arrogance to
>> suppose that s/he has */some/* familiar experience by which s/he can model
>> any experience of another person. I actually don't believe that that is
>> true, but I think it is true enough that I feel it is my obligation to try.
>>
>> I am deeply suspicious of modal talk of any form because it is so often used
>> in human interactions to manipulate other people. "I probably will return
>> your tools tomorrow." My colleagues used to say, "I think the Department
>> should improve its teaching." So often in human affairs, modal language has
>> no practical consequences whatsoever except to confuse and lull the
>> audience.
>>
>> Now, what most people wanted to know from Nate Silver is whether Clinton was
>> going to win the election. Nate constantly says that making such
>> predictions is, strictly speaking, not his job. As long as what happens
>> falls within the error of his prediction, he feels justified in having made
>> it.
>> He will say things like, "actually we were right." I would prefer him
>> to say, "Actually we were wrong, /but I would make the same prediction under
>> the same circumstances the next time./" In other words, the right procedure
>> produced, on this occasion, a wrong result.

.-. .- -. -.. --- -- -..-. -.. --- - ... -..-. .- -. -.. -..-. -.. .- ... .... . ...
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/
