This is good, Nick. There are some things you say here to which I think I can reply constructively. I will commit the indiscretion (cf. Pound) of responding to a couple of sentences that weren’t addressed to me, and then to others that were.
(TL;DR of course)

> On Mar 21, 2026, at 16:07, Nicholas Thompson <[email protected]> wrote:
>
> I will work backwards. What is the test by which you determine whether your
> talc-ing of the lens has improved your vision or whether it has made it
> worse? That's not a rhetorical question. Work it through in your
> imagination. How exactly do you determine this? This is the question that
> lies at the bottom of EricS's complaint about TK.

As usual for me, I don’t know how to describe a general system, but I think I can work backward from example cases.

Note: I will need to say later that you seem to scope the word “metaphor” much more widely than I ever would, and as a result you seem to assimilate categories that to me are very different and need to be handled differently. Because I tend to see lots of different types at work, my examples tend to be fairly narrow and particular.

Case 1: There are certain formal presentations that claim to capture what makes a dynamic “evolutionary”. They are derived from the population geneticists’ conventions. As formal systems, they are whatever they are. Terms are defined; you can check whether they are coherent, and where they are, you can calculate things with them. There isn’t anything in this exercise of a formal system that one could regard as being “in error”, as long as you use the terms in ways that keep them coherent. To the extent that they get bound to observations of phenomena, they are being treated as a model (in one sense of that term; several somewhat different senses exist). The place where metaphor enters is in the choice to see this formal system, as opposed to some other one, as a model, and here I think we can argue that there are understandable motives, and also errors. I take this to be the main program you are after, and one that I also want.

Within the usual setup, there is an (appropriately) ugly term called “units of selection”.
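To make one of the population geneticists’ conventions concrete (my choice of illustration; nothing in the thread specifies it), the Price equation splits trait change into a selection term, Cov(w, z)/w̄, and a transmission term; the covariance piece is the bookkeeping on which the “units of selection” setup hangs. A minimal numerical sketch, with made-up numbers:

```python
# Price equation, selection term only (assumes faithful transmission).
# All numbers are invented for illustration.
zs = [0.0, 1.0, 1.0, 2.0]   # trait values of four parents
ws = [1.0, 2.0, 2.0, 3.0]   # offspring counts (fitness)

n = len(zs)
z_bar = sum(zs) / n                        # mean trait before selection
w_bar = sum(ws) / n                        # mean fitness
cov_wz = sum((w - w_bar) * (z - z_bar)
             for w, z in zip(ws, zs)) / n  # Cov(w, z), population form

# Offspring mean trait, weighting each parent by its offspring count:
z_bar_next = sum(w * z for w, z in zip(ws, zs)) / sum(ws)

# With no transmission bias, the identity is exact:
assert abs((z_bar_next - z_bar) - cov_wz / w_bar) < 1e-12
```

The point of the formalism, as above, is only the coherence of its terms; whether this bookkeeping is a good model of a given dynamic is the separate, metaphor-laden choice.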
It has all sorts of problems and inadequacies, hacks, compromises, and so forth. For a very limited range of cases, it is a good abstraction, and sufficient for certain statistical needs. (That last sentence has a trumpy structure.) It is related to another term that need not be ugly but is often made so: “levels of selection”, for which the obligatory book to cite (whether or not one has read it) is Samir Okasha’s.

I will argue that the choice to insist, cult-like, that an evolutionary process can only be formalized around “units of selection” is an instance of making a metaphor in which the source domain is our folk-experience of “organisms”, roughly as it would have been available (and would have seemed unproblematic) to Darwin. And I would argue that an error is made when the metaphor is applied to systems in which the target lacks infrastructure that could ever let it take on capabilities that organisms have (say, long-term type-stability of kinds that biology usually doesn’t even question), and that are essential for organisms to provide what they provide to evolutionary dynamics. This misattribution gets particularly acute for Origin of Life problems, so they bring it into sharp relief.

Notably, the _only_ reason I am willing to make this charge is that I can construct other formal systems to adopt as models, which don’t insist on anything so literally bound, metaphorically, to organisms à la Darwin, and that will be more faithful to the play-out of the phenomena to which they are being applied — even by the terms the evolutionists set out for themselves about what is the specifically “evolutionary” aspect of some dynamic. This is the usual situation of not knowing there was a question until having constructed an answer enables one to articulate that question.

Case 2: I second either DaveW or John Searle (different as those two are) in not much liking the “brain is a computer” classification, and in thinking it is a misused metaphor.
In two ways, my objection can’t be like Dave’s: 1) I don’t have something else that I want to put in place of the “computer” source domain, and 2) I don’t understand the mystics’ position. And Searle in general carries too much other baggage that I don’t want to endorse, so that I have only maybe a sliver of overlap with him. But I can say I have my own, very tentative, objection.

A place I differ from the others is that I am a lot more optimistic that current ML systems will have _a lot_ to say about brains and thought, along lines Marcus advocates, and I think there is a crisp question one can pose from that view.

For me, the core of the objection is that, as I use the term “computer”, if I mean a formal system, then I intend one of the formalized classes: maybe Church/Turing equivalent, or maybe some formalized process calculus, or maybe various kinds of gradient descent as in neural-net architectures with their training protocols. If I mean the common, sub-formal usage, I intend the family-resemblance sense in which we will clump many engineered systems as “computers” if they will carry some parts of the formal markings without inconsistency (von Neumann machines that don’t have infinite Turing tapes, but that will carry out algorithm fragments exactly as the Turing machine would, so long as the tape isn’t limiting for that fragment, etc.).

My speculative resistance to the “is a computer” assignment is that it seems to make brains a sub-class of some other engineered system, or to suppose that the partial ability of brain activity to carry formal markings without inconsistency gives us reason to believe that there are _no other markings_ that it might also carry that are worth articulating, and that are essential to its brain-ness or thought-ness. I take this to be Searle’s point in saying “my brain is like my stomach”: it is its whole self, which may as well be indefinitely complicated and unknown to me, beyond a few of its functions that I can describe.
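The parenthetical point about finite machines carrying algorithm fragments faithfully can be shown with a toy example (entirely my construction, not anything from the thread): a bounded tape behaves exactly like the idealized unbounded one until the fragment outgrows it.

```python
# A one-state Turing-style fragment: append a 1 to a unary string.
# On inputs that fit, a bounded tape agrees exactly with the idealized
# unbounded-tape machine; past the bound, the correspondence breaks.

def unary_successor(tape_cells, tape_size=32):
    tape = list(tape_cells) + ['_'] * (tape_size - len(tape_cells))
    head, state = 0, 'scan'
    while state != 'halt':
        if head >= tape_size:
            raise MemoryError("finite tape exhausted: the fragment no "
                              "longer matches the idealized machine")
        if tape[head] == '1':
            head += 1          # move right over the 1s
        else:
            tape[head] = '1'   # write the trailing 1 and stop
            state = 'halt'
    return ''.join(tape).rstrip('_')

print(unary_successor('111'))   # -> '1111'
```

The family-resemblance clumping is exactly this: for small enough fragments, the engineered box and the formal object are indistinguishable, and the finite machine earns the name “computer” without inconsistency.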
Don’t, if I can use a Glen construction correctly, “pre-emptively register”.

My optimism for bringing in the computer metaphor heavily is that it seems entirely reasonable to suppose that there are many things the brain does, which we experience and classify (in our folk language since antiquity) as “thinking”, that computers are also doing, and that for practical purposes they are doing “in the same way”. About the latter I need to give two details to be understandable, because here I think there is a puzzle to solve (tip to TK). I would maybe _not_ want to pursue either of the arguments that: 1) the brain _cannot_ be doing what the ML computing machines are doing, because they are different substrates; or 2) the brain _should_ be doing what the ML computing machines are doing, because Hopfield designed them to imitate what he understood neurons to be doing.

A much more fun problem for me would be the conjecture: there is a certain class of pattern-finding and pattern-modeling problems that are richly furnished by the world, for which there is a class of “most easily found” ways of solving them. Brains and ML computing machines may be doing effectively the same thing because they have both found these easiest ways, in spite of some difference of substrate, and perhaps guiding the convergence of other aspects of substrate (approximated, let us guess, by Hopfield et seq.).

This is the kind of computational-complexity or algorithm-type argument I would make for protein folding, or for metabolism, along similar lines. Abstract a little bit away from material things, ask about the nature of patterns of events in a world of noise, and then ask for paths of least resistance.

So the metaphor of “is a computer” has great service to give, if it directs us to understand the nature of problems, after which, like Wittgenstein’s ladder, we have the sense to throw it away.
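Hopfield’s construction, invoked above, can be sketched in a few lines (patterns and sizes invented for illustration): Hebbian weights make stored patterns low-energy attractors, and one-unit-at-a-time updates never raise the energy, so a corrupted pattern slides toward a nearby stored one; a small instance of finding a path of least resistance.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16

# Two hypothetical stored patterns of +/-1 units (illustrative only).
patterns = rng.choice([-1, 1], size=(2, N))

# Hebbian outer-product weights with zero self-coupling (the standard recipe).
W = (patterns.T @ patterns).astype(float)
np.fill_diagonal(W, 0.0)

def energy(s):
    return -0.5 * s @ W @ s

# Start from a corrupted copy of the first pattern and update units one at a
# time; with symmetric weights and zero diagonal, each flip can only lower
# (or preserve) the energy, so the state settles into a nearby attractor.
s = patterns[0].copy()
s[:4] *= -1
energies = [energy(s)]
for _ in range(3):               # a few sweeps over all units
    for i in range(N):
        s[i] = 1 if W[i] @ s >= 0 else -1
        energies.append(energy(s))

# The energy sequence is non-increasing, step by step.
assert all(b <= a + 1e-9 for a, b in zip(energies, energies[1:]))
```

Nothing here bears on brains, of course; it only exhibits the kind of “most easily found” dynamics the conjecture points at.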
Moreover, the ability of computers to do _really_ what brains are doing in one respect, while being different in others, gives us angles on the brain by disaggregating these aspects, in ways we rarely get with whole brains, apart from idiosyncratic cases of pathology that expose part of the machinery, though generally not for the sake of making it easy for us to interpret.

I think one could go on with similar, rather narrow, cases of this kind.

Continuing my self-indulgence, I do want to repeat something I wrote some months ago, about one aspect of the term “metaphor”. My understanding is that, narrowly, it applies to a specific language construct: calling one thing by the name of something else that both speaker and listener understand it is not. I am willing to extend that as far as “referring to one thing in the image of another that both referrer and recipient know it is not”. So, to go beyond just names, to general constructs for images or referents. I would not, then, refer to just-any-old gesture in some direction as “a metaphor”. I think (in current American English usage; I won’t proclaim about old Greek) the deliberate violation is a core sense of the term “metaphor” that sets it apart from other assimilation-indicating terms, like “analogy”, or “family resemblance”, or various other senses of “model”.

I am actually even violating my own usage here, in the previous two examples, in that the known-and-intended name violation is often _not_ part of what the units-of-selection crowd or the brain-is-a-computer crowd do. I will say that, when processed properly, that violation aspect does arise and makes them (imagistic) metaphors-proper.

> We would like to think it's not "just" a social test. This is where Peirce
> takes off, I think: yes, it's a social test, but it's not "just" a social
> test. In fact, the long-term social test has methods, it has rigor, it has
> precision, it has all the good things that collective human cognition can
> have.
> These are scientific methods, and the pursuit of such methods will
> lead you to have fewer surprises in your life. The one thing it will never
> have is experience of entities beyond the realm of human experience. It
> follows that every sighting of a thing previously not encountered has to be a
> metaphor.

This is the point I telegraphed above: Nick’s usage of “metaphor” is so wide that he is willing to make a logical-ish claim that “It follows that….” In my usage, no such “it follows” can be asserted. There are all kinds of new sightings in my world that “kinda remind me of” something else more familiar.

That is the part of what you say that I think is generic. We are little systems (pace Dave) in a world that is bigger than we are, and there is no general solution to the Problem of Induction. For that reason we cannot help but classify. But such classification can be any old kind of binding of situations and systems. I think that when people on the list have preferred “model” over “metaphor” for cases in point, it is in the sense of “model” that: two phenomena are somehow in some common category, and whatever that category is, I am willing to take one as “a model of” the other.

I actually think that, at one remove, this is what we do with our formal systems as well. Once declared into existence, a formal system becomes one more thing-in-the-world. We can exercise it, participate in its use, and grow our own experience-base with it. When we “bind” that experience with the formal system to our experiences with some other phenomena — taking the formal system for “a model of” the naturally occurring phenomenon — that binding occurs in ineffable domains that can be as hazardous as those that assimilate experiences with distinct natural phenomena. It is here that the question of what empiricism is becomes both hard and interesting.
> I have, as you know, enormous respect for you, and gratitude for
> your open-mindedness back in the old days when we heathens were allowed to
> tread the sacred halls of the Institute and, if we were very respectful, pet
> the cat. When I remember those days, it is bizarre to me that we cannot
> share a good laugh about the contradictions in your paragraph:
>
> It is tempting to laugh at the way to take Glen’s meaning; where when Glen
> argues that one should look through the lens in the sense of meaning-making,
> Nick claims innocently to understand that as looking through the lens at
> (invented? Freudian-style emotional motives. There will never be a thrust
> that gets through that parry, we have watched it for decades. All good fun.

I think you left out a couple of words, perhaps in an act of social generosity. But let’s see if I can understand what you say is a contradiction.

> First we need to clear up some metaphysical ground. Two bird watchers out for
> a walk in Santa Fe on a recent afternoon.
>
> BW#1: I just saw a blue-footed booby.
> BW#2: There are no blue-footed boobies in Santa Fe in April.
> BW#1: Nonetheless, I just saw a blue-footed booby.
>
> Now there are several understandings of this situation that resolve the
> contradiction and still leave BW#1's assertion to be true.

This can go down lines a lot like Imre Lakatos’s critique of the Logical Empiricists’ program. But, I understand, that is not the point of this thread of sentences.

> These include all sorts of altered states, delusion, illusion, etc. And
> there is the intenSionality issue; BW#1 may speak a language in which
> "blue-footed booby" refers to the bird which BW#2 would call a "Steller's
> Jay." Science can handle all of those.
>
> The tricky understandings are those in which we declare BW#1 to be telling
> some sort of lie. He is joking, he is faking, he is claiming falsely, he is
> arguing dishonestly, etc. Now these understandings all imply that BW#1 is
> disingenuous.
> Such claims can, of course, be verified, but such
> verifications involve long experience with the speaker and his context, and
> require one to assume that he is not to be trusted or does not know the
> nature and quality of his own acts, or both. Such claims are inherently
> insulting, and, at the best, require great circumspection and tact in their
> verification.

I agree. Bad Faith is a nasty charge to level against somebody, and I think I don’t want to do that.

> So, let's assume that Nick is being honest when he says what he first sees
> when he reads a correspondent's posts is a troll under a bridge grumping
> about who is walking over his bridge. Let's assume that that is what Nick
> naively sees. That's his first impression.

I actually believe this. How much it was the point of the “lenses are for seeing meaning through” analogy, versus how much not, is a fair question. My take was: the trolling is largely for entertainment, and to add force and emphasis here and there, but there was a serious accusation — to the main point of the thread — of behaving like an Analytical Philosopher of my acquaintance, who, when she hears “Necessity is the mother of invention”, responds with “Tell me about your mother”, sort of refusing to admit that she knows there is relevant context beyond the surface forms. Moreover, refusing to admit she can figure out that there must be other context, even if unknown to her, by choosing, among the many glosses for a word, one that clearly made the sentence senseless. (In the case of the Analytical Philosopher, over years, I came to learn that she often didn’t actually know about some of this context, and given time to deliver it gradually, she became quite willing to engage with it, to her credit. So she turned out to be a good-faith actor. Just very, very hard to have a non-maddening conversation with, in any short spell.)
> And let's assume that he has to actually apply all sorts of analytical tools
> to see through to the valuable arguments beneath. Let's not assume (unless
> you are prepared to make the argument) that he is lying about having this
> experience.
>
> Now this is tricky, because I (a somewhat respected correspondent) have been
> goaded into reporting a first impression of the words of another (more
> respected correspondent) which is apparently discordant with your
> understanding of the latter.

Dunno. Is the affect you sincerely perceive actually there? Or is it invented or imprinted? I am _so_ lucky that I am a third person who can’t know either, and I can stay off to the side. Too much responsibility if I had to have an opinion. So my saying that the affect response is not the point is not a claim that I either agree or disagree with the affect assertion.

> The one thing we cannot do is shrug. If there is one thing that must be true
> about scientists, it is that they care about disagreement. Obviously, not all
> disagreements are of equal importance, so one has to prioritize. But if one
> does tackle a disagreement, one has to examine not only the disagreement
> between two experiences, but the lenses through which those experiences have
> been seen.
>
> Those of us who use metaphors must be willing to examine them. We can never
> say, "Of course that is what my metaphor means." That is all I have ever
> been saying.

And of course, with this, I have been on board all along, which is why I post. Where I bicker, it is usually about how many different categories there are, and when those differences matter.

> Those of you who claim (if there are any of you) that some things are
> directly seen while others are arrived at through analysis, pretense, and
> sophistry must be prepared to explain how some experiences come to be simple
> while others are complex.
>
> Nick

Anyway, enough out of me for today,

Eric
.- .-.. .-.. / ..-. --- --- - . .-. ... / .- .-. . / .-- .-. --- -. --. / ... --- -- . / .- .-. . / ..- ... . ..-. ..- .-..
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe / Thursdays 9a-12p Zoom https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives: 5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
1/2003 thru 6/2021 http://friam.383.s1.nabble.com/
