Nick,

You have inadvertently hit upon a pet peeve.

For fifty-plus years, Software Engineering — both theory and practice — has 
engaged in building models/simulacra that are orders of magnitude more 
complicated, and those same orders of magnitude less understandable, than the 
business systems they ostensibly model/simulate.

The ultimate absurdity is that businesses continue to pay for what they KNOW 
will not work.

davew



On Mon, Aug 10, 2020, at 12:20 PM, [email protected] wrote:
> Dear Frennemies,

>  

> I have had my ears boxed so often for dragging threads into my metaphor den 
> that I thought I ought to rethread this.  But the paper Glen posted and Russ 
> applauded is really interesting, describing the manner in which the implicit 
> assumptions built into our AI can lead it wildly astray: “There’s more than 
> one way to [see] a cat.”

>  

> The article had an additional lesson for me.  To the extent that you folks 
> will permit me to think of simulations as contrived metaphors, as opposed to 
> natural metaphors (i.e., objects built solely for the purpose of being 
> metaphors, as opposed to objects found in the world and appropriated for 
> that purpose), I am reminded of a book by Evelyn Fox Keller which argues 
> that a model (i.e., a scientific metaphor) can be useful only if it is more 
> easily understood than the thing it models.  Don’t use chimpanzees as models 
> if you are interested in mice.

>  

> Simulations would seem to me to have the same obligation.  If you write a 
> simulation of a process that you don’t understand any better than the thing 
> you are simulating, then you have gotten nowhere, right?  So if you are 
> publishing papers in which you investigate what your AI is doing, has not the 
> contrivance process gone astray?

>  

> What further interested me about the models the AI provided was that they 
> were in part natural and in part contrived.  The contrived part is where the 
> investigators mimicked the hierarchical construction of the visual system in 
> setting up the AI; the natural part is the resulting simulation’s focus on 
> texture.  So, in the end, the metaphor generated by the AI turned out to be 
> a bad one – heuristic, perhaps, but not apt.
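>  
> For concreteness, a minimal sketch of that contrived part (my own toy 
> reconstruction, assuming a PyTorch-style network, not the investigators' 
> code): a stack of convolutions in which the early layers see only small 
> local patches (the texture the network ends up leaning on), while global 
> shape becomes visible only near the top.
>  
>     import torch
>     import torch.nn as nn
>  
>     # Hypothetical hierarchy loosely mimicking the visual system:
>     # receptive fields grow layer by layer.  Local texture is visible
>     # to every layer; whole-object shape only to the last ones.
>     net = nn.Sequential(
>         nn.Conv2d(3, 16, 3), nn.ReLU(),   # ~3x3 patches: edges, texture
>         nn.MaxPool2d(2),
>         nn.Conv2d(16, 32, 3), nn.ReLU(),  # larger patches: motifs
>         nn.MaxPool2d(2),
>         nn.Conv2d(32, 64, 3), nn.ReLU(),  # larger still: parts, shape
>         nn.AdaptiveAvgPool2d(1), nn.Flatten(),
>         nn.Linear(64, 10),                # say, ten object classes
>     )
>     print(net(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 10])
>  
> Nothing in such a stack forces a preference for shape over texture; per the 
> article, the preference seems to come from the training data as much as from 
> the architecture.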

>  

> Nicholas Thompson

> Emeritus Professor of Ethology and Psychology

> Clark University

> [email protected]

> https://wordpress.clarku.edu/nthompson/

> 

>  

>  


> *From:* Friam <[email protected]> *On Behalf Of* Russ Abbott
> *Sent:* Monday, August 10, 2020 11:04 AM
> *To:* The Friday Morning Applied Complexity Coffee Group <[email protected]>
> *Subject:* Re: [FRIAM] ∄ meaning, only text

>  

> Independent of Kavanaugh, that was a great article. That's the first I have 
> heard of this work. It begins to explain a lot about deep learning and its 
> literal and figurative superficiality.

>  


> -- Russ Abbott                                       
> Professor, Computer Science
> California State University, Los Angeles

>  

>  

> On Mon, Aug 10, 2020 at 7:02 AM uǝlƃ ↙↙↙ <[email protected]> wrote:


>> And to round out another thread, wherein I proposed Brett Kavanaugh *is* 
>> Artificial Intelligence, this article pops up:
>> 
>>   Where We See Shapes, AI Sees Textures
>>   Jordana Cepelewicz
>>   
>> https://www.quantamagazine.org/where-we-see-shapes-ai-sees-textures-20190701/
>> 
>> In the context of "originalism" and reading *through* the text, the question 
>> is: Why does Brett *seem* intelligent [‽] in a different way than your 
>> average zero-shot AI? I like Nick's argument that meaning is higher-order 
>> pattern. The results Cepelewicz cites validate that argument [⸘]. But if we 
>> continue, we'll fall back into the argument about high-order Markovity, free 
>> will, and steganographic [de]coding. And (worse) it dovetails with No Free 
>> Lunch and whether strict potentialists are well-justified in using higher 
>> order operators. Multi-objective constraint solving (aka parallax) seems to 
>> cut a compromise through the whole meta-thread. But, as always, the tricks 
>> lie in composition and modularity. How do the constraints compose? Which 
>> problems can be teased apart from which other problems to create cliques in 
>> the graph or even repurposable anatomical modules? How do we construct 
>> structured memory for saving snapshots of swapped out partial solutions? Etc.
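>> 
>> To make the teasing-apart question concrete, a toy sketch (mine, nothing 
>> from the article): treat variables as nodes, let each constraint link the 
>> variables it touches, and the connected components of that graph are the 
>> sub-problems that can be solved independently and swapped in and out.
>> 
>>     from collections import defaultdict
>>     from itertools import combinations
>> 
>>     # Hypothetical constraint set: c1 and c2 share variable b, so they
>>     # compose into one sub-problem; c3 is independent of both.
>>     constraints = {"c1": ("a", "b"), "c2": ("b", "c"), "c3": ("x", "y")}
>> 
>>     adj = defaultdict(set)              # variable-interaction graph
>>     for vs in constraints.values():
>>         for u, v in combinations(vs, 2):
>>             adj[u].add(v)
>>             adj[v].add(u)
>> 
>>     def components(nodes):
>>         """Connected components = independently solvable modules."""
>>         seen, comps = set(), []
>>         for n in nodes:
>>             if n in seen:
>>                 continue
>>             stack, comp = [n], set()
>>             while stack:
>>                 u = stack.pop()
>>                 if u not in comp:
>>                     comp.add(u)
>>                     stack.extend(adj[u] - comp)
>>             seen |= comp
>>             comps.append(comp)
>>         return comps
>> 
>>     variables = {v for vs in constraints.values() for v in vs}
>>     print(components(sorted(variables)))  # [{'a','b','c'}, {'x','y'}]
>> 
>> Cliques and repurposable modules would be refinements of the same graph 
>> view; structured memory could key its snapshots on these components.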
>> 
>> 
>> [‽] If you can't tell, I'm really enjoying using a frat boy political 
>> operative who *pretends* to be a SCOTUS justice in the argument for strong 
>> AI. To use an actual justice like Gorsuch as such just isn't satisfying.
>> 
>> [⸘] Of course, we don't learn from confirmation. We only learn from critical 
>> objection. And the 2nd half of the article does that well enough, I think.
>> 
>> -- 
>> ↙↙↙ uǝlƃ
>> 
- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 
