Glen, 

I wonder if this conversation is an example of itself.  I do not possess the 
knowledge, language, etc. to understand what you are saying here; hence it is 
"hidden".  A color-blind person lacks the code to see the numeral hidden within 
the diagram.  Is this a fair oversimplification of the concept you are getting 
at?  

Mind you, I have much less of a problem with "hidden" than I do with "inside", 
which I think launches discussants into an endlessly useless confusion between 
the mind and the brain: the latter is uncontroversially enclosed within the 
skull, while the former is distributed across the environment and actions of 
the person whose mind it is. 

Nick 

Nicholas Thompson
Emeritus Professor of Ethology and Psychology
Clark University
[email protected]
https://wordpress.clarku.edu/nthompson/
 


-----Original Message-----
From: Friam <[email protected]> On Behalf Of uǝlƃ ☣
Sent: Monday, May 18, 2020 1:16 PM
To: FriAM <[email protected]>
Subject: [FRIAM] hidden


The best layman's example of what I mean by "hidden X" (where X means a 
category of things ... "states", "behaviors", "information", "spaces", etc.) 
relies on steganography (which SteveS mentions a lot). But in last week's Zoom, 
I mentioned to Jon (in response to his query to Frank about 
RSA-encryption::mind) that I think homomorphic encryption is a better analogy 
(to mind). But my own reliance on the *opacity* of the boundary prevents me 
from communicating my point clearly, especially re: some kind of holographic 
principle for modeling a person's internal world via their observable behavior.
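
For concreteness, the property being leaned on is that you can compute on 
encrypted data and the computation survives decryption. Here is a minimal 
sketch (textbook unpadded RSA is multiplicatively homomorphic; toy insecure 
parameters, Python 3.8+ for the modular inverse; an illustration, not a claim 
about any particular scheme):

p, q = 61, 53
n = p * q                              # public modulus
e = 17                                 # public exponent
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 6, 7
c = (enc(a) * enc(b)) % n              # operate on the *ciphertexts*
assert dec(c) == a * b                 # the hidden plaintexts were multiplied
print(dec(c))                          # 42

The appeal of the analogy: the work happens on the hidden side of the 
boundary, yet its result is readable from the outside.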

So, even though I still think homomorphic encryption is an excellent analogy 
for the mind, a particular *type* of steganography is an actual example of (not 
a metaphor or analogy for) information hiding. Here are 2 concrete examples: 1) 
hiding one image "inside" [†] another image, and 2) hiding a QR code "inside" 
an image.

Here are example links:
(1) 
https://towardsdatascience.com/steganography-hiding-an-image-inside-another-77ca66b2acb1
(2) 
https://projet.liris.cnrs.fr/imagine/pub/proceedings/ICPR-2016/media/files/0542.pdf
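
For the mechanics of (1), a minimal sketch in Python/numpy, assuming 8-bit 
grayscale arrays (the low bits of the cover carry the high bits of the 
secret; the linked article's exact pipeline may differ):

import numpy as np

rng = np.random.default_rng(0)
cover  = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # stand-in images
secret = rng.integers(0, 256, (64, 64), dtype=np.uint8)

stego = (cover & 0xF0) | (secret >> 4)   # hide: low nibble carries the secret
revealed = (stego & 0x0F) << 4           # reveal: shift the low nibble back up

assert np.all((revealed >> 4) == (secret >> 4))   # secret survives to 4 bits

Nothing is erased or encrypted. The secret sits right there in the low nibble 
of every pixel, visible to anyone who reads the pixels correctly.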

(1) and (2) are examples (again, NOT metaphor [‡]) of a type I've tried to call 
"thin models". Everything about the model is written right there on the surface 
of it ... plain as day. All you have to do is *read it correctly*. My canonical 
example is a typical (system of) ODE model. Barring something pathological like 
it being "stiff" or whatever and forcing a wise choice of integrator, a typical 
ODE model is very thin: everything you need to know is right there in front of 
you. Of course, the normal form of some expression can *hide* the intent of the 
modeler because the modeler will group terms (bounded by operators like + and 
*) mechanistically ... to communicate some sort of meta- or semantic 
information about the model's components. The normalized form can hide the 
modeler's intent through (e.g., algebraic) transformations. But no information is 
actually lost ... that apparent lossage is just an artifact of the way the 
model is read ... just like in (1) and (2) above.
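
A minimal sketch of that grouping point, using the logistic equation as a 
stand-in example (sympy; the particular model is arbitrary):

import sympy as sp

x = sp.Function('x')
t = sp.symbols('t')
r, K = sp.symbols('r K', positive=True)

# The grouped form wears the modeler's intent on its sleeve:
# growth rate r, carrying capacity K.
grouped = r * x(t) * (1 - x(t) / K)

# The expanded "normal form" hides that intent, but loses nothing.
expanded = sp.expand(grouped)                 # r*x(t) - r*x(t)**2/K

assert sp.simplify(grouped - expanded) == 0   # same model, read differently

The transformation is invertible; the "hiding" lives entirely in how the 
expression is read.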

I hope that helps explain my (perhaps perverse, but I don't think so) use of 
the word "hidden". And I'm hoping that the 2 links will help with 
not-really-mathematical-though-it-may-look-mathematical understanding.

To get from this discussion to the one about scale, celery mechanisms, and 
telescopes, all you need is to imagine either (1) or (2) with your phone in 
between you and the image. Without the phone, with the phone, without the 
phone. The hop-distance (in transformations, here the main one being the phone) 
from your eyeball to the image *measures* the hiddenness of the hidden 
information. The embedded information is 1 hop away. Maybe you can imagine the 
hidden information is reversed so not only do you need the phone, but you need 
to look at your phone in the mirror. Then the hidden image would be 2 hops 
away, 2 transforms away. Etc.
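
A minimal sketch of that hop count, with base64 standing in for the phone and 
string reversal for the mirror (stand-in transforms, nothing from the linked 
papers):

import base64

def hide(msg: str) -> str:
    hop1 = base64.b64encode(msg.encode()).decode()   # the "phone"
    hop2 = hop1[::-1]                                # the "mirror"
    return hop2

def reveal(blob: str) -> str:
    unmirrored = blob[::-1]                          # undo hop 2
    return base64.b64decode(unmirrored).decode()     # undo hop 1

print(reveal(hide("plain as day")))   # 2 hops away, nothing lost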

If you've read this far, thanks. I value the opportunity to clarify the idea 
(even if I don't believe it).



[†] Please dampen your *trigger* on the word "inside". An example of what I 
mean, here, is something like a string of characters being "inside" a word ... 
e.g. the letter "i" is inside the word "sit". It's only "inside" if you process 
the string in a subset of ways. The same is true of both (1) and (2) above. If 
you reallyreallyreally can't dampen your trigger and are so impulsive that you 
simply can't think without arguing about the meaning of that one little word, 
then change the word "inside" to something like "side-by-side with" or 
"bracketed by" -- thx Jon -- or whatever you need to do to force yourself to 
grok the idea. To boot, with the QR code example, it's arguable whether the QR 
code is inside the image or the image is inside the QR code ... just like if 
you read "sit" as {s}i{t}, then you could say "i" is outside but "s" and "t" 
are inside ... whatever. 
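
A minimal sketch of how "inside" depends on the reading:

word = "sit"
print("i" in word)        # True: read as a character sequence
print("i" in set(word))   # True: read as a bag of characters
bigrams = {word[k:k+2] for k in range(len(word) - 1)}
print("i" in bigrams)     # False: read as bigrams {"si", "it"}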

[‡] Except in the sense of some kind of set closure or equivalence class where 
all the elements of the class are analogs for every other member of the class. 
Extensional equivalence via intensional metaphor?!? [ptouie] >8^D

--
☣ uǝlƃ


-- --- .-. . .-.. --- -.-. -.- ... -..-. .- .-. . -..-. - .... . -..-. . ... 
... . -. - .. .- .-.. -..-. .-- --- .-. -.- . .-. ...
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 
