Somehow, and in some way, it seems that all the work being done with swarming 
drones would be informative to this discussion.  I do not know a lot about it, 
IRL, and even that small amount is colored by the fiction in Daniel Suarez's 
*Kill Decision.*

davew


On Fri, Apr 15, 2022, at 5:14 PM, David Eric Smith wrote:
> Great resource, Glen, thank you,
>
> Your summary is indeed just the point I was after.
>
> Also good to know you have worked in this space.
>
> And the PaLM looks like a project it would be fun to be on the inside of.  
>
> I realize, in trying to think what to answer, that there is a(n 
> obvious) meta-question behind the way I asked these things.  When is 
> one thing a model for another, in some way that confers insight?  
>
> I have, off and on, followed some of the communication-bot projects 
> that come up.  The results often look to me like small algorithms 
> running preset courses, with very little they can generate on their own 
> that is a model for anything we have had real trouble understanding 
> about the experience of existing or the resources that language gives 
> us for representing or expressing the nature of that experience.  I 
> have this optimism of the amateur outsider that each new generation of 
> computational tasks will have enough richness that the problems we can 
> push them to solve with blunt reinforcement rewards will start to 
> overlap better with the particulars of discourse.  
>
> Many thanks, 
>
> Eric
>
>
>
>> On Apr 14, 2022, at 9:33 PM, glen <[email protected]> wrote:
>> 
>> Apologies for the violent reduction of your question. But I'm not smart 
>> enough to think with so many words. So I have to reduce it in order to think 
>> about it. 8^D Here's my reduction:
>> 
>>  1a Will bots represent their insides?
>>  2a Would the (social) coordination of bots be linguistic?
>> 
>>  Has anyone tried building a system of Communicating Bots?
>> 
>> I can answer the last with "yes, I have", but not in any sort of academic, 
>> scholarly, industrial, or any other serious context. So, really my "yes" is 
>> a "no, I haven't". But the project I was on did get at the question of what 
>> such things might tell us. I'll skip to the end and render my opinion and 
>> only go into the details if needed. We built 2 bots, both intended to 
>> interact with children in an augmented reality art display. The bots could 
>> talk to both the children and each other. To see how it would go before the 
>> event, we set them loose in an IRC chat room. There were 2 notable effects: 
>> 1) their conversations usually had a relatively short transient, maybe 100 
>> sentences? I forget. Sometimes it seemed to go on for a long time. But 
>> mostly they were short interactions. And 2) they tended to explore the edge 
>> cases of their corpora. Each bot was programmed with a different corpus and 
>> their interaction tended to the corpus "leaves".
>> 
>> Now, our AI was not really AI as we're talking about it here. It wasn't ML. 
>> We programmed in, manually, an ontology and a grammar by which that ontology 
>> was walked. So it's unlikely one of these inductive MLs would do the same. 
>> But, to target your sentiment that it seems like the first thing someone 
>> would do, it *was* the first thing we did. 8^D
>> 
>> Also, re the complexity of the gen-phen map, in-context learning, etc., this 
>> came across my desk recently:
>> 
>> Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for 
>> Breakthrough Performance
>> https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html?m=1
>> 
>> I hadn't read it till my insomnia this morning. But it seems to partially 
>> answer the robustness/polyphenism question I asked.
>> 
>> 
>> 
>> On 4/13/22 13:35, David Eric Smith wrote:
>>> So Glen’s line of questioning here prompts a question in me.  Partly 
>>> ignorant (can’t be helped) and partly lazy (hasn’t been helped).
>>> We have two things we have said at various times we would like to understand:
>>> 1. Unpacking the black box of whatever-NN representations of patterns;
>>> 2. Getting to a “theory” of “what language is” and how we should think of 
>>> its tokens and structures and acts in relation to not-clearly-conceived 
>>> notions of “meaning” or “reality”, on which language is supposed to be some 
>>> sort of window, however dirty.
>>> So if we gave a bunch of autonomously-hosted NNs some load of work to keep 
>>> them all busy, and offered them signal-exchange that, if they happened to, 
>>> they could find ways to employ to coordinate each other’s 
>>> autonomously-generated operations:
>>> 1A. Would one of the things their emergent signaling systems do be to 
>>> constitute representations of the “inside” of the “black box” we have been 
>>> saying we want representations of, not simply coextensive with reporting 
>>> the input-output pairs that the black box is producing? and;
>>> 2A. Would their whole coordination system be in any interesting sense an 
>>> instance of “language”, and since we would be able to look at the whole 
>>> thing, and not be restricted to operating through the channel of exchanged 
>>> signals, would there be anything interesting to learn about the inherent 
>>> distortions or artifacts or lacunae of language, as referred to some more 
>>> broadly-anchored senses of “meaning” or “reality”?
>>> I could imagine that we would complain that the coordination-traffic 
>>> purported to be representations of the black box, but that they did a 
>>> terrible job of actually being that (or one that we couldn’t use to satisfy 
>>> what _we_ want from a representation), but perhaps we could see some 
>>> familiar patterns in the disappointments (?).
>>> To cut to a small, concrete dataset, we could try it with a kind of 
>>> bootstrapping partition of the zero-shot translation, in which we 
>>> autonomously host translation-learning on subsets of languages, but we keep 
>>> firewalls between the learners so that for every learner, there are 
>>> languages or language pairs to which it is not given direct access.  We 
>>> have reason to think that each learner would develop a decent version of 
>>> some internal meta-language, and that there would be coherences of 
>>> structure across them, since that is what the zero-shot team claims already 
>>> to have demonstrated.  But with different subsets of languages as inputs 
>>> (and perhaps no less, accidents of the training path due simply to noise), 
>>> there should also be differences and things each one misses.  
>>> Cross-platform signaling could at least in principle have some available 
>>> information to convey, as well as much information that it would not need 
>>> to convey because the shared ancestry of the learning algorithms on each 
>>> platform makes talking about it unnecessary.  The target we would be after 
>>> in posing the problem is, to what degree is the cross-platform 
>>> communication insightful to us about the thing we have said we want a 
>>> representation of (the “meta-language” “within” each zero-shot learner, 
>>> which is its version of some patterns in the world).
>>> Has a lot of this already been done by somebody?  It seems like the first 
>>> thing that somebody who knows nothing would propose to do, so I assume it 
>>> has already been done to the point where people lost interest and went on 
>>> to something else.
>> 
>> -- 
>> Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙
>> 
>> .-- .- -. - / .- -.-. - .. --- -. ..--.. / -.-. --- -. .--- ..- --. .- - .
>> FRIAM Applied Complexity Group listserv
>> Zoom Fridays 9:30a-12p Mtn UTC-6  bit.ly/virtualfriam
>> un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
>> FRIAM-COMIC http://friam-comic.blogspot.com/
>> archives:
>>  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
>>  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
>
