Apologies for the violent reduction of your question. But I'm not smart enough 
to think with so many words. So I have to reduce it in order to think about it. 
8^D Here's my reduction:

  1a Will bots represent their insides?
  2a Would the (social) coordination of bots be linguistic?

  Has anyone tried building a system of Communicating Bots?

I can answer the last with "yes, I have", but not in any academic, scholarly, industrial, or 
otherwise serious context. So, really, my "yes" is a "no, I haven't". But the project I was on did get 
at the question of what such things might tell us. I'll skip to the end and render my opinion, and only go into the 
details if needed. We built two bots, both intended to interact with children in an augmented-reality art display. The 
bots could talk to both the children and each other. To see how it would go before the event, we set them loose in an 
IRC chat room. There were two notable effects: 1) their conversations usually had a relatively short transient, maybe 100 
sentences? I forget. Sometimes it seemed to go on for a long time, but mostly they were short interactions. And 2) they 
tended to explore the edge cases of their corpora. Each bot was programmed with a different corpus, and their 
interactions tended toward those corpora's "leaves".

Now, our AI was not really AI as we're talking about it here. It wasn't ML. We 
manually programmed in an ontology and a grammar by which that ontology was 
walked. So it's unlikely that one of these inductive ML systems would do the 
same. But, to address your sentiment that it seems like the first thing 
someone would do: it *was* the first thing we did. 8^D
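For flavor, here's a toy sketch in Python, definitely not our actual code: each bot holds a hand-built ontology (a topic tree), and its "grammar" replies by descending one step from whatever node it recognizes in the incoming message. The ontologies and node names are invented for illustration.

```python
import random

# Toy sketch, not the original bots: each bot holds a hand-built
# ontology (a topic tree), and its "grammar" replies by descending
# one step from any node it recognizes in the incoming message.
ONTOLOGY_A = {"animal": ["dog", "bird"], "dog": ["terrier"], "bird": ["finch"]}
ONTOLOGY_B = {"animal": ["fish", "bird"], "bird": ["crow"], "fish": ["eel"]}

def reply(ontology, message, rng):
    """Walk one step down the ontology from a recognized node."""
    for word in message.split():
        if word in ontology:
            return "tell me about " + rng.choice(ontology[word])
    return None  # no recognized node: we've hit a corpus "leaf"

def converse(max_turns=20, seed=0):
    """Alternate the two bots until one falls silent."""
    rng = random.Random(seed)
    bots = [ONTOLOGY_A, ONTOLOGY_B]
    message, transcript = "tell me about animal", []
    for turn in range(max_turns):
        message = reply(bots[turn % 2], message, rng)
        if message is None:
            break
        transcript.append(message)
    return transcript

print(converse())  # a short transient that bottoms out at a leaf
```

Even this crude version shows both effects we saw: the exchange has a short transient, and it always dies at a node that is a "leaf" for the responding bot's corpus.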

Also, re the complexity of the gen-phen map, in-context learning, etc., this 
came across my desk recently:

Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for 
Breakthrough Performance
https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html?m=1

I hadn't read it till my insomnia this morning. But it seems to partially 
answer the robustness/polyphenism question I asked.



On 4/13/22 13:35, David Eric Smith wrote:
So Glen’s line of questioning here prompts a question in me.  Partly ignorant 
(can’t be helped) and partly lazy (hasn’t been helped).

We have two things we have said at various times we would like to understand:

1. Unpacking the black box of whatever-NN representations of patterns;

2. Getting to a “theory” of “what language is” and how we should think of its 
tokens and structures and acts in relation to not-clearly-conceived notions of 
“meaning” or “reality”, on which language is supposed to be some sort of 
window, however dirty.

So if we gave a bunch of autonomously-hosted NNs some load of work to keep them 
all busy, and offered them signal-exchange that, if they happened to, they 
could find ways to employ to coordinate each other’s autonomously-generated 
operations:

1A. Would their emergent signaling systems, among other things, come to 
constitute representations of the “inside” of the “black box” we have been 
saying we want representations of, beyond simply reporting the input-output 
pairs that the black box is producing? and;

2A. Would their whole coordination system be in any interesting sense an 
instance of “language”, and since we would be able to look at the whole thing, 
and not be restricted to operating through the channel of exchanged signals, 
would there be anything interesting to learn about the inherent distortions or 
artifacts or lacunae of language, as referred to some more broadly-anchored 
senses of “meaning” or “reality”?

I could imagine that we would complain that the coordination-traffic purported 
to be representations of the black box, but that they did a terrible job of 
actually being that (or one that we couldn’t use to satisfy what _we_ want from 
a representation), but perhaps we could see some familiar patterns in the 
disappointments (?).

To cut to a small, concrete dataset, we could try it with a kind of 
bootstrapping partition of the zero-shot translation, in which we autonomously 
host translation-learning on subsets of languages, but we keep firewalls 
between the learners so that for every learner, there are languages or language 
pairs to which it is not given direct access.  We have reason to think that 
each learner would develop a decent version of some internal meta-language, and 
that there would be coherences of structure across them, since that is what the 
zero-shot team claims already to have demonstrated.  But with different subsets 
of languages as inputs (and perhaps, no less, accidents of the training path 
due simply to noise), there should also be differences and things each one 
misses.  Cross-platform signaling could, at least in principle, have some
available information to convey, as well as much information that it would not 
need to convey because the shared ancestry of the learning algorithms on each 
platform makes talking about it unnecessary.  The target we would be after in 
posing the problem is, to what degree is the cross-platform communication 
insightful to us about the thing we have said we want a representation of (the 
“meta-language” “within” each zero-shot learner, which is its version of some 
patterns in the world).
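To make the partition concrete, here is a minimal sketch of the firewalling (the language set and learner assignments are hypothetical, chosen only for illustration): each learner trains only on pairs within its assigned language subset, and the withheld pairs are exactly what it could reach only through its internal meta-language or through cross-platform signaling.

```python
from itertools import combinations

# Hypothetical setup: six languages, fifteen possible pairs.
LANGUAGES = ["en", "fr", "de", "es", "ja", "tr"]
ALL_PAIRS = set(combinations(sorted(LANGUAGES), 2))

# Each learner is firewalled to a language subset (an invented split).
SUBSETS = {
    "learner_1": ["en", "fr", "de", "es"],
    "learner_2": ["en", "es", "ja", "tr"],
    "learner_3": ["fr", "de", "ja", "tr"],
}

def visible_pairs(subset):
    """Pairs this learner trains on directly."""
    return set(combinations(sorted(subset), 2))

def withheld_pairs(subset):
    """Pairs behind the firewall: reachable only via the learner's
    meta-language or via signaling from the other platforms."""
    return ALL_PAIRS - visible_pairs(subset)

for name, subset in SUBSETS.items():
    print(name, len(visible_pairs(subset)), "seen,",
          len(withheld_pairs(subset)), "withheld")
```

With this particular split, every pair is seen by some learner, but no learner sees them all, so each platform has both something to learn from and something to offer in the signaling traffic.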

Has a lot of this already been done by somebody?  It seems like the first thing 
that somebody who knows nothing would propose to do, so I assume it has already 
been done to the point where people lost interest and went on to something else.

--
Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙

.-- .- -. - / .- -.-. - .. --- -. ..--.. / -.-. --- -. .--- ..- --. .- - .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn UTC-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:
5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
