On Thu, Jun 30, 2022 at 2:18 PM Boris Kazachenko <[email protected]> wrote:

> On Thursday, June 30, 2022, at 6:10 AM, Rob Freeman wrote:
>
> what method do you use to do the "connectivity clustering" over it?
>
>
> I design from the scratch, that's the only way to conceptual integrity in
> the algorithm: http://www.cognitivealgorithm.info.
>

I see. So "shared connectivity" not so much in the sense of being connected
together. But in the sense of having the same internal connectivity within
two groups which are not directly connected together.

OK, good. It's useful to be clear that you are thinking of "connectivity" in
another sense.

That's a valid sense. I'm focused on the shared observed prediction sense,
so your sense didn't occur to me. But that could be a sense of shared
connectivity too.

So, how about this: could we use both? If two separate clusters share a
prediction, I don't see why you could not connect them through the shared
predictions which actually occur in a data set, without doing a direct
comparison of their respective internal connectivity.

You might think of it as somewhat the reverse of what you are doing. I
understand you to be comparing connectivity and predicting on that basis.
I'm suggesting that if two clusters tend to share predictions, that might be
revealed more directly by the predictions already observed.
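
To make that concrete, here is a toy sketch of what I mean. Everything in it
(the data, the names, even the idea that a "prediction" is just an observed
co-occurring item) is invented purely for illustration; it is not your
algorithm, and not a finished one of mine either:

from collections import defaultdict
from itertools import combinations

# Observed (item, prediction) pairs, e.g. a word and something it has
# actually been seen to predict in a corpus (data invented for illustration).
observations = [
    ("cat", "purrs"), ("cat", "sleeps"),
    ("dog", "sleeps"), ("dog", "barks"),
    ("car", "accelerates"),
]

# Group items by the predictions they have been observed to make.
items_by_prediction = defaultdict(set)
for item, prediction in observations:
    items_by_prediction[prediction].add(item)

# Connect any two items that share an observed prediction, with no
# comparison of their internal connectivity at all.
links = set()
for items in items_by_prediction.values():
    for a, b in combinations(sorted(items), 2):
        links.add((a, b))

print(links)  # {('cat', 'dog')} -- linked through the shared prediction "sleeps"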

Now, where my idea might come unstuck is where there are two clusters which
could be used to predict the same things on the basis of their internal
similarity, as you suggest, but which have never actually been observed to
share any predictions. In that case, yes, you would need to check the
similarity of their clustering directly, and yours would be the only
mechanism.

In language, that might equate to two words which "mean" the same thing, but
which have never been observed used in the same sequence.

Actually, dredging back... I think we have some evidence for what happens
in that case.

Quoting from what I wrote somewhere else:
<<<
I think the evidence from language learning is somewhat the opposite. It
starts with the particular, and only generalizes later. I always found the
examples in this study by Peter Howarth, from some years ago, a striking
demonstration of this (all versions appear to be paywalled these days,
unfortunately):

Phraseology and Second Language Proficiency. Howarth, Peter. Applied
Linguistics, v19 n1, pp. 24-44, March 1998.
...
What interested me was his analysis of two types of collocational
disfluencies he characterized as "blends" and "overlaps".

By "overlaps" he meant an awkward construction which was nevertheless
directly motivated by the existence of an overlapping collocation:

e.g.

"Those learners usually _pay_ more _efforts_ in adopting a new language..."

*pay effort
PAY attention/a call
MAKE a call/an effort

So "*pay efforts" might be motivated by analogy with "pay attention" and
"make an effort" (because of the overlapping collocations "pay a call" and
"make a call".)

In Howarth's words (at least in my pre-print):

"Blends, on the other hand, seem to occur among more restricted
collocations, where the verbs involved are more obviously figurative or
delexical in meaning and the nouns are semantically related, though there
are no existing overlapping collocations.

'*appropriate _policy_ to be _taken_ with regard to inspections'

TAKE steps
ADOPT a policy
...

It is remarkable, firstly, that NNS writers produce many fewer blends than
overlaps and, secondly, that it is the more proficient (by informal
assessment) who produce them."

What I understand Howarth to be saying is that "overlaps" tend to be
produced first, and conceptual "blends" only later. It is the opposite of
what we would expect if language learning started by combining words
according to general concepts.
<<<

So on that evidence, yes, the internal similarity mechanism you suggest
might be a valid one, and it might be used. But the mechanism I'm
suggesting, similarity based on observed shared context/prediction, at
least appears to exist, judging by Howarth's "overlaps", and might be the
stronger mechanism. For natural language, anyway.
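
If it helps to see the overlap mechanism spelled out, here is a toy sketch
of how "*pay effort" could fall out of nothing more than observed shared
collocations. The little sets below are mine, chosen only to mirror
Howarth's example; they are not his analysis, and certainly not a real
learner model:

from collections import defaultdict

# Collocations assumed to have been observed (mirroring Howarth's example).
observed_collocations = {
    ("pay", "attention"), ("pay", "a call"),
    ("make", "a call"), ("make", "an effort"),
}

objects_by_verb = defaultdict(set)
for verb, obj in observed_collocations:
    objects_by_verb[verb].add(obj)

# "pay" and "make" share the observed object "a call", so they get linked...
shared = objects_by_verb["pay"] & objects_by_verb["make"]

# ...and that link is what licenses the learner's "*pay (an) effort" by analogy.
if shared:
    extensions = objects_by_verb["make"] - objects_by_verb["pay"]
    print(f"linked via {shared}; analogical extensions of 'pay': {extensions}")
    # -> linked via {'a call'}; analogical extensions of 'pay': {'an effort'}

The point is just that the link between "pay" and "make" comes from a shared
observation, not from comparing their internal structure.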
