On Fri, Feb 22, 2019 at 5:30 PM Linas Vepstas <[email protected]>
wrote:

> ...
> >  The problem was that Chomsky shot it down because it resulted in
> "inconsistent or incoherent ... analyses". This was big. It cracked
> linguistics apart. Linguistics is divided by it to this day:
> >
> > Frederick J. Newmeyer, Generative Linguistics a historical perspective,
> Routledge 1996:
> >
> > "Part of the discussion of phonology in ’LBLT’ is directed towards
> showing that the conditions that were supposed to define a phonemic
> representation (including complementary distribution, locally determined
> biuniqueness, linearity, etc.) were inconsistent or incoherent in some
> cases and led to (or at least allowed) absurd analyses in others."
> >
> > Sydney Lamb:
> >
> > 'For example, perhaps his most celebrated argument concerns the Russian
> obstruents. He correctly pointed out that the usual solution incorporates a
> loss of generality, but he misdiagnosed the problem. The problem was the
> criterion of linearity. He stubbornly holds on to this criterion, although
> it really is faulty, and comes up with a solution for the Russian
> obstruents that obscures the phonological structure. I showed (in accounts
> cited below) that by relaxing the linearity requirement we get an elegant
> solution while preserving "centrality of contrastive function of linguistic
> elements".'
>
> Wow. I did not know that. Interesting, I suppose. It's, well, it beats me.
> Science is littered with misunderstandings by brilliant people. Time
> passes. Debates are forgotten. I don't know what to do with this.
>

Well, going back to Goedel's proof, the mathematical incompleteness one: it
is a property of sets. You can make a set say two things, and you can make
those two things contradict. The way he did it is pretty tortuous. It seems
arcane and irrelevant, but that's because he needed to nail it down. To
nail it down he needed to twist two meanings in the sets back on
themselves, to make the contradiction absolutely clear. But having nailed
it down, the idea of sets saying more than one thing has broad
applicability. The origin is in Russell's observation, and that is much
more intuitive.

So this is just a property of sets: that they are able to say more than one
thing. The distributional analysis of language structure also depends on
the properties of sets.

I think these proven inconsistencies in properties derived from sets in
maths are the same inconsistencies Chomsky was drawing attention to,
practically, in properties learned from sets in linguistics. There's no
reason why something which applies to sets in theory, in mathematics,
should not apply to sets in practice, in linguistics.

As to the significance: well, the significance of a given distribution (a
set) being able to say more than one thing is for you to judge. It seems
fairly major to me. Certainly if you're trying to do machine learning it
seems major. It means that if you have a learning procedure to discover
what a set is "saying", you may get more than one answer. The answers can
even contradict each other. If that's true, it doesn't make sense to try
to learn one answer. It makes much more sense to have a procedure to
extract relevant answers as necessary.
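To put that in the concrete, here is a toy illustration (my own
construction, with made-up words and contexts, not anyone's actual data) of
a single distributional set "saying" two things that contradict:

```python
# One observed set of contexts per word. The data are invented purely
# to illustrate the point about multiple interpretations.
contexts = {
    "bank":  {"river", "money"},
    "shore": {"river"},
    "loan":  {"money"},
}

def shares(a, b):
    """True if two words were observed in at least one common context."""
    return bool(contexts[a] & contexts[b])

# The same set says "bank" groups with "shore", AND that it groups with
# "loan" -- but "shore" and "loan" share nothing, so no single grouping
# can honour both answers at once.
print(shares("bank", "shore"), shares("bank", "loan"),
      shares("shore", "loan"))   # prints: True True False
```

Any procedure that insists on learning one fixed grouping for "bank" must
throw one of those two answers away.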

Perhaps I was jumping the gun with your reference to category theory.
Perhaps the relevance to the question of multiple interpretations of sets
is not immediately obvious.

In the concrete. I'm still not clear what your jigsaw pieces will look like.

Ben agreed with my "network of observed language sequences", with context
links which "form little diamonds in the network". But what I'm reading
sounds like you are enhancing the nodes and the links of that network far
beyond observed words.

For instance you say, "If you don't know what the pieces are, but are
setting out to discover them by looking for statistical regularities in
language".

Why would you need to do that? Why would you want to do that?! Given what
I've been saying above about multiple interpretations of sets, why would
you think that is possible? Especially if the "statistical regularities"
might resolve in ways which contradict each other, as above? Why not just
use observed words, linked in the sequences in which they are observed to
occur, and find the regularities you need on demand?
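A minimal sketch of what I mean (all names and data here are my own,
hypothetical, not from anyone's code): store nothing but the observed
words linked in their observed sequences, and extract a regularity such as
substitutability on demand for a given word, rather than learning a fixed
answer up front.

```python
from collections import defaultdict

def build_network(sentences):
    """A network of observed words: raw links in observed sequence order."""
    follows = defaultdict(set)   # word -> words observed right after it
    precedes = defaultdict(set)  # word -> words observed right before it
    for sentence in sentences:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a].add(b)
            precedes[b].add(a)
    return follows, precedes

def substitutable(word, follows, precedes):
    """On demand: words sharing at least one observed context with `word`."""
    vocab = set(follows) | set(precedes)
    return {w for w in vocab
            if w != word and (follows[w] & follows[word]
                              or precedes[w] & precedes[word])}

follows, precedes = build_network(["the cat sat", "the dog sat", "a cat ran"])
print(substitutable("cat", follows, precedes))   # prints: {'dog'}
```

Nothing beyond the observed words and links is stored; a different query
against the same network can extract a different, even conflicting,
regularity, which is the whole point.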

-Rob

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T581199cf280badd7-Mbdec8acc29a4c3c0d93b1356
Delivery options: https://agi.topicbox.com/groups/agi/subscription