[snip]...
Richard> This is precisely where I think the false assumption is
Richard> buried.  When I say that grammar learning can be dependent on
Richard> real world knowledge, I mean specifically that there are
Richard> certain conceptual primitives involved in the basic design of
Richard> a concept-learning system.  We all share these primitives,
Richard> and [my claim is that] our language learning mechanisms start
Richard> from those things.  Because both I and a native Swahili
Richard> speaker use languages whose grammars are founded on common
Richard> conceptual primitives, our grammars are more alike than we
Richard> imagine.

Richard> Not only that, but if I and the Swahili speaker suddenly
Richard> met and tried to discover each other's languages, we would be
Richard> able to do so, eventually, because our conceptual primitives
Richard> are the same and our learning mechanisms are so similar.

Richard> Finally, I would argue that most cognitive systems, if they
Richard> are to be successful in negotiating this same 3-D universe,
Richard> would do best to have much the same conceptual primitives
Richard> that we do.  This is much harder to argue, but it could be
Richard> done.

Richard> As a result of this, evolution would not by any means have
Richard> been making random choices of languages to implement.  It
Richard> remains to be seen just how constrained the choices are, but
Richard> there is at least a prima facie case to be made (the one I
Richard> just sketched) that evolution was extremely constrained in
Richard> her choices.

Richard> In the face of these ideas, your argument that evolution
Richard> essentially made a random choice from a quasi-infinite space
Richard> of possibilities needs a great deal more to back it up.  The
Richard> grammar-from-conceptual-primitives idea is so plausible that
Richard> the burden is on you to give a powerful reason for rejecting
Richard> it.

Richard> Correct me if I am wrong, but I see no argument from you on
Richard> this specific point (maybe there is one in your book .... but
Richard> in that case, why say without qualification, as if it was
Richard> obvious, that evolution made a random selection?).

Richard> Unless you can destroy the grammar-from-conceptual-primitives
Richard> idea, surely these arguments about hardness of learning have
Richard> to be rejected?


The argument, in very brief, is the following. Evolution found a
very compact program that does the right thing. (This is my
hypothesis, not claimed as proved, but lots of reasons to believe
it are given in WIT?.) Finding such programs is NP-hard.

Hold it right there. As far as I can see, you have just asserted the very result that is under dispute, right at the beginning of your argument!

Finding a language-understanding mechanism is NP-hard?

That prompts two questions:

1) Making statements about NP-hardness requires the problem to be formalized in such a way that you can do the math. But to do that formalization you have to make assumptions, and the only assumptions I have ever seen reported in this context are close relatives of the ones under dispute (essentially, that grammar induction is context-free). If you have made those assumptions, you have assumed the very thing you were trying to demonstrate!

In other words, if the only way we can get a handle on the way a grammar induction mechanism works is to make (outrageously implausible) assumptions about the context-free nature of that mechanism [see my previous comments quoted above], how can anyone get a handle on the even more complex process of designing a grammar induction mechanism (the design process that evolution went through)?

I'll be blunt: I simply do not believe that you have formalized the grammar-mechanism *design* process in such a way as to make a precise statement about its NP-hardness. I think you just asserted that it is NP-hard.

2) My second question is: what would it matter anyway, even if the design process were NP-hard, unless you specify the exact sense in which it is NP-hard?

The reason I say that is that NP-hardness by itself tells us absolutely nothing. NP-hardness tells us how algorithms scale with changes of input size .... so if you gave me a succession of "different-sized" language understanding mechanisms, and if I knew that building these LUMs was NP-hard, I would know something about how the building process would *scale* as the size of the LUM increased. It would tell me nothing about the hardness of any given problem unless you specified the exact formula and the scaling variables involved.

I am sure you know what this is about, but just in case, I will illustrate the point.

Suppose that the computational effort that evolution needs to build "different sized" language understanding mechanisms scales as:

(N/7 + 1)^6 planet-years

... where "different sized" is captured by the value N, which is the number of conceptual primitives used in the language understanding mechanism, and a "planet-year" is one planet worth of human DNA randomly working on the problem for one year. (I am plucking this out of the air, of course, but that doesn't matter.)

Here are the resource requirements for this polynomial resource function:

        N       R

        1       2.23E+000
        7       6.40E+001
        10      2.05E+002
        50      2.92E+005
        100     1.28E+007
        300     7.12E+009

(N = Number of conceptual primitives)
(R = resource requirement in planet-years)
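
In case anyone wants to check the arithmetic, here is a minimal Python sketch that reproduces the table (the formula is the one I just plucked out of the air, so the code proves nothing about the real problem):

    # Tabulate the made-up resource function
    # R(N) = (N/7 + 1)^6, measured in planet-years.

    def resource(n: int) -> float:
        """Planet-years to build a LUM using n conceptual primitives."""
        return (n / 7 + 1) ** 6

    for n in (1, 7, 10, 50, 100, 300):
        print(f"{n:>8}{resource(n):>16.2E}")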

I am assuming that the appropriate measure of problem size is the number of conceptual primitives involved in the language understanding mechanism (a measure picked at random, and as far as I can see as likely a measure as any, but if you think something else should be the N, be my guest).

If there were 300 conceptual primitives in the human LUM, the resource requirement would be about 7 billion planet-years. That would be bad.

But if there are only 7 conceptual primitives, it would take 64 planet-years. Pathetically small, and of no consequence.

Now, strictly speaking, a polynomial resource function like this one is what you would expect from a problem that is *not* NP-hard; but substitute an exponential function with similar values at small N and nothing about my point changes.

The point is just this: if the actual human mechanism needs only seven primitives, and if this should happen to be the correct formula for the resource dependency, who cares what hardness label the problem carries?
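
To make that concrete, here is a toy comparison (again my own illustration: expo() is a hypothetical exponential variant, not anything you have claimed):

    # Asymptotic class versus actual cost at a fixed, small N.
    # poly() is the made-up function from the table above; expo() is a
    # hypothetical exponential variant of the kind "NP-hard" suggests.

    def poly(n: int) -> float:
        return (n / 7 + 1) ** 6

    def expo(n: int) -> float:
        return 2.0 ** n

    for n in (7, 300):
        print(f"N={n:>3}   poly: {poly(n):.2E}   expo: {expo(n):.2E}")

    # N=  7   poly: 6.40E+01   expo: 1.28E+02  -- both negligible
    # N=300   poly: 7.12E+09   expo: 2.04E+90  -- both hopeless

At N = 7 even the asymptotically nastier function is negligible; the complexity class only starts to bite once N is large, and that is exactly the information a bare "NP-hard" label does not supply.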

The statement that the design process that evolution undertakes is "NP-hard" is meaningless without a precise specification of the terms involved, and you have given nothing like a precise specification. I don't believe ANYONE could give that specification without making assumptions about the language learning mechanism that are precisely the ones that I said, in my previous post, were extremely contentious.

By making your statement about NP-hardness you are, in effect, assuming the thing that your argument is supposed to be demonstrating, no?


I have seen this kind of computational complexity talk so often, and (if you'll forgive an expression of frustration here) it is driving me nuts. It is ludicrous: these concepts are being bandied about as if they make the argument wonderfully rigorous and high-quality .... but they mean nothing without some explicit specification of the assumptions behind them.


I have many other issues with what you say below, but I can only deal with one thing at a time.



Richard Loosemore.




The same arguments indicate that you don't need to find the
global optimum, the shortest best program, for it to work, and
there's no reason to believe evolution did. You just need to
find a sufficiently good one (which is still typically NP-hard).

Lots of experience with analogous problems (and various
theoretical arguments) shows that there are usually lots (in
fact, exponentially many) of locally optimal solutions that
don't look like each other in detail.
For example, consider domain structure in crystals. That's a
case where there is a single global optimum, but you don't
actually find it. If you do it twice, you will find different
domain structures. Cases such as spin glasses are likely to be
even worse.
Evolution picked one conceptual structure, but there are likely
to be many that are just as good. Communication, however, may
well depend on having a very similar conceptual structure.
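
A toy illustration of the many-local-optima point (the
landscape and the hill climber here are invented purely for
illustration, nothing more):

    # Random-restart hill climbing on a rugged landscape.
    # fitness() gives each 16-bit string a deterministic
    # pseudo-random score, so the landscape is riddled with
    # local optima; different random starts settle into
    # different, roughly equally good, optima.
    import hashlib
    import random

    def fitness(bits):
        return int.from_bytes(
            hashlib.md5(bytes(bits)).digest()[:4], "big")

    def hill_climb(bits):
        while True:
            neighbours = [bits[:i] + [1 - bits[i]] + bits[i + 1:]
                          for i in range(len(bits))]
            best = max(neighbours, key=fitness)
            if fitness(best) <= fitness(bits):
                return tuple(bits)
            bits = best

    random.seed(0)
    optima = {hill_climb([random.randint(0, 1) for _ in range(16)])
              for _ in range(20)}
    print(len(optima), "distinct local optima from 20 random starts")

Like the crystal domains: run it again with a different seed
and you land on a different set of optima.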

Also, in addition to getting the conceptual structure right,
I expect that grammar involves lots of other choices that are
essentially just notational: purely arbitrary choices, sitting
on top of the actual computational modules and concerned only
with parsing communication streams between different
individuals. Yes, English speakers and Swahili speakers have
all these other choices in common, because the choices are
essentially evolved into the genome. But that does not mean
that these choices are in any way determined, even assuming
you get the conceptual structure the same. This stuff could be
purely notational.
It's this stuff that the hardness-of-grammar-learning results
pertain to most. This is what Chomsky/Pinker mean when they
talk about an inborn language instinct. All this literature
does ignore semantics, but that's because (at least in large
part) this literature believes there's a notational ambiguity.
Since clearly there could be such a notational ambiguity, to
believe there isn't one you have to posit a reason why it
wouldn't arise. Evolution just short-circuits this by choosing
a notation, but figuring out which notation to use can be a
hard problem, since determining a grammar from examples is
hard.


Richard> Richard Loosemore


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



