>> 
>> 
>> The argument, in brief, is the following. Evolution found a
>> very compact program that does the right thing. (This is my
>> hypothesis; not claimed as proved, but many reasons to believe
>> it are given in WIT?.) Finding such programs is NP-hard.

Richard> Hold it right there.  As far as I can see, you just asserted
Richard> the result that is under dispute, right there at the
Richard> beginning of your argument!

First, above I was discussing finding an understanding system,
not necessarily a language-understanding system -- say a monkey,
not a person. Then I went on to talk about the additional problems
that come in when you want language.

Richard> Finding a language-understanding mechanism is NP-hard?

Richard> That prompts two questions:

Richard> 1) Making statements about NP-hardness requires a problem to
Richard> be formalized in such a way as to do the math.  But in order
Richard> to do that formalization you have to make assumptions, and
Richard> the only assumptions I have ever seen reported in this
Richard> context are close relatives of the ones that are under
Richard> dispute (that grammar induction is context free,
Richard> essentially), and if you have made those assumptions, you
Richard> have assumed what you were trying to demonstrate!

At this point, I suggest again that you read What is Thought?
In emails I am cutting a lot of corners and omitting caveats
and whatnot, in order to give three-paragraph summaries.

But if you don't want to take the time, I'll cut to the chase.
I am not claiming to have proved that building a human is NP-hard.

I am suggesting a theory of thought, for which I think there is
a lot of evidence and basis. Computational learning theory has more
or less established that generalization (never mind thought) follows
from finding a constrained-enough hypothesis. And in every case that
has been studied of sufficient richness to be really interesting, it
turns out that the problems you have to solve are NP-hard. So
naturally, in my extrapolation to thought, I expect that the problems
you will have to solve here are NP-hard as well. This isn't exactly
rigorous, but it's the way to bet. Your protests are mostly wishful
thinking.
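To make the "generalization from a constrained hypothesis class" point concrete, here is a toy sketch (all names and data invented for illustration): the hypothesis class is conjunctions of input bits, which has only 2^n members versus 2^(2^n) arbitrary boolean functions, so a consistent member is informative. This tiny class happens to be easy to search; for richer classes (for example, 3-term DNF) finding a consistent hypothesis is NP-hard, which is the kind of result the paragraph above refers to.

```python
from itertools import product

def learn_conjunction(examples, n):
    """Return the first bit mask whose conjunction fits all examples.

    Brute-force search over the constrained class of conjunctions of
    input bits: 2**n hypotheses, vs. 2**(2**n) arbitrary functions.
    """
    for mask in product([0, 1], repeat=n):
        h = lambda x, m=mask: all(x[i] for i in range(n) if m[i])
        if all(h(x) == y for x, y in examples):
            return mask
    return None

# Labeled examples of a hidden toy concept, "bit0 AND bit2":
data = [((1, 0, 1), True), ((1, 1, 0), False),
        ((0, 1, 1), False), ((1, 1, 1), True)]
print(learn_conjunction(data, 3))  # -> (1, 0, 1): selects bits 0 and 2
```

The search succeeds here only because the class is so small; scaling up the expressiveness of the class is exactly where the NP-hardness results bite.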

It's also true that proving something is NP-hard doesn't prove it's
insoluble, or even hard in the average case, or hard in any particular
case, or any of that. Hell, it might even be true that P=NP.
But there are a lot of strong reasons to think that all of this is
also wishful thinking, and that NP-hard problems are really hard.
I'll say it again: if you don't believe that, you shouldn't be using
cryptography, because cryptography as practiced relies not only on
P!=NP but on much stronger assumptions, such as that it is very hard
to factor *random* products of primes -- and factoring isn't even
known to be NP-hard.
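The asymmetry being leaned on here can be sketched in a few lines (toy 5-digit primes, invented for illustration; real systems use primes hundreds of digits long, and attackers have better algorithms than trial division, but the easy/hard asymmetry is believed to persist):

```python
def trial_factor(n):
    """Return a nontrivial factorization (p, q) of n, or (n, 1) if prime."""
    d = 2
    while d * d <= n:          # cost grows with the smaller factor:
        if n % d == 0:         # exponential in its bit length
            return d, n // d
        d += 1
    return n, 1

p, q = 10007, 10009            # small primes, for illustration only
n = p * q                      # the easy direction: one multiplication
print(trial_factor(n))         # the hard direction, done the slow way
```

Multiplying is one machine operation; undoing it by search is what the hardness assumption says stays expensive as the primes grow.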

It's clear from what you are writing here that you are not familiar
with computational learning theory or computational complexity
theory. I strongly suggest you read WIT?. I think you will learn
a lot. Chapter 4 is a review of the results in computational learning
theory that is meant to be accessible. Chapter 11 is a review of
complexity theory, also meant to be accessible. The book is, I think,
written from a point of view that you will find congenial -- I am very
much focused on semantics, and I see the world a lot like you do,
except that I am bringing the conclusions of computational learning
theory, complexity theory, and linguistics to bear. I am not simply
asserting these conclusions -- I question each one and discuss data
pro and con. Where they seem very well founded and relevant, I
discuss their implications. I am also not arguing rigorously. I am
extracting from these disciplines the intuition behind rigorous
results, and extending it to give what I argue is a compelling view
of what cognition is.



Richard> In other words, if the only way we can get a handle on the
Richard> way a grammar induction mechanism works is to make
Richard> (outrageously implausible) assumptions about the context-free
Richard> nature of that mechanism [see my previous comments quoted
Richard> above], how can anyone get a handle on the even more complex
Richard> process of designing a grammar induction mechanism (the
Richard> design process that evolution went through)?

Richard> I'll be blunt: I simply do not believe that you have
Richard> formalized the grammar-mechanism *design* process in such a
Richard> way as to make a precise statement about its NP-hardness, I
Richard> think you just asserted that it is NP-hard.

Richard> 2) My second question is: what would it matter anyway, even
Richard> if the design process were NP-hard, unless you specify the
Richard> exact sense in which it is NP-hard?

Richard> The reason I say that, is that NP-hardness by itself tells us
Richard> absolutely nothing.  NP-hardness tells us about how
Richard> algorithms scale with changes of input size .... so if you
Richard> give me a succession of "different-sized" language
Richard> understanding mechanisms, and if I were to know that building
Richard> these LUMs was NP-hard, I would know something about how the
Richard> building process would *scale* as the size of the LUM
Richard> increased.  It would say nothing about the hardness of any
Richard> given problem unless you specified the exact formula and the
Richard> scaling variables involved.

Richard> I am sure you know what this is about, but just in case, I
Richard> will illustrate the point.

Richard> Suppose that the computational effort that evolution needs to
Richard> build "different sized" language understanding mechanisms
Richard> scales as:

Richard> (N/7 + 1)^6 planet-years

Richard> ... where "different sized" is captured by the value N, which
Richard> is the number of conceptual primitives used in the language
Richard> understanding mechanism, and a "planet-year" is one planet
Richard> worth of human DNA randomly working on the problem for one
Richard> year.  (I am plucking this out of the air, of course, but
Richard> that doesn't matter.)

Richard> Here are the resource requirements for this polynomial
Richard> resource function:

Richard>        N      R
Richard>        1      2.23E+000
Richard>        7      6.40E+001
Richard>        10     2.05E+002
Richard>        50     2.92E+005
Richard>        100    1.28E+007
Richard>        300    7.12E+009

Richard> (N = number of conceptual primitives)
Richard> (R = resource requirement in planet-years)
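As a sanity check, the tabulated R values follow R = (N/7 + 1)^6 (with no leading constant factor), which a few lines reproduce:

```python
def resource(n):
    # Richard's illustrative resource formula: R = (n/7 + 1)**6
    return (n / 7 + 1) ** 6

for n in [1, 7, 10, 50, 100, 300]:
    print(f"{n:>4}  {resource(n):.2E}")   # e.g. 7 -> 6.40E+01
```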

Richard> I am assuming that the appropriate measure of size of problem
Richard> is number of conceptual primitives that are involved in the
Richard> language understanding mechanism (a measure picked at random,
Richard> and as far as I can see, as likely a measure as any, but if
Richard> you think something else should be the N, be my guest).

Richard> If there were 300 conceptual primitives in the human LUM,
Richard> resource requirement would be 7 billion planet-years.  That
Richard> would be bad.

Richard> But if there are only 7 conceptual primitives, it would take
Richard> 64 years. Pathetically small and of no consequence.

Richard> The function here is polynomial -- which, strictly speaking,
Richard> means the problem is not NP-hard at all -- but the same point
Richard> applies to any resource formula you might attach to an
Richard> NP-hardness claim.

Richard> It's just that if the actual human mechanism needs only seven
Richard> primitives, and if this should happen to be the correct
Richard> formula for the resource dependency, who cares if the thing
Richard> is NP-hard?

Richard> The statement that the design process that evolution
Richard> undertakes is "NP-hard" is meaningless without precise
Richard> specification of the terms involved, and you have given
Richard> nothing like a precise specification.  I don't believe ANYONE
Richard> could give that specification, without making assumptions
Richard> about the language learning mechanism that are precisely the
Richard> ones that I said, in my previous post, were extremely
Richard> contentious.

Richard> By making your statement about NP-hardness you are, in
Richard> effect, assuming the thing that your argument is supposed to
Richard> be demonstrating, no?


Richard> I have seen this kind of computational complexity talk so
Richard> often, and it is (if you'll forgive an expression of
Richard> frustration here) just driving me nuts.  It is ludicrous:
Richard> these concepts are being bandied about as if they make the
Richard> argument wonderfully rigorous and high-quality .... but they
Richard> mean nothing without some explicit specification of
Richard> assumptions.


Richard> I have many other issues with what you say below, but I can
Richard> only deal with one thing at a time.



Richard> Richard Loosemore.




>> The same arguments indicate that you don't need to find the global
>> optimum -- the shortest, best program -- for it to work, and
>> there's no reason to believe evolution did. You just need to find
>> a sufficiently good one (which is still typically NP-hard).
>> 
>> Lots of experience with analogous problems (and various
>> theoretical arguments) shows that there usually are lots (in fact,
>> exponentially many) of locally optimal solutions that don't look
>> like each other in detail.  For example, consider domain structure
>> in crystals. That's a case where there is a single global optimum
>> -- but you don't actually find it. If you do it twice, you will
>> find different domain structures. Cases such as spin glasses are
>> likely to be even worse.
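The "many local optima, different each run" phenomenon can be sketched with a toy rugged landscape, a loose analogy to the spin-glass case above (every name and number here is an illustrative assumption, not from the text): each bit string gets a fixed random score, and greedy hill-climbing from different random starts settles into different local optima, none of which need be the global one.

```python
import random

random.seed(0)
N = 12                 # bits per configuration
_scores = {}

def score(s):
    # Lazily assign each configuration a fixed random score,
    # giving a rugged, spin-glass-like landscape.
    if s not in _scores:
        _scores[s] = random.random()
    return _scores[s]

def hill_climb(s):
    # Repeatedly flip the single bit that most improves the score.
    while True:
        best = max((s ^ (1 << i) for i in range(N)), key=score)
        if score(best) <= score(s):
            return s            # no neighbor improves: local optimum
        s = best

starts = [random.randrange(2 ** N) for _ in range(5)]
optima = {hill_climb(s) for s in starts}
print(len(optima), "distinct local optima from 5 random starts")
```

Each run of this kind of search is guaranteed only a local optimum, which is the sense in which two evolutions (or two crystallizations) need not agree in detail.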
>> 
>> Evolution picked one conceptual structure, but there are likely to
>> be many that are just as good. Communication, however, may well
>> depend on having a very similar conceptual structure.
>> 
>> Also, in addition to getting the conceptual structure right, I
>> expect that grammar involves lots of other choices that are
>> essentially just notational -- purely arbitrary choices, on top of
>> the actual computational modules, concerned only with parsing
>> communication streams between different individuals. Yes, English
>> speakers and Swahili speakers have all these other choices in
>> common, because the choices are essentially evolved into the
>> genome.  But that does not mean that these choices are in any way
>> determined, even assuming you get the conceptual structure the
>> same.  This stuff could be purely notational.  It is this stuff
>> that the hardness-of-grammar-learning results pertain most to, and
>> this is what Chomsky/Pinker mean when they talk about an inborn
>> language instinct. All of this literature does ignore semantics,
>> but that's because (at least in large part) this literature
>> believes there's a notational ambiguity. Since clearly there could
>> be such a notational ambiguity, to believe there isn't one, you
>> have to posit a reason why it wouldn't arise. Evolution simply
>> short-circuits this by choosing a notation -- but figuring out
>> which notation can be a hard problem, since determining a grammar
>> from examples is hard.
>> 
>> 
Richard> Richard Loosemore
>>
Richard> ----- This list is sponsored by AGIRI:
Richard> http://www.agiri.org/email To unsubscribe or change your
Richard> options, please go to:
Richard> http://v2.listbox.com/member/?list_id=303

