I'm still not quite sure if what I said came across clearly, because some of what you just said is so far away from what I intended that I have to make some kind of response.

For example, it looks like I've got to add "Seed AI" to the list of dumb approaches that I do NOT want to be identified with! At least if you define Seed AI the way you do: trying to bootstrap the whole AI from a small core, without any big effort to encode some structure.

I thought I did deny that approach already: I explained that I was doing a huge reengineering of the existing body of knowledge in cognitive science. Can you imagine how much structure there is in such a thing? There are roughly 1,000 human experiments or AI simulations accounted for in that structure, all integrated in such a way that it implies one overall system framework (at least, that is the goal of the project). That doesn't sound like Seed AI to me: it has both structure in its architecture, and it also allows for some priming of its knowledge base with 'hand-built' knowledge.


As for your comment about "would Google ever have worked if they used things like foo_A1 and foo_A27 for all their variable names?"

Huh?

That sounds like, after all, I communicated nothing whatsoever. I don't know if that is supposed to be a serious point or not. I will assume not.


Richard Loosemore.


Russell Wallace wrote:
Ah! That makes your position much clearer, thanks. To paraphrase to make sure I understand you, the reason you don't regard human readability as a critical feature is that you're of the "seed AI" school of thought that says we don't need to do large-scale engineering, we just need to solve the scientific problem of how to create a small core that can then auto-learn the large body of required knowledge.

I spent a lot of time on every known variant of that idea, and some AFAIK hitherto unknown ones, before coming to the conclusion that I had simply been fooling myself with wishful thinking; it's the perpetual motion machine of our field. Admittedly biology did it, but even with a whole planet for workspace it took four billion years, and "I don't know about you gentlemen, but that's more time than I'm prepared to devote to this enterprise". When we try to program that way, we find there's an awful lot of prep work to generate a very small special-purpose program A to do one task. Then generating small program B for another task is a whole new project in its own right, and A and B can never subsequently be integrated or even substantially upgraded. So there's a hard threshold on the amount of complexity that can be produced this way, and that threshold is tiny compared to the complexity of Word or Firefox, let alone Google, let alone anything with even a glimmer of general intelligence.

    One of the arguments against this position, of course, is that We Don't
    Care, because if we went to enough trouble we could 'hand-build' a
    complete system, or get it up above some threshold of completeness
    beyond which it would have enough intelligence to be able to pick up the
    learning ball and go on to build new knowledge in a viable way (Doug
    Lenat said this explicitly in his Google lecture, IIRC).


Oh no, I don't believe that. I don't believe a complete system can be hand-built; Google wasn't, after all: most of what it knows was auto-learned (admittedly from other human-generated material, but not as part of the same project or organization). Conversely (depending on how you look at it), either there is no completeness threshold, or it's so far beyond anything we can coherently imagine today that there might as well not be one, so the seed AI approach can't work either.

In reality, both software engineering and (above a minimum adequacy threshold) auto-learning are going to stay important all the way up, so we have to cater for both. And from the software engineering viewpoint (which is what I'm talking about here)... well, would Google ever have worked if they used things like foo_A1 and foo_A27 for all their variable names? No. QED :)
------------------------------------------------------------------------
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
