Mike Dougherty wrote:
On Dec 28, 2007 8:28 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Actually, that would be a serious misunderstanding of the framework and
development environment that I am building.  Your system would be just
as easy to build as any other.

... considering the proliferation of AGI frameworks, it would appear
that "any other" framework is pretty easy to build, no?  ok, I'm being
deliberately snarky - but if someone wrote about your own work the way
you write about others, I imagine you would become increasingly
defensive.

You'll have to explain, because I am honestly puzzled as to what you mean here.

I mean "framework" in a very particular sense: something that is a "theory generator" but not by itself a theory, and which is a complete account of the domain of interest. As such, there are few if any explicit frameworks in AI. Implicit ones, yes, but not explicit. I do not mean "framework" in the very loose sense of "bunch of tools" or "bunch of mechanisms".

And in my comment to Ben, I said "any other" in reference to a particular AI system, not referring to frameworks at all.

As for "the way I write about others' work": I don't understand. I have done a particular body of research in AI/cognitive science, and as a result I have published a paper in which I explained that there is a very serious problem with the methodological foundations of all current approaches to AI. As a consequence, I am obliged to point out that many things said about AI fall within the scope of that problem. This is not personal nastiness on my part, just a consequence of the research I have done. Should anyone become defensive or offended by that? Not at all. So I am confused.

As for the comment above: because of that problem I mentioned, I have evolved a way to address it, and this approach means that I have to devise a framework that allows an extremely wide variety of AI systems to be constructed within it (this was all explained in my paper). As a result, the framework can encompass Ben's systems as easily as any other. It could even encompass a system built on pure mathematical logic, if need be.

This is not a particularly dramatic statement.

My purpose is to create a description language that allows us to talk
about different types of AGI system, and then construct design
variations automatically.

I do believe an academic formalism for discussing AGI would be
valuable to allow different camps to identify their
similarity/difference in approach and implementation.  However, I do
not believe that AGI will arise "automatically" from meta-discussion.

Oh, nobody expects it to arise "automatically" - I just want the system-building process to become more automated and less hand-crafted.
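To illustrate the general idea (not Loosemore's actual framework, whose details are not given here), a description language can be as simple as a declarative map of design dimensions to their options, from which candidate system designs are enumerated mechanically. Everything below, including the dimension names, is an illustrative assumption:

```python
# Hypothetical sketch of a tiny "description language" for AGI designs.
# The dimension names and options are illustrative assumptions only;
# they are not taken from any particular published framework.
from itertools import product

# A design space: each dimension lists the options a builder may choose.
design_space = {
    "knowledge_representation": ["symbolic", "subsymbolic", "hybrid"],
    "learning": ["supervised", "reinforcement", "self-organizing"],
    "control": ["serial", "parallel"],
}

def design_variations(space):
    """Enumerate every combination of choices in the design space."""
    keys = sorted(space)
    for combo in product(*(space[k] for k in keys)):
        yield dict(zip(keys, combo))

variations = list(design_variations(design_space))
print(len(variations))  # 3 options x 3 options x 2 options = 18 designs
```

The point of such a sketch is only that once designs are described declaratively, generating and comparing variations becomes a mechanical step rather than a hand-crafted one.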

My guess is that any system that is generalized enough to apply across
design paradigms will lack the granular details required for actual
implementation.

On the contrary: that is exactly why I have spent (and am still spending) such an incredible amount of effort on building the thing. It is entirely possible to envision a cross-paradigm framework.

Give me about $10 million a year in funding for the next three years, and I will deliver that system to your desk on January 1st 2011.

I applaud the effort required to succeed at your
task, but it does not seem to me that you are building AGI as much as
inventing a lingua franca for AGI builders.

Not really. I don't want a lingua franca as such, I just need the LF as part of the process of addressing the complex systems problem.

I admit in advance that I may be wrong.  This is (after all) just a
friendly discussion list and nobody's livelihood is being threatened
here, right?

No, especially since few people are being paid full time to work on AGI projects.

There is, though, the possibility that a lot of effort could be wasted on yet another AI project that starts out with no clear idea of why it thinks that its approach is any better than anything that has gone before. Given the sheer amount of wasted effort expended over the last fifty years, I would be pretty upset to see it happen yet again.


Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=80020995-5b8a2d
