J. Andrew Rogers wrote:

On Nov 18, 2007, at 10:41 AM, Richard Loosemore wrote:
    An investor will want to know
    what creative ideas you have that *directly* start to solve that
    problem.

These are available! Both Ben and I have detailed plans; neither of us says "just trust me".

Wait, what? The "that problem" in this case is not AI, from a venture finance standpoint. Understand that you are essentially selling "a non-demonstrable idea on how to do research that may ultimately allow us to solve a problem", which is not the same thing as "solving the problem". You are looking for research money, not venture money.

One is allowed to have some amount of uncertainty in the business side of a venture because such things are always a bit non-deterministic. You can come up with an exceptionally detailed business plan for how you are going to become the next Sausage King of Chicago, but you never really know how the market game will unfold in practice.

Technology, on the other hand, can be very strictly evaluated in considerable detail such that there is little or no risk that it will turn out to be infeasible; it may not be economical or practical, but it will technically work. Furthermore, an acceptably detailed description such that you are not tacitly stating "just trust me" is indistinguishable from a prototype for most purposes. Unless the documentation demonstrates conclusively why the technology *must* work as intended, it is a "just trust me" proposition. And in some cases you can find investors who will find this to be an acceptable proposition if you have the rest of your game in order.

It sounds to me like you are either making a stronger assertion about your design and documentation than I see Ben usually make, or you are implicitly saying "just trust me" to investors and do not realize it.

Well, yes and no.

1) How sewn-up does it have to be?

The majority of VCs do, as you say, want a technology that is sewn up from the point of view of technical feasibility. But this is not always true. There is always a gray area at the fringe of feasibility where the last set of questions has not been *fully* answered before money is thrown at it. I believe this happened in a number of projects during the dot-com insanity. And I have personally seen, from the inside, business projects that were started without the money-source being clear about feasibility at all! To them, it just sounded so plausible and so great that they felt it ought to be possible, so they bought into it. The real world is just not so clean that technology is always "evaluated in considerable detail such that there is little or no risk that it will turn out to be infeasible".

2) How much research is needed in this case?

Speaking for myself, I have increasingly come to believe that the uncertainties in my plan are more controllable than those in other approaches to AGI. My approach involves a framework for the entire design that is derived from cognitive science (and which is remarkably complete) PLUS a search for stable mechanisms within that framework. The second phase sounds like pure "research", but it is semi-automated in such a way that the uncertainties are greatly reduced. For example, it is possible to write out all the issues that have to be resolved, and how to go about resolving them, WITHOUT the need for huge amounts of creative input to the process (no "a miracle happens here" steps).

The one question that hangs over this approach is whether the result of all that systematic effort really would cough up a fully functional AGI system. That I cannot say ... but I can produce technical arguments that strongly indicate that no AGI project will ever be able to eliminate that uncertainty ahead of time (this was the "sorry, no prototype is feasible" argument that I suggested earlier, and to which Ben also subscribes).

If I am right about this last point, VCs face a stark choice: if they want AGI, they have to relax their insistence on a project that has no remaining "research" step. If they insist on something stronger, they can kiss goodbye to ever getting an AGI.

My claim is that my particular approach reduces the uncertainty as much as possible.



Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=66452954-4d45f8