On Nov 18, 2007, at 3:40 AM, Bob Mottram wrote:

I've heard people on AI forums make this claim many times over the
last 15 years - something like "I have discovered the secret of AI
!... but I'm not going to tell you what it is unless you give me a lot
of money".  I think the thing which makes the difference between
regular charlatanry and an investable project is whether or not you
can show something which might indicate that it's really feasible -
even if that something is less than a fully working prototype.  The
charlatan of course will always flatly refuse to reveal the smallest
detail.


Yet crazy "blue sky" ideas get funded semi-regularly by knowledgeable investors. Some pan out; most don't.

One does not necessarily have to demonstrate a prototype to fund a "blue sky" venture; indeed, if you have a prototype it is no longer "blue sky". There is at least one other type of asset that can be a sufficient condition for conventional venture funding, though it may take nearly as much work to acquire: reputation. Individuals with a credible reputation for being capable of feats of technological or business wizardry can often raise money entirely on spec, because their credibility and reputation make a result plausible. You still need a thorough business plan, but you don't have to prove the theoretical viability of the product. Your mere involvement mostly covers that bit of due diligence, since you are presumably more capable of that evaluation than anyone else in the room *and* your competence at that evaluation is trusted based on past performance. Note that a credible reputation in "AI research" is usually not sufficient on its own, since that whole field has a patina of low credibility; you have to have done something concrete in a more real field. Jeff Hawkins, for better or worse, is an example of someone involved in AI who can carry himself on reputation regardless of proven technical competence in that field.

But again, useful reputation in this regard rarely comes cheap. There are multiple paths to AGI venture funding, and individual situations will vary. Nor is this a problem unique to AI research; you often have to bring reputation and/or a thorough technical description to the table in other venture areas as well. The reality is that (virtually) no AI research meets the basic level of description and/or credibility that is routinely required in other technology ventures. Any decent AI venture will be able to meet these due diligence thresholds; the howls of protest to the contrary are indistinguishable from those of crackpots and incompetents in every other venture field.


AI researchers don't get singled out for doing AI research per se; they simply don't rise to the basic level of due diligence required in the venture funding world, even for "blue sky" ventures. I would make the observation that this is eminently fixable if an AI venture is worth a damn, and some people do raise money when they approach it in a proper venture-funding context. A2I2 is an example of an AI venture that has been relatively successful in this regard *because* Peter Voss understands the mechanics of venture funding as a practical matter. There is a lot more to it than thinking you have a super-duper AI idea, and competence in execution matters at least as much as the idea itself. Proactively minimizing risk in as many areas as possible makes a venture much more salable, but most AI ventures are conspicuously risky at many levels that have nothing to do with the AI research per se, and the inability of these ventures to minimize all that unnecessary risk is a giant mark against them.


Cheers,

J. Andrew Rogers

