In general, problems that would lose millions of dollars are noticed very quickly: quants are constantly analyzing the sources of shortfall in implementing strategies. Also, time to market is generally more important than correctness. It's much better to have a strategy that mostly works sooner than a strategy that's perfect later, because the opportunities to extract alpha from the market decay as more people see them (it is a zero-sum game, after all). Most of this technology has a lot in common with a Formula 1 race car: you can't win the race with a car that never risks breaking down halfway through. Of course, there are exceptions (risk systems, systems that handle client orders, etc.). The interest I've seen in Haskell and ML in the quant world has been driven by expressiveness.
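To make "expressiveness" concrete, here is a minimal sketch in the spirit of Peyton Jones and Eber's "Composing Contracts" work. The names (Payoff, europeanCall, straddle) are purely illustrative, not any firm's actual library:

type Price  = Double
type Strike = Double

-- A payoff maps a terminal underlying price to a cash flow.
type Payoff = Price -> Double

europeanCall, europeanPut :: Strike -> Payoff
europeanCall k s = max (s - k) 0
europeanPut  k s = max (k - s) 0

-- Combine payoffs pointwise to build structured products.
both :: Payoff -> Payoff -> Payoff
both f g s = f s + g s

-- A straddle is just a call plus a put at the same strike.
straddle :: Strike -> Payoff
straddle k = both (europeanCall k) (europeanPut k)

main :: IO ()
main = print (map (straddle 100) [90, 100, 110])  -- [10.0,0.0,10.0]

Each new product is a few lines of composition rather than a new pricing module; that kind of leverage is what the quants are after.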

On Apr 18, 2007, at 1:47 PM, Seth Gordon wrote:

Paul Johnson wrote:
You cannot win over the entrepreneur with promises of "easier and more robust". This translates to "anyone can do it", and the valuable "trade secret" of arcane wizardry is now devalued.

I suggest reading extracts from "Beating the Averages" by Paul Graham.
Then explain that Graham only wrote in Lisp because his university
didn't teach Haskell.

I think a more powerful argument would be to talk about cases where
Haskell is *actually being used* industrially.  E.g., "these folks at
Credit Suisse are using Haskell for their analytics because in their
line of work, if the implementation of the code doesn't match up
perfectly with the spec, their employer could lose millions of dollars,
and the programmers might not notice the bug until those millions were
long gone".
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
