On Wed, 15 May 2002, Scott Finnie wrote:

> As a naive but interested newbie, I'm very keen to understand those
> things that FP does well - and just as importantly, those things it
> doesn't. (I'm coming at this from use in an industrial context).
> Based on (_very_) limited experience so far, I get the feeling that:
>
> . FP is well suited to transformation-type problems - i.e. those that
>   can usefully be viewed as transforming some input to a required
>   output.
> . FP - or at least Haskell - is not so well suited to reactive, event
>   driven, parallel systems: i.e. those that can usefully be viewed as
>   a CSP system.
In my personal experience it's not really specific problem domains but rather the more general observations that:

(1) FP generally makes it less painful to write higher-level code (especially generic, reusable code), particularly because of the ease of combining things using higher-order functions, both from the standard prelude and ones designed for the particular domain the program is working in. Higher-level code is generally less difficult (note: not `easy') to write in the first place, to understand, and to modify drastically.

(2) FP is generally not more helpful than other approaches if you're trying to get close to optimal performance by ensuring that nothing redundant happens. (Redundant in the sense that, without optimization, "map f . map g" applied to a list will build a new intermediate list, including the `links', when applying the g, only to break it apart again to apply the f; I know the timing is actually more complicated due to laziness, but it illustrates the point.)

The key to deciding whether FP would be an appropriate choice lies in being able to honestly judge whether the program you are working on will benefit more from (1) than it will suffer from (2).

I don't work in industry but rather am an academic researcher in a different field of research (i.e., I don't get any `publication credit' for doing things using a new FP-based technique rather than an existing one), and I use Haskell for:

(1) Prototyping image-processing algorithms on toy data. When my initial ideas are in heavy flux and I'm trying to figure out if they will work at all, writing Haskell implementations on artificial data means I don't waste time coding in imperative-language detail things that won't work anyway. Performance issues & integration hassles mean I've always written C++ code before being able to try candidate algorithms on real (LARGE) data sets.

(2) Scripting and small applications.
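To make the "map f . map g" point in (2) concrete, here is a minimal sketch (the function names are illustrative, not from any particular program): both definitions compute the same result, but without optimization the composed-maps version traverses the list twice and allocates an intermediate list, while fusing the functions into a single map traverses it once.

```haskell
-- Two maps in sequence: conceptually builds an intermediate list
-- (map (* 2) xs) before (+ 1) is applied to each element.
twoPasses :: [Int] -> [Int]
twoPasses = map (+ 1) . map (* 2)

-- The fused form: one traversal, no intermediate list,
-- same results for every input.
onePass :: [Int] -> [Int]
onePass = map ((+ 1) . (* 2))

main :: IO ()
main = print (twoPasses [1, 2, 3], onePass [1, 2, 3])
```

(As noted above, laziness complicates the actual evaluation order, and a good optimizing compiler may fuse the two maps itself; the sketch only illustrates what the unoptimized code denotes.)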
For example, I've got a script that does some really nasty, tortuous processing to build Makefiles and takes a couple of minutes to run under Hugs, when I'm sure it would run in a couple of seconds if I rewrote it in C++. But I only run it a couple of times a day, and the ease of writing and modification far outweighs the slower running time.

> Please don't flame if this is off-base, I'm trying to get a handle on
> things. This is based primarily on Haskell; I realise Erlang's primary
> domain is telecoms, which implies it does address the second category.
> Assuming that's so, are there extra concepts in Erlang that make it
> suitable for such problems?

There are certainly some different ideas in Erlang, but I think the primary reason it's used is that it was developed in a research lab somewhere inside Ericsson.

___cheers,_dave_________________________________________________________
www.cs.bris.ac.uk/~tweed/ | `It's no good going home to practise
email:[EMAIL PROTECTED]   | a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250  | Sung In The Snow' -- Winnie the Pooh

_______________________________________________
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell