On Feb 12, 2009, at 8:07 PM, Douglas Roberts wrote:

For some while I've been kind of surprised, in a detached sort of way, at the general disregard that the FRIAMers I talk with hold for C++. [...]

Hi Doug,

Interesting topic! I don't know if I'm a "typical" FRIAMer (there is probably no such thing), but I am a long-time geek with 25 years in the software industry (semi-retired/self-employed as of last summer). I started out programming in Lisp in the mid-1980s, just before "AI Winter" (http://en.wikipedia.org/wiki/AI_Winter) set in. In my opinion, it was a knee-jerk reaction to abandon Lisp because of its association with AI, and I've always felt that the software industry took an incredibly wrong turn when it moved to languages and environments that put more burden on the programmer, i.e., C/C++ (again, my opinion). Lisp seems to have been making somewhat of a comeback lately, but time will tell.

I think a lot of FRIAMers' reactions (mine included) to C++ have to do with valuing programmer time and energy over that of the hardware. The need to quickly build prototypes with limited programmer time favors higher-level languages like Python. Of course, I've heard similar reactions to Java, which I haven't found that bad (I've been programming in Java since 1997). A good IDE, e.g. Eclipse, helps a lot.

[...] One explanation that has been given me was "Well, C++ is prone to horrible memory management errors."

To which I respond: not if you code properly. And when you do make (the inevitable) memory allocation/deallocation/reference error, there are tools that will help you track the problem down posthaste. Valgrind, for example. Purify, for another.

Well, you could say the same thing about any software tool - if you use it correctly, it is fine (a VERY BIG if), and there are tools to help you cope with the inevitable mistakes. I coded C++ for several years in its early days (late 80s, early 90s), and didn't care much for it. Chasing down memory leaks was bad enough, but holding onto freed memory was even worse, and nearly impossible (in those days) to track down, since it usually resulted in core dumps long after the original error had occurred. No doubt the tools have improved, but is that how I really want to spend my time? I'd much rather concentrate on the problem, not arcane details like manual memory management. I like to think that my mental "CPU cycles" are better spent on something more interesting, like the problem I'm trying to solve.
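To make the "holding onto freed memory" problem concrete, here's a minimal sketch (my own illustration, not anyone's real code, with made-up names) of the kind of dangling-pointer bug I mean:

    #include <cstring>
    #include <iostream>

    struct Record {          // hypothetical record type, purely for illustration
        char name[32];
    };

    int main() {
        Record* r = new Record;
        std::strcpy(r->name, "agent-42");

        delete r;            // the memory is freed here...

        // ...but the stale pointer gets used much later.  This often
        // *appears* to work (the heap may not have reused the block yet),
        // then core dumps somewhere far away from the real error.
        std::cout << r->name << std::endl;   // use-after-free

        return 0;
    }

Run that under Valgrind today and it flags an "Invalid read" right at the offending line; back in the early 90s you got the core dump and started guessing.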

Another reason that has been repeatedly given for C++'s disfavor is, "It takes too long to develop C++ expertise." I guess I don't buy that either. I don't really think the level of expertise necessary to gain proficiency in C++ is all that much different than for FORTRAN, Java, Pascal, LISP, or any other major language.

I haven't actually heard anyone say that it takes too long to learn, but I can believe it. C++ has always seemed to be an overly complex language, what with multiple ways of passing and dereferencing pointers, passing complex structures by value with bitwise copy, multiple inheritance, templates...
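For instance, here's a sketch of the pass-by-value trap (illustrative names only): a struct holding a raw pointer gets the compiler-generated memberwise copy, so the copy and the original both end up "owning" the same buffer:

    #include <cstdio>
    #include <cstring>

    struct Buffer {          // no copy constructor or assignment operator defined
        char* data;
        Buffer(const char* s) : data(new char[std::strlen(s) + 1]) {
            std::strcpy(data, s);
        }
        ~Buffer() { delete[] data; }
    };

    // Pass-by-value: 'b' is a shallow (memberwise) copy, so b.data
    // aliases the caller's pointer.
    void print(Buffer b) {
        std::puts(b.data);
    }   // b's destructor runs here and frees the caller's buffer

    int main() {
        Buffer msg("hello");
        print(msg);
        return 0;   // msg's destructor frees data a second time: double free
    }

Java, Lisp, and Python won't even let you express that mistake, which is part of my point about where the mental "CPU cycles" go.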

I suppose I understand the FRIAM community's interest in Netlogo, but it still seems to me to be a "toy" language, in which you can develop small prototypes, but you can't scale a Netlogo application beyond serial computing environments. Translated: you can't develop interesting real world problem solutions in Netlogo.

I haven't used NetLogo personally, so someone else can comment on whether or not it would be practical to build real-world systems with it, but as I understand it, NetLogo makes it easier to build agent-based models. But again, the main point for a small group is that getting proof-of-concept solutions up and running quickly is very important, and with limited staff it makes no sense to waste valuable programming time on something like memory management, which a garbage collector is perfectly capable of handling very well. And when your bread and butter (ideally) is complex systems, why not at least start with a tool that closely matches one of its problem domains, i.e., agent-based modeling?

So, I guess it really doesn't surprise me much that Google picked the language set that they did, given the company's technology focus, and the collective power provided by that selection of languages.

What they picked probably makes sense. Java is industrial-strength enough to handle the server side of a lot of application software; JavaScript-capable browsers are safe to take for granted for the client side; C++ provides low-level access to hardware. Python seems to be at least holding its own as a general-purpose and "scripting" language in competition with Perl and Ruby, and in a lot of cases can supplant Java for server-side application software. See for example reddit.com, which started out written in Lisp and was later migrated to Python (much to the consternation of a lot of the Lisp community).

By the way, Peter Norvig (http://www.norvig.com) is Director of Research at Google, and he has a really nice page comparing Lisp and Python (http://www.norvig.com/python-lisp.html). I noticed that in the blog entry referenced, the blogger stated that "productive work" at Google had to be done in those four languages. I suspect the same constraints don't carry over into Google Research, but I couldn't say for sure.

--Doug
--
Doug Roberts, RTI International
drobe...@rti.org
d...@parrot-farm.net
505-455-7333 - Office
505-670-8195 - Cell

;; My .02
;; Gary

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
