Hi All, As a newbie I try to read up on Clojure whenever I can. One claim I see repeatedly is that Clojure is really good for operating on large data sets, but I haven't seen anyone articulate why, beyond alluding to lazy evaluation. So I assume lazy evaluation is the primary reason, but what are the others? A few searches didn't turn up anything obvious. If there are posts or articles that discuss this, I'd love to read them.
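For what it's worth, my (possibly naive) mental model of the lazy-evaluation point is something like the sketch below, where a pipeline over a huge range only realizes the elements that are actually consumed. Please correct me if this is off:

```clojure
;; A pipeline over 100 million numbers is only *described* here;
;; lazy sequences mean nothing is computed or held in memory yet.
(def xs (->> (range 100000000)   ; lazy range, not 100M numbers in memory
             (map #(* % %))      ; still lazy
             (filter even?)))    ; still lazy

;; Consuming the first 5 results forces only a handful of elements:
(take 5 xs)
;; => (0 4 16 36 64)
```

Is that the gist, and if so, what beyond this makes Clojure a good fit for large data?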
My primary reason for asking is that a project I'll be moving to in the future will involve large-scale data sets. At the moment, Clojure is something I'm drawn to out of intellectual curiosity, as I haven't been able to put it to use in my day-to-day work (Java & C++). Given that this project is more or less brand new, I'd like to think now is as good a time as any to branch out into a new language, if it fits. So could anyone help me understand more about why Clojure works well on large data sets? Thanks! --- Chris
