On 2009-10-17 22:11:56 +0200, "Nick Sabalausky" <[email protected]> said:
> Only on 64-bit systems. Which are already ridiculously fast anyway. So what if they get some more performance? They already have gobs of performance to spare. On a 32-bit system it changes the program's performance down to "It don't f** work at all", which is the mark of an incredibly arrogant developer who likes to shoot themselves in the foot by arbitrarily shrinking their own potential user base.
Well, for some of us it is a necessity. In our field (NLP), the theoretical maximum of 4GB of address space per 32-bit process is just too little for anything but some scripting. To give some numbers: with current corpus sizes we need ~20GB of memory for error mining in parsing results, and at least ~10GB for serious natural language generation.
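Just to make that concrete, here is a minimal, hypothetical C++ sketch (the 20GB figure is the error-mining number above, nothing from our actual code): on a 32-bit build the corpus size does not even fit in size_t, so no single allocation of that size can ever be requested.

    #include <cstddef>
    #include <cstdio>

    int main() {
        // sizeof(void*) is 4 on a 32-bit build and 8 on a 64-bit one,
        // so a 32-bit process can address at most 2^32 bytes = 4GB
        // (less in practice, since the kernel reserves part of the range).
        std::printf("%zu-bit pointers\n", sizeof(void*) * 8);

        const unsigned long long corpus = 20ULL << 30; // the ~20GB from above
        if (corpus > static_cast<size_t>(-1))
            std::printf("a 20GB corpus cannot fit in this address space\n");
        else
            std::printf("a 20GB corpus fits, at least in theory (64-bit)\n");
        return 0;
    }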
Until there is a good D2 compiler that can target x86_64 on Linux, I'll have to continue using C++. After writing some small programs in it, I have come to the conclusion that D would be far more comfortable to use, although some of the C++0x extensions already implemented in g++ do help.
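For instance (a toy sketch, not from our code base; the names are made up), auto alone already removes a lot of the iterator-type noise that D avoids by default. This compiles with g++ -std=c++0x as of 4.4:

    #include <iostream>
    #include <map>
    #include <string>

    int main() {
        std::map<std::string, unsigned long> freq; // e.g. token counts
        ++freq["the"];
        ++freq["cat"];
        ++freq["the"];

        // Without auto, the iterator type has to be spelled out as
        // std::map<std::string, unsigned long>::iterator.
        for (auto it = freq.begin(); it != freq.end(); ++it)
            std::cout << it->first << "\t" << it->second << "\n";
        return 0;
    }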
Not that I am complaining; I understand that the first priority is getting the D2 specification finished. But I think it would be far more productive (and would make x86_64 easier to support) if LLVM became the default backend. LLVM will be the future of non-managed compiled languages anyway...
Take care,
Daniel
