dmitrey wrote:
> Hi all,
> the url http://torquedev.blogspot.com/2008/02/changes-in-air.html
> (blog of a game developers)
> says IronPython is faster than CPython in 1.6 times.
> Is it really true?

On certain platforms, I believe so, for certain types of operations (a quick way to check numbers like that against your own workload is sketched just below). Not sure if Mono also provides a speedup. Most of the speedup is due to large amounts of (paid) effort being spent creating a high-speed IL optimizer. Because IronPython can make use of the work that MS has been pouring into dynamic-language compilation, it can get quite a few speedups that CPython just doesn't get, because the CPython project doesn't have the people to do the work. Optimising code automatically is a reasonably complex process that tends to introduce lots of potential errors. The CPython devs are not, AFAIK, working on performance much these days, so CPython likely won't improve any time soon; from everything I've heard, 3.0 will not be any faster than 2.5.
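To make "1.6 times" concrete, here's a rough, hypothetical micro-benchmark you could run yourself under both interpreters -- the workload, file name, and any numbers it prints are made up for illustration, not taken from the blog post:

    # bench.py -- run as "python bench.py" and as "ipy bench.py" and compare.
    import timeit

    def work():
        # Plain integer arithmetic: the kind of loop where interpreter
        # speed matters and no C extension library can help you.
        total = 0
        for i in range(100000):
            total += i * i
        return total

    if __name__ == "__main__":
        # Pass strings so this also works on older Pythons where
        # timeit.Timer doesn't accept a callable directly.
        t = timeit.Timer("work()", "from __main__ import work")
        print(min(t.repeat(repeat=3, number=10)))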
PyPy is attempting to address this issue via a separate interpreter, but it's currently just playing catch-up on performance most of the time. It does have a JIT, and might one day be fast enough to be a usable replacement for CPython, but most likely it will take a lot of developer-years to get there. It would be really nice if PyPy could get Python 2.5 running, say, 5x faster and then run with that. With that, Python would open up into entirely new areas of applicability, becoming reasonable as an embedded language or a systems language. Only 2x slower than C would make Python pretty close to a perfect language... (far more attractive than a slightly tweaked syntax, IMO). That's probably 5-10 developer-years out, though, not counting any distractions from trying to support Python 3.x.

> If yes, what are IronPython drawbacks vs CPython?

Mostly library access, from what I understand. Numpy and SciPy, for instance, are not, AFAIK, ported to IronPython, and their users are the people who *really* need speed; without those APIs, having a faster "Python" doesn't really mean much. IronPython has access to the Win32 API, so if you want to use Win32 APIs rather than the CPython ones, you're golden, but Numpy/SciPy's interface is *really* elegant for working with large arrays of data. If you're trying to write tight numeric loops over gigabyte arrays in raw Python, 1.6x the performance isn't really usable... even 5x is just barely usable. Numpy lets you use optimized (C) libraries for the heavy lifting and keep Python's friendliness where you interact with humans (there's a rough sketch of the difference at the end of this message). If Python were 10x faster you *might* rewrite your Numpy code in pure Python, but I'd expect naive Python code would still be beaten handily by BLAS or whatever is doing the work under the covers in Numpy.

    If the implementation is hard to explain, it's a bad idea.
    If the implementation is easy to explain, it may be a good idea.

Those two lines (from the Zen of Python) are what tend to preclude CPython ever becoming *really* fast. Optimizing code is almost always complex and hard to explain. You need lots and lots of thought to make a compiler smart enough to wring performance out of naive code, and you need a lot of thought to reason about what the compiler is going to do under the covers with your code. IronPython (and Jython, and Parrot) can use the underlying system's complexity without introducing it into their own projects. PyPy is trying to create the complexity itself (with the advantage of a smaller problem domain than optimising *every* language).

> And is it possible to use IronPython in Linux?

Yes, running on Mono, though again, I don't believe Mono has had the optimisation effort put in to make it competitive with MS's platforms.

Just my view from out in the boonies,
Mike
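P.S. Since I keep harping on Numpy: here's a minimal sketch of why the C-library split matters more than a 1.6x interpreter speedup. It assumes numpy is installed (the very bit IronPython is missing); the array size and variable names are made up for illustration:

    import numpy

    data = numpy.arange(1000000, dtype=numpy.float64)

    # Pure-Python loop: every iteration runs through the interpreter, so
    # interpreter speed (CPython vs IronPython) is everything here.
    total = 0.0
    for x in data:
        total += x * x

    # Numpy version: the whole loop runs in optimized C inside numpy, so
    # the interpreter's speed barely matters at all.
    total_fast = numpy.dot(data, data)

    print(total)
    print(total_fast)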