On Fri, 16 Feb 2018 21:22:48 -0600, boB Stepp wrote:

> This article is written by Nathan Murthy, a staff software engineer at
> Tesla. The article is found at:
> https://medium.com/@natemurthy/all-the-things-i-hate-about-python-5c5ff5fda95e
>
> Apparently he chose his article title as "click bait". Apparently he
> does not really hate Python (So he says.). His leader paragraph is:
>
> "Python is viewed as a ubiquitous programming language; however, its
> design limits its potential as a reliable and high performance systems
> language. Unfortunately, not every developer is aware of its
> limitations."
I haven't (yet) read the article, but the above seems reasonable to me. Python's design does put rather harsh limits on how far you can push the compiler to generate high-performance code. I'm not so sure about the claim for "reliable" -- I suspect that's actually *not* a reasonable conclusion to draw (but I shall keep an open mind until I've read the article).

If high performance is important to (generic) you, the right way to use Python is:

- use Python for the parts of the application which are not performance critical, e.g. the user interface;
- prototype the rest of the application in Python as a first iteration;
- profile the application to find the *actual* bottlenecks, not the parts of the code you *imagined* would be bottlenecks;
- re-write those parts in a higher-performance language;
- and use Python as the glue to hold those pieces together.

This is why Python is prodding buttock in the field of scientific computing. That's precisely how numpy works: the back-end is written in C and Fortran for speed, and the front-end user interface and non-critical code is written in Python. Python is really good for gluing together high-performance but user- and programmer-hostile scientific libraries written in C and Fortran. (There's a small timing sketch of this idea at the end of this post.)

You wouldn't write a serious, industrial-strength neural network in pure Python code and expect to process terabytes of data in any reasonable time. But you might write a serious, industrial-strength neural network in C or Rust, and then use your Python layer as the front-end to it, feeding data in and out of the neural network from easy-to-write, easy-to-debug, easy-to-maintain Python code.

And you can write a prototype neural network in Python. It won't process gigabytes of data per second, but it will probably process megabytes. And with tools like Cython, you can often turn your Python code into Python-like code that is compiled into efficient machine code, giving you almost C-like speed with almost Python-like simplicity.

Remember that Python's original purpose was to act as a minimalist glue language between C and Fortran libraries. Over the years Python has developed enough features and power to be far more than that: it's an excellent generalist, a high-performance scripting language and a low- to medium-performance application language[1]. But acting as glue between components written in other languages is still a very powerful technique for Python programmers.

The standard library works like this too: non-critical libraries, like those for manipulating command line arguments, are written in pure Python. Critical components, like dicts and sets, are written in C. And then there are libraries that were prototyped in Python, then re-written in C for speed once they were mature, like the decimal and io modules.

[1] Pedants will (rightly) observe that computer languages aren't themselves high- or low-performance. Languages are abstract entities that cannot be described as "fast" or "slow", as they run at the speed of imagination. Only specific interpreters/compilers can be described as fast or slow. Here I am talking about the standard CPython interpreter. There are five mainstream, mature Python implementations: CPython, Jython, IronPython, PyPy and Stackless. They all have their individual strengths and weaknesses.

--
Steve
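Here is the small sketch of the numpy point mentioned above: the same dot product written twice, once as a plain Python loop (the prototype) and once through numpy, whose inner loop runs in C. The function names and array size are my own invention for illustration, and exact timings will vary from machine to machine, but you should see the numpy version win by a large margin.

import timeit

import numpy as np


def dot_pure_python(xs, ys):
    # Dot product as an ordinary Python loop: easy to write and debug,
    # but every iteration goes through the interpreter.
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total


def dot_numpy(ax, ay):
    # The same calculation, but the loop runs inside numpy's C back-end.
    return float(np.dot(ax, ay))


if __name__ == "__main__":
    n = 1000000
    xs = list(range(n))
    ys = list(range(n))
    ax = np.arange(n, dtype=np.float64)
    ay = np.arange(n, dtype=np.float64)

    # The Python "front-end" code looks the same either way; only the
    # engine doing the arithmetic changes.
    t_py = timeit.timeit(lambda: dot_pure_python(xs, ys), number=3)
    t_np = timeit.timeit(lambda: dot_numpy(ax, ay), number=3)
    print("pure Python: %.3f s    numpy (C back-end): %.3f s" % (t_py, t_np))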