On Sun, May 25, 2014 at 6:17 PM, bookaa bookaa <rors...@gmail.com> wrote:
>> 1) How compatible is your Python-to-Golang converter with all the
>> nuances of Python code? Does it work perfectly on any arbitrary Python
>> script? And, what version of Python is it aimed at?
> I try to support all Python syntax, any arbitrary script. From the example
> attached, you can see I translate all the imported system libraries needed
> and produce up to 170000 lines of Go. Up to now, I only support Python
> 2.7.6. Maybe I will work on Python 3 later.
>> 2) What's performance like? Presumably significantly better than
>> CPython, as that's what you're boasting here. Have you run a
>> standardized benchmark? How do the numbers look?
> I must admit that after automatically converting the Python source to Go
> and compiling it to an EXE, the running speed is just as before. For
> compatibility reasons, I must make it behave just as it did before and
> support every Python feature. Take an example: if we find a function call
> func1(2), I cannot simply convert it to a Go function call like
> func1_in_go(2), but must emit something like this:
> because func1 may be rebound.
> I think the significance of Python to Go is that it gives us the
> opportunity to make a Python project run fast. We may edit the output Go
> source, or we may add some Python decorator to tell the converter it is
> safe to convert it in the simple form.
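The rebinding problem described above is easy to demonstrate (a minimal sketch; `func1` is just a stand-in name, not code from the converter):

```python
# Why func1(2) cannot be translated to a direct Go call:
# the global name func1 can be rebound at any time, so each
# call site has to look the name up dynamically.

def func1(x):
    return x + 1

def call_it():
    return func1(2)   # "func1" is looked up in globals on every call

print(call_it())           # 3

func1 = lambda x: x * 10   # rebind the global name
print(call_it())           # 20
```

A direct translation to a fixed Go function would keep printing 3 after the
rebinding, silently changing the program's behaviour.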
This is extremely unsurprising. Everyone who says "Python is so slow"
is comparing against a less dynamic language. Python lets you change
any name *at any time*, so all lookups must be done at the time
they're asked for. (By comparison, Pike binds all global names at
compile time - effectively, when you import the module. If you want to
change one, you need to reimport code that's using it. C, of course,
binds everything early, and that's that.)
There have been a variety of proposals to remove some of Python's
dynamism with markers saying "This is read-only". Victor Stinner
started a thread on python-ideas this week with some serious proposals
and decent arguments (backed by a proof-of-concept fork of CPython 3.5). Also, I'm
not 100% sure but I suspect that PyPy quite possibly optimizes on the
basis of "this probably hasn't changed the meaning of len()", and does
a quick check ("if len has been rebound, go to the slow path,
otherwise run the fast path") rather than checking each time. Both of
these options are viable, both have their trade-offs... and neither
requires actually compiling via an unrelated language.
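The guard idea can be sketched in plain Python (a toy model only; this is
not PyPy's actual machinery, which works inside the JIT, and the `env`
dict here just stands in for a module's global namespace):

```python
# Toy guard: snapshot the builtin once, take the fast path while
# the name "len" has not been rebound, fall back otherwise.

FAST_LEN = len  # snapshot taken once, up front

def guarded_len(items, env):
    if env.get("len", FAST_LEN) is FAST_LEN:   # cheap guard check
        return FAST_LEN(items)                 # fast path
    return env["len"](items)                   # slow path: rebound

env = {}
print(guarded_len([1, 2, 3], env))   # 3 (fast path)

env["len"] = lambda items: 999       # someone rebinds len
print(guarded_len([1, 2, 3], env))   # 999 (slow path)
```

The guard is a single identity check, so the common case stays cheap while
rebinding still works correctly.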
I have never liked any system that involves converting code from one
language to another and then hand-editing the resulting code. There is
a reason the languages are different; they have different strengths
and different weaknesses. There's always going to be something that's
messy in the target language. Sometimes you don't care (you can
probably write an interpreter for 8086 assembly in Python and have it run
at 4.77MHz as long as the Python interpreter is running on fast enough
hardware), but if your goal is an overall speed improvement, you're up
against a number of Python interpreters that have specifically looked
at performance (I know CPython may be considered slow, but the devs do
care about running time; and PyPy touts running times of 16% of
CPython's), so you're going to have to boast some pretty good numbers.
Strong recommendation: If you want to move forward with this, compare
against Python 3.x. New projects want to be written for Py3 rather
than Py2, and you're limiting your project's usefulness if it's
compatible only with Py2. As an added "bonus", Py3 is currently a bit
slower than Py2 in a lot of benchmarks, so you get yourself a slightly
easier target :)