Hi,

We are doing a study on code duplication in various open source
projects, and we found PLY to be an interesting case. PLY has a high
rate of code duplication, mostly due to performance optimization:
there is a method parsedebug(), an optimized version parseopt()
produced by removing the debug statements, and a further optimized
version parseopt_notrack() produced by removing additional tracking
statements. These three parsing functions at different optimization
levels produce a lot of code duplication. The optimization technique
itself is quite reasonable, since Python is an interpreted language
and has no mechanism like macros. But I still have some questions
regarding this:

1. Do you have any comparison data on how much performance improves
with this optimization technique?

2. Are parseopt() and parseopt_notrack() maintained manually?
E.g. do the PLY developers have a smart script that strips the "# --!
DEBUG" sections and generates parseopt() and parseopt_notrack()? If
they are maintained manually, does that introduce any maintenance
burden in practice?
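To illustrate what I mean by a stripping script: a minimal sketch,
assuming (hypothetically) that the debug lines are bracketed by paired
"# --! DEBUG" marker comments, could be as simple as:

```python
def strip_sections(source, marker="# --! DEBUG"):
    """Drop lines between paired marker comments (marker format is assumed)."""
    out, skipping = [], False
    for line in source.splitlines():
        if line.strip() == marker:
            skipping = not skipping  # each marker toggles the skip region
            continue
        if not skipping:
            out.append(line)
    return "\n".join(out)

debug_version = """\
x = compute()
# --! DEBUG
log('computed:', x)
# --! DEBUG
result = x"""

print(strip_sections(debug_version))
# the log(...) line and the markers are gone; the rest is unchanged
```

Run once over parsedebug()'s source, such a script would emit the
parseopt() body automatically instead of keeping two copies by hand.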

3. Have you considered other techniques (hacks)? For example, running
some code at class-definition time that generates the code for
parseopt() and parseopt_notrack() as strings and uses exec to compile
them. That would add some extra work at module import time, but the
methods would have the same performance when called.
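To make the idea concrete, here is a toy sketch (not PLY's actual
code; the template and the summing "parser" are made up) showing how
one source template plus exec can yield both a debug and an optimized
variant at import time:

```python
# Template for the parser body; {debug} is filled per variant.
_TEMPLATE = """\
def {name}(tokens):
    total = 0
    for tok in tokens:
        {debug}
        total += tok
    return total
"""

def make_variant(name, debug_stmt="pass"):
    """Generate one variant by substituting into the template and exec-ing it."""
    namespace = {}
    exec(_TEMPLATE.format(name=name, debug=debug_stmt), namespace)
    return namespace[name]

# Built once at import time; calls afterwards pay no extra cost.
parsedebug = make_variant("parsedebug", "print('token:', tok)")
parseopt   = make_variant("parseopt")  # debug slot becomes a no-op 'pass'
```

The generated parseopt() contains no debug call at all, so it runs at
the same speed as a hand-stripped copy, while only one template has to
be maintained.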

Thank you!

-- Haoyu Bai
