It is too soon to conclude that POE is the cause.

The difference in CPU time between your Python and Perl tests is about 0.0002 seconds per article (user+sys). That is not a significant difference at this scale; it could be caused by any number of low-level variations between POE and Twisted, or between Perl and Python.
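For what it's worth, here is the arithmetic behind that per-article figure, taken straight from the time(1) output quoted below (1000 articles each; the variable names are mine):

```python
# CPU time = user + sys, in seconds, for 1000 articles (from the quoted tests).
python_cpu = 0.088 + 0.016   # Twisted client
perl_cpu = 0.268 + 0.024     # POE client

# Extra CPU time the Perl test spent, per article.
per_article = (perl_cpu - python_cpu) / 1000
print(round(per_article, 6))  # 0.000188
```

About 0.2 milliseconds of CPU per article, which is lost in the noise next to the wall times.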

The wall times are a better place to look for inefficiencies. They are three orders of magnitude larger than the total CPU times.

The large difference between wall and CPU times indicates a non-CPU bottleneck. In your test case, it's probably the network.
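The ratios make the point concretely. Using the numbers from the quoted tests (again, the names are mine):

```python
# Wall-clock vs. CPU time for each 1000-article run (from the quoted tests).
python_wall, python_cpu = 186.007, 0.088 + 0.016  # 3m6.007s
perl_wall, perl_cpu = 373.730, 0.268 + 0.024      # 6m13.730s

# Both runs spent roughly a thousand times longer waiting than computing.
print(round(python_wall / python_cpu))  # 1789
print(round(perl_wall / perl_cpu))      # 1280
```

Each process spent well over 99.9% of its time waiting on something other than the CPU, almost certainly the network.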

What could account for network differences?

You are comparing two different NNTP client libraries. They may interact differently with the server, which is a likely place to look. To investigate this, I would:

- scale the tests down to a small number of articles (2 < count <= 10);
- dump the resulting network traffic for each test;
- use the dumps to compare interaction patterns between the two libraries;
- look at timings: the time between a command and its response, the time between one command and the next, the length of time to receive an article, and so on.

tcpdump and tshark (or wireshark) are very good for this sort of thing.
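Once you have per-packet timestamps and directions out of a dump (tshark can export fields like frame.time_relative from a capture of port 119 traffic), the command-to-response analysis is just a fold over the event list. A minimal sketch, assuming you've reduced each capture to (timestamp, direction) pairs; the function name and the 'C'/'S' direction labels are mine:

```python
def response_latencies(events):
    """Given (timestamp, direction) events, where 'C' marks a client
    command and 'S' a server packet, return the delay between each
    command and the first server packet that follows it."""
    latencies = []
    last_cmd = None
    for t, d in events:
        if d == 'C':
            last_cmd = t
        elif d == 'S' and last_cmd is not None:
            latencies.append(round(t - last_cmd, 3))
            last_cmd = None  # only count the first response packet
    return latencies

# Toy trace: two commands, answered after 50 ms and 300 ms.
sample = [(0.00, 'C'), (0.05, 'S'), (0.06, 'S'),
          (0.10, 'C'), (0.40, 'S')]
print(response_latencies(sample))  # [0.05, 0.3]
```

Run the same reduction on both dumps; if one library's latencies cluster much higher, or it issues commands strictly one at a time where the other pipelines them, you've found your factor of two.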

If network interaction is comparable, we should revisit the points I outlined in my last message. They don't seem to have made much of an impression, but they may still be relevant.

Thank you.

--
Rocco Caputo - [email protected]

On Mar 25, 2009, at 13:18, howard chen wrote:

But anyway, now I re-run the test, instead of downloading 100 posts,
now I download 1K articles.

python
real    3m6.007s
user    0m0.088s
sys     0m0.016s

perl
real    6m13.730s
user    0m0.268s
sys     0m0.024s
