Hi,

I've done some basic testing of my patches using ab (ApacheBench). I tested both a single XSLT transform and a double transform (XSLT => XSLT => result). The result tables are at the end of this email, but in summary:

- Overall

I found that my patches don't introduce any significant performance issues, and in fact improve performance in some situations. There is a very slight degradation due to reading multiple dependency files per pipeline (rather than just one), but this can be alleviated via in-memory dependency caching.

[ Note: My initial tests were much worse, but then I realised I was doing something stupid in the code that was slowing things down. I'll update my patches to fix that issue shortly. ]

- Non-caching performance.

I found a very slight increase in performance when AxKit is not caching content. This makes sense to me, since the restructuring of AxKit.pm results in more efficient data handling.

- Caching performance.

I found a very slight decrease in performance when AxKit is caching content. In the single XSLT case the decrease was extremely small (-0.46%), and in the double XSLT case it was slightly larger (-2.55%). I assume this is due to the slightly larger overhead of reading the cache attributes file (containing dependencies, etc.). The file is slightly more complex than in the previous implementation, which accounts for the very small decrease in the single XSLT case; in the double XSLT case we now need to read two attribute files (one for each stage of processing), hence the larger delay.

To test my assumption, I implemented in-memory caching of the attribute files. The mtime of the attribute file is still checked to ensure the cache is valid, but the files no longer need to be read and parsed. This resulted in a significant increase in performance over the original (+6.35% in the single XSLT case, and +2.35% in the double).

My tests don't cover the performance difference when the pipeline contains non-cacheable transforms (such as XSP), but these should benefit significantly, since at least part of the pipeline will now be cached instead of none of it.


Here's the actual result data, collected on my PowerBook 400 using the loopback interface. RPS is requests per second; TPR is mean time per request in milliseconds.

- Single XSLT without caching. Collected via 'ab -k -n 2000 <url>'

      |    orig   |   incr    |    diff
----------------------------------------
RPS   |   23.03   |   23.97   |   +4.08%
TPR   |   43.42   |   41.72   |   +4.07%

- Double XSLT without caching. Collected via 'ab -k -n 2000 <url>'

      |    orig   |   incr    |    diff
----------------------------------------
RPS   |   15.01   |   15.34   |   +2.20%
TPR   |   66.60   |   65.21   |   +2.13%

- Single XSLT with caching. Collected via 'ab -k -i -n 2000 <url>'

      |    orig   |   incr    |    diff
----------------------------------------
RPS   |   67.98   |   67.67   |   -0.46%
TPR   |   14.71   |   14.78   |   -0.47%

- Double XSLT with caching. Collected via 'ab -k -i -n 2000 <url>'

      |    orig   |   incr    |    diff
----------------------------------------
RPS   |   60.76   |   59.21   |   -2.55%
TPR   |   16.46   |   16.89   |   -2.55%

- Single XSLT with caching and in-memory dependency caching.

      |    orig   |   incr    |    diff
----------------------------------------
RPS   |   67.98   |   72.30   |   +6.35%
TPR   |   14.71   |   13.83   |   +6.36%

- Double XSLT with caching and in-memory dependency caching.

      |    orig   |   incr    |    diff
----------------------------------------
RPS   |   60.76   |   62.19   |   +2.35%
TPR   |   16.46   |   16.08   |   +2.37%
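For what it's worth, the diff columns above appear to be computed as follows (inferred from the numbers themselves, not from ab's output, and subject to rounding wobble of ±0.01 on the displayed figures):

```python
def rps_diff(orig, incr):
    # Higher RPS is better: percentage change relative to the original.
    return round((incr - orig) / orig * 100, 2)

def tpr_diff(orig, incr):
    # Lower TPR is better: speedup expressed relative to the new figure.
    return round((orig - incr) / incr * 100, 2)

print(rps_diff(23.03, 23.97))  # first table's RPS diff
print(tpr_diff(43.42, 41.72))  # first table's TPR diff
```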


Regards, Chris
