Dean wrote:
>>I would be a little leery of believing those numbers too much. Perhaps the
>>particular test you ran might have skewed the results a little towards that
>>particular code, but we've run extensive tests here with some pretty
>>powerful tools and they don't show that code as being very high on the
>>list, as far as I know.

Of course, if a document doesn't have any attributes, then XMLAttr::set would
take up 0% of the time. The files that I was using for benchmarking were very
attribute-centric, which exacerbated the problem.

The structure of one was:

<timeSeries origin="0" timeInterval="1">
        <timePoint value="1.444"/>
        <!--   20000 to 500000 other time points  -->
</timeSeries>

These files may not be typical, but they weren't manufactured just to make
that code look bad.

When I was trying to parse very large files (10 MB), there was an enormous
amount of variance in the times across iterations, so it was hard to put a
number on the effect of a code change. However, the modified version tended
to be faster.

Using a smaller 3.60 MB file, the times became consistent for repeated runs.
Running SAXCount with the existing XMLAttr code gave times between 2163 and
2203 ms. Running SAXCount with a modified version of XMLAttr gave times
between 1732 and 1743 ms, which is roughly a 20% speed-up (1 - 1732/2163 is
about 0.20). (Times are for VC6 release code on a Pentium II under Win2K.)
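
For anyone who wants to reproduce these measurements, a minimal timing
harness along the lines of SAXCount might look like the sketch below. This
is an illustration, not the code I actually ran; the header paths assume a
current Xerces-C source layout, and the five-run count is arbitrary.

    #include <xercesc/util/PlatformUtils.hpp>
    #include <xercesc/parsers/SAXParser.hpp>
    #include <xercesc/sax/HandlerBase.hpp>
    #include <iostream>
    XERCES_CPP_NAMESPACE_USE

    int main(int argc, char* argv[])
    {
        if (argc < 2)
        {
            std::cerr << "usage: timeparse <file.xml>" << std::endl;
            return 1;
        }
        XMLPlatformUtils::Initialize();
        {
            SAXParser   parser;
            HandlerBase handler;   // no-op callbacks; we only want parse time
            parser.setDocumentHandler(&handler);
            parser.setErrorHandler(&handler);

            // Repeat the parse in one process to expose run-to-run variance.
            for (int run = 0; run < 5; ++run)
            {
                const unsigned long start = XMLPlatformUtils::getCurrentMillis();
                parser.parse(argv[1]);
                const unsigned long stop  = XMLPlatformUtils::getCurrentMillis();
                std::cout << "run " << run << ": " << (stop - start) << " ms"
                          << std::endl;
            }
        }
        XMLPlatformUtils::Terminate();
        return 0;
    }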

Again, the relative gain will depend on the frequency of attributes versus
other time-consuming content. This is just Amdahl's law: if attribute
handling accounts for a fraction p of the total parse time and the change
makes it s times faster, the overall speed-up is 1 / (1 - p + p/s), so a
document with few attributes will see almost no difference.

>>Anyway, we have a big performance pow-wow scheduled after we get this new
>>release out. Roger has been doing considerable profiling and has a lot of
>>numbers we need to look at.

If XMLAttr::set hasn't shown up in Roger's profiling, I'd suggest that he
create some attribute-rich files in his test suite. In fairness to those who
like to use element content (the classic attribute-vs-element debate), he
should have equivalent files where everything is expressed as element
content; a sketch of a generator for such a pair of files follows.
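
A throwaway generator might look like the following. It is only a sketch;
the file names, point count, and value formula are invented here for
illustration, not taken from the files I actually used.

    #include <fstream>

    int main()
    {
        const int points = 20000;

        // Attribute-centric variant, shaped like the timeSeries sample above.
        std::ofstream attr("timeseries-attr.xml");
        attr << "<timeSeries origin=\"0\" timeInterval=\"1\">\n";
        for (int i = 0; i < points; ++i)
            attr << "  <timePoint value=\"" << (i * 0.001) << "\"/>\n";
        attr << "</timeSeries>\n";

        // Equivalent variant with everything expressed as element content.
        std::ofstream elem("timeseries-elem.xml");
        elem << "<timeSeries>\n"
             << "  <origin>0</origin>\n"
             << "  <timeInterval>1</timeInterval>\n";
        for (int i = 0; i < points; ++i)
            elem << "  <timePoint><value>" << (i * 0.001) << "</value></timePoint>\n";
        elem << "</timeSeries>\n";
        return 0;
    }

Running SAXCount over the two outputs should show how much of the time is
attributable to attribute handling rather than to general element traversal.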
