
After code analysis, I've found the following issues:
Issue with data structures
The first thing that I looked at was the structure of UriReport and PerformanceReport. Both work in a similar manner: they first build up a collection of items, and then iterate over that collection each time the data is needed. This is a simplified example of what is happening:

```java
class UriReport {
    private Map<String, HttpSample> samples = // ... ;

    public void addSample(HttpSample sample) {
        samples.put(sample.getUri(), sample);
    }
}
```

This style has two drawbacks: every sample is kept in memory for the lifetime of the report, and every query of the data iterates over the whole map again.
I have applied code changes that change the above pattern to the following (again, simplified):
```java
class UriReport {
    private int errors = 0;

    public void addSample(HttpSample sample) {
        if (!sample.isSuccessful()) {
            errors += 1;
        }
    }

    public int countErrors() {
        return errors;
    }
    // etcetera
}
```

Note that there is no longer a need for the samples Map in the simplified example above; its entire content is replaced by one primitive field. In the actual refactored code, some data still needs to be kept (the date and duration of each sample), but all of the other data was removed from memory and replaced by just a few small primitives. As a result, memory consumption improved a lot, and the CPU no longer needs to iterate over all map entries all the time. The .serialized files were significantly reduced in size as well (which is a good indication of JVM memory consumption, as those files contain raw object data).
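To make the simplified pattern above self-contained and runnable, here is the same idea with a minimal stand-in for HttpSample (a stub, not the plugin's real class; only the fields needed for the example are modelled):

```java
// Minimal stand-in for the plugin's HttpSample class (illustrative only).
class HttpSample {
    private final boolean successful;

    HttpSample(boolean successful) {
        this.successful = successful;
    }

    boolean isSuccessful() {
        return successful;
    }
}

// Aggregates-only report: counters are updated as samples arrive,
// so no sample needs to be retained in memory.
class UriReport {
    private int errors = 0;
    private int count = 0;

    void addSample(HttpSample sample) {
        count++;
        if (!sample.isSuccessful()) {
            errors++;
        }
    }

    int countErrors() {
        return errors;
    }

    int size() {
        return count;
    }
}
```

The key design point is that each sample is reduced to its contribution to a few counters at the moment it is added, rather than being stored and re-scanned later.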
In our setup, this fix brought graph generation down to a few minutes. Not (yet) a workable duration, but at least one that eventually resulted in a graph.
Issue with file cache
The cache that is used to keep serialized files in memory (in JMeterParser) is not used properly. Notably: on a cache miss, the .serialized file is read from disc but not added to the cache. Only new files (files for which no .serialized data exists yet) are added to the cache, and even that data is lost after a JVM restart.
By adding the file content to the cache, disc IO is reduced dramatically, as most invocations of graph generation will no longer need to do disc-IO.
The snippet below illustrates the required change to JMeterParser:
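Since the patch itself is not inlined here, the following is a minimal sketch of the idea; the class, field, and method names are illustrative assumptions, not JMeterParser's actual API. The essential line is the `put` after a cache miss:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a properly used report cache. An access-ordered LinkedHashMap
// with removeEldestEntry gives simple LRU eviction at the enlarged size.
class ReportCache {
    private static final int MAX_ENTRIES = 1000;

    private final Map<String, Object> cache =
            new LinkedHashMap<String, Object>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
                    return size() > MAX_ENTRIES;
                }
            };

    Object getReport(String key) {
        Object report = cache.get(key);
        if (report == null) {
            report = loadSerializedFromDisc(key); // expensive disc IO
            cache.put(key, report);               // the missing step: cache the result
        }
        return report;
    }

    // Stand-in for deserializing the .serialized file from disc.
    Object loadSerializedFromDisc(String key) {
        return "deserialized:" + key;
    }
}
```

With this in place, only the first request after a JVM restart pays the deserialization cost; every later request for the same file is served from memory.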
Additionally, the cache size was quite small (100 elements). Given the memory improvements described above, the size of the cache can safely be enlarged from 100 to 1000 entries. A future improvement should make this number configurable instead of hard-coded.
In our setup, this fix (combined with the previous one) brought graph generation down to slightly over one minute, instead of a few minutes. Additionally, it made any subsequent generation (after the first) a lot faster: roughly half a second.
Issue with non-buffered disc IO
Disc IO happens without buffering. By wrapping all input and output streams in buffered streams, the operation is much faster.
The snippet below illustrates one of the required changes (note that this needs to be applied in more than one place):
In our setup, this fix (combined with the previous fixes) caused graph generation to take slightly over 10 seconds, instead of slightly over one minute.
For us, all of the above pretty much removed all issues regarding working with the performance plugin. I'd be grateful if my changes could be considered for merging back into the original source code.