[Zope3-dev] Re: Performance Testing
Tarek Ziadé wrote:
> Benji York wrote:
>> Tarek Ziadé wrote:
>>> Maybe we can list a set of utilities in the proposal?
>> That would be good, the proposal could use some more details.
> Would you mind adding some points to it to complete it?

Oh, sure. I just considered you king of the proposal, but I'd be glad to
pitch in. :)
--
Benji York
Senior Software Engineer
Zope Corporation
___
Zope3-dev mailing list
Zope3-dev@zope.org
Unsub: http://mail.zope.org/mailman/options/zope3-dev/archive%40mail-archive.com
[Zope3-dev] Re: Performance Testing
Benji York wrote:
> Tarek has started some very interesting work on adding performance
> testing to the Zope 3 testing infrastructure, and it so happens that Jim
> and I were discussing something very similar last week, so I'd like to
> suggest some functionality we might want to have (which I should be able
> to help implement).

I have started the proposal here:
http://www.zope.org/Wikis/DevSite/Projects/ComponentArchitecture/PerformanceRegressionTool

Tarek.
--
Tarek Ziadé | Nuxeo R&D (Paris, France)
CPS Platform: http://www.cps-project.org
mail: tziade at nuxeo.com | tel: +33 (0) 6 30 37 02 63
You need Zope 3 - http://www.z3lab.org/
[Zope3-dev] Re: Performance Testing
Benji York wrote:
> Tarek Ziadé wrote:
>> http://www.zope.org/Wikis/DevSite/Projects/ComponentArchitecture/PerformanceRegressionTool
>
> The unit test decorator looks good; I might prefer a different name than
> timedtest, but because I don't write old-style unit tests I'm just -0.
> The fact that Zope 3 discourages old-style unit tests might also be a
> good reason not to include such a decorator.
>
> The function wrapper for use in doctests doesn't seem like the optimal
> solution. I almost never create functions in my doctests, so a function
> wrapper would either be useless or I'd have to force the doctest into
> creating functions just so I can make performance assertions about them.
> Perhaps instead a helper object or function(s). Maybe something like
> this:
>
>     >>> resetPystoneTimer()
>     >>> ...  # exercise the code under test
>     >>> elapsedPystones() < 1000
>     True

You are right, the function wrapper is overkill for functional tests most
of the time; this looks better. It could still be nice, though, to provide
a set of utilities, such as the decorator and the elapsedPystones() helper,
because I can think of tests where the wrapping approach makes test writing
easier. (That's all about wrappers anyway.) For example, when a given
function is called in some loops in a test, I might want to mark it at the
beginning and then go ahead with my test, where it can be called several
times, rather than adding pystone size checks all over the place. But this
is not the most important thing, as we can probably just get rid of any
never-used writing style afterwards; the idea is there.

Maybe we can list a set of utilities in the proposal?

[cut]

Tarek
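The resetPystoneTimer()/elapsedPystones() helpers are only sketched in the message above; the following is one way they might be implemented. The function names come from the thread, but the implementation is an assumption: a simple busy-loop calibration stands in for the real pystone benchmark, so the units are "calibrated iterations" rather than true pystones.

```python
import time

_start = None
_rate = None  # calibrated "work units" per second on this machine


def _calibrate(loops=100000):
    """Time a fixed busy-loop as a stand-in for the real pystone
    benchmark, returning iterations per second on this machine."""
    t0 = time.perf_counter()
    x = 0
    for i in range(loops):
        x += i % 7
    return loops / (time.perf_counter() - t0)


def resetPystoneTimer():
    """Start (or restart) the machine-independent timer, calibrating
    the machine's rate on first use."""
    global _start, _rate
    if _rate is None:
        _rate = _calibrate()
    _start = time.perf_counter()


def elapsedPystones():
    """Return the amount of work done since the last reset, expressed
    in the calibrated benchmark units."""
    return (time.perf_counter() - _start) * _rate
```

Because the result is scaled by the machine's own rate, the same assertion (e.g. `elapsedPystones() < 1000`) should hold on both fast and slow machines, which is the machine-independence property the thread is after.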
[Zope3-dev] Re: Performance Testing
Benji York wrote:
> Tarek has started some very interesting work on adding performance
> testing to the Zope 3 testing infrastructure, and it so happens that Jim
> and I were discussing something very similar last week, so I'd like to
> suggest some functionality we might want to have (which I should be able
> to help implement).
>
> 1) warn about regressions: the test runner will keep per-test,
> machine-independent records of how long tests take and will report
> regressions larger than a predefined percentage. These records will be
> checked in, so that if someone else makes changes (in a fresh checkout)
> that cause a particular test to slow down drastically, they will be
> warned.

- About machine-independence: Stephan brought up the pystone idea to take
care of it.

- Some thoughts about the percentage: this percentage (let's call it the
tolerance) may vary a lot depending on the complexity of the test. So I
would like to suggest doing the performance regression testing in two
steps:

1/ A first step would be to run the test runner in a special mode that
renders an ordered list for each non-marked test of:

   + a measure of the average complexity, including the number of calls
     and, when possible, the type of complexity (linear, exponential,
     logarithmic, etc.)
   + a measure of pystones

2/ These stats would then be used to mark all hot spots with a maximum
number of pystones allowed. The decorator we implemented last week fits
well for this.

> 2) testbrowser should keep a (machine-independent) metric of how long
> the previous request took, so that performance assertions can be made
> inside tests. E.g.:
>
>     >>> browser.open('http://localhost/foo')
>     >>> browser.last_request_time
>     0.5
>
> 3) the functional testing framework should be extended to allow the
> collection of total time (again, machine-independent) per request, and
> the test runner should have an option to display the top n slowest
> requests.
>
> Comments?
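The timedtest decorator mentioned above is referenced but never shown in the thread; here is a minimal sketch of what such a budget-enforcing decorator could look like. The name timedtest comes from the thread, while the signature, the `rate` parameter, and the use of wall-clock seconds scaled into machine-independent units are assumptions — a real implementation would derive `rate` from a pystone calibration.

```python
import functools
import time


def timedtest(max_units, rate=1.0):
    """Fail the wrapped test when it uses more than `max_units` of work.

    `rate` converts wall-clock seconds into machine-independent units
    (in the proposal this would come from a pystone calibration; the
    default of 1.0 simply means plain seconds).
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            t0 = time.perf_counter()
            result = func(*args, **kwargs)
            used = (time.perf_counter() - t0) * rate
            assert used <= max_units, (
                "performance regression: %.2f units used, %.2f allowed"
                % (used, max_units))
            return result
        return wrapper
    return decorator
```

A hot spot identified in step 1/ could then be marked with, say, `@timedtest(5000)`, and the test runner would flag it the moment it blows its budget.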
For the same reasons, it would be nice to have the same kind of regression
test for the memory taken by objects: in the webmail application I am
coding, if I suddenly change the code and by doing so double the size of a
mail object in the ZODB, that can be quite bad, as I have thousands of
instances of them. So I would like to be able to use the same kind of
marker for memory.

Regards,

Tarek
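A memory marker along these lines could be as simple as checking the pickled size of an object, since the ZODB stores objects as pickles. The helper name assertMaxPickleSize and the whole approach below are hypothetical illustrations, not part of the proposal.

```python
import pickle


def assertMaxPickleSize(obj, max_bytes):
    """Fail when the pickled size of `obj` exceeds `max_bytes`.

    The pickled size is a rough proxy for the object's footprint in
    the ZODB, which stores objects as pickles. (Hypothetical helper,
    sketched for illustration.)
    """
    size = len(pickle.dumps(obj))
    assert size <= max_bytes, (
        "object pickles to %d bytes, budget is %d" % (size, max_bytes))
```

Checked-in budgets like this would catch the "mail object suddenly doubled in size" scenario at test time rather than after thousands of instances have been written to the database.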