Thank you for taking the initiative.
Just a reminder that the test framework is owned by infra,
so please keep Yaniv and me in CC on all future correspondence
regarding this feature, as I will be the one responsible for
the final approval.

You can ignore http://www.ovirt.org/Vdsm_Developers#Performance_and_scalability for now.

Also, we don't want to do this for every test, since it is meaningless
for most of them: they only run through the code once.

I started investigating how we might solve this issue a while back,
and this is what I came up with.

What we need to do is create a decorator that wraps the test with cProfile.
We also want to create a generator that takes its configuration from nose:

import time

def BenchmarkIter(min_iter, min_time_running):
    # Keep yielding until we have done at least min_iter iterations
    # and have been running for at least min_time_running seconds.
    start = time.time()
    i = 0
    while i < min_iter or (time.time() - start) < min_time_running:
        yield i
        i += 1

So that writing a benchmark is just:

@benchmark([min_iter[, min_time_running]])
def testSomething(self):
    something()

That way we are sure we have a statistically significant sample for all tests.
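
A minimal sketch of what such a decorator could look like, building on
BenchmarkIter above (the dump_stats location, the default argument
values, and the is_benchmark marker are my assumptions, not a final
design):

import cProfile
import functools

def benchmark(min_iter=1, min_time_running=0.0):
    # Hypothetical sketch: run the wrapped test under cProfile for
    # enough iterations to get a statistically significant sample.
    def decorator(test):
        @functools.wraps(test)
        def wrapper(*args, **kwargs):
            profiler = cProfile.Profile()
            profiler.enable()
            for _ in BenchmarkIter(min_iter, min_time_running):
                test(*args, **kwargs)
            profiler.disable()
            # Assumed output location; a nose plugin would collect these.
            profiler.dump_stats('%s.prof' % test.__name__)
        wrapper.is_benchmark = True  # lets a nose plugin spot benchmarks
        return wrapper
    return decorator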

We will also need a nose plugin that skips @benchmark tests when
benchmarks are not turned on and that can generate output for the
Jenkins performance plugin[1]. That way we can run them every night;
the benchmarks will be slow, since each one will intentionally take a
few seconds and hammer the CPU/disk, so people would probably not run
the entire suite themselves anyway.
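
A rough sketch of how that plugin could look (the hook names are real
nose plugin API; the --with-benchmark flag and the is_benchmark
attribute set by the decorator sketch above are assumptions):

from nose.plugins import Plugin

class BenchmarkPlugin(Plugin):
    name = 'benchmark'

    def options(self, parser, env):
        # Assumed flag name; off by default so normal runs stay fast.
        parser.add_option('--with-benchmark', action='store_true',
                          dest='run_benchmarks', default=False,
                          help='run tests decorated with @benchmark')

    def configure(self, options, conf):
        self.run_benchmarks = options.run_benchmarks
        self.enabled = True  # stay active so we can filter benchmarks out

    def wantMethod(self, method):
        # Exclude @benchmark tests unless benchmarks are turned on;
        # returning None defers to nose's default selection.
        if getattr(method, 'is_benchmark', False) and not self.run_benchmarks:
            return False
        return None

One option would then be to convert the dumped profiling stats into a
report the Jenkins performance plugin[1] understands as a
post-processing step in the nightly job.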

[1] https://wiki.jenkins-ci.org/display/JENKINS/Performance+Plugin
----- Original Message -----
> From: "ybronhei" <ybron...@redhat.com>
> To: "Francesco Romani" <from...@redhat.com>, "vdsm-devel" 
> <vdsm-devel@lists.fedorahosted.org>
> Sent: Monday, March 17, 2014 1:57:34 PM
> Subject: Re: [vdsm] Profiling and benchmarking VDSM
> 
> On 03/17/2014 01:03 PM, Francesco Romani wrote:
> > ----- Original Message -----
> >> From: "Francesco Romani" <from...@redhat.com>
> >> To: "Antoni Segura Puimedon" <asegu...@redhat.com>
> >> Cc: "vdsm-devel" <vdsm-devel@lists.fedorahosted.org>
> >> Sent: Monday, March 17, 2014 10:32:40 AM
> >> Subject: Re: [vdsm] Profiling and benchmarking VDSM
> >
> >> next immediate steps will be
> >>
> >> - have a summary page to collect all performance/profiling/benchmarking
> >> page
> >
> > Links added at the bottom of the VDSM developer page:
> > http://www.ovirt.org/Vdsm_Developers
> > see item #15
> http://www.ovirt.org/Vdsm_Developers#Performance_and_scalability
> 
> >
> >> - document and detail the scenarios the way you described (which I like)
> >> the benchmark templates will be attached/documented on this page
> >
> > Started to sketch our "Monday Morning" test scenario here
> > http://www.ovirt.org/VDSM_benchmarks
> >
> > (yes, looks quite ugly, no attached template yet. Will add).
> >
> > I'll wait a few hours to let things cool down a bit and see if something
> > is missing, then start with the benchmarks using the new, proper
> > definitions
> > and a more structured approach like the one documented on the wiki.
> >
> > http://gerrit.ovirt.org/#/c/25678/ is the first in queue.
> >
> Can we add the profiling decorator to each nose test function and
> share a link to the results with each push to gerrit?
> The issue is that it collects profiling for only one function per
> file; we somehow need to integrate all the outputs.
> 
> The nose tests might be a good way to check the profiling status;
> they should cover most of the flows (especially if we enforce adding
> unit tests for each new change).
> 
> --
> Yaniv Bronhaim.
> _______________________________________________
> vdsm-devel mailing list
> vdsm-devel@lists.fedorahosted.org
> https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
> 