----- Original Message -----
> From: "Francesco Romani" <from...@redhat.com>
> To: "vdsm-devel" <vdsm-devel@lists.fedorahosted.org>
> Cc: "ybronhei" <ybron...@redhat.com>, "Saggi Mizrahi" <smizr...@redhat.com>
> Sent: Tuesday, March 18, 2014 12:47:55 PM
> Subject: Re: [vdsm] Profiling and benchmarking VDSM
>
>
> ----- Original Message -----
> > From: "Saggi Mizrahi" <smizr...@redhat.com>
> > To: "Francesco Romani" <from...@redhat.com>
> > Cc: "vdsm-devel" <vdsm-devel@lists.fedorahosted.org>, "ybronhei"
> > <ybron...@redhat.com>
> > Sent: Tuesday, March 18, 2014 10:18:16 AM
> > Subject: Re: [vdsm] Profiling and benchmarking VDSM
> >
> > Thank you for taking the initiative.
> > Just reminding you that the test framework is owned
> > by infra, so don't forget to put Yaniv and me in the CC
> > for all future correspondence regarding this feature.
> >
> > As I will be the one responsible for the final
> > approval.
>
> Yes, of course I will.
> At the moment I'm using "unofficial"/out-of-tree decorators and support code,
> just because I have only just started the exploration and the work.
> In the meantime, we can and should discuss the best/long-term/official
> approach to measuring performance and benchmarking things.
>
> > Ignore http://www.ovirt.org/Vdsm_Developers#Performance_and_scalability
>
> Not sure I understood correctly. You mean I should drop my additions to the
> Vdsm_Developers page?
Don't drop it, just don't have it as a priority over actual work.
I'd much rather have benchmarks and no WIKI than the other way around. :)
>
> > Also, we don't want to do it per test: it's meaningless for most tests,
> > since they only run through the code once.
> >
> > I started investigating how we want to solve this issue a while back, and
> > this is what I came up with.
> >
> > What we need to do is create a decorator that wraps the test with cProfile.
> > We also want to create a generator that takes its configuration from nose:
> >
> > def BenchmarkIter():
> >     start = time.time()
> >     i = 0
> >     while i < MIN_ITERATIONS or (time.time() - start) < MIN_TIME_RUNNING:
> >         yield i
> >         i += 1
> >
> > So that writing a benchmark is just:
> >
> > @benchmark([min_iter[, min_time_running]])
> > def testSomething(self):
> >     something()
> >
> > That way we are sure we have a statistically significant sample for all
> > tests.
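
To make this a bit more concrete, here is a rough, untested sketch of how the
decorator could tie cProfile to the iterator above. The min_iter and
min_time_running defaults and the per-test .prof dump are just placeholder
assumptions on my side, not the final design:

import cProfile
import functools
import pstats
import time


def benchmark(min_iter=100, min_time_running=5.0):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            def iterations():
                # run until BOTH the iteration minimum and the
                # wall-clock minimum have been reached
                start = time.time()
                i = 0
                while i < min_iter or (time.time() - start) < min_time_running:
                    yield i
                    i += 1

            profiler = cProfile.Profile()
            profiler.enable()
            try:
                for _ in iterations():
                    func(*args, **kwargs)
            finally:
                profiler.disable()
            # dump per-test stats so the nightly job can pick them up
            pstats.Stats(profiler).dump_stats('%s.prof' % func.__name__)
        return wrapper
    return decorator

The real decorator should pull its defaults from the nose configuration rather
than hard-coding them, and the results should end up wherever the nightly
Jenkins job expects to find them.
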
>
> Agreed
>
> > There will need to be a plugin created for nose that skips @benchmark tests
> > if benchmarks are not turned on and can generate output for the Jenkins
> > Performance Plugin[1]. That way we can run them every night: the benchmarks
> > will be slow to run, since they will intentionally take a few seconds each
> > and try to hammer the CPU/disk, so people would probably not run the entire
> > suite themselves.
> >
> > [1] https://wiki.jenkins-ci.org/display/JENKINS/Performance+Plugin
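
On the nose side, a minimal sketch of what such a plugin could look like.
BenchmarkPlugin and the VDSM_BENCHMARKS environment variable are just
placeholder names I made up, and producing the XML that the Jenkins
Performance Plugin consumes (probably from the plugin's report() hook) is
left out here:

import os

from nose.plugins import Plugin


class BenchmarkPlugin(Plugin):
    # nose derives the --with-benchmark command line switch from this name
    name = 'benchmark'

    def configure(self, options, conf):
        Plugin.configure(self, options, conf)
        # the @benchmark decorator can check this and raise SkipTest
        # when benchmarks are not enabled for this run
        os.environ['VDSM_BENCHMARKS'] = '1' if self.enabled else '0'
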
>
> This looks very nice.
>
> Thanks and bests,
>
> --
> Francesco Romani
> RedHat Engineering Virtualization R & D
> Phone: 8261328
> IRC: fromani
>