Wed, 17 Apr 2013 09:15:37 +0200
Cédric Krier <[email protected]>:
>On 16/04/13 20:54 -0700, Ian Wilson wrote:
>> Is tryton tested against large datasets?  I can imagine datasets
>> might have 10000-10000 parties as well.  
>> Maybe if a test script was setup for performance testing (only
>> against official modules) it would be easier for me to tell if my
>> custom modules are causing a problem or not.  And easier to catch
>> tryton changes that introduce significant performance problems.
>I don't see how it is possible to do so. Result will depend on the
>machine.
We could try a test script and measure the pystones.
Pystone is a Whetstone[1] implementation for Python, found in the
stdlib *test* package.
The idea is to run a fixed script on a given computer, measure its run
time, and get a ratio compared to a reference computer:

>>> from test import pystone
>>> time_running, mypystones = pystone.pystones()
>>> mypystones
125000.0

My computer has 125000 pystones or 125 kpystones.

If I have a test which takes 1.4 seconds to run on my hardware, I can
calculate the kilo pystones for the test:

>>> kpystones = mypystones * 1.4 / 1000
>>> kpystones
175.0

A test rated at 175 kpystones should then be comparable across
different computers - in theory.
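The normalization above could be wrapped in a small helper, e.g. like
this (just a sketch; the function name and interface are mine):

```python
def kpystones(run_time_s, machine_pystones):
    """Normalize a test's wall-clock time by the machine's pystone
    rating, giving a (roughly) hardware-independent score."""
    return machine_pystones * run_time_s / 1000.0

# The example from above: 1.4 s on a 125000-pystone machine
print(kpystones(1.4, 125000.0))  # 175.0
```

A performance test suite could record such scores instead of raw
timings, so results from different machines can be compared.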

On the other hand, the performance of the Tryton server is affected by
many different technologies, not only CPython.
So I don't know if we have a chance to compare the timings of the test
results at all.

Cheers Udo

[1]http://en.wikipedia.org/wiki/Whetstone_%28benchmark%29
--
virtual things
Preisler & Spallek GbR
München - Aachen

Windeckstr. 77
81375 München
Tel: +49 (89) 710 481 55
Fax: +49 (89) 710 481 56

[email protected]
http://www.virtual-things.biz
