On Thu, Jan 5, 2012 at 2:43 AM, Dawid Weiss
<[email protected]> wrote:
>> 15 cpus:
>>
>>   [junit4] Slave 0:     0.29 ..     5.16 =     4.87s
> ...
>>   [junit4] Slave 3:     0.29 ..    24.20 =    23.92s
>>   [junit4] Slave 4:     0.26 ..    27.00 =    26.74s
>
> This is weird -- such a discrepancy shouldn't happen once it has some
> initial timings, unless there was a really skewed test case inside. I
> do all per-VM suite balancing beforehand and don't adjust once the
> execution is in progress (that would probably mean job stealing);
> maybe this is a mistake that should be corrected. Then the order of
> suites would have to be reported in case of a failure, and if you have
> 20 slaves that would be a fairly large log ;)

It is strange... because I'm running with a fixed seed, RAMDir and the
Lucene40 codec.  There shouldn't be much variance...

The Python runner pre-aggregates the tests into per-JVM runs: it tries
to put ~30 seconds' worth of tests into each JVM, and it front-loads
any test that takes > 30 seconds (that test runs alone in its own JVM).
Then it just pulls from that priority queue...
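Roughly, the batching idea is something like this (just a hypothetical
sketch, not the actual runner code; it assumes a test_times dict of
per-test timings from a prior run):

import heapq

TARGET_SECONDS = 30.0

def build_jvm_batches(test_times):
    """Group tests into ~30s batches; any test longer than the target
    runs alone in its own JVM.  Returns a max-heap so the costliest
    batches are pulled (and started) first."""
    batches = []

    # Tests over the target each get their own JVM; collect the rest.
    small = []
    for name, t in test_times.items():
        if t > TARGET_SECONDS:
            batches.append((t, [name]))
        else:
            small.append((t, name))

    # Greedily pack the remaining tests, longest first, into ~30s buckets.
    small.sort(reverse=True)
    cur_names, cur_total = [], 0.0
    for t, name in small:
        if cur_names and cur_total + t > TARGET_SECONDS:
            batches.append((cur_total, cur_names))
            cur_names, cur_total = [], 0.0
        cur_names.append(name)
        cur_total += t
    if cur_names:
        batches.append((cur_total, cur_names))

    # Negate the cost so heapq (a min-heap) pops the costliest batch first.
    heap = [(-cost, names) for cost, names in batches]
    heapq.heapify(heap)
    return heap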

This is somewhat wasteful in that the Python runner launches more JVMs
than the new ant runner, but I do it because the tests can have such
variability in run time... so I think the net effect is just like job
stealing, except the Python runner is launching new JVMs to "steal".

Mike McCandless

http://blog.mikemccandless.com

