Jeff, Nigel,

We do get the time each test takes to run as part of the test run output (at least on CentOS runs, as I checked here [1]).

To avoid a DHT problem ;) it may be better to take this list, sorted by runtime, and assign tests to chunks in a cyclic fashion, so that all chunks take roughly the same amount of time to complete rather than being skewed by the hash.
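The cyclic assignment described above could be sketched roughly as below. This is a hypothetical standalone script, not part of the Gluster harness; the runtime figures and test names are made up, and in practice the "runtime test-name" pairs would be parsed from the regression-run console output.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: assign tests to chunks round-robin, longest first,
# so the total runtime per chunk stays roughly balanced.
number_of_chunks=3

# Made-up "runtime_seconds test_name" data standing in for parsed output.
times="120 tests/bugs/a.t
90 tests/bugs/b.t
60 tests/basic/c.t
45 tests/basic/d.t
30 tests/features/e.t
10 tests/features/f.t"

i=0
printf '%s\n' "$times" | sort -rn | while read -r runtime test; do
    echo "chunk $((i % number_of_chunks)): $test (${runtime}s)"
    i=$((i + 1))
done
```

With the sample data above, the three chunks end up with totals of roughly 165s, 120s, and 70s, instead of whatever skew a pure hash would produce.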

Shyam
[1] Test run chosen at random: https://build.gluster.org/job/centos6-regression/1350/consoleFull

On 01/10/2017 05:26 PM, Jeff Darcy wrote:

With regard to assigning files to chunks, I suggest we start by using an 
algorithm similar to the one we use in DHT.

   # Take the first 15 hex digits (60 bits) of the md5 so the value fits
   # in a 64-bit integer, then convert with bash's 16# base prefix.
   hash=$((16#$(md5sum < "$filename" | cut -c1-15)))
   chunk=$((hash % number_of_chunks))
   if [ "$chunk" -eq "$my_chunk_id" ]; then
      bash "$filename" # ...and so on
   fi

This is completely automatic, robust as the test set or directory structure 
changes (or as the number of workers changes), and should give us an 
approximately equal distribution among chunks.
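To illustrate the mapping, here is a small hypothetical demo. Note one simplification: it hashes the test *name* rather than the file contents (so it runs without the actual files present); the resulting chunk ids differ from content hashing, but the chunk computation is the same.

```shell
#!/usr/bin/env bash
# Hypothetical demo: map a few sample test names to chunks via md5.
number_of_chunks=4
for filename in tests/basic/a.t tests/basic/b.t tests/bugs/c.t tests/bugs/d.t; do
    # First 15 hex digits keep the value within a 64-bit integer;
    # 16# converts from hex in bash arithmetic.
    hash=$((16#$(printf '%s' "$filename" | md5sum | cut -c1-15)))
    echo "chunk $((hash % number_of_chunks)): $filename"
done
```

Each worker would run the same computation and execute only the tests whose chunk matches its own id, which is what makes the scheme coordination-free.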
_______________________________________________
Gluster-devel mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-devel
