Dear Kawashima-san,

The minimum test on each PR is to run configure/make.

If you are not cross-compiling, you can also run make check.

A more sophisticated test is to run make distcheck.

Ideally, if you have several nodes available, you can run some MPI tests in a custom script

(see https://github.com/mellanox-hpc/jenkins_scripts/blob/master/jenkins/ompi/ompi_jenkins.sh)
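As a rough illustration, a minimal per-PR check along those lines could look like the sketch below (assuming a POSIX shell and a git checkout of the Open MPI tree; the _install prefix, -j value, and my_hostfile are placeholders, and the optional MPI step just builds and runs the hello_c example shipped in the examples/ directory):

    #!/bin/sh
    # Minimal per-PR sanity check: configure, build, optionally test.
    set -e

    ./autogen.pl                        # needed when building from a git clone
    ./configure --prefix=$PWD/_install
    make -j 8
    make -j 8 check                     # skip this when cross-compiling
    # make distcheck                    # more thorough, but slower

    # Optional: a short MPI run across a few nodes (my_hostfile is a placeholder).
    make install
    ./_install/bin/mpicc examples/hello_c.c -o hello_c
    ./_install/bin/mpirun --hostfile my_hostfile -np 8 ./hello_c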


Out of curiosity, do you use the Fujitsu or GNU compilers? If the Fujitsu compilers, do you compile on SPARC or cross-compile on x86_64?


Cheers,


Gilles


On 5/19/2016 10:05 AM, Kawashima, Takahiro wrote:
Jeff,

I hope we will in the future, but currently we don't have enough
machine time or direct Internet connectivity (especially from the
testing machines).

What type of test do you expect? Building the Open MPI binaries and
running some short test programs on a few x86-64 machines would be
realistic if the connectivity problem were resolved, but running many
or long test programs on many SPARC machines per GitHub pull request
is not realistic for us. (A daily or weekly run may be realistic.)

Jeff wrote:

Great!

Will you also be able to do some continuous-integration-type testing (i.e., run
some basic tests for each GitHub pull request)?  Josh/IBM is going to post some
information about their Jenkins/GitHub pull request setup shortly.


On May 18, 2016, at 9:50 AM, Kawashima, Takahiro <t-kawash...@jp.fujitsu.com> 
wrote:

Jeff,

Thank you, that is very useful information.
I'll plan our runs based on it.

Once we (Fujitsu) are able to run the test suites regularly, we'll
prepare to upload the reports to the server and push our test suites.

Thanks,
Takahiro Kawashima,
MPI development team,
Fujitsu

Fujitsu has started trying MTT + ompi-tests on our machines.
Based on the sample .ini file, we wrote our own .ini file, and some
test suites now run.

I have two questions.

(a) There are many test suites (directories) in ompi-tests:
   ibm, onesided, sun, ...
   Which test suites should I use to participate in
   the OMPI MTT daily/weekly runs?

The general guidance is: run as many tests as you have resources for.  Meaning: 
we'll take any testing you can give.  :-)

Have a look in ompi-tests:cisco/mtt/community/*.ini and cisco/mtt/usnic/*.ini.  
Those are the ini files I use every night for Cisco usNIC-specific testing and 
community-wide testing. You can see the results of them in the MTT community 
reporter:

    http://mtt.open-mpi.org/

I generally aim for about 20-24 hours of testing.  It's a little fuzzy, because 
Cisco's MTT will only fire for a given version (I'm currently testing the 
master, v1.10, and v2.x branches) if there were new commits that day (i.e., if 
there's a new nightly snapshot tarball since the last run).

If you run so many tests that a pass takes more than 24 hours, then your
resources quickly fall behind and you end up testing tarballs from days ago --
and that's not very useful.
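One way a wrapper script could implement that "only fire when there is a new snapshot" behavior is sketched below. The nightly URL layout and the latest_snapshot.txt file name are assumptions about the snapshot area, and run_mtt.sh / nightly-master.ini are hypothetical stand-ins for however you invoke the MTT client with your .ini file:

    #!/bin/sh
    # Kick off an MTT run only if a new nightly snapshot appeared since the last run.
    BRANCH=master
    STATE=$HOME/.mtt-last-snapshot-$BRANCH

    # Assumed location of the latest-snapshot marker; adjust to the real layout.
    LATEST=$(wget -qO- https://www.open-mpi.org/nightly/$BRANCH/latest_snapshot.txt)

    if [ -f "$STATE" ] && [ "$LATEST" = "$(cat "$STATE")" ]; then
        echo "No new $BRANCH snapshot ($LATEST); nothing to do."
        exit 0
    fi

    echo "$LATEST" > "$STATE"
    # run_mtt.sh is a hypothetical wrapper around the MTT client and an .ini file.
    $HOME/mtt/run_mtt.sh $HOME/mtt/nightly-$BRANCH.ini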

(b) What is the recommended `np` value (number of processes)?
   Should I use the largest `np` I can run?

Yes, subject to what I mentioned above: you want to aim for a total of ~24 
hours of testing so that you can start the next cycle with the next night's 
snapshot tarball.

You can pack this in however you want -- lots of small-np tests plus a few
large-np tests (just to sanity-check large np values), and so on.

You can also take into account that little development is done on the weekends. 
 For example, you might want to aim for ~24 hours of testing on Monday-Thursday 
evenings, and then aim for a 3-day run on Friday evening (because there might 
not be new tarballs generated over the weekend).
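A rough sketch of that schedule as crontab entries (run_mtt.sh, weekday.ini, weekend.ini, and the 19:00 start time are all hypothetical; the actual run length is determined by how many tests each .ini file selects):

    # Monday-Thursday evenings: a ~24-hour run against that day's nightly tarball.
    0 19 * * 1-4  $HOME/mtt/run_mtt.sh $HOME/mtt/weekday.ini
    # Friday evening: a longer ~3-day run, since few new tarballs appear over the weekend.
    0 19 * * 5    $HOME/mtt/run_mtt.sh $HOME/mtt/weekend.ini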

   Does it depend on the test suite?

Yes.  Some test suites have upper bounds on the number of processes they can
run.  IIRC, the Intel test suite, for example, can only run up to 64 processes
(because of some hard-coded array sizes) unless you compile it with a specific
-D that increases the size of those arrays.
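Purely as an illustration of that kind of build-time override (the macro name and value below are placeholders; the real -D and its meaning are documented in the Intel suite's build files):

    # Rebuild the suite with larger hard-coded arrays so that more than 64
    # processes can be used.  MAX_RANKS is a placeholder for the actual macro.
    make clean
    make CFLAGS="-O2 -DMAX_RANKS=256"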