I did some more testing, varying the stripe count, on two
different filesystems:
- One has 8 targets and is idle (our test filesystem)
- One has 64 targets and is more or less busy.
I was writing with 16 nodes (128 MPI ranks). With collective I/O, I
obtained the following rates:
FS with 64 targets:
sc = 1 : 171 ± 13 MB/s
sc = 8 : 937 ± 34 MB/s
sc = -1 : 1102 ± 19 MB/s
What is -1 here?
sc = -1 means all targets (i.e., sc = 64 in this case).
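For context, the stripe count (sc) discussed here is typically set per directory with Lustre's lfs tool before the files are created. A minimal sketch, assuming a Lustre client and a placeholder path (flag names follow recent Lustre releases):

```shell
# Set the stripe count on a directory; new files created in it inherit it.
lfs setstripe -c 8 /scratch/testdir      # stripe over 8 targets (OSTs)
lfs setstripe -c -1 /scratch/testdir     # -1 = stripe over all available OSTs

# The stripe size can be set at the same time (1 MB here).
lfs setstripe -c 8 -S 1M /scratch/testdir

# Inspect the resulting layout.
lfs getstripe /scratch/testdir
```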
FS with 8 targets:
sc = 1 : 249 ± 4 MB/s
sc = 8 : 1218 ± 47 MB/s
OK, this sounds more reasonable now (with a larger sc).
With independent I/O, I obtained the following rates:
FS with 64 targets:
sc = 1 : 240 ± 12 MB/s
sc = 8 : 1362 ± 79 MB/s
sc = -1 : 948 ± 48 MB/s
FS with 8 targets:
sc = 1 : 581 ± 7 MB/s
sc = 8 : 2700 ± 200 MB/s
The error bars I give are the standard deviation over 3 runs. The
stripe size was left at 1 MB, which is aligned with our RAID blocks.
I also did testing with 8 nodes (64 MPI ranks) and obtained very
similar rates.
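A note on those error bars: with only three runs, the sample (n − 1) standard deviation is the natural choice. A minimal sketch of the computation, using made-up per-run rates rather than the actual measurements from this thread:

```python
import statistics

# Illustrative only: three made-up per-run rates in MB/s,
# not the actual measurements reported above.
rates = [900.0, 940.0, 970.0]

mean = statistics.mean(rates)   # arithmetic mean over the 3 runs
err = statistics.stdev(rates)   # sample standard deviation (divides by n - 1)

print(f"{mean:.0f} \u00b1 {err:.0f} MB/s")  # prints "937 \u00b1 35 MB/s"
```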
What puzzles me is that independent I/O performs either as well as or
much better than collective I/O. Maybe this has to do with the
cb_nodes parameter.
Yes. If only one rank is chosen as an aggregator in ROMIO for collective
I/O, that is definitely the issue you are seeing. Increasing the number
of aggregators should get you better results.
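With ROMIO-based MPI implementations, one convenient way to raise the aggregator count is a hints file pointed to by the ROMIO_HINTS environment variable. A sketch, assuming a ROMIO-based MPI; the hint names below are standard ROMIO hints, but check your implementation's documentation:

```shell
# Write a ROMIO hints file: request 16 collective-buffering aggregators
# and force collective buffering on writes.
cat > romio_hints <<'EOF'
cb_nodes 16
romio_cb_write enable
EOF

# Point ROMIO at it; the hints are read at MPI_File_open time.
export ROMIO_HINTS=$PWD/romio_hints
```

The same hints can also be set programmatically with MPI_Info_set and passed to HDF5 through H5Pset_fapl_mpio.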
Thanks,
Mohamad
Thanks again for your reply.
Best regards,
--
---------------------------------
Maxime Boissonneault
Computing Analyst - Calcul Québec, Université Laval
Ph.D. in Physics
_______________________________________________
Hdf-forum is for HDF software users discussion.
Hdf-forum@hdfgroup.org
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org