To use ompio:
mpirun --mca io ompio ...
To use romio (v2.x):
mpirun --mca io romio314 ...
To use romio (v1.10):
mpirun --mca io romio ...
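The --mca flag sets the parameter for a single run; the same selection can also be made through an environment variable or a per-user MCA parameters file, which is convenient for batch jobs. A sketch (the application name is a placeholder):

```shell
# Equivalent ways to pin the io framework to a component.
# 1) Environment variable (OMPI_MCA_<param> is Open MPI's standard form):
export OMPI_MCA_io=ompio
mpirun -np 4 ./my_io_app        # ./my_io_app is a placeholder for your program

# 2) Per-user MCA parameters file, read automatically by mpirun:
echo "io = ompio" >> ~/.openmpi/mca-params.conf
```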
Cheers,
Gilles
On Wednesday, May 11, 2016, Michael Rezny wrote:
Hi Sreenidhi,
you need to specify --collective as an input parameter to mpi_tile_io
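For reference, taking the command line from the earlier mail and appending that flag would look like this (a sketch; all other arguments are unchanged from your run):

```shell
mpirun -np 28 ./mpi-tile-io --nr_tiles_x 7 --nr_tiles_y 4 \
       --sz_tile_x 100 --sz_tile_y 100 --sz_element 32 \
       --filename file1g --collective
```

With the flag present, the benchmark header should report collective I/O as enabled rather than off.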
kindest regards
Mike
On 11 May 2016 at 12:01, Sreenidhi Bharathkar Ramesh <
sreenidhi-bharathkar.ram...@broadcom.com> wrote:
Thank you so much for the details.
1. while running the "Tile I/O" benchmark, I see the following message:
$ mpirun -np 28 ./mpi-tile-io --nr_tiles_x 7 --nr_tiles_y 4 --sz_tile_x 100
--sz_tile_y 100 --sz_element 32 --filename file1g
...
# collective I/O off
How do I enable collective I/O?
2. In the 1.7, 1.8, and 1.10 series, ROMIO remains the default. In the
upcoming 2.x series, OMPIO will be the default, except for Lustre file
systems, where we will stick with ROMIO as the primary resource.
Regarding performance comparisons, we ran numerous tests late last year
and early this year.
Hi,
1. During a default build of OpenMPI, it looks like both ompio.la and romio.la
are built. Which I/O MCA component is used, and on what basis is that decision
made?
2. Are there any statistics available comparing the two, OMPIO vs. ROMIO?
I am using OpenMPI v1.10.1.
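One way to see which I/O components were actually compiled into an installation is `ompi_info` (a sketch; exact output varies by version, and the 1.10 series may need a --level argument to show all parameters):

```shell
# Show all io-framework components and their MCA parameters;
# each "MCA io: <name>" line is a component built into this installation.
ompi_info --param io all --level 9

# Narrow to the priority values that influence the default selection:
ompi_info --param io all --level 9 | grep -i priority
```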
Thanks,
- Sreenidhi.