[OMPI users] SLURM seems to ignore --output-filename option of OpenMPI

2019-09-30 Thread Eric Chamberland via users
Hi, I am using OpenMPI 3.1.2 with slurm 17.11.12 and it looks like the "--output-filename" option is not taken into account: all my outputs are going into slurm's output files. Can it be imposed or ignored by a slurm configuration? How is it possible to bypass that? Strangely, the

Re: [OMPI users] SLURM seems to ignore --output-filename option of OpenMPI

2019-10-10 Thread Eric Chamberland via users
Thanks, Eric On 2019-09-30 at 3:34 p.m., Eric Chamberland via users wrote: Hi, I am using OpenMPI 3.1.2 with slurm 17.11.12 and it looks like the "--output-filename" option is not taken into account: all my outputs are going into slurm's output files. Can it be imposed or

[OMPI users] Error code for I/O operations

2021-06-30 Thread Eric Chamberland via users
Hi, I have a simple question about error codes returned by the MPI_File_*_all* and MPI_File_open/close functions: if an error is returned, will it be the same for *all* processes? In other words, are error codes communicated under the hood so we, end users, can avoid adding a "reduce" on those
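
If one does not want to rely on the codes matching across ranks, a defensive option is to agree on the outcome explicitly. A minimal sketch (the file name and the reduction policy are illustrative choices, not something prescribed by the thread):

    #include <mpi.h>
    #include <stdio.h>

    /* Defensive pattern: open a file collectively, then let every rank
     * learn whether *any* rank failed, instead of assuming the error
     * code is identical everywhere.  "data.bin" is a placeholder. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        MPI_File fh;
        int rc = MPI_File_open(MPI_COMM_WORLD, "data.bin",
                               MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);

        int local_failed = (rc != MPI_SUCCESS);
        int any_failed = 0;
        MPI_Allreduce(&local_failed, &any_failed, 1, MPI_INT, MPI_MAX,
                      MPI_COMM_WORLD);

        if (local_failed) {
            char msg[MPI_MAX_ERROR_STRING];
            int len;
            MPI_Error_string(rc, msg, &len);
            fprintf(stderr, "MPI_File_open failed locally: %s\n", msg);
        } else if (any_failed) {
            /* Succeeded here, but another rank failed: close and bail
             * out collectively rather than continuing with the file. */
            MPI_File_close(&fh);
        } else {
            /* ... do collective I/O ..., then: */
            MPI_File_close(&fh);
        }

        MPI_Finalize();
        return 0;
    }

The cost is one extra MPI_Allreduce per open, which is usually negligible next to the I/O itself.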

[OMPI users] Status of pNFS, CephFS and MPI I/O

2021-09-23 Thread Eric Chamberland via users
Hi, I am looking around for information about parallel filesystems supported for MPI I/O. Clearly, GPFS and Lustre are fully supported, but what about others? - CephFS - pNFS - Other? When I "grep" for "pnfs\|cephfs" in the ompi source code, I find nothing... Otherwise I found this in

Re: [OMPI users] Status of pNFS, CephFS and MPI I/O

2021-09-23 Thread Eric Chamberland via users
From: users On Behalf Of Eric Chamberland via users Sent: Thursday, September 23, 2021 9:28 AM To: Open MPI Users Cc: Eric Chamberland; Vivien Clauzon Subject: [OMPI users] Status of pNFS, CephFS and MPI I/O Hi, I am looking around for information about parallel filesystems supported for M

[OMPI users] Segfault in ucp_dt_pack function from UCX library 1.8.0 and 1.11.2 for large sized communications using both OpenMPI 4.0.3 and 4.1.2

2022-06-01 Thread Eric Chamberland via users
Hi, In the past, we have successfully launched large (finite-element) computations using PARMetis as the mesh partitioner. We first succeeded in 2012 with OpenMPI (v2.?) and again in March 2019 with OpenMPI 3.1.2. Today, we have a bunch of nightly (small) tests running

Re: [OMPI users] Segfault in ucp_dt_pack function from UCX library 1.8.0 and 1.11.2 for large sized communications using both OpenMPI 4.0.3 and 4.1.2

2022-06-02 Thread Eric Chamberland via users
problems test with OMPI-5.0.x? Regarding the application, at some point it invokes MPI_Alltoallv, sending more than 2GB to some of the ranks (using derived datatypes), right? //WBR, Mikhail From: users On Behalf Of Eric Chamberland via users Sent: Thursday, June 2, 2022 5:31

Re: [OMPI users] Segfault in ucp_dt_pack function from UCX library 1.8.0 and 1.11.2 for large sized communications using both OpenMPI 4.0.3 and 4.1.2

2022-06-02 Thread Eric Chamberland via users
specific call, but I am not sure it is sending 2GB to a specific rank; maybe it has 2GB divided between many ranks. The fact is that this part of the code, when it works, does not create such a bump in memory usage... But I have to dig a bit more... Regards, Eric //WBR, Mikhail From: users
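
One way to "dig a bit more", sketched below under the assumption that the exchange goes through MPI_Alltoallv with a regular send datatype: log every per-destination payload whose byte size crosses 2 GiB before making the call, since even when the int counts fit, message sizes past 2^31 bytes are where 32-bit size fields inside the transport can overflow. The function name and the threshold are illustrative, not from the thread.

    #include <mpi.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Diagnostic sketch: before an MPI_Alltoallv, report every
     * destination whose payload reaches 2 GiB.  sendcounts[] and the
     * datatype are whatever the application already passes to the
     * collective; MPI_Type_size also works for derived datatypes. */
    static void report_large_messages(const int *sendcounts,
                                      MPI_Datatype dtype, MPI_Comm comm)
    {
        int me, nranks, elem_size;
        MPI_Comm_rank(comm, &me);
        MPI_Comm_size(comm, &nranks);
        MPI_Type_size(dtype, &elem_size);

        const int64_t limit = INT64_C(1) << 31;   /* 2 GiB */
        for (int dst = 0; dst < nranks; ++dst) {
            int64_t bytes = (int64_t)sendcounts[dst] * elem_size;
            if (bytes >= limit)
                fprintf(stderr, "rank %d -> %d: %lld bytes (>= 2 GiB)\n",
                        me, dst, (long long)bytes);
        }
    }

If some destination does cross the limit, splitting the exchange into several smaller rounds (or sending in units of a large contiguous derived datatype) keeps each individual message below 2 GiB.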

Re: [OMPI users] Segfault in ucp_dt_pack function from UCX library 1.8.0 and 1.11.2 for large sized communications using both OpenMPI 4.0.3 and 4.1.2

2022-06-10 Thread Eric Chamberland via users
Eric On 2022-06-01 23:31, Eric Chamberland via users wrote: Hi, In the past, we have successfully launched large (finite-element) computations using PARMetis as the mesh partitioner. We first succeeded in 2012 with OpenMPI (v2.?) and again in March 2019 with OpenMPI 3.1.2.

Re: [OMPI users] MPI I/O, Romio vs Ompio on GPFS

2022-06-11 Thread Eric Chamberland via users
module ompio What else can I do to dig into this? Are there GPFS-specific parameters that ompio is aware of? Thanks, Eric -- Eric Chamberland, ing., M. Ing., Research Professional, GIREF/Université Laval, (418) 656-2131 ext. 41 22 42 On 2022-06-10 16:23, Eric Chamberland via users wrote: Hi,

[OMPI users] MPI I/O, ROMIO and showing io mca parameters at run-time

2022-06-10 Thread Eric Chamberland via users
Hi, I want to try ROMIO with OpenMPI 4.1.2 because I am observing a big performance difference compared to IntelMPI on GPFS. I want to see, at *runtime*, all parameters (default values, names) used by MPI (at least for the "io" framework). I would like to have the same output as "ompi_info
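
For inspecting values from inside a run (rather than from ompi_info), one portable route is the MPI_T tools interface, which exposes an implementation's control variables at run time; whether every MCA "io" parameter appears there is an assumption to verify against ompi_info. A rough sketch that lists control variables whose name contains "io":

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    /* Sketch: enumerate MPI_T control variables and print those whose
     * name contains "io".  The substring filter is crude and will also
     * match unrelated names; refine as needed. */
    int main(void)
    {
        int provided, num_cvars;

        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        MPI_T_cvar_get_num(&num_cvars);

        for (int i = 0; i < num_cvars; ++i) {
            char name[256], desc[1024];
            int name_len = sizeof name, desc_len = sizeof desc;
            int verbosity, binding, scope;
            MPI_Datatype dtype;
            MPI_T_enum enumtype;

            if (MPI_T_cvar_get_info(i, name, &name_len, &verbosity,
                                    &dtype, &enumtype, desc, &desc_len,
                                    &binding, &scope) != MPI_SUCCESS)
                continue;

            if (strstr(name, "io") != NULL)
                printf("%s : %s\n", name, desc);
        }

        MPI_T_finalize();
        return 0;
    }

Reading the current value of a variable additionally requires MPI_T_cvar_handle_alloc and MPI_T_cvar_read; the snippet stops at names and descriptions.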

[OMPI users] CephFS and striping_factor

2022-11-28 Thread Eric Chamberland via users
Hi, I would like to know whether OpenMPI supports file creation with a "striping_factor" for CephFS. According to the CephFS library, I *think* it would be possible to do it at file creation with "ceph_open_layout". https://github.com/ceph/ceph/blob/main/src/include/cephfs/libcephfs.h Is it a

[OMPI users] How to force striping_factor (on lustre or other FS)?

2022-11-25 Thread Eric Chamberland via users
Hi, In 2012 we wrote and tested our functions that use MPI I/O to get good performance when doing I/O on a Lustre filesystem. Everything was fine with the "striping_factor" we passed at file creation. Now I am trying to verify some performance degradation we observed, and I am surprised
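
For reference, the usual way to request a stripe count through MPI I/O is an info hint given at file creation time; whether the io component in use actually forwards it to the filesystem is precisely what the two threads above are asking. A minimal sketch (the path, stripe values, and the choice of hints are placeholders):

    #include <mpi.h>
    #include <stdio.h>

    /* Sketch: pass "striping_factor" (and optionally "striping_unit")
     * through an MPI_Info object at file creation, then read the hints
     * back to see what the implementation kept. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "striping_factor", "8");      /* 8 stripes (example) */
        MPI_Info_set(info, "striping_unit", "1048576");  /* 1 MiB stripe size   */

        MPI_File fh;
        int rc = MPI_File_open(MPI_COMM_WORLD, "/lustre/scratch/output.bin",
                               MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);
        if (rc == MPI_SUCCESS) {
            MPI_Info used;
            char value[MPI_MAX_INFO_VAL + 1];
            int flag;
            MPI_File_get_info(fh, &used);
            MPI_Info_get(used, "striping_factor", MPI_MAX_INFO_VAL, value, &flag);
            if (flag)
                printf("striping_factor in effect: %s\n", value);
            MPI_Info_free(&used);
            MPI_File_close(&fh);
        }

        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }

Reading the hints back with MPI_File_get_info is a quick way to see whether the component silently dropped the request, independently of what the filesystem itself reports.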