Hi,
I am using OpenMPI 3.1.2 with Slurm 17.11.12, and it looks like the "--output-filename" option is not taken into account: all my output goes into Slurm's output files.
Can this behavior be forced or overridden by a Slurm configuration?
How can I work around it?
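For reference, the kind of invocation I am using (paths and program name are illustrative):

    mpirun --output-filename my_run_output ./my_app

My understanding (to be confirmed) is that when the tasks are launched directly with srun instead of mpirun, only srun's own redirection applies, e.g.:

    srun --output=my_run_output.%t ./my_app

where %t is replaced by the Slurm task id.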
Strangely, the ...
Thanks,
Eric
Hi,
I have a simple question about the error codes returned by the MPI_File_*_all* and MPI_File_open/close functions:
If an error is returned, will it be the same for *all* processes? In other words, are error codes communicated under the hood so that we, end users, can avoid adding a "reduce" on those error codes?
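Today, to be safe, I do something like the following (a minimal sketch; the file name is illustrative, and it relies on the default MPI_ERRORS_RETURN error handler that MPI attaches to files):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        MPI_File fh;
        /* File operations use MPI_ERRORS_RETURN by default, so err is usable. */
        int err = MPI_File_open(MPI_COMM_WORLD, "data.bin",
                                MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
        /* Agree on the outcome across all ranks before deciding what to do. */
        int ok = (err == MPI_SUCCESS), all_ok = 0;
        MPI_Allreduce(&ok, &all_ok, 1, MPI_INT, MPI_LAND, MPI_COMM_WORLD);
        if (!all_ok) {
            if (ok) MPI_File_close(&fh); /* opened here but failed elsewhere */
            fprintf(stderr, "MPI_File_open failed on at least one rank\n");
        } else {
            /* ... collective I/O ... */
            MPI_File_close(&fh);
        }
        MPI_Finalize();
        return 0;
    }

If the error codes are guaranteed to be consistent, the MPI_Allreduce above is exactly the call I would like to remove.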
Hi,
I am looking around for information about which parallel filesystems are supported for MPI I/O.
Clearly, GPFS and Lustre are fully supported, but what about others?
- CephFS
- pNFS
- Other?
When I "grep" for "pnfs\|cephfs" in the ompi source code, I find nothing...
Otherwise, I found this in ...
Hi,
In the past, we have successfully launched large (finite element) computations using ParMETIS as the mesh partitioner.
We first succeeded in 2012 with OpenMPI (v2.?) and again in March 2019 with OpenMPI 3.1.2.
Today, we have a bunch of nightly (small) tests running ...
... the problem tests with OMPI-5.0.x?
Regarding the application: at some point it invokes MPI_Alltoallv, sending more than 2GB to some of the ranks (using derived datatypes), right?
//WBR, Mikhail
... specific call, but I am not sure it is sending 2GB to one specific rank; it may be 2GB divided between many ranks. The fact is that this part of the code, when it works, does not create such a bump in memory usage... But I have to dig a bit more...
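If it turns out some count really overflows an int, the workaround I am considering is the classic one: make the counts smaller by counting larger derived-datatype blocks instead of bytes. A sketch (illustrative names; assumes every per-rank payload is a multiple of the block size):

    #include <mpi.h>
    #include <stdlib.h>

    /* Count 1 MiB blocks instead of bytes so counts and displacements fit
       in int even when a per-rank payload exceeds 2GB. */
    enum { BLOCK = 1 << 20 };

    static void big_alltoallv(const char *sendbuf, const long *sbytes,
                              char *recvbuf, const long *rbytes, MPI_Comm comm)
    {
        int np;
        MPI_Comm_size(comm, &np);
        int *scnt = malloc(np * sizeof *scnt), *sdsp = malloc(np * sizeof *sdsp);
        int *rcnt = malloc(np * sizeof *rcnt), *rdsp = malloc(np * sizeof *rdsp);

        MPI_Datatype block;
        MPI_Type_contiguous(BLOCK, MPI_BYTE, &block);
        MPI_Type_commit(&block);

        /* Counts and displacements are in units of 'block' (1 MiB), so they
           stay far below INT_MAX even for multi-GB payloads. */
        long soff = 0, roff = 0;
        for (int i = 0; i < np; ++i) {
            scnt[i] = (int)(sbytes[i] / BLOCK); sdsp[i] = (int)(soff / BLOCK);
            rcnt[i] = (int)(rbytes[i] / BLOCK); rdsp[i] = (int)(roff / BLOCK);
            soff += sbytes[i]; roff += rbytes[i];
        }
        MPI_Alltoallv(sendbuf, scnt, sdsp, block,
                      recvbuf, rcnt, rdsp, block, comm);

        MPI_Type_free(&block);
        free(scnt); free(sdsp); free(rcnt); free(rdsp);
    }

Payloads that are not a multiple of BLOCK would need a small tail exchange (or per-rank resized types); I left that out to keep the sketch short.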
Regards,
Eric
... module ompio
What else can I do to dig into this?
Are there GPFS-specific parameters that ompio is aware of?
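So far, the only knobs I know of are the generic inspection/selection ones (my understanding of the docs, so worth double-checking):

    ompi_info --param io all --level 9   # list all MCA parameters of the io framework
    mpirun --mca io ompio ./my_app       # force the ompio component
    mpirun --mca io romio321 ./my_app    # force the ROMIO component (name as of 4.1.x)

but none of them tells me what ompio actually does on GPFS.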
Thanks,
Eric
--
Eric Chamberland, ing., M. Ing
Research Professional
GIREF/Université Laval
(418) 656-2131 ext. 41 22 42
Hi,
I want to try romio with OpenMPI 4.1.2 because I am observing a big performance difference against IntelMPI on GPFS.
I want to see, at *runtime*, all the parameters (names and default values) used by MPI (at least for the "io" framework).
I would like to get the same output as "ompi_info ...
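The closest thing I have found so far (untested on my side, so this is an assumption) is the mpi_show_mca_params MCA parameter, which is supposed to dump the parameter values in effect at MPI_Init time:

    mpirun --mca mpi_show_mca_params all ./my_app

apparently with a companion mpi_show_mca_params_file parameter to redirect the dump to a file.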
Hi,
I would like to know whether OpenMPI supports file creation with a "striping_factor" hint on CephFS.
According to the CephFS library, I *think* this would be possible at file creation time with "ceph_open_layout":
https://github.com/ceph/ceph/blob/main/src/include/cephfs/libcephfs.h
Is it a ...
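For reference, this is how we create files with a striping hint today (a trimmed sketch; the file name and values are illustrative, and "striping_factor"/"striping_unit" are the reserved MPI-IO hints, honored only if the underlying fs component supports them):

    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "striping_factor", "8");     /* stripe over 8 targets */
        MPI_Info_set(info, "striping_unit", "1048576"); /* 1 MiB stripes */
        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);
        /* ... collective writes ... */
        MPI_File_close(&fh);
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }

The question is whether the CephFS case could map these hints onto ceph_open_layout.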
Hi,
In 2012, we wrote and tested our functions that use MPI I/O to get good performance while doing I/O on a Lustre filesystem. Everything was fine with the "striping_factor" we passed at file creation.
Now I am trying to verify some performance degradation we have observed, and I am surprised ...
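The first thing I am checking is whether the hint is still honored, by reading the hints back from the open file (a small sketch; file name illustrative):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "out.dat", MPI_MODE_RDONLY,
                      MPI_INFO_NULL, &fh);
        /* Ask the implementation which hints are actually in effect. */
        MPI_Info used;
        MPI_File_get_info(fh, &used);
        char value[MPI_MAX_INFO_VAL + 1];
        int flag;
        MPI_Info_get(used, "striping_factor", MPI_MAX_INFO_VAL, value, &flag);
        if (flag) printf("striping_factor in effect: %s\n", value);
        MPI_Info_free(&used);
        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }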