On Mon, Feb 24, 2020 at 4:57 PM Gabriel, Edgar
wrote:
> I am not an expert for the one-sided code in Open MPI, but I wanted to
> comment briefly on the potential MPI-IO related item. As far as I can see,
> the error message
>
> “Read -1, expected 48, errno = 1”
>
> does not stem from MPI I/O. (Note that errno = 1 is EPERM, "Operation not
> permitted", which points at a permissions problem rather than an actual
> I/O failure.)
> After fixing the above you may need to
> check yama on the host. You can check with "sysctl
> kernel.yama.ptrace_scope"; if it returns a value other than 0 you may
> need to disable it with "sysctl -w kernel.yama.ptrace_scope=0".
>
> Adam
>
> ---
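A minimal sketch of that check-and-disable sequence on a typical Linux host
(the persistent /etc/sysctl.d drop-in is an assumption about the distro's
conventions):

  # check the current value (0 means classic ptrace permissions)
  $ sysctl kernel.yama.ptrace_scope

  # relax the restriction for the running kernel (needs root)
  $ sudo sysctl -w kernel.yama.ptrace_scope=0

  # assumed persistent variant: a drop-in under /etc/sysctl.d
  $ echo 'kernel.yama.ptrace_scope = 0' | sudo tee /etc/sysctl.d/10-ptrace.conf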
> https://www.open-mpi.org/faq/?category=sm
>
> for more info.
>
> Prentice
>
> On 3/18/21 12:28 PM, Matt Thompson via users wrote:
>
> All,
>
> This isn't specifically an Open MPI issue, but as that is the MPI stack I
> use on my laptop, I'm hoping someone here might have a possible solution.
> (I am pretty sure something like MPICH would trigger this as well.)
> Namely, my employer recently did something somewhere so that now *any* MPI
> mpirun ...
>
> Cheers,
>
> Gilles
>
> On Fri, Mar 19, 2021 at 5:44 AM Matt Thompson via users
> wrote:
> >
> > Prentice,
> >
> > Ooh. The first one seems to work. The second one apparently is not liked
> > by zsh and I had to do:
> > ❯ mpirun
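(The command is cut off above, but the usual zsh complaint here is a quoting
issue: with extendedglob set, zsh treats '^', which Open MPI uses to negate
an MCA component list such as ^tcp, as a pattern character, so the value
needs quoting. A hedged illustration; the btl selection itself is an
assumption:)

  # quote the '^' so zsh passes it to mpirun untouched
  ❯ mpirun --mca btl '^tcp' -np 4 ./a.out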
or some/all of the mpi_f08 module. That's why the
> configure tests are... complicated.
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
> ________
> From: users on behalf of Matt Thompson
> via users
> Sent: Thursday, December 23, 2021
> Jeff Squyres
> jsquy...@cisco.com
>
>
> From: users on behalf of Matt Thompson
> via users
> Sent: Thursday, December 30, 2021 9:56 AM
> To: Open MPI Users
> Cc: Matt Thompson; Christophe Peyret
> Subject: Re: [OMPI users] Mac OS + openmp
All,
When I build Open MPI with NAG, I have to pass in:
FCFLAGS"=-mismatch_all -fpp"
this flag tells nagfor to downgrade some errors with interfaces to warnings:
-mismatch_all
Further downgrade consistency checking of procedure
argument lists so that calls to routines
Regards,
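A sketch of how those flags might be passed at configure time (everything
besides FC and FCFLAGS here, including the prefix, is an assumption):

  $ ../configure FC=nagfor FCFLAGS="-mismatch_all -fpp" \
        --prefix=$HOME/opt/openmpi-nag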
>
> On Thu, 23 Dec 2021, 13:18 Matt Thompson via users, <
> users@lists.open-mpi.org> wrote:
>
>> Oh. Yes, I am on macOS. The Linux cluster I work on doesn't have NAG 7.1
>> on it...mainly because I haven't asked for it. Until NAG fix the bug we
>> are seeing, I figured why bother the admins.
>> Still, it does *seem* like it should work. I might ask NAG support about
>> it.
On Wed, Dec 22, 2021 at 6:28
Oh yeah. I know that error. This is due to a long-standing issue with Intel
on macOS and Open MPI:
https://github.com/open-mpi/ompi/issues/7615
You need to configure Open MPI with "lt_cv_ld_force_load=no" at the
beginning. (You can see an example at the top of my modulefile here:
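A minimal sketch of such an invocation (the compiler selection mirrors the
fuller oneAPI example later in this digest; the prefix is an assumption):

  $ lt_cv_ld_force_load=no ../configure CC=clang CXX=clang++ FC=ifort \
        --prefix=$HOME/opt/openmpi-oneapi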
>> and then edit the generated libtool file:
>> look for a line like
>>
>> CC="nagfor"
>>
>> and then edit the next line
>>
>>
>> # Commands used to build a shared archive.
>>
>> archive_cmds="\$CC -dynamiclib \$allow_undef ..."
>>
>> simply manually remove "-dynamiclib" here and see if it helps
>
>
> Cheers,
>
> Gilles
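An untested one-liner equivalent of that manual edit (assumes the generated
libtool script sits at the top of the build tree):

  # drop the flag everywhere; sed keeps a backup as libtool.bak
  $ sed -i.bak 's/ -dynamiclib//g' libtool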
> On Fri, Oct 29, 2021 at 12:30 AM Matt Thompson via users <
> users@lists.open-mpi.org> wrote:
Dear Open MPI Gurus,
This is a...confusing one. For some reason, I cannot build a working Open
MPI with NAG 7.0.7062 and clang on my MacBook running macOS 11.6.1. The
thing is, I could do this back in July with NAG 7.0.7048. So my fear is
that something changed with macOS, or clang/xcode, or
Open MPI List,
Recently in trying to build some libraries with NVHPC + Open MPI, I hit an
error building HDF5 where it died at configure time saying that the zlib
that Open MPI wanted to link to (my system one) was incompatible with the
zlib I built in my libraries leading up to HDF5. So, in the
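One way to see which libraries the Open MPI wrapper compilers inject at link
time, and so spot a stray system zlib, is their --showme option; a hedged
illustration:

  # linker flags mpicc would append
  $ mpicc --showme:link

  # full underlying command line for a given compile
  $ mpicc --showme hello.c -o hello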
I have built Open MPI 5 (well, 5.0.0rc12) with Intel oneAPI under Rosetta2
with:
$ lt_cv_ld_force_load=no ../configure --disable-wrapper-rpath \
    --disable-wrapper-runpath \
CC=clang CXX=clang++ FC=ifort \
--with-hwloc=internal --with-libevent=internal --with-pmix=internal
I'm fairly sure
On my Mac I build Open MPI 5 with (among other flags):
--with-hwloc=internal --with-libevent=internal --with-pmix=internal
In my case, I should have had libevent through brew, but it didn't seem to
see it. But then I figured I might as well let Open MPI build its own for
convenience.
Matt
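One way to confirm which hwloc/libevent/PMIx a finished build actually picked
up is to grep ompi_info's output (a sketch; exact line formats vary across
versions):

  $ ompi_info | grep -iE 'hwloc|libevent|pmix'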
On