Hi Jeff,
> On 09.08.2022 at 16:17, Jeff Squyres (jsquyres) via users wrote:
>
> Just to follow up on this thread...
>
> Reuti: I merged the PR onto the main docs branch. They're now live -- we
> changed the text:
> • here:
> https://docs.open-mpi
`ldd`.)
Looks like I can get the intended behavior while configuring Open MPI on this
(older) system:
$ ./configure … LDFLAGS=-Wl,--enable-new-dtags
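(A quick way to verify the flag took effect, with a hypothetical install path:

$ readelf -d /opt/openmpi/bin/mpiexec | egrep 'RPATH|RUNPATH'

With --enable-new-dtags the linker records a RUNPATH entry instead of RPATH, so LD_LIBRARY_PATH is honored again at run time.)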
-- Reuti
Hi,
What about putting "-static-intel" into a configuration file for the Intel
compiler? Besides the default configuration, one can have a local one and put
its path in the environment variable IFORTCFG (there are others for C/C++).
$ cat myconf
--version
$ export IFORTCFG=/
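(For illustration, a complete setup might look like this; file name and path are made up:

$ printf -- '-static-intel\n' > $HOME/myconf
$ export IFORTCFG=$HOME/myconf
$ ifort hello.f90 -o hello

Every subsequent `ifort` invocation then picks up the flags from that file.)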
lved and/or
replace vader? This was the reason I found '-mca btl ^openib' more appealing
than listing all others.
-- Reuti
> Prentice
>
> On 7/23/20 3:34 PM, Prentice Bisbal wrote:
>> I manage a cluster that is very heterogeneous. Some nodes have InfiniBand,
>> while
tell Open MPI where it is
> installed?
There is OPAL_PREFIX to be set:
https://www.open-mpi.org/faq/?category=building#installdirs
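(A minimal sketch, assuming the tree was moved to /opt/openmpi-moved; the path is made up:

$ export OPAL_PREFIX=/opt/openmpi-moved
$ export PATH=$OPAL_PREFIX/bin:$PATH
$ export LD_LIBRARY_PATH=$OPAL_PREFIX/lib:$LD_LIBRARY_PATH
$ mpiexec -np 4 ./a.out
)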
-- Reuti
ch node only once for sure. AFAIR
there was a setting in Torque to allow or disallow multiple selections of the
fixed allocation rule per node.
HTH -- Reuti
ing all necessary environment
variables inside the job script itself, so that it is self-contained.
Maybe they judge it a security issue, as this variable would also be present in
case you run a queue prolog/epilog as a different user. For the plain job
itself it wouldn't matter IMO.
And for any further investigation: which problem do you face in detail?
-- Reuti
> that the feature was already there!)
>
> For the most part, this whole thing needs to get documented.
Especially that the colon is a disallowed character in the directory name. Any
suffix :foo will just be removed AFAICS without any error output about foo
being an unknown option.
--
e.test/bin/grid-sshd -i
> rlogin_command builtin
> rlogin_daemonbuiltin
> rsh_command builtin
> rsh_daemon builtin
That's fine. I wondered whether rsh_* would contain a redirection to
he length of the hostname
it's running on?
If the admins are nice, they could define a symbolic link directly as /scratch
pointing to /var/spool/sge/wv2/tmp and set up in the queue configuration
/scratch as being TMPDIR. Effect and location like now, but saves some
characters.
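(A sketch of what the admins would run, assuming root access and a queue named all.q:

# ln -s /var/spool/sge/wv2/tmp /scratch
# qconf -mq all.q     # and set: tmpdir /scratch
)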
-- Reuti
ing the
applications.
Side note: Open MPI binds the processes to cores by default. In case more than
one MPI job is running on a node one will have to use `mpiexec --bind-to none
…` as otherwise all jobs on this node will use core 0 upwards.
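(For example, with a made-up job size:

$ mpiexec --bind-to none -np 8 ./a.out
)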
-- Reuti
> Thanks!
>
> -David Laidlaw
>
ing under the
control of a queuing system. It should use `qrsh` in your case.
What does:
mpiexec --version
ompi_info | grep grid
reveal? What does:
qconf -sconf | egrep "(command|daemon)"
show?
-- Reuti
> Cheers,
>
> -David Laidlaw
>
>
>
>
> He
Hi,
On 17.04.2019 at 11:07, Mahmood Naderan wrote:
> Hi,
> After successful installation of v4 in a custom location, I see some errors
> while the default installation (v2) doesn't.
Did you also recompile your application with this version of Open MPI?
-- Reuti
> $ /sha
s? For me, using export
OPAL_PREFIX=… beforehand has worked so far.
-- Reuti
> On 09.04.2019 at 14:52, Dave Love wrote:
>
> Reuti writes:
>
>> export OPAL_PREFIX=
>>
>> to point it to the new location of installation before you start `mpiexec`.
>
> Thanks; that's now familiar, and I don't know how I missed it with
>
to run it from home without containers etc.) I thought that was
> possible, but I haven't found a way that works. Using --prefix doesn't
> find help files, at least.
export OPAL_PREFIX=
to point it to the new location of instal
ies and installed the runtime environment also
with the package manager of your distribution, I would suggest installing
"libopenmpi-dev" (and only this one to avoid conflicts with wrappers from other
MPI implementations).
-- Reuti
PS: Interesting that t
communications.
>
> Is my final statement correct?
In my opinion: no.
A job scheduler can serialize the workflow and run one job after the other as
free resources become available. Their usage may overlap in certain cases, but MPI and a
job scheduler don't compete.
-- Reuti
> Thanks a lot
>
Spectrum MPI show here? While Platform-MPI was something unique, I
thought Spectrum MPI is based on Open MPI.
How does this effect manifest in Spectrum MPI? It changes between each
compilation of all your source files, i.e. foo.c sees different values than baz.c,
despite the fact that th
submit.c
openmpi-3.1.2/orte/orted/.deps/liborted_mpir_la-orted_submit.Plo
-- Reuti
> Note that we are (gradually) replacing orte-dvm with PRRTE:
>
> https://github.com/pmix/prrte
>
> See the “how-to” guides for PRRTE towards the bottom of this page:
> https://pmix.org/supp
Hi,
Should orte-submit/ompi-submit still be available in 3.x.y? I can spot the
source, but it's neither built, nor is any man page included.
-- Reuti
ication at the same time in a
cluster, which might be referred to as "running in parallel", but depending on
the context such a statement might be ambiguous.
But if you need the result of the first image or computation to decide how
to proceed, then it's advantageous to paral
> On 10.08.2018 at 17:24, Diego Avesani wrote:
>
> Dear all,
> I have probably understood.
> The trick is to use a real vector and to also memorize the rank.
Yes, I thought of this:
https://www.mpi-forum.org/docs/mpi-1.1/mpi-11-html/node79.html
-- Reuti
> Have I un
following:
>
> CALL MPI_ALLREDUCE(eff, effmaxWorld, 1, MPI_DOUBLE_PRECISION, MPI_MAX,
> MPI_MASTER_COMM, MPIworld%iErr)
Would MPI_MAXLOC be sufficient? It reduces (value, index) pairs and returns the
maximum together with the index (typically the rank) attached to it.
-- Reuti
> However, I would like also to know to which CPU that value belongs. Is it
> possible?
>
> I have set-up a s
> On 10.04.2018 at 13:37, Noam Bernstein wrote:
>
>> On Apr 10, 2018, at 4:20 AM, Reuti wrote:
>>
>>>
>>> On 10.04.2018 at 01:04, Noam Bernstein wrote:
>>>
>>>> On Apr 9, 2018, at 6:36 PM, George Bosilca wrote:
>>>>
with the 3.0.0).
>
> Correct.
>
>> Also according to your stacktrace I assume it is an x86_64, compiled with
>> icc.
>
> x86_64, yes, but, gcc + ifort. I can test with gcc+gfortran if that’s
> helpful.
Was there any reason not to choose icc + ifort?
-- Reuti
pen MPI in $MKLROOT/interfaces/mklmpi with identical results.
-- Reuti
a hint about an "EIEIO" command only. Sure,
in-order-execution might slow down the system too.
-- Reuti
>
> * containers and VMs don’t fully resolve the problem - the only solution
> other than the patches is to limit allocations to single users on a node
>
> HTH
> Ralp
> On 28.11.2017 at 17:19, Reuti wrote:
>
> Hi,
>
>> On 28.11.2017 at 15:58, Vanzo, Davide wrote:
>>
>> Hello all,
>> I am having a very weird problem with mpifort that I cannot understand.
>> I am building OpenMPI 1.10.3 with GCC 5.4.0 with Easy
on advisable,
the culprit seems to lie in:
> cannot open /usr/lib64/libgfortran.so: No such file or directory
Does this symbolic link exist? Does it point to your installed one too?
Maybe the developer package of GCC 5.4.0 is missing. Hence it looks for
libgfortran.so somewhere else and finds only a
nux/bin/intel64 reveals:
ARGBLOCK_%d
ARGBLOCK_REC_%d
So it looks like the output is generated on the fly and doesn't point to any
existing variable. But to which argument of which routine it refers is still unclear.
Does the Intel Compiler have the feature to output a cross-reference of all used
va
obscript to feed
an "adjusted" $PE_HOSTFILE to Open MPI, and then it works as intended: Open
MPI creates forks.
Does anyone else need such a patch in Open MPI and is it suitable to be
included?
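(A sketch of the idea without a patch, done inside the jobscript; the sed expression is purely hypothetical and depends on how your internal/external hostnames differ:

cp $PE_HOSTFILE $TMPDIR/pe_hostfile
sed -i 's/^\([^ ]*\)-ext /\1 /' $TMPDIR/pe_hostfile   # strip a hypothetical "-ext" suffix from the hostname
export PE_HOSTFILE=$TMPDIR/pe_hostfile
mpiexec ./a.out
)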
-- Reuti
PS: Only the headnodes have more than one network interface in our cas
IBRARY_PATH
>
> this is the easiest option, but cannot be used if you plan to relocate the
> Open MPI installation directory.
There is the tool `chrpath` to change the rpath and runpath inside a
binary/library. This has to match the relocated directory then.
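(For example, with made-up paths:

$ chrpath -l /opt/openmpi-moved/lib/libmpi.so     # show the current RPATH/RUNPATH
$ chrpath -r /opt/openmpi-moved/lib /opt/openmpi-moved/lib/libmpi.so

Note that `chrpath -r` can only write a path that is not longer than the one already stored in the binary.)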
-- Reuti
> an other
> How do I get around this cleanly? This works just fine when I set
> LD_LIBRARY_PATH in my .bashrc, but I’d rather not pollute that if I can avoid
> it.
Do you set or extend the LD_LIBRARY_PATH in your .bashrc?
-- Reuti
arently
> never propagated through remote startup,
Isn't it a setting inside SGE which the sge_execd is aware of? I never exported
any environment variable for this purpose.
-- Reuti
> so killing those orphans after
> VASP crashes may fail, though resource reporting works. (I ne
rsions. So I can't comment on this for sure, but it seems to set the memory
also in cgroups.
-- Reuti
> mpirun just uses the nodes that SGE provides.
>
> What your cmd line does is restrict the entire operation on each node (daemon
> + 8 procs) to 40GB of memory. OMPI
a string in the environment variable, you may want to use the
plain value in bytes there.
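(For illustration, computing 40G as a plain byte value; the variable name is made up:

$ export MEM_LIMIT_BYTES=$((40 * 1024 * 1024 * 1024))
$ echo $MEM_LIMIT_BYTES
42949672960
)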
-- Reuti
e their headers installed on it. Then configure OMPI
> --with-xxx pointing to each of the RM’s headers so all the components get
> built. When the binary hits your customer’s machine, only those components
> that have active libraries present will execute.
Just note, th
> qsub -pe orte 8 -b y -V -l m_mem_free=40G -cwd mpirun -np 8 a.out
m_mem_free is part of Univa SGE (but not of the various free SGE versions, AFAIK).
Also: this syntax is for SGE; in LSF it's different.
To have this independent from the actual queuing system, one could look into
DR
is an additional point: which one?
It might be that you have to put the two exports of PATH and LD_LIBRARY_PATH
in your jobscript instead, in case you never want to run the application from the
command line in parallel.
-- Reuti
>
> Date: Tue
64?
2) Do you source .bashrc also for interactive logins? Otherwise it should go into
~/.bash_profile or ~/.profile.
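(A sketch of what would go into ~/.bash_profile instead; the install prefix is an assumption:

export PATH=/opt/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH
)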
>
>
> Date: Tue 30.5.17, Reuti wrote:
>
> Subject: Re: [OMPI users] No components were able to be opened in the pml
chemistry
> program.
Did you compile Open MPI on your own? Did you move it after the installation?
-- Reuti
Hi,
On 23.05.2017 at 05:03, Tim Jim wrote:
> Dear Reuti,
>
> Thanks for the reply. What options do I have to test whether it has
> successfully built?
Like before: can you compile and run mpihello.c this time, all as an ordinary
user, in case you installed the Open MPI into so
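(For example, all as an ordinary user:

$ mpicc mpihello.c -o mpihello
$ mpiexec -np 2 ./mpihello
)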
s.
> Regarding the final part of the email, is it a problem that 'undefined
> reference' is appearing?
Yes, it tried to resolve missing symbols and didn't succeed.
-- Reuti
>
> Thanks and regards,
> Tim
>
> On 22 May 2017 at 06:54, Reuti wrote:
>
_LIBRARY_PATH differently
I don't think that Ubuntu will do anything differently than any other Linux.
Did you compile Open MPI on your own, or did you install it from a repository?
Are the CUDA applications written by yourself, or are they freely available
applications?
-- Reuti
> and instead add
As I think it's not relevant to Open MPI itself, I answered in PM only.
-- Reuti
> On 18.05.2017 at 18:55, do...@mail.com wrote:
>
> On Tue, 9 May 2017 00:30:38 +0200
> Reuti wrote:
>> Hi,
>>
>> Am 08.05.2017 um 23:25 schrieb David Niklas:
>>
ing to download the
community edition (even the evaluation link on the Spectrum MPI page does the
same).
-- Reuti
> based on OpenMPI, so I hope there are some MPI expert can help me to solve
> the problem.
>
> When I run a simple Hello World MPI program, I get the follow error message:
the intended task the only option is to use a single machine with as many
cores as possible AFAICS.
-- Reuti
On 25.04.2017 at 17:27, Reuti wrote:
> Hi,
>
> In case Open MPI is moved to a different location than it was installed into
> initially, one has to export OPAL_PREFIX. While checking for the availability
> of the GridEngine
ed
place, an appropriate output should go to stderr and the exit code set to 1.
-- Reuti
Due to the last post in this thread, the copy I suggested seems not to be
possible, but I also want to test whether this post goes through to the list
now.
-- Reuti
===
Hi,
> On 19.04.2017 at 19:53, Jim Edwards wrote:
>
> Hi,
>
> I have openmpi-2.0.2 builds on two differe
MPI process or is the application issuing many `mpiexec` during
its runtime?
Is there any limit on how often `ssh` may access a node within a timeframe? Do you use
any queuing system?
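(One such limit lives in sshd itself; whether it applies to your setup is an assumption:

$ grep -i '^MaxStartups' /etc/ssh/sshd_config

MaxStartups throttles concurrent unauthenticated connections, so many `ssh` logins in rapid succession can be dropped.)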
-- Reuti
> On 10.04.2017 at 17:27, r...@open-mpi.org wrote:
>
>
>> On Apr 10, 2017, at 1:37 AM, Reuti wrote:
>>
>>>
>>> On 10.04.2017 at 01:58, r...@open-mpi.org wrote:
>>>
>>> Let me try to clarify. If you launch a job that has only 1 o
> On 10.04.2017 at 00:45, Reuti wrote:
> […] BTW: I always had to use -ldl when using `mpicc`. Now that I compiled in
> libnuma, this necessity is gone.
Looks like I compiled too many versions in the last couple of days. The -ldl is
necessary in case --disable-shared --enable-s
h looks like being bound to socket.
-- Reuti
> You can always override these behaviors.
>
>> On Apr 9, 2017, at 3:45 PM, Reuti wrote:
>>
>>>> But I can't see a binding by core for number of processes <= 2. Does it
>>>> mean 2 per node or 2 ov
y that warning in addition that the memory
couldn't be bound.
BTW: I always had to use -ldl when using `mpicc`. Now that I compiled in
libnuma, this necessity is gone.
-- Reuti
this socket has
other jobs running (by accident).
So, this is solved - I wasn't aware of the binding by socket.
But I can't see a binding by core for number of processes <= 2. Does it mean 2
per node or 2 overall for the `mpiexec`?
-- Reuti
>
>> On Apr 9, 2017, at 3:4
ht it might be because of:
- We define plm_rsh_agent=foo in $OMPI_ROOT/etc/openmpi-mca-params.conf
- We compiled with --with-sge
But also when started on the command line by `ssh` to the nodes, there seems to be
no automatic core binding taking place any longer.
--
mpilation in my home
directory by a plain `export`. I can spot:
$ ldd libmpi_cxx.so.20
…
libstdc++.so.6 =>
/home/reuti/local/gcc-6.2.0/lib64/../lib64/libstdc++.so.6 (0x7f184d2e2000)
So this looks fine (although /lib64/../lib64/ looks nasty). In the library, the
On 03.04.2017 at 23:07, Prentice Bisbal wrote:
> FYI - the proposed 'here-doc' solution below didn't work for me, it produced
> an error. Neither did printf. When I used printf, only the first arg was
> passed along:
>
> #!/bin/bash
>
> realcmd=
d by the configure tests, that's a bit of a problem. Just
> adding another -E before $@ should fix the problem.
It's often suggested to use printf instead of the non-portable echo.
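(For example, a portable way for a wrapper script to forward all of its arguments:

#!/bin/sh
# print each argument on its own line; printf has none of echo's portability pitfalls
printf '%s\n' "$@"
)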
-- Reuti
>
> Prentice
>
> On 04/03/2017 03:54 PM, Prentice Bisbal wrote:
>>
or the same part of the CPU, essentially
becoming a bottleneck. But using each half of a CPU for two (or even more)
applications will allow a better interleaving in the demand for resources. To
allow this in the best way: no taskset or binding to cores, let the Linux
kernel and CPU do their best - Y
gone after the hints on the discussion's link you posted?
As I see it, there are still some about "libeevent".
-- Reuti
>
> *** C++ compiler and preprocessor
> checking whether we are using the GNU C++ compiler... yes
> checking whether pgc++ accepts -g... yes
> checking
> On 22.03.2017 at 15:31, Heinz-Ado Arnolds wrote:
>
> Dear Reuti,
>
> thanks a lot, you're right! But why did the default behavior change but not
> the value of this parameter:
>
> 2.1.0: MCA plm rsh: parameter "plm_rsh_agent" (current value: "
o the 1.10.6 (use SGE/qrsh)
> one? Are there mca params to set this?
>
> If you need more info, please let me know. (Job submitting machine and target
> cluster are the same with all tests. SW is residing in AFS directories
> visible on all machines. Parameter "plm_rsh_disable_qrsh"
Hi,
Only by reading recent posts did I become aware of the DVM. This would be a welcome
feature for our setup*. But not all options work as expected - is it
still a work in progress, or should everything work as advertised?
1)
$ soft@server:~> orte-submit -cf foo --hnp file:/home/reuti/dvmuri -
using DVM often leads to a terminated DVM once a process
returned with a non-zero exit code. But once the DVM is gone, the queued jobs
might be lost too, I fear. I would wish that the DVM could be more forgiving
(or that its behavior in case of a non-zero exit code were adjustable).
-- Reuti
er.
Under which user account will the DVM daemons run? Are all users using the same
account?
-- Reuti
ured to use SSH? (I mean the entries in `qconf
-sconf` for rsh_command and rsh_daemon).
-- Reuti
> Can see the gridengine component via:
>
> $ ompi_info -a | grep gridengine
> MCA ras: gridengine (MCA v2.1.0, API v2.0.0, Component v2.0.2)
> MCA ras gridengin
her. For a first test you can
start both with "mpiexec --bind-to none ..." and check whether you see a
different behavior.
`man mpiexec` mentions some hints about threads in applications.
-- Reuti
>
>
> Regards,
> Mahmood
>
>
to. When I type in the command mpiexec -f hosts -n 4 ./applic
>
> I get this error
> [mpiexec@localhost.localdomain] HYDU_parse_hostfile
> (./utils/args/args.c:323): unable to open host file: hosts
As you mentioned MPICH and their Hydra startup, you'd better ask on their list:
http://www.mpi
ved from all nodes.
>
> While I know there are better ways to test OpenMPI's functionality,
> like compiling and using the programs in examples/, this is the method
> a specific client chose.
There are small "Hello world" programs like here:
http://mpitutorial.com/tutor
march=bdver1, which
Gilles mentioned) or to tell me what it thinks it should compile for?
For pgcc there is -show and I can spot the target it discovered in the
USETPVAL= line.
-- Reuti
>
> The solution was (as stated by guys) building Siesta on the compute node. I
> have to say that I teste
d and computes).
Would it work to compile with a shared target and copy it to /shared on the
frontend?
-- Reuti
> An important question is that, how can I find out what is the name of the
> illegal instruction. Then, I hope to find the document that points which
> instruction se
unction `load_driver':
> (.text+0x331): undefined reference to `dlerror'
> /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/libibverbs.a(src_libibverbs_la-init.o):
> In function `ibverbs_init':
> (.text+0xd25): undefined reference to `dlopen'
> /usr/lib/gcc/
t; I build libverbs from source first? Am I on the right direction?
The "-l" includes already the "lib" prefix when it tries to find the library.
Hence "-libverbs" might be misleading due to the "lib" in the word, as it
I didn't find the time to look further into
it. See my post from Aug 11, 2016. With older versions of Open MPI it wasn't
necessary to supply it in addition.
-- Reuti
>
> Cheers,
>
> Gilles
>
>
>
> On Wednesday, September 14, 2016, Mahmood Naderan
> wrot
ch-mp as this is a
different implementation of MPI, not Open MPI. Also the default location of
Open MPI isn't mpich-mp.
- what does:
$ mpicc -show
$ which mpicc
output?
- which MPI library was used to build the parallel FFTW?
-- Reuti
> Undefined symbols for archit
On 16.08.2016 at 13:26, Jeff Squyres (jsquyres) wrote:
> On Aug 12, 2016, at 2:15 PM, Reuti wrote:
>>
>> I updated my tools to:
>>
>> autoconf-2.69
>> automake-1.15
>> libtool-2.4.6
>>
>> but I face with Open MPI's ./autogen.pl:
> how/why it got deleted.
>
> https://github.com/open-mpi/ompi/pull/1960
Yep, it's working again - thx.
But surely there was a reason behind the removal, which may need to be discussed in
the Open MPI team to avoid any side effects from fixing this issue.
-- Reuti
PS: The other items
macro: AC_PROG_LIBTOOL
I recall seeing it already before - how do I get rid of it? For now I fixed the
single source file just by hand.
-- Reuti
> As for the blank in the cmd line - that is likely due to a space reserved for
> some entry that you aren’t using (e.g., when someone manually
d, try again later.
Sure, the name of the machine is allowed only after the additional "-inherit"
to `qrsh`. Please see below for the complete one in 1.10.3; hence the
assembly also seems not to be done in the correct way.
-- Reuti
> On Aug 11, 2016, at 4:28 AM,
a
>>
>> mostly because you still get to set the path once and use it many times
>> without duplicating code.
>>
>>
>> For what it's worth, I've seen Ralph's suggestion generalized to something
>> like
>>
>> PREFIX=$PWD/arch
> On 11.08.2016 at 13:28, Reuti wrote:
>
> Hi,
>
> In the file orte/mca/plm/rsh/plm_rsh_component I see an if-statement which
> seems to prevent the tight integration with SGE from starting:
>
>if (NULL == mca_plm_rsh_component.agent) {
>
> Why is it there (i
tional blank.
==
I also notice that I have to supply "-ldl" to `mpicc` to allow the compilation
of an application to succeed in 2.0.0.
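(For example, with a made-up source file:

$ mpicc app.c -o app -ldl
)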
-- Reuti
different stdin/-out/-err in DRMAA by setting
drmaa_input_path/drmaa_output_path/drmaa_error_path for example?
-- Reuti
> mpi_comm_spawn("/bin/sh","-c","siesta < infile",..) definitely does not work.
>
> Patching siesta to start as "siesta
them more easily (i.e.
terminate, suspend,...).
-- Reuti
http://www.drmaa.org/
https://arc.liv.ac.uk/SGE/howto/howto.html#DRMAA
> Alex
>
> 2014-12-12 22:35 GMT-02:00 Gilles Gouaillardet
> :
> Alex,
>
> You need MPI_Comm_disconnect at least.
> I am not sure if this is 1
Hi,
please have a look here:
http://www.open-mpi.org/faq/?category=building#installdirs
-- Reuti
On 09.12.2014 at 07:26, Manoj Vaghela wrote:
> Hi OpenMPI Users,
>
> I am trying to build OpenMPI libraries using standard configuration and
> compile procedure. It is just the on
ppreciate your replies and will read them thoroughly. I think it's best to
continue with the discussion after SC14. I don't want to put any burden on
anyone when time is tight.
-- Reuti
> These points are in no particular order...
>
> 0. Two fundamental points have been
to both the oob and tcp/btl?
Yes.
> Obviously, this won’t make it for 1.8 as it is going to be fairly intrusive,
> but we can probably do something for 1.9
>
>> On Nov 13, 2014, at 4:23 AM, Reuti wrote:
>>
>> On 13.11.2014 at 00:34, Ralph Castain wrote:
On 13.11.2014 at 00:34, Ralph Castain wrote:
>> On Nov 12, 2014, at 2:45 PM, Reuti wrote:
>>
>> On 12.11.2014 at 17:27, Reuti wrote:
>>
>>> On 11.11.2014 at 02:25, Ralph Castain wrote:
>>>
>>>> Another thing you can do is (a) ensure y
Gus,
On 13.11.2014 at 02:59, Gus Correa wrote:
> On 11/12/2014 05:45 PM, Reuti wrote:
>> On 12.11.2014 at 17:27, Reuti wrote:
>>
>>> On 11.11.2014 at 02:25, Ralph Castain wrote:
>>>
>>>> Another thing you can do is (a) ensure you built with --e
> no problem obfuscating the ip of the head node, i am only interested in
> netmasks and routes.
>
> Ralph Castain wrote:
>>
>>> On Nov 12, 2014, at 2:45 PM, Reuti wrote:
>>>
>>> On 12.11.2014 at 17:27, Reuti wrote:
>>>
>>>>
On 12.11.2014 at 17:27, Reuti wrote:
> On 11.11.2014 at 02:25, Ralph Castain wrote:
>
>> Another thing you can do is (a) ensure you built with --enable-debug, and
>> then (b) run it with -mca oob_base_verbose 100 (without the tcp_if_include
>> option) so we can watch
the internal or external name of the headnode
given in the machinefile - I hit ^C then. I attached the output of Open MPI
1.8.1 for this setup too.
-- Reuti
Wed Nov 12 16:43:12 CET 2014
[annemarie:01246] mca: base: components_register: registering oob components
[annemarie:0124
-mca hwloc_base_binding_policy none
So, the bash was removed. But I don't think that this causes anything.
-- Reuti
> Cheers,
>
> Gilles
>
> On Mon, Nov 10, 2014 at 5:56 PM, Reuti wrote:
> Hi,
>
> On 10.11.2014 at 16:39, Ralph Castain wrote:
>
>
On 11.11.2014 at 19:29, Ralph Castain wrote:
>
>> On Nov 11, 2014, at 10:06 AM, Reuti wrote:
>>
>> On 11.11.2014 at 17:52, Ralph Castain wrote:
>>
>>>
>>>> On Nov 11, 2014, at 7:57 AM, Reuti wrote:
>>>>
>>>> On 11.1
On 11.11.2014 at 17:52, Ralph Castain wrote:
>
>> On Nov 11, 2014, at 7:57 AM, Reuti wrote:
>>
>> On 11.11.2014 at 16:13, Ralph Castain wrote:
>>
>>> This clearly displays the problem - if you look at the reported “allocated
>>> nodes”, you se
us the content of PE_HOSTFILE?
>
>
>> On Nov 11, 2014, at 4:51 AM, SLIM H.A. wrote:
>>
>> Dear Reuti and Ralph
>>
>> Below is the output of the run for openmpi 1.8.3 with this line
>>
>> mpirun -np $NSLOTS --display-map --display-allocati