Re: [OMPI users] RUNPATH vs. RPATH

2022-08-09 Thread Reuti via users
Hi Jeff, > Am 09.08.2022 um 16:17 schrieb Jeff Squyres (jsquyres) via users > : > > Just to follow up on this thread... > > Reuti: I merged the PR on to the main docs branch. They're now live -- we > changed the text: > • here: > https://docs.open-mpi

[OMPI users] RUNPATH vs. RPATH

2022-07-22 Thread Reuti via users
`ldd`.) Looks like I can get the intended behavior while configuring Open MPI on this (older) system: $ ./configure … LDFLAGS=-Wl,--enable-new-dtags -- Reuti
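For illustration, the complete sequence could look like the following sketch (prefix and library name are placeholders; only the LDFLAGS value is taken from the thread):
$ ./configure --prefix=$HOME/local/openmpi LDFLAGS=-Wl,--enable-new-dtags
$ make -j4 && make install
# verify that the linker emitted RUNPATH rather than RPATH entries
$ readelf -d $HOME/local/openmpi/lib/libmpi.so | grep -E 'RPATH|RUNPATH'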

Re: [OMPI users] Issues with compilers

2021-01-22 Thread Reuti via users
Hi, what about putting the "-static-intel" into a configuration file for the Intel compiler. Besides the default configuration, one can have a local one and put the path in an environment variable IFORTCFG (there are other ones for C/C++). $ cat myconf --version $ export IFORTCFG=/
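A minimal sketch of that setup (file name and contents are illustrative; only the IFORTCFG mechanism itself is from the post):
# private configuration file with the extra switch for the Intel Fortran compiler
$ cat $HOME/myconf
-static-intel
# point ifort to it; ICCCFG/ICPCCFG should be the analogous variables for icc/icpc
$ export IFORTCFG=$HOME/myconf
$ mpifort -o hello hello.f90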

Re: [OMPI users] segfault in libibverbs.so

2020-07-27 Thread Reuti via users
lved and/or replace vader? This was the reason I found '-mca btl ^openib' more appealing than listing all others. -- Reuti > Prentice > > On 7/23/20 3:34 PM, Prentice Bisbal wrote: >> I manage a cluster that is very heterogeneous. Some nodes have InfiniBand, >> while

Re: [OMPI users] Moving an installation

2020-07-24 Thread Reuti via users
tell the open-mpi where it is > installed? There is OPAL_PREFIX to be set: https://www.open-mpi.org/faq/?category=building#installdirs -- Reuti
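For illustration, relocating an installation then amounts to something like the following (all directories are placeholders):
# the tree originally configured with --prefix=/opt/openmpi now lives elsewhere
$ export OPAL_PREFIX=/new/location/openmpi
$ export PATH=$OPAL_PREFIX/bin:$PATH
$ export LD_LIBRARY_PATH=$OPAL_PREFIX/lib:$LD_LIBRARY_PATH
$ mpiexec -np 4 ./a.out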

Re: [OMPI users] running mpirun with grid

2020-02-06 Thread Reuti via users
ch node only once for sure. AFAIR there was a setting in Torque to allow or disallow multiple elections of the fixed allocation rule per node. HTH -- Reuti

Re: [OMPI users] Univa Grid Engine and OpenMPI 1.8.7

2020-01-12 Thread Reuti via users
ing all necessary environment variables inside the job script itself, so that it is self-contained. Maybe they judge it a security issue, as this variable would also be present in case you run a queue prolog/epilog as a different user. For the plain job itself it wouldn't matter IMO. And for any further investigation: which problem do you face in detail? -- Reuti

Re: [OMPI users] mpirun --output-filename behavior

2019-11-01 Thread Reuti via users
> that the feature was already there!) > > For the most part, this whole thing needs to get documented. Especially that the colon is a disallowed character in the directory name. Any suffix :foo will just be removed AFAICS without any error output about foo being an unknown option. --

Re: [OMPI users] can't run MPI job under SGE

2019-07-29 Thread Reuti via users
e.test/bin/grid-sshd -i > rlogin_command builtin > rlogin_daemon builtin > rsh_command builtin > rsh_daemon builtin That's fine. I wondered whether rsh_* would contain a redirection to

Re: [OMPI users] TMPDIR for running openMPI job under grid

2019-07-26 Thread Reuti via users
he length of the hostname where it's running on? If the admins are nice, they could define a symbolic link directly as /scratch pointing to /var/spool/sge/wv2/tmp and set up in the queue configuration /scratch as being TMPDIR. Effect and location like now, but saves some characters -- Reuti

Re: [OMPI users] can't run MPI job under SGE

2019-07-25 Thread Reuti via users
ing the applications. Side note: Open MPI binds the processes to cores by default. In case more than one MPI job is running on a node one will have to use `mpiexec --bind-to none …` as otherwise all jobs on this node will use core 0 upwards. -- Reuti > Thanks! > > -David Laidlaw >
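A sketch of the work-around mentioned above, assuming two unrelated jobs happen to land on the same node (application names are placeholders):
# disable the default binding so both jobs do not pile up on core 0 upwards
$ mpiexec --bind-to none -np 4 ./first_app &
$ mpiexec --bind-to none -np 4 ./second_app &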

Re: [OMPI users] can't run MPI job under SGE

2019-07-25 Thread Reuti via users
ing under the control of a queuing system. It should use `qrsh` in your case. What does: mpiexec --version ompi_info | grep grid reveal? What does: qconf -sconf | egrep "(command|daemon)" show? -- Reuti > Cheers, > > -David Laidlaw > > > > > He

Re: [OMPI users] MPI_INIT failed 4.0.1

2019-04-17 Thread Reuti
Hi, Am 17.04.2019 um 11:07 schrieb Mahmood Naderan: > Hi, > After successful installation of v4 on a custom location, I see some errors > while the default installation (v2) hasn't. Did you also recompile your application with this version of Open MPI? -- Reuti > $ /sha

Re: [OMPI users] relocating an installation

2019-04-10 Thread Reuti
s? For me using export OPAL_PREFIX=… beforehand worked up to now. -- Reuti

Re: [OMPI users] relocating an installation

2019-04-09 Thread Reuti
> Am 09.04.2019 um 14:52 schrieb Dave Love : > > Reuti writes: > >> export OPAL_PREFIX= >> >> to point it to the new location of installation before you start `mpiexec`. > > Thanks; that's now familiar, and I don't know how I missed it with >

Re: [OMPI users] relocating an installation

2019-04-09 Thread Reuti
to run it from home without containers etc.) I thought that was > possible, but I haven't found a way that works. Using --prefix doesn't > find help files, at least. export OPAL_PREFIX= to point it to the new location of instal

Re: [OMPI users] Wrapper Compilers

2018-10-25 Thread Reuti
ies and installed the runtime environment also with the package manager of your distribution, I would suggest to install "libopenmpi-dev" (and only this one to avoid conflicts with wrappers from other MPI implementations). -- Reuti PS: Interesting that t

Re: [OMPI users] MPI advantages over PBS

2018-09-05 Thread Reuti
communications. > > Is my final statement correct? In my opinion: no. A job scheduler can serialize the workflow and run one job after the other as free resources provide. Their usage may overlap in certain cases, but MPI and a job scheduler don't compete. -- Reuti > Thanks a lot >

Re: [OMPI users] Are MPI datatypes guaranteed to be compile-time constants?

2018-09-04 Thread Reuti
Spectrum MPI show here? While Platform-MPI was something unique, I thought Spectrum MPI is based on Open MPI. How does this effect manifest in Spectrum MPI? It changes between each compilation of all your source files, i.e. foo.c sees other values than baz.c, despite the fact that th

Re: [OMPI users] What happened to orte-submit resp. DVM?

2018-08-29 Thread Reuti
submit.c openmpi-3.1.2/orte/orted/.deps/liborted_mpir_la-orted_submit.Plo -- Reuti > Note that we are (gradually) replacing orte-dvm with PRRTE: > > https://github.com/pmix/prrte > > See the “how-to” guides for PRRTE towards the bottom of this page: > https://pmix.org/supp

[OMPI users] What happened to orte-submit resp. DVM?

2018-08-28 Thread Reuti
Hi, Should orte-submit/ompi-submit still be available in 3.x.y? I can spot the source, but it's neither built, nor is any man page included. -- Reuti

Re: [OMPI users] MPI advantages over PBS

2018-08-25 Thread Reuti
ication at the same time in a cluster, which might be referred to as "running in parallel" – but depending on the context such a statement might be ambiguous. But if you need the result of the first image resp. computation to decide how to proceed, then it's advantageous to paral

Re: [OMPI users] know which CPU has the maximum value

2018-08-10 Thread Reuti
> Am 10.08.2018 um 17:24 schrieb Diego Avesani : > > Dear all, > I have probably understood. > The trick is to use a real vector and to memorize also the rank. Yes, I thought of this: https://www.mpi-forum.org/docs/mpi-1.1/mpi-11-html/node79.html -- Reuti > Have I un

Re: [OMPI users] know which CPU has the maximum value

2018-08-10 Thread Reuti
following: > > CALL MPI_ALLREDUCE(eff, effmaxWorld, 1, MPI_DOUBLE_PRECISION, MPI_MAX, > MPI_MASTER_COMM, MPIworld%iErr) Would MPI_MAXLOC be sufficient? -- Reuti > However, I would like also to know to which CPU that value belongs. Is it > possible? > > I have set-up a s

Re: [OMPI users] mpi send/recv pair hangin

2018-04-10 Thread Reuti
> Am 10.04.2018 um 13:37 schrieb Noam Bernstein : > >> On Apr 10, 2018, at 4:20 AM, Reuti wrote: >> >>> >>> Am 10.04.2018 um 01:04 schrieb Noam Bernstein : >>> >>>> On Apr 9, 2018, at 6:36 PM, George Bosilca wrote: >>>>

Re: [OMPI users] mpi send/recv pair hangin

2018-04-10 Thread Reuti
with the 3.0.0). > > Correct. > >> Also according to your stacktrace I assume it is an x86_64, compiled with >> icc. > > x86_64, yes, but, gcc + ifort. I can test with gcc+gfortran if that’s > helpful. Was there any reason not to choose icc + ifort? -- Reuti

Re: [OMPI users] mpi send/recv pair hangin

2018-04-05 Thread Reuti
pen MPI in $MKLROOT/interfaces/mklmpi with identical results. -- Reuti

Re: [OMPI users] latest Intel CPU bug

2018-01-04 Thread Reuti
a hint about an "EIEIO" command only. Sure, in-order-execution might slow down the system too. -- Reuti > > * containers and VMs don’t fully resolve the problem - the only solution > other than the patches is to limit allocations to single users on a node > > HTH > Ralp

Re: [OMPI users] mpifort cannot find libgfortran.so at the correct path

2017-11-28 Thread Reuti
> Am 28.11.2017 um 17:19 schrieb Reuti : > > Hi, > >> Am 28.11.2017 um 15:58 schrieb Vanzo, Davide : >> >> Hello all, >> I am having a very weird problem with mpifort that I cannot understand. >> I am building OpenMPI 1.10.3 with GCC 5.4.0 with Easy

Re: [OMPI users] mpifort cannot find libgfortran.so at the correct path

2017-11-28 Thread Reuti
on advisable, the culprit seems to lie in: > cannot open /usr/lib64/libgfortran.so: No such file or directory Does this symbolic link exist? Does it point to your installed one too? Maybe the developer package of GCC 5.4.0 is missing. Hence it looks for libgfortran.so somewhere else and finds only a
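Two quick checks along those lines (paths assumed; adjust to the actual toolchain):
# does the unversioned development symlink exist, and where does it point to?
$ ls -l /usr/lib64/libgfortran.so
# which libgfortran the gfortran in use would pick up itself
$ gfortran -print-file-name=libgfortran.so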

Re: [OMPI users] Vague error message while executing MPI-Fortran program

2017-10-24 Thread Reuti
nux/bin/intel64 reveals: ARGBLOCK_%d ARGBLOCK_REC_%d So it looks like the output is generated on-the-fly and doesn't point to any existing variable. But to which argument of which routine is still unclear. Does the Intel Compiler have the feature to output a cross-reference of all used va

[OMPI users] Honor host_aliases file for tight SGE integration

2017-09-13 Thread Reuti
obscript to feed an "adjusted" $PE_HOSTFILE to Open MPI and then it's working as intended: Open MPI creates forks. Does anyone else need such a patch in Open MPI and is it suitable to be included? -- Reuti PS: Only the headnodes have more than one network interface in our cas

Re: [OMPI users] Setting LD_LIBRARY_PATH for orted

2017-08-22 Thread Reuti
IBRARY_PATH > > this is the easiest option, but cannot be used if you plan to relocate the > Open MPI installation directory. There is the tool `chrpath` to change rpath and runpath inside a binary/library. This has to match the relocated directory then. -- Reuti > an other
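As a sketch of that idea (library path and new prefix are placeholders):
# inspect the RPATH/RUNPATH currently baked into the library
$ chrpath -l /new/prefix/lib/libmpi.so
# rewrite it to the relocated directory; note that chrpath can only replace the
# path with one of equal or shorter length
$ chrpath -r /new/prefix/lib /new/prefix/lib/libmpi.so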

Re: [OMPI users] Setting LD_LIBRARY_PATH for orted

2017-08-21 Thread Reuti
> How do I get around this cleanly? This works just fine when I set > LD_LIBRARY_PATH in my .bashrc, but I’d rather not pollute that if I can avoid > it. Do you set or extend the LD_LIBRARY_PATH in your .bashrc? -- Reuti

Re: [OMPI users] Questions about integration with resource distribution systems

2017-08-01 Thread Reuti
arently > never propagated through remote startup, Isn't it a setting inside SGE which the sge_execd is aware of? I never exported any environment variable for this purpose. -- Reuti > so killing those orphans after > VASP crashes may fail, though resource reporting works. (I ne

Re: [OMPI users] Questions about integration with resource distribution systems

2017-07-26 Thread Reuti
rsions. So I can't comment on this for sure, but it seems to set the memory also in cgroups. -- Reuti > mpirun just uses the nodes that SGE provides. > > What your cmd line does is restrict the entire operation on each node (daemon > + 8 procs) to 40GB of memory. OMPI

Re: [OMPI users] Questions about integration with resource distribution systems

2017-07-26 Thread Reuti
a string in the environment variable, you may want to use the plain value in bytes there. -- Reuti

Re: [OMPI users] Questions about integration with resource distribution systems

2017-07-26 Thread Reuti
e their headers installed on it. Then configure OMPI > --with-xxx pointing to each of the RM’s headers so all the components get > built. When the binary hits your customer’s machine, only those components > that have active libraries present will execute. Just note, th

Re: [OMPI users] Questions about integration with resource distribution systems

2017-07-26 Thread Reuti
> qsub -pe orte 8 -b y -V -l m_mem_free=40G -cwd mpirun -np 8 a.out m_mem_free is part of Univa SGE (but not the various free ones of SGE AFAIK). Also: this syntax is for SGE, in LSF it's different. To have this independent from the actual queuing system, one could look into DR

Re: [OMPI users] No components were able to be opened in the pml framework

2017-05-30 Thread Reuti
is an additional point: which one? It might be that you have to put the two exports of PATH and LD_LIBRARY_PATH in your jobscript instead, if you never want to run the application from the command line in parallel. -- Reuti > > En date de : Mar

Re: [OMPI users] No components were able to be opened in the pml framework

2017-05-30 Thread Reuti
64? 2) do you source .bashrc also for interactive logins? Otherwise it should go to ~/.bash_profile or ~/.profile > > > En date de : Mar 30.5.17, Reuti a écrit : > > Objet: Re: [OMPI users] No components were able to be opened in the pml &

Re: [OMPI users] No components were able to be opened in the pml framework

2017-05-30 Thread Reuti
chemistry > program. Did you compile Open MPI on your own? Did you move it after the installation? -- Reuti

Re: [OMPI users] OpenMPI installation issue or mpi4py compatibility problem

2017-05-23 Thread Reuti
Hi, Am 23.05.2017 um 05:03 schrieb Tim Jim: > Dear Reuti, > > Thanks for the reply. What options do I have to test whether it has > successfully built? Like before: can you compile and run mpihello.c this time – all as ordinary user in case you installed the Open MPI into so
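A minimal test of this kind could be (assuming mpihello.c is the usual hello-world example and the new installation is first in PATH):
$ mpicc mpihello.c -o mpihello
$ mpiexec -np 2 ./mpihello
# an "error while loading shared libraries" at this point hints at
# PATH/LD_LIBRARY_PATH rather than at the build itself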

Re: [OMPI users] OpenMPI installation issue or mpi4py compatibility problem

2017-05-22 Thread Reuti
s. > Regarding the final part of the email, is it a problem that 'undefined > reference' is appearing? Yes, it tries to resolve missing symbols and didn't succeed. -- Reuti > > Thanks and regards, > Tim > > On 22 May 2017 at 06:54, Reuti wrote: > >&

Re: [OMPI users] OpenMPI installation issue or mpi4py compatibility problem

2017-05-21 Thread Reuti
_LIBRARY_PATH differently I don't think that Ubuntu will do anything different than any other Linux. Did you compile Open MPI on your own, or did you install any repository? Are the CUDA applications written by yourself, or are they freely available applications? -- Reuti > and instead add

Re: [OMPI users] MPI the correct solution?

2017-05-19 Thread Reuti
As I think it's not relevant to Open MPI itself, I answered in PM only. -- Reuti > Am 18.05.2017 um 18:55 schrieb do...@mail.com: > > On Tue, 9 May 2017 00:30:38 +0200 > Reuti wrote: >> Hi, >> >> Am 08.05.2017 um 23:25 schrieb David Niklas: >>

Re: [OMPI users] IBM Spectrum MPI problem

2017-05-18 Thread Reuti
ing to download the community edition (even the evaluation link on the Spectrum MPI page does the same). -- Reuti > based on OpenMPI, so I hope there are some MPI expert can help me to solve > the problem. > > When I run a simple Hello World MPI program, I get the follow error message:

Re: [OMPI users] MPI the correct solution?

2017-05-08 Thread Reuti
the intended task the only option is to use a single machine with as many cores as possible AFAICS. -- Reuti

Re: [OMPI users] Behavior of `ompi_info`

2017-05-01 Thread Reuti
Am 25.04.2017 um 17:27 schrieb Reuti: > Hi, > > In case Open MPI is moved to a different location than it was installed into > initially, one has to export OPAL_PREFIX. While checking for the availability > of the GridEngine

[OMPI users] Behavior of `ompi_info`

2017-04-25 Thread Reuti
ed place, an appropriate output should go to stderr and the exit code set to 1. -- Reuti

Re: [OMPI users] openmpi-2.0.2

2017-04-20 Thread Reuti
Due to the last post in this thread this copy I suggested seems not to be possible, but I also want to test whether this post goes through to the list now. -- Reuti === Hi, > Am 19.04.2017 um 19:53 schrieb Jim Edwards : > > Hi, > > I have openmpi-2.0.2 builds on two differe

Re: [OMPI users] Run-time issues with openmpi-2.0.2 and gcc

2017-04-13 Thread Reuti
MPI process or is the application issuing many `mpiexec` during its runtime? Is there any limit how often `ssh` may access a node in a timeframe? Do you use any queuing system? -- Reuti

Re: [OMPI users] No more default core binding since 2.0.2?

2017-04-10 Thread Reuti
> Am 10.04.2017 um 17:27 schrieb r...@open-mpi.org: > > >> On Apr 10, 2017, at 1:37 AM, Reuti wrote: >> >>> >>> Am 10.04.2017 um 01:58 schrieb r...@open-mpi.org: >>> >>> Let me try to clarify. If you launch a job that has only 1 o

Re: [OMPI users] No more default core binding since 2.0.2?

2017-04-10 Thread Reuti
> Am 10.04.2017 um 00:45 schrieb Reuti : > […]BTW: I always had to use -ldl when using `mpicc`. Now, that I compiled in > libnuma, this necessity is gone. Looks like I compiled too many versions in the last couple of days. The -ldl is necessary in case --disable-shared --enable-s

Re: [OMPI users] No more default core binding since 2.0.2?

2017-04-10 Thread Reuti
h looks like being bound to socket. -- Reuti > You can always override these behaviors. > >> On Apr 9, 2017, at 3:45 PM, Reuti wrote: >> >>>> But I can't see a binding by core for number of processes <= 2. Does it >>>> mean 2 per node or 2 ov

Re: [OMPI users] No more default core binding since 2.0.2?

2017-04-09 Thread Reuti
y that warning in addition about the memory couldn't be bound. BTW: I always had to use -ldl when using `mpicc`. Now that I compiled in libnuma, this necessity is gone. -- Reuti

Re: [OMPI users] No more default core binding since 2.0.2?

2017-04-09 Thread Reuti
this socket has other jobs running (by accident). So, this is solved - I wasn't aware of the binding by socket. But I can't see a binding by core for number of processes <= 2. Does it mean 2 per node or 2 overall for the `mpiexec`? - -- Reuti > >> On Apr 9, 2017, at 3:4

[OMPI users] No more default core binding since 2.0.2?

2017-04-09 Thread Reuti
ht it might be because of: - We define plm_rsh_agent=foo in $OMPI_ROOT/etc/openmpi-mca-params.conf - We compiled with --with-sge But also when started on the command line by `ssh` to the nodes, there seems to be no automatic core binding taking place any longer. --
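For context, the site-wide MCA setting referred to above would sit in a file like this ("foo" is the placeholder used in the post):
# $OMPI_ROOT/etc/openmpi-mca-params.conf  (system-wide MCA defaults)
plm_rsh_agent = foo
# the same could be set per user in ~/.openmpi/mca-params.conf or per run via
# mpiexec --mca plm_rsh_agent foo ...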

Re: [OMPI users] mpicc and libstdc++, general building question

2017-04-07 Thread Reuti
mpilation in my home directory by a plain `export`. I can spot: $ ldd libmpi_cxx.so.20 … libstdc++.so.6 => /home/reuti/local/gcc-6.2.0/lib64/../lib64/libstdc++.so.6 (0x7f184d2e2000) So this looks fine (although /lib64/../lib64/ looks nasty). In the library, the

Re: [OMPI users] Compiler error with PGI: pgcc-Error-Unknown switch: -pthread

2017-04-03 Thread Reuti
Am 03.04.2017 um 23:07 schrieb Prentice Bisbal: > FYI - the proposed 'here-doc' solution below didn't work for me, it produced > an error. Neither did printf. When I used printf, only the first arg was > passed along: > > #!/bin/bash > > realcmd=

Re: [OMPI users] Compiler error with PGI: pgcc-Error-Unknown switch: -pthread

2017-04-03 Thread Reuti
d by the configure tests, that's a bit of a problem, Just > adding another -E before $@, should fix the problem. It's often suggested to use printf instead of the non-portable echo. - -- Reuti > > Prentice > > On 04/03/2017 03:54 PM, Prentice Bisbal wrote: >>

Re: [OMPI users] Performance degradation of OpenMPI 1.10.2 when oversubscribed?

2017-03-24 Thread Reuti
or the same part of the CPU, essentially becoming a bottleneck. But using each half of a CPU for two (or even more) applications will allow a better interleaving in the demand for resources. To allow this in the best way: no taskset or binding to cores, let the Linux kernel and CPU do their best - Y

Re: [OMPI users] Help with Open MPI 2.1.0 and PGI 16.10: Configure and C++

2017-03-23 Thread Reuti
gone after the hints on the discussion's link you posted? As I see it, it is still about "libevent". -- Reuti > > *** C++ compiler and preprocessor > checking whether we are using the GNU C++ compiler... yes > checking whether pgc++ accepts -g... yes > checking

Re: [OMPI users] OpenMPI-2.1.0 problem with executing orted when using SGE

2017-03-22 Thread Reuti
> Am 22.03.2017 um 15:31 schrieb Heinz-Ado Arnolds > : > > Dear Reuti, > > thanks a lot, you're right! But why did the default behavior change but not > the value of this parameter: > > 2.1.0: MCA plm rsh: parameter "plm_rsh_agent" (current value: &

Re: [OMPI users] OpenMPI-2.1.0 problem with executing orted when using SGE

2017-03-22 Thread Reuti
o the 1.10.6 (use SGE/qrsh) > one? Are there mca params to set this? > > If you need more info, please let me know. (Job submitting machine and target > cluster are the same with all tests. SW is residing in AFS directories > visible on all machines. Parameter "plm_rsh_disable_qrsh&

[OMPI users] State of the DVM in Open MPI

2017-02-28 Thread Reuti
Hi, Only by reading recent posts I got aware of the DVM. This would be a welcome feature for our setup*. But I see not all options working as expected - is it still a work in progress, or should all work as advertised? 1) $ soft@server:~> orte-submit -cf foo --hnp file:/home/reuti/dvmuri -
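For readers new to the DVM, the intended workflow is roughly the following sketch (option names as I remember them for the 2.x/3.x series, so please double-check with orte-dvm --help; the URI path is the one quoted in the post):
# start the distributed virtual machine once and record its URI
$ orte-dvm --report-uri /home/reuti/dvmuri &
# afterwards submit individual runs against the already running DVM
$ orte-submit --hnp file:/home/reuti/dvmuri -np 4 ./a.out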

Re: [OMPI users] Using OpenMPI / ORTE as cluster aware GNU Parallel

2017-02-27 Thread Reuti
using DVM often leads to a terminated DVM once a process returned with a non-zero exit code. But once the DVM is gone, the queued jobs might be lost too I fear. I would wish that the DVM could be more forgiving (or this feature be adjustable what to do in case of a non-zero exit code). -- Reuti

Re: [OMPI users] Using OpenMPI / ORTE as cluster aware GNU Parallel

2017-02-27 Thread Reuti
er. Under which user account will the DVM daemons run? Are all users using the same account? -- Reuti

Re: [OMPI users] Is gridengine integration broken in openmpi 2.0.2?

2017-02-03 Thread Reuti
ured to use SSH? (I mean the entries in `qconf -sconf` for rsh_command resp. daemon). -- Reuti > Can see the gridengine component via: > > $ ompi_info -a | grep gridengine > MCA ras: gridengine (MCA v2.1.0, API v2.0.0, Component v2.0.2) > MCA ras gridengin

Re: [OMPI users] Low CPU utilization

2016-10-16 Thread Reuti
her. For a first test you can start both with "mpiexec --bind-to none ..." and check whether you see a different behavior. `man mpiexec` mentions some hints about threads in applications. -- Reuti > > > Regards, > Mahmood

Re: [OMPI users] Running a computer on multiple computers

2016-10-14 Thread Reuti
to. When I type in the command mpiexec -f hosts -n 4 ./applic > > I get this error > [mpiexec@localhost.localdomain] HYDU_parse_hostfile > (./utils/args/args.c:323): unable to open host file: hosts As you mentioned MPICH and their Hydra startup, you better ask at their list: http://www.mpi

Re: [OMPI users] MPI Behaviour Question

2016-10-11 Thread Reuti
ved from all nodes. > > While I know there are better ways to test OpenMPI's functionality, > like compiling and using the programs in examples/, this is the method > a specific client chose. There are small "Hello world" programs like here: http://mpitutorial.com/tutor

Re: [OMPI users] OMPI users] Still "illegal instruction"

2016-09-22 Thread Reuti
march=bdver1 what Gilles mentioned) or to tell me what he thinks it should compile for? For pgcc there is -show and I can spot the target it discovered in the USETPVAL= line. -- Reuti > > The solution was (as stated by guys) building Siesta on the compute node. I > have to say that I teste

Re: [OMPI users] OMPI users] Still "illegal instruction"

2016-09-15 Thread Reuti
d and computes). Would it work to compile with a shared target and copy it to /shared on the frontend? -- Reuti > An important question is that, how can I find out what is the name of the > illegal instruction. Then, I hope to find the document that points which > instruction se

Re: [OMPI users] static linking MPI libraries with applications

2016-09-14 Thread Reuti
unction `load_driver': > (.text+0x331): undefined reference to `dlerror' > /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/libibverbs.a(src_libibverbs_la-init.o): > In function `ibverbs_init': > (.text+0xd25): undefined reference to `dlopen' > /usr/lib/gcc/

Re: [OMPI users] static linking MPI libraries with applications

2016-09-14 Thread Reuti
> I build libverbs from source first? Am I on the right direction? The "-l" already includes the "lib" prefix when it tries to find the library. Hence "-libverbs" might be misleading due to the "lib" in the word, as it

Re: [OMPI users] static linking MPI libraries with applications

2016-09-14 Thread Reuti
I didn't find the time to look further into it. See my post from Aug 11, 2016. With older versions of Open MPI it wasn't necessary to supply it in addition. -- Reuti > > Cheers, > > Gilles > > > > On Wednesday, September 14, 2016, Mahmood Naderan > wrot

Re: [OMPI users] OS X El Capitan 10.11.6 ld: symbol(s) not found for architecture x86_64

2016-08-23 Thread Reuti
ch-mp as this is a different implementation of MPI, not Open MPI. Also the default location of Open MPI isn't mpich-mp. - what does: $ mpicc -show $ which mpicc output? - which MPI library was used to build the parallel FFTW? -- Reuti > Undefined symbols for archit

Re: [OMPI users] SGE integration broken in 2.0.0

2016-08-16 Thread Reuti
Am 16.08.2016 um 13:26 schrieb Jeff Squyres (jsquyres): > On Aug 12, 2016, at 2:15 PM, Reuti wrote: >> >> I updated my tools to: >> >> autoconf-2.69 >> automake-1.15 >> libtool-2.4.6 >> >> but I face with Open MPI's ./autogen.pl:

Re: [OMPI users] SGE integration broken in 2.0.0

2016-08-12 Thread Reuti
> how/why it got deleted. > > https://github.com/open-mpi/ompi/pull/1960 Yep, it's working again - thx. But for sure there was a reason behind the removal, which may be elaborated in the Open MPI team to avoid any side effects by fixing this issue. -- Reuti PS: The other items

Re: [OMPI users] SGE integration broken in 2.0.0

2016-08-12 Thread Reuti
macro: AC_PROG_LIBTOOL I recall seeing it already before; how to get rid of it? For now I fixed the single source file just by hand. -- Reuti > As for the blank in the cmd line - that is likely due to a space reserved for > some entry that you aren’t using (e.g., when someone manually

Re: [OMPI users] SGE integration broken in 2.0.0

2016-08-12 Thread Reuti
d, try again later. Sure, the name of the machine is allowed only after the additional "-inherit" to `qrsh`. Please see below for the complete in 1.10.3, hence the assembly seems also not to be done in the correct way. -- Reuti > On Aug 11, 2016, at 4:28 AM,

Re: [OMPI users] mpirun won't find programs from the PATH environment variable that are in directories that are relative paths

2016-08-12 Thread Reuti
a >> >> mostly because you still get to set the path once and use it many times >> without duplicating code. >> >> >> For what it's worth, I've seen Ralph's suggestion generalized to something >> like >> >> PREFIX=$PWD/arch

Re: [OMPI users] SGE integration broken in 2.0.0

2016-08-11 Thread Reuti
> Am 11.08.2016 um 13:28 schrieb Reuti : > > Hi, > > In the file orte/mca/plm/rsh/plm_rsh_component I see an if-statement, which > seems to prevent the tight integration with SGE to start: > >if (NULL == mca_plm_rsh_component.agent) { > > Why is it there (i

[OMPI users] SGE integration broken in 2.0.0

2016-08-11 Thread Reuti
tional blank. == I also notice that I have to supply "-ldl" to `mpicc` to allow the compilation of an application to succeed in 2.0.0. -- Reuti

Re: [OMPI users] OMPI users] OMPI users] OMPI users] OMPI users] MPI inside MPI (still)

2014-12-18 Thread Reuti
different stdin/-out/-err in DRMAA by setting drmaa_input_path/drmaa_output_path/drmaa_error_path for example? -- Reuti > mpi_comm_spawn("/bin/sh","-c","siesta < infile",..) definitely does not work. > > Patching siesta to start as "siesta

Re: [OMPI users] OMPI users] MPI inside MPI (still)

2014-12-13 Thread Reuti
them more easily (i.e. terminate, suspend,...). -- Reuti http://www.drmaa.org/ https://arc.liv.ac.uk/SGE/howto/howto.html#DRMAA > Alex > > 2014-12-12 22:35 GMT-02:00 Gilles Gouaillardet > : > Alex, > > You need MPI_Comm_disconnect at least. > I am not sure if this is 1

Re: [OMPI users] Cannot open configuration file - openmpi/mpic++-wrapper-data.txt

2014-12-09 Thread Reuti
Hi, please have a look here: http://www.open-mpi.org/faq/?category=building#installdirs -- Reuti Am 09.12.2014 um 07:26 schrieb Manoj Vaghela: > Hi OpenMPI Users, > > I am trying to build OpenMPI libraries using standard configuration and > compile procedure. It is just the on

Re: [OMPI users] How OMPI picks ethernet interfaces

2014-11-14 Thread Reuti
ppreciate your replies and will read them thoroughly. I think it's best to continue with the discussion after SC14. I don't want to put any burden on anyone when time is tight. -- Reuti > These points are in no particular order... > > 0. Two fundamental points have been

Re: [OMPI users] How OMPI picks ethernet interfaces

2014-11-13 Thread Reuti
to both the oob and tcp/btl? Yes. > Obviously, this won’t make it for 1.8 as it is going to be fairly intrusive, > but we can probably do something for 1.9 > >> On Nov 13, 2014, at 4:23 AM, Reuti wrote: >> >> Am 13.11.2014 um 00:34 schrieb Ralph Castain: >&g

Re: [OMPI users] How OMPI picks ethernet interfaces

2014-11-13 Thread Reuti
Am 13.11.2014 um 00:34 schrieb Ralph Castain: >> On Nov 12, 2014, at 2:45 PM, Reuti wrote: >> >> Am 12.11.2014 um 17:27 schrieb Reuti: >> >>> Am 11.11.2014 um 02:25 schrieb Ralph Castain: >>> >>>> Another thing you can do is (a) ensure y

Re: [OMPI users] How OMPI picks ethernet interfaces

2014-11-13 Thread Reuti
Gus, Am 13.11.2014 um 02:59 schrieb Gus Correa: > On 11/12/2014 05:45 PM, Reuti wrote: >> Am 12.11.2014 um 17:27 schrieb Reuti: >> >>> Am 11.11.2014 um 02:25 schrieb Ralph Castain: >>> >>>> Another thing you can do is (a) ensure you built with —e

Re: [OMPI users] OMPI users] How OMPI picks ethernet interfaces

2014-11-13 Thread Reuti
> no problem obfuscating the ip of the head node, i am only interested in > netmasks and routes. > > Ralph Castain wrote: >> >>> On Nov 12, 2014, at 2:45 PM, Reuti wrote: >>> >>> Am 12.11.2014 um 17:27 schrieb Reuti: >>> >>>>

Re: [OMPI users] How OMPI picks ethernet interfaces

2014-11-12 Thread Reuti
Am 12.11.2014 um 17:27 schrieb Reuti: > Am 11.11.2014 um 02:25 schrieb Ralph Castain: > >> Another thing you can do is (a) ensure you built with —enable-debug, and >> then (b) run it with -mca oob_base_verbose 100 (without the tcp_if_include >> option) so we can watch

Re: [OMPI users] How OMPI picks ethernet interfaces

2014-11-12 Thread Reuti
the internal or external name of the headnode given in the machinefile - I hit ^C then. I attached the output of Open MPI 1.8.1 for this setup too. -- Reuti Wed Nov 12 16:43:12 CET 2014 [annemarie:01246] mca: base: components_register: registering oob components [annemarie:0124

Re: [OMPI users] How OMPI picks ethernet interfaces

2014-11-12 Thread Reuti
-mca hwloc_base_binding_policy none So, the bash was removed. But I don't think that this causes anything. -- Reuti > Cheers, > > Gilles > > On Mon, Nov 10, 2014 at 5:56 PM, Reuti wrote: > Hi, > > Am 10.11.2014 um 16:39 schrieb Ralph Castain: > >

Re: [OMPI users] oversubscription of slots with GridEngine

2014-11-11 Thread Reuti
Am 11.11.2014 um 19:29 schrieb Ralph Castain: > >> On Nov 11, 2014, at 10:06 AM, Reuti wrote: >> >> Am 11.11.2014 um 17:52 schrieb Ralph Castain: >> >>> >>>> On Nov 11, 2014, at 7:57 AM, Reuti wrote: >>>> >>>> Am 11.1

Re: [OMPI users] oversubscription of slots with GridEngine

2014-11-11 Thread Reuti
Am 11.11.2014 um 17:52 schrieb Ralph Castain: > >> On Nov 11, 2014, at 7:57 AM, Reuti wrote: >> >> Am 11.11.2014 um 16:13 schrieb Ralph Castain: >> >>> This clearly displays the problem - if you look at the reported “allocated >>> nodes”, you se

Re: [OMPI users] oversubscription of slots with GridEngine

2014-11-11 Thread Reuti
us the content of PE_HOSTFILE? > > >> On Nov 11, 2014, at 4:51 AM, SLIM H.A. wrote: >> >> Dear Reuti and Ralph >> >> Below is the output of the run for openmpi 1.8.3 with this line >> >> mpirun -np $NSLOTS --display-map --display-allocati
