S this cluster is installed with?
>>
>>
>>
>> On Thu, 4 Oct 2018 at 00:02, Castellana Michele
>> wrote:
>>
>> I fixed it, the correct file was in /lib64, not in /lib.
>>
>> Thank you for your help.
>>
>> On Oct 3, 2018, at 11:
It's probably in your Linux distro somewhere -- I'd guess you're missing a
package (e.g., an RPM or a deb) out on your compute nodes...?
> On Oct 3, 2018, at 4:24 PM, Castellana Michele
> wrote:
>
> Dear Ralph,
> Thank you for your reply. Do you know where I could find libcrypto.so.0.9.8 ?
(Ralph sent me Siegmar's pmix config.log, which Siegmar sent to him off-list)
It looks like Siegmar passed --with-hwloc=internal.
Open MPI's configure understood this and did the appropriate things.
PMIX's configure didn't.
I think we need to add an adjustment into the PMIx configure.m4 in
Siegmar --
I created a GitHub issue for this:
https://github.com/open-mpi/ompi/issues/5814
Nathan posted a test program on there for you to try; can you try it and reply
on the issue?
Thanks.
> On Oct 1, 2018, at 9:23 AM, Siegmar Gross
> wrote:
>
> Hi,
>
> I've tried to install
On Sep 27, 2018, at 1:52 PM, Zeinab Salah wrote:
>
> Thank you so much for your detailed answers.
> I use gfortran 4.8.3, what should I do? or what is the suitable openmpi
> version for this version?
If you build Open MPI v3.1.2 with gfortran 4.8.3, you will automatically get
the "old"
On Sep 27, 2018, at 12:16 AM, Zeinab Salah wrote:
>
> I have a problem in running an air quality model, maybe because of the size
> of calculations, so I tried different versions of openmpi.
> I want to install openmpi-3.0.2 with the option of
> "--with-mpi-f90-size=medium", but this option
Must have been some kind of temporary DNS glitch. Shrug.
Next time it happens, also be sure to check
https://downforeveryoneorjustme.com/download.open-mpi.org
> On Sep 25, 2018, at 9:13 AM, Jorge D'Elia wrote:
>
> - Original Message -
>> From: "Llolsten Kaonga"
>> To: "Jorge
Alan --
Sorry for the delay.
I agree with Gilles: Brian's commit had to do with "reachable" plugins in Open
MPI -- they do not appear to be the problem here.
From the config.log you sent, it looks like configure aborted because you
requested UCX support (via --with-ucx) but configure wasn't
I can't say that we've tried to build on WSL; the fact that it fails is
probably not entirely surprising. :-(
I looked at your logs, and although I see the compile failure, I don't see any
reason *why* it failed. Here's the relevant fail from the tar_openmpi_fail
file:
-
5523 Making
Yeah, it's a bit terrible, but we didn't reliably reproduce this problem for
many months, either. :-\
As George noted, it's been ported to all the release branches but is not yet in
an official release. Until an official release (4.0.0 just had an rc; it will
be released soon, and 3.0.3 will
On Sep 12, 2018, at 4:54 AM, Balázs Hajgató wrote:
>
> Setting mca oob to tcp works. I will stick to this solution in our production
> environment.
Great!
> I am not sure that it is relevant, but I also tried the patch on a
> non-production OpenMPI 3.1.1, and "mpirun -host nic114,nic151
Can you send all the information listed here:
https://www.open-mpi.org/community/help/
> On Sep 12, 2018, at 11:03 AM, Greg Russell wrote:
>
> OpenMPI-3.1.2
>
> Sent from my iPhone
>
> On Sep 12, 2018, at 10:50 AM, Ralph H Castain wrote:
>
>> What OMPI version are we talking about
Thanks for reporting the issue.
First, you can workaround the issue by using:
mpirun --mca oob tcp ...
This uses a different out-of-band plugin (TCP) instead of verbs unreliable
datagrams.
Second, I just filed a fix for our current release branches (v2.1.x, v3.0.x,
and v3.1.x):
Gilles: Can you submit a PR to fix these 2 places?
Thanks!
> On Sep 11, 2018, at 9:10 AM, emre brookes wrote:
>
> Gilles Gouaillardet wrote:
>> It seems I got it wrong :-(
> Ah, you've joined the rest of us :)
>>
>> Can you please give the attached patch a try ?
>>
> Working with a git clone
Glad you figured it out. Just for some additional color:
https://www.open-mpi.org/faq/?category=building#install-overwrite
> On Sep 3, 2018, at 4:17 AM, Patrick Begou
> wrote:
>
> Solved.
> Strange conflict (not explained) after several compilation test of OpenMPI
> with gcc7. Solved
On Sep 4, 2018, at 5:22 PM, Benjamin Brock wrote:
>
> Are MPI datatypes like MPI_INT and MPI_CHAR guaranteed to be compile-time
> constants?
No. They are guaranteed to be link-time constants.
> Is this defined by the MPI standard, or in the Open MPI implementation?
MPI standard:
for the interruption, folks.
> On Aug 26, 2018, at 3:22 PM, Jeff Squyres (jsquyres)
> wrote:
>
> The lists.open-mpi.org server went offline due to an outage at our hosting
> provider sometime in the evening on Aug 22 / early morning Aug 23 (US Eastern
> time). As of yesterday morning
The lists.open-mpi.org server went offline due to an outage at our hosting
provider sometime in the evening on Aug 22 / early morning Aug 23 (US Eastern
time). As of yesterday morning (Saturday, Aug 25), the list server now appears
to be back online; I've seen at least a few backlogged emails
On Aug 22, 2018, at 11:49 AM, Diego Avesani wrote:
>
> I have a philosophical question.
>
> I am reading a lot of papers where people use Portable Batch System or job
> scheduler in order to parallelize their code.
>
> What are the advantages in using MPI instead?
It depends on the code in
I think Gilles is right: remember that datatypes like MPI_2DOUBLE_PRECISION are
actually 2 values. So if you want to send 1 pair of double precision values
with MPI_2DOUBLE_PRECISION, then your count is actually 1.
> On Aug 22, 2018, at 8:02 AM, Gilles Gouaillardet
> wrote:
>
> Diego,
>
>
The lists.open-mpi.org server went offline due to an outage at our hosting
provider sometime in the evening on Aug 22 / early morning Aug 23 (US Eastern
time). The list server now appears to be back online; I've seen at least a few
backlogged emails finally come through.
If you sent a mail in
I'm afraid the error message you're getting is from libibverbs; it's trying to
load a plugin named libsmartio-rdmav17.so. That's not part of Open MPI, sorry.
That likely means that some dependency of libsmartio-rdmav17.so wasn't found,
and the run-time loading of the plugin failed (vs. not
config.log and the ompi_info
>
> Thanks.
>
> On Wed, Aug 15, 2018 at 11:46 AM, Jeff Squyres (jsquyres) via users
> wrote:
> There can be lots of reasons that this happens. Can you send all the
> information listed here?
>
> https://www.open-mpi.org/community/he
There can be lots of reasons that this happens. Can you send all the
information listed here?
https://www.open-mpi.org/community/help/
> On Aug 15, 2018, at 10:55 AM, Mota, Thyago wrote:
>
> Hello.
>
> I have openmpi 2.0.4 installed on a Cent OS 7. When I try to run "mpirun" it
>
On Aug 12, 2018, at 2:18 PM, Diego Avesani wrote:
>
> Dear all, Dear Jeff,
> I have three communicator:
>
> the standard one:
> MPI_COMM_WORLD
>
> and other two:
> MPI_LOCAL_COMM
> MPI_MASTER_COMM
>
> a sort of two-level MPI.
>
> Suppose we have 8 threads,
> I use 4 threads to run the same
On Aug 10, 2018, at 6:27 PM, Diego Avesani wrote:
>
> The question is:
> Is it possible to have a barrier for all CPUs despite they belong to
> different group?
> If the answer is yes I will go in more details.
By "CPUs", I assume you mean "MPI processes", right? (i.e., not threads inside
an
two cents from a pedestrian MPI user,
> who thinks minloc and maxloc are great,
> knows nothing about the MPI Forum protocols and activities,
> but hopes the Forum pays attention to users' needs.
>
> Gus Correa
>
> PS - Jeff S.: Please, bring Diego's request to the Forum! Add
o deprecate MPI_{MIN,MAX}LOC, they should start that
>> discussion on https://github.com/mpi-forum/mpi-issues/issues or
>> https://lists.mpi-forum.org/mailman/listinfo/mpiwg-coll.
>> Jeff
>> On Fri, Aug 10, 2018 at 10:27 AM, Jeff Squyres (jsquyres) via users
>> ma
I'm not quite clear what the problem is that you're running in to -- you just
said that there is "some problem with MPI_barrier".
What problem, exactly, is happening with your code? Be as precise and specific
as possible.
It's kinda hard to tell what is happening in the code snippet below
It is unlikely that MPI_MINLOC and MPI_MAXLOC will go away any time soon.
As far as I know, Nathan hasn't advanced a proposal to kill them in MPI-4,
meaning that they'll likely continue to be in MPI for at least another 10
years. :-)
(And even if they did get killed in MPI-4, implementations
On Aug 2, 2018, at 4:40 PM, Grove, John W wrote:
>
> I am compiling an application using openmpi 3.1.1. The application is mixed
> Fortran/C/C++. I am using the intel compiler on a mac pro running OS 10.13.6.
> When I try to use the mpi_f08 interface I get unresolved symbols at load
> time,
Do you need to check it with Java, or will any MPI application do?
If any language will do, you might want to check out the OSU MPI benchmarks:
http://mvapich.cse.ohio-state.edu/benchmarks/
> On Jul 27, 2018, at 10:54 AM, John Bauman wrote:
>
> Hello everyone,
>
> I just want to start
PM, Noam Bernstein
> wrote:
>
>> On Jul 12, 2018, at 11:58 AM, Jeff Squyres (jsquyres)
>> wrote:
>>
>>
>>
>> (You may have already done this; I just want to make sure we're on the same
>> sheet of music here…)
>
> I’m not talking
On Jul 12, 2018, at 11:45 AM, Noam Bernstein
wrote:
>
>> E.g., if you "ulimit -c" in your interactive shell and see "unlimited", but
>> if you "ulimit -c" in a launched job and see "0", then the job scheduler is
>> doing that to your environment somewhere.
>
> I am using a scheduler
On Jul 12, 2018, at 10:59 AM, Noam Bernstein
wrote:
>
>> Do you get core files?
>>
>> Loading up the core file in a debugger might give us more information.
>
> No, I don’t, despite setting "ulimit -c unlimited”. I’m not sure what’s
> going on with that (or the lack of line info in the
Do you get core files?
Loading up the core file in a debugger might give us more information.
> On Jul 12, 2018, at 9:35 AM, Noam Bernstein
> wrote:
>
>
>> On Jul 12, 2018, at 8:37 AM, Noam Bernstein
>> wrote:
>>
>> I’m going to try the 3.1.x 20180710 nightly snapshot next.
>
> Same
-debug'
'--enable-mem-debug' '--enable-picky'
Internal debug support: yes
Memory debugging support: yes
C/R Enabled Debugging: no
That should tell you whether you have debug support or not.
> On Jul 11, 2018, at 5:25 PM, Noam Bernstein
> wrote:
>
>> On Jul 11, 2018, at 11:29 A
Ok, that would be great -- thanks.
Recompiling Open MPI with --enable-debug will turn on several debugging/sanity
checks inside Open MPI, and it will also enable debugging symbols. Hence, if
you can get a failure with a debug Open MPI build, it might give you a core
file that can be used to
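A sketch of such a rebuild (the prefix and -j value are placeholders, not required settings):

```shell
# Rebuild Open MPI with debugging checks and symbols enabled;
# $HOME/ompi-debug is only an illustrative install prefix.
./configure --prefix=$HOME/ompi-debug --enable-debug
make -j 4 all install

# Allow core dumps before re-running the failing application.
ulimit -c unlimited
```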
Can you send the full verbose output with "--mca btl_base_verbose 100"?
> On Jul 4, 2018, at 4:36 PM, carlos aguni wrote:
>
> Hi Gilles.
>
> Thank you for your reply! :)
> I'm now using a compiled version of OpenMPI 3.0.2 and all seems to work fine
> now.
> Running `mpirun -n 3 -host
Greetings Matt.
https://github.com/open-mpi/ompi/commit/4d126c16fa82c64a9a4184bc77e967a502684f02
is the specific commit where the fixes came in.
Here's a little creative grepping that shows the APIs affected (there's also
some callback function signatures that were fixed, too, but they're
Simon --
You don't currently have another Open MPI installation in your PATH /
LD_LIBRARY_PATH, do you?
I have seen dependency library loads cause "make check" to get confused, and
instead of loading the libraries from the build tree, actually load some -- but
not all -- of the required
On Jun 22, 2018, at 7:36 PM, carlos aguni wrote:
>
> I'm trying to run a code on 2 machines that has at least 2 network interfaces
> in it.
> So I have them as described below:
>
> compute01
> compute02
> ens3
> 192.168.100.104/24
> 10.0.0.227/24
> ens8
> 10.0.0.228/24
> 172.21.1.128/24
> ens9
You already asked this question on the devel list, and I've asked you for more
information.
Please don't just re-post your question over here on the user list and expect
to get a different answer.
Thanks!
> On Jun 22, 2018, at 4:55 PM, lille stor wrote:
>
> Hi,
>
>
> When compiling a
> I will help with the OFI part.
>
> Thanks,
> _MAC
>
> -Original Message-
> From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Jeff
> Squyres (jsquyres) via users
> Sent: Thursday, June 14, 2018 12:50 PM
> To: Open MPI User's List
> C
eeling more than a little ignorant these days. :)
>
> Thanks to all for the responses. It has been a huge help.
>
> Charlie
>
>> On Jun 14, 2018, at 1:18 PM, Jeff Squyres (jsquyres) via users
>> wrote:
>>
>> Charles --
>>
>> It may have gott
Charles --
It may have gotten lost in the middle of this thread, but the
vendor-recommended way of running on InfiniBand these days is with UCX. I.e.,
install OpenUCX and use one of the UCX transports in Open MPI. Unless you have
special requirements, you should likely give this a try and
On Jun 8, 2018, at 11:38 AM, Bennet Fauber wrote:
>
> Hmm. Maybe I had insufficient error checking in our installation process.
>
> Can you make and make install after the configure fails? I somehow got an
> installation, despite the configure status, perhaps?
If it's a fresh tarball
Hmm. I'm confused -- can we clarify?
I just tried configuring Open MPI v3.1.0 on a RHEL 7.4 system with the RHEL
hwloc RPM installed, but *not* the hwloc-devel RPM. Hence, no hwloc.h (for
example).
When specifying an external hwloc, configure did fail, as expected:
-
$ ./configure
Siegmar --
I asked some Fortran gurus, and they don't think that there is any restriction
on having ASYNCHRONOUS and INTENT on the same line. Indeed, Open MPI's
definition of MPI_ACCUMULATE seems to agree with what is in MPI-3.1.
Is this a new version of a Fortran compiler that you're using,
Alexander --
I don't know offhand if 2.0.2 was faulty in this area. We usually ask users to
upgrade to at least the latest release in a given series (e.g., 2.0.4) because
various bug fixes are included in each sub-release. It wouldn't be much use to
go through all the effort to make a proper
> On May 29, 2018, at 10:30 PM, Kaiming Ouyang wrote:
>
> I have a question about recompiling openmpi.
> Recently I updated the infiniband driver for network card Mellanox, but I
> found original openmpi did not work anymore. Does this mean the driver update
> must be followed by recompiling
If your Linux distro does not have an Open MPI package readily available, you
can build Open MPI from source for an RPi fairly easily. Something like this
(not tested on an RPi / YMMV):
wget https://download.open-mpi.org/release/open-mpi/v3.1/openmpi-3.1.0.tar.bz2
tar xf openmpi-3.1.0.tar.bz2
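A typical continuation of those steps (all paths and flags below are illustrative defaults, untested on an RPi / YMMV):

```shell
cd openmpi-3.1.0
./configure --prefix=$HOME/openmpi   # placeholder prefix
make -j 2 all install                # an RPi has few cores; adjust -j
export PATH=$HOME/openmpi/bin:$PATH
export LD_LIBRARY_PATH=$HOME/openmpi/lib:$LD_LIBRARY_PATH
```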
with the data you requested? It's just a
> tad bigger.
>
> Best regards.
>
> Alexander
>
> On Wed, May 23, 2018 at 5:24 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
> wrote:
> Alexander --
>
> Can you provide some more detail? The information listed her
Alexander --
Can you provide some more detail? The information listed here would be helpful:
https://www.open-mpi.org/community/help/
> On May 23, 2018, at 7:38 AM, Alexander Supalov
> wrote:
>
> Hi everybody,
>
> I've observed the process binding
There are two issues:
1. You should be using MPI.C_COMPLEX, not MPI.COMPLEX. MPI.COMPLEX is a
Fortran datatype; MPI.C_COMPLEX is the C datatype (which is what NumPy is using
behind the scenes).
2. Somehow the received B values are different between the two.
I derived this program from your
On May 15, 2018, at 1:39 AM, Max Mellette wrote:
>
> Thanks everyone for all your assistance. The problem seems to be resolved
> now, although I'm not entirely sure why these changes made a difference.
> There were two things I changed:
>
> (1) I had some additional `export
Yes, that "T" state is quite puzzling. You didn't attach a debugger or hit the
ssh with a signal, did you?
(we had a similar situation on the devel list recently, but it only happened
with a very old version of Slurm. We concluded that it was a SLURM bug that
has since been fixed. And just
On May 4, 2018, at 1:08 PM, Max Mellette wrote:
>
> I'm trying to set up OpenMPI 3.0.1 on a pair of linux machines, but I'm
> running into a problem where mpirun hangs when I try to execute a simple
> command across the two machines:
>
> $ mpirun --host b09-30,b09-32
Can you provide some more detail? I'm not able to get this to fail (i.e., it
seems to be working as expected for me).
For example, what's the contents of your /etc/openmpi/openmpi-default-hostfile
-- did you list some hostnames in there?
> On May 11, 2018, at 4:43 AM, Konstantinos
It looks like you're getting a segv when calling MPI_Comm_rank().
This is quite unusual -- MPI_Comm_rank() is just a local lookup / return of an
integer. If MPI_Comm_rank() is seg faulting, it usually indicates that there's
some other kind of memory error in the application, and this seg fault
FWIW, setting CFLAGS (or whatever other env variables you want) on the
configure command line has the secondary benefit of recording that info in the
first few lines of config.log. I.e., "head config.log" will show you the exact
command line you used to configure Open MPI, including any
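For example (the flags shown are only placeholders):

```shell
# Passing CFLAGS as a configure argument records it in config.log.
./configure CFLAGS="-O3 -g" --prefix=$HOME/ompi
# The first few lines of config.log echo the full configure command:
head config.log
```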
On Apr 23, 2018, at 11:00 AM, Marshall2, John (SSC/SPC)
wrote:
>
> Only one ib interface shows up via ifconfig and at /sys/class/net/ibX.
>
> But, under /sys/class/infiniband and /sys/class/infiniband_cm, all the mlx4_Y
> do show
> up. E.g.,
> mlx4_0  mlx4_10
On Apr 20, 2018, at 1:03 PM, Marshall2, John (SSC/SPC)
wrote:
>
> I am trying to verify/determine what the proper setting is for
> btl_openib_ib_include.
I think you mean btl_openib_if_include ("if" = "interface").
> Some background:
> * openmpi 2.1.1 (and 1.6.5 -
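A hedged example of the corrected parameter name (the device name, port, and process count are placeholders):

```shell
# Restrict the openib BTL to one HCA...
mpirun --mca btl_openib_if_include mlx4_0 -np 16 ./my_app
# ...or to one port on that HCA, using device:port syntax:
mpirun --mca btl_openib_if_include mlx4_0:1 -np 16 ./my_app
```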
Can you send all the information listed here:
https://www.open-mpi.org/community/help/
> On Apr 22, 2018, at 2:28 PM, Amir via users wrote:
>
> Hi everybody,
>
> After having some problems with setting up the debugging environment for
> Visual Studio 10, I am
On Apr 10, 2018, at 9:03 AM, Michael Di Domenico wrote:
>
>> We've actually been arguing about exactly how to do this for quite a while.
>> It's complicated (I can explain further, if you care). :-\
>
> i have no doubt its complicated. i'm not overly interested in
Are you 100% sure that you are not accidentally mixing and matching multiple
versions of Open MPI in the same job? This type of error (PMIX bad param) is
typical when you accidentally use Open MPI vXYZ on one node and Open MPI vABC
on a different node.
> On Apr 4, 2018, at 12:20 AM, abhisek
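One quick way to check for such mixing (host names are placeholders) is to ask every node which Open MPI it resolves:

```shell
# Compare the resolved mpirun path and version across all nodes;
# they should be identical everywhere.
for h in node1 node2; do
  ssh $h 'which mpirun; mpirun --version | head -1'
done
```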
On Apr 6, 2018, at 8:12 AM, Michael Di Domenico wrote:
>
> so the resulting warnings i get
>
> mca_btl_openib: librdmacm.so.1
> mca_btl_usnic: libfabric.so.1
> mca_oob_ud: libibverbs.so.1
> mca_mtl_mxm: libmxm.so.2
> mca_mtl_ofi: libfabric.so.1
> mca_mtl_psm:
Can you please send all the information listed here:
https://www.open-mpi.org/community/help/
Thanks!
> On Apr 6, 2018, at 8:27 AM, Ankita m wrote:
>
> Hello Sir/Madam
>
> I am Ankita Maity, a PhD scholar from Mechanical Dept., IIT Roorkee, India
>
> I am
Greetings, and welcome to the wonderful world of MPI. :-)
First thing to note is that there are multiple different software packages that
implement the MPI specification. Open MPI -- the mailing list that you sent to
-- is one of them. MPICH, from Argonne National Labs, is another.
From the
On Apr 4, 2018, at 12:58 PM, Quentin Faure wrote:
>
> Sorry, I did not see my autocorrect changed some word.
>
> I added the -l and it did not change anything. Also the mpicxx —showme does
> not work. It says that the option —showme does not exist
If 'mpicxx --showme'
On Apr 2, 2018, at 1:39 PM, dpchoudh . wrote:
>
> Sorry for a pedantic follow up:
>
> Is this (heterogeneous cluster support) something that is specified by
> the MPI standard (perhaps as an optional component)?
The MPI standard states that if you send a message, you should
uldn't be needed as the wrapper will do that for you. You can see what the
> wrapper passes to gcc by running: mpicxx --showme.
>
> -Nathan
>
> On Apr 03, 2018, at 02:16 PM, Quentin Faure <quentin...@hotmail.fr> wrote:
>
>> Hello,
>>
>>>
>>
On Apr 2, 2018, at 10:03 AM, abhisek Mondal wrote:
>
> I just have installed openmpi-3.0.1. Installation seemed to finish without
> any errors. However, I'm facing few issues while running a job.
Can you be more specific?
> This is what is printed after configuration:
On Mar 31, 2018, at 10:57 PM, peng jia wrote:
>
> I would like to run some MPI code on a cluster made of a normal laptop and an
> ARM-architecture Raspberry Pi, but without success: the system would not
> respond, even though I installed openmpi manually on both the PC and
On Mar 29, 2018, at 11:19 AM, Quentin Faure wrote:
>
> I would like to use openmpi with a software called LAMMPS. I know it is
> possible when compiling the software to indicate it to use it with openmpi.
> However when I do that I have a warning message telling me that
On Mar 29, 2018, at 7:14 AM, Maxime Boissonneault
wrote:
>
> If the C++ MPI bindings had been similar to Boost MPI, they would probably
> have been adopted more widely and may still be alive.
FYI: the initial C++ bindings that were proposed (by me!) to
> Am 29.03.2018 um 09:58 schrieb Florian Lindner:
>> #define MPI_BOOL MPI_Select_unsigned_integer_datatype::datatype
>>
>> It redefines MPI_BOOL based on the size of bool. I wonder if this is needed
>> and why?
>>
>> I was speculating that the compiler could pack multiple bools in
You need to configure with --enable-mpi-cxx -- the Open MPI C++ bindings are
not built by default.
Can I ask what software you are using that requires the MPI C++ bindings?
I ask because the MPI C++ bindings were deprecated in the MPI-2.2 standard in
2009, and were formally deleted from the
I replied:
https://github.com/Microsoft/vscode-cpptools/issues/1723#issuecomment-376479335
> On Mar 27, 2018, at 1:35 AM, Denis Davydov wrote:
>
> Dear all,
>
> Developers of Cpptools plugin for Visual Studio Code (a free source code
> editor developed by Microsoft for
To follow up for the web thread: I talked with Tim about this off-list.
Not only will Open MPI likely not work in Tim's environment, MPI itself is
probably too much for what he's trying to do.
> On Mar 21, 2018, at 10:49 PM, Ralph H Castain wrote:
>
> I don’t see how Open
o try to patch mpi.h file, but it fails during the patching process
> for new version openmpi. I don't know the reason yet, and will check it soon.
> Thank you.
>
> Kaiming Ouyang, Research Assistant.
> Department of Computer Science and Engineering
> University of California, Ri
On Mar 19, 2018, at 11:32 PM, Kaiming Ouyang wrote:
>
> Thank you.
> I am using newest version HPL.
> I forgot to say I can run HPL with openmpi-3.0 under infiniband. The reason I
> want to use old version is I need to compile a library that only supports old
> version
ith PID 46005 on node test-ib exited on
> signal 15 (Terminated).
>
> Hope you can give me some suggestions. Thank you.
>
> Kaiming Ouyang, Research Assistant.
> Department of Computer Science and Engineering
> University of California, Riverside
> 900 University Av
That's actually failing in a shared memory section of the code.
But to answer your question, yes, Open MPI 1.2 did have IB support.
That being said, I have no idea what would cause this shared memory segv --
it's quite possible that it's simple bit rot (i.e., v1.2.9 was released 9 years
ago --
(Sending this to the users list, not to just the owner of the users list)
It looks like you might have installed Open MPI correctly.
But you have to give some command line options to mpirun to tell it what to do
-- you're basically getting an error saying "you didn't tell me what to do, so
I
Let's chat about it in person next week.
We can try converting at the same time.
> On Mar 13, 2018, at 10:14 PM, Kawashima, Takahiro
> wrote:
>
> Jeff,
>
> Thank you. I received the password. I cannot remember I had received it
> before...
>
> My colleague was
Yes, it's trivial to reset the Fujitsu MTT password -- I'll send you a mail
off-list with the new password.
If you're just starting up with MTT, you might want to use the Python client,
instead. That's where 95% of ongoing development is occurring.
If all goes well, I plan to sit down with
Pharthiphan --
No need to cross-post the same question in three places (GitHub issue, this
list, and the devel list).
Let's keep the thread on the devel list, where the first parts of your
questions have already been answered.
Thanks.
> On Mar 13, 2018, at 11:30 AM, Pharthiphan Asokan
Check out this FAQ item:
https://www.open-mpi.org/faq/?category=running#run-prereqs
> On Mar 2, 2018, at 1:36 PM, r...@open-mpi.org wrote:
>
> Not that I’ve heard - you need to put it in your LD_LIBRARY_PATH
>
>
>> On Mar 2, 2018, at 10:15 AM, Mahmood Naderan
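A sketch of the usual environment setup, assuming an illustrative install prefix of /opt/openmpi:

```shell
# Make the Open MPI binaries and shared libraries findable at run time.
export PATH=/opt/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH
```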
On 02/28/2018 12:10 PM, Jeff Squyres (jsquyres) wrote:
>> Oops; it looks like there's 2 chunks of usNIC code in the 1.10.x code base,
>> and --without-usnic only disables one of them.
>> I do believe we fixed that in a later 1.10.x release -- I am guessing you
>> don't wan
On Feb 28, 2018, at 1:31 PM, Justin Luitjens wrote:
> Here is the error I see:
>
> make[2]: Entering directory
> '/tmpnfs/jluitjens/libs/src/openmpi-3.0.0/opal/mca/crs'
> CC base/crs_base_open.lo
> GENERATE opal_crs.7
> CC base/crs_base_select.lo
> CC
Oops; it looks like there's 2 chunks of usNIC code in the 1.10.x code base, and
--without-usnic only disables one of them.
I do believe we fixed that in a later 1.10.x release -- I am guessing you don't
want to upgrade to v3.0.x for compatibility/testing reasons, but do you think
you could
It sounds like he installed his distro package for Open MPI, and they
configured / built it less-than-optimally.
> On Jan 31, 2018, at 6:04 PM, Gilles Gouaillardet
> wrote:
>
> Joshua,
>
> Can you extract the configure command line that was used when building
loads MPI-IO support
the first time it is used).
> On Jan 30, 2018, at 5:20 PM, Vahid Askarpour <vh261...@dal.ca> wrote:
>
> No, I installed this version of openmpi once and with intel14.
>
> Vahid
>
>> On Jan 30, 2018, at 4:41 PM, Jeff Squyres (jsquyres) <jsq
0.x), I loaded the fortran
>>>>>>>>>> compiler module, configured with only the “--prefix="
>>>>>>>>>> and then “make all install”. I did not enable or disable any other
>>>>>>>>>> options.
On Jan 30, 2018, at 11:51 AM, n8tm via users wrote:
>
> Most Linux distros ignore repetition of forward slash. Sometimes used as
> excuse for sloppiness.
That path is generated during configure. Meaning: some CLI arg to configure
was (probably) given as
Joshua -
Can you describe how you installed Open MPI?
I think the exact command you used to configure Open MPI will be important.
Sent from my phone. No type good.
On Jan 30, 2018, at 10:30 AM, Joshua Wall
> wrote:
Hello users,
I
Ben --
Did you not see Jeff Hammond's reply earlier today?
https://www.mail-archive.com/users@lists.open-mpi.org//msg31964.html
> On Jan 24, 2018, at 5:40 PM, Benjamin Brock wrote:
>
> Recently, when I try to run something locally with OpenMPI with more than two
>
On Jan 23, 2018, at 8:33 AM, Gilles Gouaillardet
wrote:
>
> There used to be a bug in the IOF part, but I am pretty sure this has already
> been fixed.
Gilles: can you cite what you're talking about?
Edgar was testing on master, so if there was some kind of IOF
On Jan 18, 2018, at 5:53 PM, Vahid Askarpour wrote:
>
> My openmpi3.0.x run (called nscf run) was reading data from a routine Quantum
> Espresso input file edited by hand. The preliminary run (called scf run) was
> done with openmpi3.0.x on a similar input file also edited by