Hi everyone,
I'm trying to use MPI from an unconventional programming language, Forth, running on Debian
Linux. The Forth system I have can import a Linux shared library in .so format and then compile
the exported functions in as externals. The question: how do I do it? I'm looking to
Eric,
another option is to use MTT: https://github.com/open-mpi/mtt
It can download/build/install the latest tarball, download/build/install
your code, and run it.
Results are uploaded into a database, and you can browse them via
a web server.
This is not quite lightweight, but i
The following may be a viable alternative. Just a suggestion.
git clone --depth 10 -b v2.x https://github.com/open-mpi/ompi-release.git open-mpi-v2.x
Jeff
On Wed, Jun 22, 2016 at 8:30 PM, Eric Chamberland <
eric.chamberl...@giref.ulaval.ca> wrote:
Excellent!
I will put all in place, then try both URLs and see which one is
"manageable" for me!
Thanks,
Eric
On 22/06/16 01:49 PM, Jeff Squyres (jsquyres) wrote:
We have a similar mechanism already (that is used by the Open MPI community for nightly
regression testing), but with the advantage that it will give you a unique download
filename (vs. "openmpi-v2.x-latest.bz2" every night). Do this:
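A sketch of what that could look like (the URL layout and the latest_snapshot.txt file name are assumptions about the Open MPI nightly download area, not verified here; the snapshot value below is a made-up example):

```shell
# Sketch: build a uniquely-named nightly tarball URL (layout is an assumption)
BASE=https://download.open-mpi.org/nightly/open-mpi/v2.x
# snapshot=$(wget -qO- "$BASE/latest_snapshot.txt")   # real fetch would go here
snapshot=v2.x-dev-1234-gabcdef                        # example value only
tarball="openmpi-${snapshot}.tar.bz2"
echo "$BASE/$tarball"
```

The point is that each night's file name embeds the snapshot id, so nightly downloads never collide.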
On Jun 22, 2016, at 1:33 PM, Eric Chamberland
wrote:
>
> I would like to do compile+test our code each night with the "latest" openmpi
> v2 release (or nightly if enough stable).
Cool!
> Just to ease the process, I would like to "wget" the latest archive
Hi,
I would like to compile and test our code each night with the "latest"
openmpi v2 release (or a nightly build, if it is stable enough).
Just to ease the process, I would like to "wget" the latest archive with
a "permanent" link...
Is it feasible for you to just put a symlink or something like it so I
Good morning, Dave,
Amongst the reasons for not running Docker, a major one that I didn't see
raised is that containers are not started by the resource manager, but
by a privileged daemon, so the resource manager can't directly control
or monitor them.

There's an endless debate
Rob Nagler writes:
> Thanks, John. I sometimes wonder if I'm the only one out there with this
> particular problem.
>
> Ralph, thanks for sticking with me. :) Using a pool of uids doesn't really
> work due to the way cgroups/containers work. It also would require
>
Hi Sreenidhi,
We use predominantly Mellanox HCAs (ConnectX-3), all connected to a giant
QLogic QDR switch. We have QDR/FDR Mellanox and QLogic switches in the
mix, but everything is managed by a single subnet manager. We have had
problems with both the Mellanox and RHEL OFED stacks in the past due
"Llolsten Kaonga" writes:
> Hello Grigory,
>
> I am not sure what Redhat does exactly but when you install the OS, there is
> always an InfiniBand Support module during the installation process. We
> never check/install that module when we do OS installations because it is
>
I imagine we can always provide more info - what specifically did you want to
know about? The only frameworks removed were associated with the
checkpoint/restart code, and that was done because the code is stale/dead
> On Jun 22, 2016, at 6:40 AM, Dave Love wrote:
I know it's not traditional, but is there any chance of complete
documentation of the important changes in v2.0? Currently NEWS mentions
things like minor build issues, but there's nothing, for instance, on
the addition and removal of whole frameworks, one of which I've been
trying to understand.
Sure - for example, if you intend to run 4 threads, then --map-by core:pe=4
(assuming you are running OMPI 1.10 or higher) will bind each process to 4
cores in a disjoint pattern (i.e., no sharing).
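For instance, a hybrid launch along these lines (a sketch; the rank count and ./hybrid_app are placeholders):

```shell
# 2 ranks, each bound to 4 disjoint cores; each rank can then run
# 4 OpenMP threads without sharing cores with the other rank
mpirun -np 2 --map-by core:pe=4 ./hybrid_app
```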
> On Jun 22, 2016, at 3:37 AM, Gilles Gouaillardet
> wrote:
>
Thanks for the info, I updated https://github.com/open-mpi/ompi/issues/1809
accordingly.
FWIW, the bug occurs when addresses do not fit in 32 bits.
For some reason, I always run into it on OSX but not on Linux, unless I
use dmalloc.
I replaced malloc with alloca (and removed the free) so I always hit
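The failure mode can be illustrated with plain shell arithmetic (a sketch of the general 32-bit truncation issue, not Open MPI's actual code path):

```shell
# 2^32 is the first address that needs more than 32 bits
addr=$(( 0x100000000 ))           # 4294967296
trunc=$(( addr & 0xFFFFFFFF ))    # what survives in a 32-bit field
echo "$addr truncates to $trunc"  # the high bit is silently lost
```

Any code that stashes a pointer in a 32-bit field works by luck until the allocator hands back an address above 4 GiB.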
My point is, the way I (almost) always use it is:
export KMP_AFFINITY=compact,granularity=fine
The trick is that I rely on Open MPI and/or the batch manager to pin MPI tasks to
disjoint core sets.
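As a sketch of the batch-manager side of that (assuming SLURM; the rank count, core count, and ./hybrid_app are placeholders):

```shell
export KMP_AFFINITY=compact,granularity=fine
# SLURM allocates 8 cores per task; each rank's OpenMP threads are then
# pinned compactly within its own core set
srun -n 2 --cpus-per-task=8 ./hybrid_app
```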
that is obviously not the case with
mpirun --bind-to none ...
but that can be achieved with the
On Wed, Jun 22, 2016 at 11:58:25AM +0900, Gilles Gouaillardet wrote:
> Nicolas,
>
> can you please give the attached patch a try ?
>
> in my environment, it fixes your test case.
Yes! It does here too ...
I just patched ADIOI_NFS_WriteStrided() using the same fix. And the
original tool that
I have installed openmpi-1.10.1 on a system, and while executing one of the
example codes of OpenSHMEM I am getting an error. A snapshot of the error is
attached. Please help me sort out this error.

Regards,
Ryan Saptarshi Ray
My bad, I was assuming KMP_AFFINITY was used.
So let me put it this way:
do *not* use KMP_AFFINITY with mpirun --bind-to none; otherwise, you will
very likely end up doing time sharing ...
Cheers,
Gilles
On 6/22/2016 5:07 PM, Jeff Hammond wrote:
Linux should not put more than one thread on a core if there are free
cores. Depending on cache/bandwidth needs, it may or may not be better to
colocate on the same socket.
KMP_AFFINITY will pin the OpenMP threads. This is often important for MKL
performance. See
Remi,
Keep in mind this is still suboptimal.
If you run 2 tasks per node, there is a risk that threads from different
ranks end up bound to the same core, which means time sharing and a drop
in performance.
Cheers,
Gilles
On 6/22/2016 4:45 PM, remi marchal wrote:
Dear Gilles,
Thanks a lot.
Running with mpirun --bind-to none solved the problem.
Thanks a lot,
Regards,
Rémi
On 22 June 2016 at 09:34, Gilles Gouaillardet wrote:
Remi,
in the same environment, can you
mpirun -np 1 grep Cpus_allowed_list /proc/self/status
It is likely Open MPI allows only one core, and in this case I suspect
MKL refuses to do any time sharing and hence transparently reduces the
number of threads to 1.
/* unless it *does* time
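The suggested check also runs outside of mpirun (Linux-only sketch; under "mpirun -np 1" it reports the binding Open MPI applied to the rank):

```shell
# Print the set of cores this process is allowed to run on, e.g.
# "Cpus_allowed_list:  0-7"
grep Cpus_allowed_list /proc/self/status
```

A single core in that list would explain MKL silently dropping to one thread.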
Dear Open MPI users,
Today, I faced a strange problem.
I am compiling a quantum chemistry software package (CASTEP-16) using intel16, MKL
threaded libraries and openmpi-18.1.
The compilation works fine.
When I ask for MKL_NUM_THREADS=4 and call the program in serial mode (without
mpirun), it works