Can you try a newer version of OMPI, say the 3.0.0 release? Just curious to
know if we perhaps “fixed” something relevant.
> On Oct 3, 2017, at 5:33 PM, Anthony Thyssen wrote:
FYI...
The problem is discussed further in
Redhat Bugzilla: Bug 1321154 - numa enabled torque don't work
https://bugzilla.redhat.com/show_bug.cgi?id=1321154
I'd seen this previously, as it required me to add "num_node_boards=1" to each
node in /var/lib/torque/server_priv/nodes to get
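For reference, the workaround from that bug report amounts to a nodes-file entry along these lines (the hostname and np count here are placeholders, not from the report):

```
# /var/lib/torque/server_priv/nodes
node01 np=12 num_node_boards=1
```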
Thank you Gilles. At least I now have something to follow through with.
As an FYI, the torque is the pre-built version from the Red Hat Extras (EPEL)
archive:
torque-4.2.10-10.el7.x86_64
Normally pre-built packages have no problems, but not in this case.
On Tue, Oct 3, 2017 at 3:39 PM, Gilles
Not sure exactly how, but I'm back up and running with all cores. Thanks to
all. Hopefully the new binary for Ubuntu with Open MPI 3.0.0 won't be too
far away.
Very Best,
J
On 3 October 2017 at 21:06, r...@open-mpi.org wrote:
You can add it to the default MCA param file, if you want -
/etc/openmpi-mca-params.conf
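If you go that route, the file takes one "param = value" per line; a sketch, using the parameter name that appears elsewhere in this thread (minus the OMPI_MCA_ environment prefix):

```
# /etc/openmpi-mca-params.conf
hwloc_base_use_hwthreads_as_cpus = 1
```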
> On Oct 3, 2017, at 12:44 PM, Jim Maas wrote:
Thanks RHC where do I put that so it will be in the environment?
J
On 3 October 2017 at 16:01, r...@open-mpi.org wrote:
> As Gilles said, we default to slots = cores, not HTs. If you want to treat
> HTs as independent cpus, then you need to add
>
I'm building on ARMv8 (64bit kernel, ompi master) and so far no problems.
On Wed, Sep 27, 2017 at 7:34 AM, Jeff Layton wrote:
> I could never get OpenMPI < 2.x to build on a Pi 2. I ended up using the
> binary from the repos. Pi 3 is a different matter - I got that to build
>
As Gilles said, we default to slots = cores, not HTs. If you want to treat HTs
as independent cpus, then you need to add
OMPI_MCA_hwloc_base_use_hwthreads_as_cpus=1 in your environment.
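As a minimal sketch of that environment-variable route (the mpirun line is an assumed example, shown commented):

```shell
# Treat hardware threads as independent cpus/slots
# (parameter name taken from the message above)
export OMPI_MCA_hwloc_base_use_hwthreads_as_cpus=1

# mpirun -np 12 ./my_app   # hypothetical app; 12 hwthread slots on 6 cores
echo "$OMPI_MCA_hwloc_base_use_hwthreads_as_cpus"
```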
> On Oct 3, 2017, at 7:27 AM, Jim Maas wrote:
Tried this and got this error, and slots are available, nothing else is
running.
> cl <- startMPIcluster(count=7)
--
There are not enough slots available in the system to satisfy the 7 slots
that were requested by the
Previously it worked fine if I asked for 12. I'm sure you are correct: it is
only 6 physical cores, but with hyperthreading it looks like 12. The system
monitor shows 12.
Thanks
J
On 3 October 2017 at 15:07, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:
Thanks, I will have a look at it.
By default, a slot is a core, so there are 6 slots on your system.
Could your app spawn 6 procs on top of the initial proc? That would be 7 slots,
and there are only 6.
What if you ask for 5 slots only?
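The slot accounting being described, as a sketch (the counts come from this thread: 6 default slots, a 7-slot request):

```shell
# Default slot accounting: one slot per physical core
slots=6       # 6 physical cores -> 6 slots by default
request=7     # e.g. 6 spawned workers on top of the initial proc
if [ "$request" -gt "$slots" ]; then
  echo "not enough slots: requested $request, have $slots"
fi
```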
With some parameters I do not know off hand, you could either
Thanks Gilles, relative noob here at this level, apologies if nonsensical!
I removed previous versions of Open MPI (compiled from source) using
sudo make uninstall ...
downloaded the new open-mpi 3.0.0 tar.gz
configure --disable-dlopen
sudo make install
then ran sudo ldconfig
updated
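The rebuild steps above, collected as a dry-run sketch (the tarball name follows the 3.0.0 version mentioned; any flags beyond --disable-dlopen are assumptions; echoed rather than executed):

```shell
# Echo each step of the from-source rebuild sequence described above
for step in \
  "tar xzf openmpi-3.0.0.tar.gz && cd openmpi-3.0.0" \
  "./configure --disable-dlopen" \
  "make && sudo make install" \
  "sudo ldconfig   # refresh the dynamic linker cache"
do
  echo "$step"
done
```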
Hi Jim,
can you please provide minimal instructions on how to reproduce the issue?
We know Open MPI, but I am afraid few if any of us know about Rmpi or doMPI.
Once you explain how to download and build these, and how to run the
failing test,
we'll be able to investigate that.
Also, can you
I've used this for years; I just updated Open MPI to 3.0.0 and reloaded R, and
have reinstalled doMPI and thus Rmpi, but when I try to use startMPIcluster,
asking for 6 slots (there are 12 on this machine), I get this error. Where
can I start to debug it?
Thanks
J