Not per se, but you could set the #slots to 12, and then add
rmaps_base_no_oversubscribe = 1
to your default MCA param file. That would do what you describe.
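Something along these lines (a sketch only; the hostfile name and param-file
path are just examples):

    # hostfile: cap the machine at 12 slots
    localhost slots=12

and in the default MCA param file (e.g. $HOME/.openmpi/mca-params.conf):

    rmaps_base_no_oversubscribe = 1

With that in place, "mpirun --hostfile hostfile -np 16 ..." would error out
instead of oversubscribing.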
On Nov 1, 2012, at 4:21 PM, David Turner wrote:
> Hi,
>
> Is there a way to limit the number of tasks started by mpirun?
> For example, on our 48-core SMP, I'd like to limit MPI jobs to
> a maximum of 12 tasks.
Hi,
Is there a way to limit the number of tasks started by mpirun?
For example, on our 48-core SMP, I'd like to limit MPI jobs to
a maximum of 12 tasks. That is, "mpirun -np 16 ..." would
return an error. Note that this is a strictly interactive
system; no batch environment available.
Hi Rayson,
Just seen this.
In the end we've worked around it by creating successive views of the file
that are each less than 2GB, and then offsetting them to eventually read in
everything. It's a bit of a pain to keep track of, but it works at the moment.
I was intending to follow your hint.
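Roughly, the reading loop looks like this (a simplified sketch rather than our
actual code; the file name, the 1 GiB window size, and single-rank reading are
all just for illustration):

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_File fh;
        MPI_Offset total, done = 0;
        const MPI_Offset window = 1 << 30;  /* 1 GiB per view, safely < 2GB */
        char *buf = malloc(window);

        MPI_Init(&argc, &argv);
        MPI_File_open(MPI_COMM_WORLD, "big.dat", MPI_MODE_RDONLY,
                      MPI_INFO_NULL, &fh);
        MPI_File_get_size(fh, &total);
        while (done < total) {
            MPI_Offset chunk = (total - done < window) ? total - done : window;
            /* re-anchor the view at the current byte offset... */
            MPI_File_set_view(fh, done, MPI_BYTE, MPI_BYTE, "native",
                              MPI_INFO_NULL);
            /* ...so the int count passed to the read stays below 2^31 */
            MPI_File_read(fh, buf, (int)chunk, MPI_BYTE, MPI_STATUS_IGNORE);
            /* process buf[0..chunk) here */
            done += chunk;
        }
        MPI_File_close(&fh);
        free(buf);
        MPI_Finalize();
        return 0;
    }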
I think we'd be interested in looking at possibly adding this to the code base.
We still need to announce this (and will shortly), but our Windows maintainer
has moved on to other pastures. So support for native Windows operations is
ending with the 1.6 series, barring someone stepping up to fill the gap.
Yes - we are getting reports of it for v1.7 as well. I *think* the problem is
that the test is stale, but I don't know that for sure. Hopefully we will
resolve that soon.
On Nov 1, 2012, at 8:56 AM, J R Jones wrote:
> Hi
>
> I am trying to build OpenMPI 1.6.3 with the Intel compilers, but one of
> the tests in make check fails with a segmentation fault.
Hi
I am trying to build OpenMPI 1.6.3 with the Intel compilers, but one of
the tests in make check fails with a segmentation fault. I wondered if
anyone else has come across this and if so, how they got around it?
$ icc --version
icc (ICC) 12.0.4 20110427
I have logged the output of the build.
George,
We move 40K and 160K size messages from process to process on the same node.
Our app does mlockall(MCL_CURRENT | MCL_FUTURE) before MPI_INIT.
I measure the page faults using getrusage and record when they increase. I
observe increasing ru_minflt values and no ru_majflt increase.
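For reference, the measurement pattern is roughly this (a minimal standalone
sketch; the MPI calls and the actual message loop are elided):

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/resource.h>

    /* read the current minor-fault count for this process */
    static long minflt_now(void) {
        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        return ru.ru_minflt;
    }

    int main(void) {
        /* lock current and future pages, as the app does before MPI_Init */
        mlockall(MCL_CURRENT | MCL_FUTURE);
        long before = minflt_now();
        /* ... 40K/160K message exchange would happen here ... */
        long after = minflt_now();
        printf("minor faults during the exchange: %ld\n", after - before);
        return 0;
    }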
I have understood the advantages of the shared memory BTL. I wanted to
share some of my observations and gain an understanding of the internal
mechanisms of openmpi. I am wondering why openmpi uses a temporary file for
transferring data between two processes which are on the same node.
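As I understand it, the general mechanism is file-backed shared memory: two
processes that mmap the same file share the same physical pages. A toy
illustration of that idea (not openmpi's actual sm code; the path and size are
arbitrary, and error checking is omitted for brevity):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* create a backing file; a peer that opens and maps the same
           file sees the same memory */
        int fd = open("/tmp/sm_demo", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);
        char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        strcpy(region, "hello");  /* becomes visible to the other process */
        munmap(region, 4096);
        close(fd);
        unlink("/tmp/sm_demo");
        return 0;
    }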
It will depend on the protocol used by the OpenIB BTL to wire up the peers
(OOB, UDCM, RDMACM). In the worst case (OOB), the connection will be set up
over TCP. We are looking at a handshake: with a standard 40 ms latency for a
one-way TCP message, the handshake will take at least 80 ms.
Hi Marco,
I tested it and it is working very well.
Good initiative, thanks!
On Wed, Oct 31, 2012 at 7:34 PM, marco atzeri wrote:
> Hi,
> I built and packaged openmpi-1.6.3 for cygwin.
> Before deploying it as an official package, I would
> like feedback from testers.
>
> Source and binary here:
On Oct 30, 2012, at 09:57, Jeff Squyres wrote:
> On Oct 30, 2012, at 9:51 AM, Hodge, Gary C wrote:
>
>> FYI, recently, I was tracking down the source of page faults in our
>> application that has real-time requirements. I found that disabling the sm
>> component (--mca btl ^sm) eliminated most of the page faults.
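(For anyone trying the same thing, the flag goes on the mpirun command line;
the program name and task count here are placeholders:

    mpirun --mca btl ^sm -np 12 ./app

This excludes the sm component from the BTL list for that run.)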