Thanks for the reply, Jeff. This points me in the right direction.
On 01-Feb-2014 7:51 am, "Jeff Squyres (jsquyres)"
wrote:
> On Jan 31, 2014, at 2:49 AM, Siddhartha Jana
> wrote:
>
> > Sorry for the typo:
> > ** I was hoping to understand the impact of OpenMPI's implementation of
> > these protocols using traditional TCP.
This is the paper I was referring to:
Woodall, et al., "High Performance RDMA Protocols in HPC".
On 31 January 2014 00:43, Siddhartha Jana wrote:
Good evening
Is there any documentation describing the difference in the MPI-level
implementation of the eager and rendezvous protocols in the OpenIB BTL
versus the TCP BTL?
I am aware of only the following paper. While this presents an
excellent overview of how RDMA capabilities of modern interconnects
mpirun --mca btl_tcp_max_send_size 2097152 \
    -np 2 ./a.out
Just wanted to confirm whether OpenMPI imposes strict limits when deciding
whether a message size should be treated as short, or whether the user has
the final say.
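For what it's worth, a hedged sketch: the limits being asked about can also be pinned in Open MPI's per-user MCA parameter file, `$HOME/.openmpi/mca-params.conf`, so they apply to every run without command-line flags. The parameter names assume the TCP BTL of the 1.x-era builds discussed in this thread, and the 65536 eager value is an illustrative choice, not a recommendation:

```
# $HOME/.openmpi/mca-params.conf
# Messages <= btl_tcp_eager_limit bytes are sent eagerly; larger messages
# take the rendezvous path, fragmented into max_send_size chunks.
btl_tcp_eager_limit = 65536
btl_tcp_max_send_size = 2097152
```

The same names work with `--mca` on the mpirun command line; the file form just makes the choice persistent.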
Thanks,
Sid
On 27 December 2013 03:01, Siddhartha Jana wrote:
> change the cross-over for shared memory, but
> it's really per-transport (so you'd have to change it for your off-node
> transport as well). That's all in the FAQ you mentioned, so hopefully you
> can take it from there. Note that, in general, moving the eager limits has
> some unintended side effects. For example, it can cause more / less
> copies. It can also greatly increase memory usage.
> >
> > Good luck,
> >
> > Brian
> >
> > On 12/16/13 1:49 AM, "Siddhartha Jana" wrote:
Ah, got it! Thanks
-- Sid
On 18 December 2013 07:44, Jeff Squyres (jsquyres) wrote:
> On Dec 14, 2013, at 8:02 AM, Siddhartha Jana
> wrote:
>
> > Is there a preferred method/tool among developers of MPI libraries for
> checking the count of the packets transmitted by the network card during
> two-sided communication?
> > On 12/16/13 1:49 AM, "Siddhartha Jana"
> wrote:
> >
> >> Thanks Christoph.
> >> I should have looked into the FAQ section on
70569 Stuttgart
>
> Tel: ++49(0)711-685-87203
> email: nietham...@hlrs.de
> http://www.hlrs.de/people/niethammer
>
>
>
> ----- Original Message -----
> From: "Siddhartha Jana"
> To: "OpenMPI users mailing list"
> Sent: Saturday, 14 December 2013
Is there a preferred method/tool among developers of MPI libraries for
checking the count of the packets transmitted by the network card during
two-sided communication?
Is the use of
iptables -I INPUT -i eth0
iptables -I OUTPUT -o eth0
recommended ?
Thanks,
Siddhartha
Hi
In OpenMPI, are MPI_Send, MPI_Recv (and friends) implemented using the
rendezvous protocol or the eager protocol?
If both, is there a way to choose one or the other at runtime or while
building the library?
If there is a threshold of the message size that dictates the protocol to
be used, is there a way to modify it?
using cpusets for
binding processes. It is my understanding, however, that coupling hwloc
with cpu-shielding will enable exclusive access to cores within the set.
Thanks again,
Siddhartha Jana
> On Aug 18, 2013, at 7:01 PM, Siddhartha Jana
> wrote:
>
> > Noted. Thanks again
> > --
>
> On Aug 18, 2013, at 3:24 PM, Siddhartha Jana
> wrote:
>
>
> A process can always change its binding by "re-binding" to wherever it
>> wants after MPI_Init completes.
>>
> Noted. Thanks. I guess the important thing that I wanted to know was that
> On Aug 18, 2013, at 9:38 AM, Siddhartha Jana
> wrote:
>
> Firstly, I would like my program to dynamically assign itself to one of
> the cores it pleases and remain bound to it until it later reschedules
> itself.
>
> Ralph Castain wrote:
> >> "If
So if I place step-3 above after step-4, my request will hold for the rest
of the execution. Please do let me know, if my understanding is correct.
Thanks for all the help
Sincerely,
Siddhartha Jana
HPCTools
On 18 August 2013 10:49, Ralph Castain wrote:
> If you require
Thanks for the quick replies,
-- Sid
On 18 August 2013 09:04, Siddhartha Jana wrote:
Thanks John. But I have an incredibly small system. 2 nodes - 16 cores each.
2-4 MPI processes. :-)
On 18 August 2013 09:03, John Hearns wrote:
> You really should install a job scheduler.
> There are free versions.
>
> I'm not sure about cpuset support in Gridengine. Anyone?
>
Noted. Thanks. Unfortunately, in my case the cluster is a basic Linux
cluster without any job schedulers.
On 18 August 2013 02:30, John Hearns wrote:
> For information, if you use a batch system such as PbsPro or Torque it can
> be configured to set up the cpuset for a job and start the job within it.
Hi,
Thanks for the reply,
> My requirements:
> > 1. Avoid the OS from scheduling tasks on cores 0-7 allocated to my
> > process.
> > 2. Avoid rescheduling of processes to other cores.
> >
> > My solution: I use Linux's CPU-shielding.
> > [ Man page:
> > http://www.kernel.org/doc/man-pages/onl
solution, given mpirun's own techniques of binding to cores,
scheduling processes by slot, et al.
Will mpirun's bind-by-slot technique guarantee cpu shielding?
I would be highly obliged if someone could point me in the right
direction.
Many thanks
Sincerely
Siddhartha Jana