Re: [OMPI users] Windows support for OpenMPI

2012-12-07 Thread Durga Choudhury
e versions. It can always be > resurrected from SVN history if someone wants to pick up this effort again > in the future. > > > On Dec 6, 2012, at 11:07 AM, Damien wrote: > > > So far, I count three people interested in OpenMPI on Windows. That's > not a case for on

Re: [OMPI users] RDMA GPUDirect CUDA...

2012-08-14 Thread Durga Choudhury
Dear OpenMPI developers, I'd like to add my 2 cents that this would be a very desirable feature enhancement for me as well (and perhaps for others). Best regards Durga On Tue, Aug 14, 2012 at 4:29 PM, Zbigniew Koza wrote: > Hi, > > I've just found this information on nVidia's

Re: [OMPI users] system() call corrupts MPI processes

2012-01-19 Thread Durga Choudhury
This is just a thought: according to the system() man page, 'SIGCHLD' is blocked during the execution of the program. Since you are executing your command as a daemon in the background, it will be blocked permanently. Does the OpenMPI daemon depend on SIGCHLD in any way? That is about the only
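A minimal sketch (my own, not from the original thread) of the workaround this hints at: launch the helper with fork()/exec() instead of system(), so the parent's SIGCHLD handling is left untouched. The program name "my_daemon" is a hypothetical placeholder.

/* Sketch: launching a helper without system(), so the parent's SIGCHLD
 * handling is not altered. "my_daemon" is a made-up program name. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                       /* child: become the helper */
        execlp("my_daemon", "my_daemon", (char *)NULL);
        perror("execlp");                 /* only reached if exec fails */
        _exit(127);
    }
    /* parent continues; SIGCHLD is delivered normally when the child exits */
    printf("started helper with pid %d\n", (int)pid);
    return 0;
}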

Re: [OMPI users] [Beowulf] How to justify the use MPI codes on multicore systems/PCs?

2011-12-12 Thread Durga Choudhury
I think this is a *great* topic for discussion, so let me throw some fuel on the fire: the mechanism described in the blog (which makes perfect sense) is fine for (N)UMA shared memory architectures. But will it work for asymmetric architectures such as the Cell BE or discrete GPUs, where the data

Re: [OMPI users] Shared-memory problems

2011-11-03 Thread Durga Choudhury
Since /tmp is mounted across a network and /dev/shm is (always) local, /dev/shm seems to be the right place for shared memory transactions. If you create temporary files using mktemp, are they created in /dev/shm or /tmp? On Thu, Nov 3, 2011 at 11:50 AM, Bogdan Costescu
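A small sketch (my own, not from the thread) showing that mkstemp() puts the file wherever the template points, so the local tmpfs has to be named explicitly rather than assumed:

/* Sketch: mkstemp() creates the file where the template says, so the
 * directory must be chosen explicitly; it does not default to /dev/shm. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char path[] = "/dev/shm/ompi-scratch-XXXXXX";  /* local tmpfs on most Linux systems */
    int fd = mkstemp(path);
    if (fd < 0) {
        perror("mkstemp");
        return 1;
    }
    printf("created %s\n", path);
    close(fd);
    unlink(path);
    return 0;
}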

Re: [OMPI users] problem with MPI_FINALIZE

2011-11-02 Thread Durga Choudhury
Any particular reason these calls don't nest? In some other HPC-like paradigms (e.g. VSIPL) such calls are allowed to nest (i.e. only the finalize() that matches the first init() will destroy allocated resources.) Just a curiosity question, doesn't really concern me in any particular way. Best
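For context: MPI_Init and MPI_Finalize may each be called only once per process, and the standard's answer for library code is to query MPI_Initialized/MPI_Finalized rather than nest the calls. A minimal sketch of that pattern (my own illustration, not from the thread):

/* Sketch: a library that may be used with or without an MPI-aware caller
 * checks whether MPI is already up instead of nesting Init/Finalize. */
#include <mpi.h>

void my_lib_setup(int *argc, char ***argv, int *we_initialized)
{
    int already_up = 0;
    MPI_Initialized(&already_up);
    if (!already_up) {
        MPI_Init(argc, argv);
        *we_initialized = 1;     /* remember that we own the Finalize */
    } else {
        *we_initialized = 0;
    }
}

void my_lib_teardown(int we_initialized)
{
    int already_down = 0;
    MPI_Finalized(&already_down);
    if (we_initialized && !already_down)
        MPI_Finalize();
}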

Re: [OMPI users] configure with cuda

2011-10-27 Thread Durga Choudhury
Are there any provisions/future plans to add OpenCL support as well? CUDA is an NVIDIA-only technology, so it might be a bit limiting in some cases. Best regards Durga On Thu, Oct 27, 2011 at 2:45 PM, Rolf vandeVaart wrote: > Actually, that is not quite right. From the

Re: [OMPI users] Memory mapped memory

2011-10-17 Thread Durga Choudhury
If the mmap() pages are created with MAP_SHARED, then they should be sharable with other processes on the same node, shouldn't they? MPI processes are just like any other process, aren't they? Will one of the MPI gurus please comment? Regards Durga On Mon, Oct 17, 2011 at 9:45 AM, Gabriele Fatigati
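For what it's worth, a sketch (my own, assuming the processes are related by fork()) of how a MAP_SHARED mapping is visible across processes; unrelated processes would need a file-backed or POSIX shared-memory mapping instead:

/* Sketch: an anonymous MAP_SHARED region is shared with children forked
 * after the mapping exists; unrelated processes need a file-backed map. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int *flag = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (flag == MAP_FAILED) { perror("mmap"); return 1; }
    *flag = 0;

    if (fork() == 0) {          /* child writes into the shared page */
        *flag = 42;
        _exit(0);
    }
    wait(NULL);
    printf("parent sees %d\n", *flag);   /* prints 42 */
    munmap(flag, sizeof(int));
    return 0;
}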

Re: [OMPI users] Related to project ideas in OpenMPI

2011-08-25 Thread Durga Choudhury
Is anything done at the kernel level portable (e.g. to Windows)? It *can* be, in principle at least (by putting appropriate #ifdef's in the code), but I am wondering if it is in reality. Also, in 2005 there was an attempt to implement SSI (Single System Image) functionality in the then-current

Re: [OMPI users] multi-threaded programming

2011-03-08 Thread Durga Choudhury
A follow-up question (and pardon me if this sounds stupid) is this: if I want to make my process multithreaded, BUT only one thread has anything to do with MPI (for example, using OpenMP inside MPI), then the results will be correct EVEN IF #1 or #2 of Eugene's points holds true. Is this correct? Thanks
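This usage pattern is what the MPI standard calls MPI_THREAD_FUNNELED: request it at init time and keep every MPI call on the master thread. A minimal sketch (my own, assuming an OpenMP-capable compiler):

/* Sketch: OpenMP inside MPI with all MPI calls funneled through the
 * master thread, requested via MPI_THREAD_FUNNELED at init time. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided = 0, rank = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = 0.0, global = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < 1000; i++)       /* compute threads never touch MPI */
        local += i * 0.5;

    /* only the (single) master thread calls MPI */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("provided=%d sum=%f\n", provided, global);
    MPI_Finalize();
    return 0;
}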

Re: [OMPI users] Pros and cons of --enable-heterogeneous

2010-10-07 Thread Durga Choudhury
I'd like to add the following to this question: if I compile with the --enable-heterogeneous flag for different *architectures* (I have a mix of old 32-bit x86, newer x86_64 and some Cell BE based boxes (PS3)), would I be able to form an MPD ring between all these different machines? Best regards

Re: [OMPI users] Shared memory

2010-09-24 Thread Durga Choudhury
I think the 'middle ground' approach can be simplified even further if the data file is on a shared device (e.g. an NFS/Samba mount) that can be mounted at the same location in the file system tree on all nodes. I have never tried it, though, and mmap()'ing a non-POSIX-compliant file system such as

Re: [OMPI users] Shared memory

2010-09-24 Thread Durga Choudhury
Is the data coming from a read-only file? In that case, a better way might be to memory-map that file in the root process and share the mapping with all the slave processes. This, like shared memory, will work only for processes within a node, of course. On Fri, Sep 24, 2010 at 3:46 AM, Andrei
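A sketch of that suggestion (my own, with "input.dat" as a placeholder file name): each process on the node maps the same read-only file, so the kernel keeps a single copy of the pages in the page cache:

/* Sketch: mapping a read-only data file so every process on the node
 * shares the same physical pages. "input.dat" is a placeholder name. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("input.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);
    const char *data = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    printf("first byte: %c\n", data[0]);  /* read-only access only */
    munmap((void *)data, st.st_size);
    close(fd);
    return 0;
}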

Re: [OMPI users] Number of Sockets used by OpenMPI

2009-11-15 Thread Durga Choudhury
a/PAPERS/scop3.pdf. > >  george. > > > On Nov 15, 2009, at 14:39 , Durga Choudhury wrote: > >> I apologize for dragging in this conversation in a different >> direction, but I'd be very interested to know why the behavior with >> the Playstation is different from ot

Re: [OMPI users] Number of Sockets used by OpenMPI

2009-11-15 Thread Durga Choudhury
I apologize for dragging this conversation in a different direction, but I'd be very interested to know why the behavior with the Playstation is different from other architectures. The PS3 box has a single gigabit ethernet and no expansion ports, so I'd assume its behavior would be no

Re: [OMPI users] fault tolerance in open mpi

2009-08-03 Thread Durga Choudhury
in Open MPI. I attempted, >> but no luck. Can you please tell how to write such programs in Open MPI. >> >> Thanks in advance. >> >> Regards, >> On Thu, Jul 9, 2009 at 8:30 PM, Durga Choudhury <dpcho...@gmail.com> wrote: >>> >>> Although

Re: [OMPI users] Network connection check

2009-07-23 Thread Durga Choudhury
The 'system' command will fork a separate process to run. If I remember correctly, forking within MPI can lead to undefined behavior. Can someone on the OpenMPI development team clarify? What I don't understand is: why is your TCP network so unstable that you are worried about reachability? For MPI

Re: [OMPI users] fault tolerance in open mpi

2009-07-09 Thread Durga Choudhury
Although I have perhaps the least experience on this topic of anyone on this list, I will take a shot; more experienced people, please correct me: the MPI standard specifies the communication mechanism, not fault tolerance at any level. You may achieve network fault tolerance at the IP level by implementing 'equal cost

Re: [OMPI users] Apllication level checkpointing tools.

2009-06-30 Thread Durga Choudhury
Josh, this actually is a concern addressed to all the authors/OpenMPI contributors. The links to IEEE Xplore or ACM require a subscription which, unfortunately, not all the list subscribers have. Would it be a copyright violation to post the actual paper/article to the list instead of just a

Re: [OMPI users] How to override MPI functions such as MPI_Init, MPI_Recv...

2009-05-13 Thread Durga Choudhury
You could use a separate namespace (if you are using C++) and define your functions there... Durga On Wed, May 13, 2009 at 1:20 PM, Le Duy Khanh wrote: > Dear, > >  I intend to override some MPI functions such as MPI_Init, MPI_Recv... but I > don't want to dig into OpenMPI
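Besides a C++ namespace, MPI itself offers a supported interposition point: the profiling interface, in which every MPI_Xxx call has a PMPI_Xxx twin. A minimal sketch (my own) of wrapping MPI_Init that way, without digging into OpenMPI's source:

/* Sketch: intercepting MPI_Init through the standard profiling (PMPI)
 * interface; link this wrapper ahead of the MPI library. */
#include <mpi.h>
#include <stdio.h>

int MPI_Init(int *argc, char ***argv)
{
    fprintf(stderr, "wrapper: entering MPI_Init\n");
    int rc = PMPI_Init(argc, argv);      /* call the real implementation */
    fprintf(stderr, "wrapper: MPI_Init returned %d\n", rc);
    return rc;
}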

Re: [OMPI users] OpenMP + OpenMPI

2007-12-06 Thread Durga Choudhury
Automatically striping large messages across multiple NICs is certainly a very nice feature; I was not aware that OpenMPI does this transparently. (I wonder if other MPI implementations do this or not). However, I have the following concern: Since the communication over an ethernet NIC is most

Re: [OMPI users] libmpi.so.0 problem

2007-08-14 Thread Durga Choudhury
Did you export your variables? Otherwise the child shell that forks the MPI process will not inherit them. On 8/14/07, Rodrigo Faccioli wrote: > > Thanks, Tim Prins, for your email. > > However, it didn't resolve my problem. > > I set the environment variable on my

Re: [OMPI users] How to set paffinity on a multi-cpu node?

2006-11-29 Thread Durga Choudhury
s high potential of the resources (ie, memory) ending up associated with a different processor than the one the process gets pinned to. That isn't a big deal on Intel machines, but is a major issue for AMD processors. Just my $0.02, anyway. Brian On Nov 28, 2006, at 6:09 PM, Durga Choudhury

Re: [OMPI users] How to set paffinity on a multi-cpu node?

2006-11-28 Thread Durga Choudhury
Jeff (and everybody else), first of all, pardon me if this is a stupid comment; I am learning the nuts and bolts of parallel programming. My comment is as follows: why can't this be done *outside* OpenMPI, by calling Linux's processor affinity APIs directly? I work with a blade server kind
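The Linux call in question is sched_setaffinity(); a small sketch (mine, not from the thread) that pins the calling process to one core without involving OpenMPI's paffinity machinery:

/* Sketch: pinning the calling process to CPU 0 with the raw Linux
 * affinity API, independently of Open MPI's paffinity support. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);                    /* run only on core 0 */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0\n");
    return 0;
}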

Re: [OMPI users] efficient memory to memory transfer

2006-11-07 Thread Durga Choudhury
Chev, interesting question; I too would like to hear about it from the experts in this forum. However, off the top of my head, I have the following advice for you. Yes, you could share memory between processes using the shm_xxx system calls of Unix. However, it would be a lot easier if you
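A sketch of the shm_* route (my own; the name "/my_region" is made up, and on older glibc you link with -lrt): shm_open plus ftruncate plus mmap gives a named region that any process on the node can attach to:

/* Sketch: POSIX shared memory between unrelated processes on one node.
 * "/my_region" is a placeholder name; link with -lrt on older glibc. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = shm_open("/my_region", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, 4096) != 0) { perror("ftruncate"); return 1; }

    char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    buf[0] = 'x';                 /* visible to any process that opens "/my_region" */
    munmap(buf, 4096);
    close(fd);
    shm_unlink("/my_region");     /* remove the name when done */
    return 0;
}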

Re: [OMPI users] openmpi problem

2006-11-03 Thread Durga Choudhury
Calin Your questions don't belong in this forum. You either need to be computer literate (your questions are basic OS related) or delegate this task to someone more experienced. Good luck Durga On 11/3/06, calin pal wrote: /*please read the mail and ans my

Re: [OMPI users] Fault Tolerance & Behavior

2006-10-26 Thread Durga Choudhury
As an alternative suggestion (although George's is better, since this one will affect your entire network connectivity), you could override the default TCP timeout values with the "sysctl -w" command. The following three sysctl parameters affect TCP timeout behavior under Linux: net.ipv4.tcp_keepalive_intvl = 75

Re: [OMPI users] Dual Gigabit ethernet support

2006-10-24 Thread Durga Choudhury
Very interesting, indeed! Message passing running over raw Ethernet on cheap COTS PCs is indeed the need of the hour for people like me who have shallow pockets. Great work! What would make this effort *really* cool is a one-to-one mapping of APIs from the MPI domain to the GAMMA domain,

Re: [OMPI users] dual Gigabit ethernet support

2006-10-23 Thread Durga Choudhury
nsfers rate falls down to ~130MB/s and after a long run finally comes to ~54MB/s. Why is this kind of network slowdown happening over time? Regards, Jayanta On Mon, 23 Oct 2006, Durga Choudhury wrote: > Did you try channel bonding? If your OS is Linux, there are plenty of > "howto

Re: [OMPI users] dual Gigabit ethernet support

2006-10-23 Thread Durga Choudhury
Did you try channel bonding? If your OS is Linux, there are plenty of "howto"s on the internet that will tell you how to do it. However, your CPU might be the bottleneck in this case. How much CPU horsepower is available at 140MB/s? If the CPU *is* the bottleneck, changing your network

Re: [OMPI users] problem abut openmpi running

2006-10-19 Thread Durga Choudhury
George, I knew that was the answer to Calin's question, but I still would like to understand the issue: by default, the OpenMPI installer installs the libraries in /usr/local/lib, which is a standard location for the C compiler to look for libraries. So *why* do I need to explicitly specify this

[OMPI users] OpenMPI error in the simplest configuration...

2006-08-27 Thread Durga Choudhury
Hi all, I am getting an error (details follow) in the simplest of possible test scenarios: two identical regular Dell PCs connected back-to-back via an ethernet switch on the 10/100 ethernet. Both run Fedora Core 4. An identical version (1.1) of Open MPI is compiled and installed on both of

[OMPI users] Proprieatary transport layer for openMPI...

2006-08-07 Thread Durga Choudhury
Hi All, We have been using the Argonne MPICH (over TCP/IP) on our in-house designed embedded multicomputer for the last several months with satisfactory results. Our network technology is custom built and is *not* InfiniBand (or any published standard, such as Myrinet) based. This is due to the

Re: [OMPI users] Open MPI on Dual Core Laptop?

2006-08-01 Thread Durga Choudhury
Do you want to use MPI to chain a bunch of such laptops together (e.g. via ethernet), or just for the cores to talk to each other? If the latter, you do not need MPI. Your SMP operating system (e.g. Linux) will automatically utilize both cores. The Linux 2.6 kernel also supports processor affinity