Re: [OMPI users] efficient memory to memory transfer

2006-11-07 Thread George Bosilca
The quick answers are yes and nothing. Open MPI seamlessly supports shared-memory communication when it detects that two processes are on the same node. In fact, Open MPI can use different communication methods (read: networks) between the processes of the same application. Please read our FAQ
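As a concrete illustration (a command-line sketch, not from the original message; `my_mpi_app` is a hypothetical binary, and the `sm` BTL name is the Open MPI 1.x-era component for shared memory):

```shell
# Force Open MPI to use only the shared-memory BTL (plus "self" for
# a rank talking to itself); with both ranks on one node, all traffic
# goes through shared memory.
mpirun --mca btl sm,self -np 2 ./my_mpi_app

# The default behavior needs no flags: Open MPI detects co-located
# ranks and selects shared memory between them automatically.
mpirun -np 2 ./my_mpi_app
```
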

Re: [OMPI users] efficient memory to memory transfer

2006-11-07 Thread Greg Lindahl
On Tue, Nov 07, 2006 at 05:02:54PM +, Miguel Figueiredo Mascarenhas Sousa Filipe wrote:
> if your application is on one given node, sharing data is better than copying data.
Unless sharing data repeatedly leads you to false sharing and a loss in performance.
> the MPI model assumes you

Re: [MTT users] make distclean option?

2006-11-07 Thread Ethan Mallove
Here's a workaround that might work. Put something like this in your INI file: [Test build: foo] test_get = foo module = Shell shell_build_command = make [Test build: foo clean] test_get = foo module = Shell shell_build_command = make distclean Then do: $ client/mtt --no-section clean ... In a
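The one-line preview flattens Ethan's multi-line INI fragment; laid out (line breaks reconstructed, content unchanged), it reads:

```ini
[Test build: foo]
test_get = foo
module = Shell
shell_build_command = make

[Test build: foo clean]
test_get = foo
module = Shell
shell_build_command = make distclean
```

It is then invoked as shown in the message, i.e. `client/mtt --no-section clean ...`, so the `distclean` section is skipped on normal runs.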

Re: [OMPI users] efficient memory to memory transfer

2006-11-07 Thread Miguel Figueiredo Mascarenhas Sousa Filipe
Hi,
On 11/7/06, Chevchenkovic Chevchenkovic wrote:
> Hi, I had the following setup: the Rank 0 process on node 1 wants to send an array of a particular size to the Rank 1 process on the same node. 1. What are the optimisations that can be done/invoked while running mpirun to

[MTT users] More LANL failures

2006-11-07 Thread Jeff Squyres
Jim -- I made some changes to the IBM test suite last night that should fix some of your other failures. Do you plan to run MTT on a regular basis (e.g., via cron), or will you run it manually? -- Jeff Squyres Server Virtualization Business Unit Cisco Systems

[MTT users] lanl failures

2006-11-07 Thread Jeff Squyres
Jim -- I see this from last night: hello.f: PGFTN-S-0034-Syntax error at or near identifier programmain (hello.f: 18) It looks like you're using PGI 6.1-3. I tested my trivial hello.f program with PGI 6.1-1 on an IU machine. Can you try compiling the following program with mpif77 and

Re: [OMPI users] efficient memory to memory transfer

2006-11-07 Thread Durga Choudhury
Chev, Interesting question; I too would like to hear about it from the experts in this forum. However, off the top of my head, I have the following advice for you. Yes, you could share the memory between processes using the shm_xxx system calls of Unix. However, it would be a lot easier if you

Re: [OMPI users] MPI_Comm_spawn multiple bproc support

2006-11-07 Thread Ralph H Castain
Hi Herve, Sorry you are experiencing these problems. Part of the problem is that I have no access to a BJS machine. I suspect the issue you are encountering is that our interface to BJS may not be correct - the person who wrote it, I believe, may have used the wrong environment variables. At

[OMPI users] Re: MPI_Comm_spawn multiple bproc support

2006-11-07 Thread hpe...@infonie.fr
Hi Ralph, sorry for the delay in answering, but I have had difficulty accessing the internet since yesterday. I have tried all your suggestions but I continue to experience problems. Actually, I have a problem with bjs on the one hand, which I may submit to a bproc forum, and I still spawn