Re: [OMPI users] Avoiding the memory registration costs by having memory always registered, is it possible with Linux ?

2016-06-20 Thread Alex A. Granovsky
Would the use of mlockall be helpful for this approach? From: Audet, Martin Sent: Monday, June 20, 2016 11:15 PM To: mailto:us...@open-mpi.org Subject: Re: [OMPI users] Avoiding the memory registration costs by having memory always registered, is it possible with Linux ? Thanks Jeff for your
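
For reference, this is a minimal sketch of what an mlockall() call before MPI_Init could look like; the POSIX call pins all current and future pages of the process, but whether that actually avoids Open MPI's registration costs is exactly the open question in the thread, and the memlock resource limit must be large enough for the call to succeed.

    /* Sketch only: pin all current and future pages so the kernel cannot
     * page them out.  Requires a sufficient memlock limit, e.g.
     * "ulimit -l unlimited" before launching. */
    #include <sys/mman.h>
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");   /* typically EPERM or ENOMEM if the limit is too low */

        MPI_Init(&argc, &argv);
        /* ... application ... */
        MPI_Finalize();
        return 0;
    }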

Re: [OMPI users] mpi_wtime implementation

2014-11-27 Thread Alex A. Granovsky
AFAIK, Linux synchronizes all CPU timers on boot. The skew is normally no more than 50-100 CPU cycles. The reasons why you may observe larger differences are: 1) Main: the CPUs do not have the "constant TSC" feature. Without this feature, the timer frequency changes across different power states of CP
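
On Linux, whether the kernel considers the TSC rate constant across power states can be checked from the CPU flags it exports; the small helper below (the function name is mine, not from the thread) simply scans /proc/cpuinfo for the "constant_tsc" flag.

    /* Hypothetical helper: report whether /proc/cpuinfo lists the
     * "constant_tsc" flag, i.e. the TSC ticks at a fixed rate regardless
     * of frequency/power-state changes.  Linux/x86 specific. */
    #include <stdio.h>
    #include <string.h>

    static int has_constant_tsc(void)
    {
        FILE *f = fopen("/proc/cpuinfo", "r");
        char line[4096];
        int found = 0;

        if (!f)
            return -1;               /* could not tell */
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "flags", 5) == 0 && strstr(line, "constant_tsc")) {
                found = 1;
                break;
            }
        }
        fclose(f);
        return found;
    }

    int main(void)
    {
        printf("constant_tsc: %d\n", has_constant_tsc());
        return 0;
    }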

Re: [OMPI users] MPI_Wtime not working with -mno-sse flag

2014-11-10 Thread Alex A. Granovsky
Hello, use RDTSC (or RDTSCP) to read TSC directly Kind regards, Alex Granovsky -Original Message- From: maxinator333 Sent: Monday, November 10, 2014 4:35 PM To: us...@open-mpi.org Subject: [OMPI users] MPI_Wtime not working with -mno-sse flag Hello again, I have a piece of code, w
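
A minimal sketch of that suggestion, using the GCC/Clang __rdtsc() intrinsic rather than hand-written assembly; note that raw TSC ticks still have to be calibrated against a known time interval, and the result is only trustworthy on CPUs with a constant/invariant TSC.

    /* Read the x86 time-stamp counter directly, as suggested above. */
    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>   /* __rdtsc() with GCC/Clang on x86 */

    int main(void)
    {
        uint64_t t0 = __rdtsc();
        /* ... work to be timed ... */
        uint64_t t1 = __rdtsc();
        printf("elapsed: %llu TSC ticks\n", (unsigned long long)(t1 - t0));
        return 0;
    }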

Re: [OMPI users] Segfault with MPI + Cuda on multiple nodes

2014-08-19 Thread Alex A. Granovsky
*** ... (same segfault from the other node) Maxime On 2014-08-18 16:52, Alex A. Granovsky wrote: Try the following: export MALLOC_CHECK_=1 and then run it again Kind regards, Alex Granovsky -Original Message- From: Maxime Boissonneault Sent: Tuesday, August 19, 2014 12:23 AM To: Open

Re: [OMPI users] Segfault with MPI + Cuda on multiple nodes

2014-08-18 Thread Alex A. Granovsky
Try the following: export MALLOC_CHECK_=1 and then run it again Kind regards, Alex Granovsky -Original Message- From: Maxime Boissonneault Sent: Tuesday, August 19, 2014 12:23 AM To: Open MPI Users Subject: [OMPI users] Segfault with MPI + Cuda on multiple nodes Hi, Since my previ
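
For context, MALLOC_CHECK_ is a glibc environment variable that switches the allocator into a checking mode: 1 prints a diagnostic on heap corruption, 2 aborts, 3 does both (on glibc 2.34 or newer it only takes effect when libc_malloc_debug.so is preloaded). The tiny illustration below (not code from the thread) shows the kind of single-byte overrun it is documented to catch at free() time.

    /* Illustration only: a one-byte heap overflow that glibc reports when
     * the program is run with MALLOC_CHECK_ set, e.g.
     *   MALLOC_CHECK_=3 ./a.out  */
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *buf = malloc(16);
        if (!buf)
            return 1;
        memset(buf, 0, 17);   /* writes one byte past the allocation */
        free(buf);            /* glibc flags the corruption here */
        return 0;
    }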

Re: [OMPI users] Problems with computation-communication overlap in non-blocking mode

2014-03-11 Thread Alex A. Granovsky
Dear Nikola, you can check this presentation: http://classic.chem.msu.su/gran/gamess/mp2par.pdf for the solution we have been using with Firefly (formerly PC GAMESS) for more than the last ten years. Hope this helps. Kind regards, Alex Granovsky -Original Message- From: Velickovic N
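
The linked PDF describes Firefly's own scheme; independent of that, the generic pattern for getting overlap out of non-blocking MPI is to post the requests early, interleave slices of computation with MPI_Test calls so the library can make progress, and only wait at the end. A minimal sketch (function and variable names are mine):

    /* Generic computation/communication overlap pattern (not the scheme
     * from the PDF): post non-blocking transfers, then alternate between
     * computing and testing so the MPI progress engine gets CPU time. */
    #include <mpi.h>

    void overlap_example(double *sendbuf, double *recvbuf, int n,
                         int peer, MPI_Comm comm)
    {
        MPI_Request req[2];
        int done = 0;

        MPI_Irecv(recvbuf, n, MPI_DOUBLE, peer, 0, comm, &req[0]);
        MPI_Isend(sendbuf, n, MPI_DOUBLE, peer, 0, comm, &req[1]);

        while (!done) {
            /* ... do a slice of independent computation here ... */
            MPI_Testall(2, req, &done, MPI_STATUSES_IGNORE);
        }
    }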

Re: [OMPI users] Segmentation fault in MPI_Init when passing pointers allocated in main()

2013-11-12 Thread Alex A. Granovsky
12, 2013 9:18 PM To: Open MPI Users Subject: Re: [OMPI users] Segmentation fault in MPI_Init when passing pointers allocated in main() Kernighan and Ritchie's C programming language manual - it goes all the way back to the original C definition. On Nov 12, 2013, at 9:15 AM, Alex A. Gran

Re: [OMPI users] Segmentation fault in MPI_Init when passing pointers allocated in main()

2013-11-12 Thread Alex A. Granovsky
Hello, It seems that argv[argc] should always be NULL according to the standard. So the OMPI failure is not actually a bug! Could you please point to the exact document where this is explicitly stated? Otherwise, I'd assume this is a bug. Kind regards, Alex Granovsky -Original Message
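
For the record, the guarantee being asked about does exist for hosted implementations: C99 and C11 require in 5.1.2.2.1 that argv[argc] be a null pointer (and, as the reply above notes, the convention goes back to Kernighan and Ritchie). A one-line check:

    /* argv[argc] is required to be a null pointer in a hosted C
     * implementation (C99/C11 5.1.2.2.1). */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        printf("argv[argc] is %s\n", argv[argc] == NULL ? "NULL" : "not NULL");
        return 0;
    }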

Re: [OMPI users] Program hangs in mpi_bcast

2011-12-09 Thread Alex A. Granovsky
> something else? Yes, this is with regard to the collective hang issue. All the best, Alex - Original Message - From: "Jeff Squyres" To: "Alex A. Granovsky" ; Sent: Saturday, December 03, 2011 3:36 PM Subject: Re: [OMPI users] Program hangs in mpi_bcast

Re: [OMPI users] Program hangs in mpi_bcast

2011-12-02 Thread Alex A. Granovsky
Dear OpenMPI users, Dear OpenMPI developers, I would like to start a discussion on the implementation of collective operations within OpenMPI. The reason for this is at least twofold. Over the last months, there has been a constantly growing number of messages on the list from people facing problems with co

Re: [OMPI users] Granular locks?

2011-01-05 Thread Alex A. Granovsky
Hi Gilbert, why not use architecture-specific atomic updates when writing to the array? In that case, you wouldn't need anything special when reading from the array at all. Moreover, this model looks like a good candidate to be implemented as a multithreaded application, rather than as two separate processes shar
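
A minimal sketch of that idea using portable C11 atomics instead of architecture-specific instructions (all names here are made up for the illustration): one thread stores into the shared array element-by-element with atomic_store, and a reader can load any element with atomic_load without further locking.

    /* Illustration only: per-element atomic updates to a shared array.
     * Compile with e.g.  cc -std=c11 -pthread  */
    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    #define N 1024
    static _Atomic double table[N];

    static void *writer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < N; i++)
            atomic_store(&table[i], (double)i);   /* each element updated atomically */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, writer, NULL);

        /* Reader side: nothing special beyond an atomic load. */
        double x = atomic_load(&table[42]);
        printf("table[42] = %g (0.0 if the writer has not reached it yet)\n", x);

        pthread_join(t, NULL);
        return 0;
    }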

Re: [OMPI users] MPI_Reduce performance

2010-09-09 Thread Alex A. Granovsky
performance Alex A. Granovsky wrote: Isn't it evident from the theory of random processes and probability theory that in the limit of an infinitely large cluster and parallel process count, the probability of deadlocks with the current implementation is unfortunately quite a finite quantity a

Re: [OMPI users] MPI_Reduce performance

2010-09-09 Thread Alex A. Granovsky
Isn't it evident from the theory of random processes and probability theory that, in the limit of an infinitely large cluster and parallel process count, the probability of deadlocks with the current implementation is unfortunately quite a finite quantity and in the limit approaches unity regardless of any p
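
The argument can be stated compactly if one assumes (my assumption; the message is truncated here) that each of the N processes independently hits a problematic timing window with some fixed per-process probability p > 0 during a collective:

    % Probability that at least one of N independent processes deadlocks
    P(\text{deadlock}) = 1 - (1 - p)^{N} \;\longrightarrow\; 1
    \qquad \text{as } N \to \infty \text{ for any fixed } p > 0 .

Any fixed per-process probability, however small, therefore drives the aggregate probability towards one as the job size grows.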