Would the use of mlockall be helpful for this approach?
From: Audet, Martin
Sent: Monday, June 20, 2016 11:15 PM
To: us...@open-mpi.org
Subject: Re: [OMPI users] Avoiding the memory registration costs by having
memory always registered, is it possible with Linux ?
Thanks Jeff for your
AFAIK, Linux synchronizes all CPU timers on boot. The skew is normally no
more than 50-100 CPU cycles.
The reasons why you can observe larger differences are:
1) Main: the CPUs do not have the "constant TSC" feature. Without this
feature, the timer frequency changes across different power states of the CPU
Hello,
use RDTSC (or RDTSCP) to read the TSC directly
Kind regards,
Alex Granovsky
-Original Message-
From: maxinator333
Sent: Monday, November 10, 2014 4:35 PM
To: us...@open-mpi.org
Subject: [OMPI users] MPI_Wtime not working with -mno-sse flag
Hello again,
I have a piece of code, w
***
... (same segfault from the other node)
Maxime
On 2014-08-18 16:52, Alex A. Granovsky wrote:
Try the following:
export MALLOC_CHECK_=1
and then run it again
Kind regards,
Alex Granovsky
-Original Message- From: Maxime Boissonneault
Sent: Tuesday, August 19, 2014 12:23 AM
To: Open
Try the following:
export MALLOC_CHECK_=1
and then run it again
Kind regards,
Alex Granovsky
-Original Message-
From: Maxime Boissonneault
Sent: Tuesday, August 19, 2014 12:23 AM
To: Open MPI Users
Subject: [OMPI users] Segfault with MPI + Cuda on multiple nodes
Hi,
Since my previ
Dear Nikola,
you can check this presentation:
http://classic.chem.msu.su/gran/gamess/mp2par.pdf
for the solution we have been using with Firefly (formerly PC GAMESS) for
more than the last ten years.
Hope this helps.
Kind regards,
Alex Granovsky
-Original Message-
From: Velickovic N
Sent: November 12, 2013 9:18 PM
To: Open MPI Users
Subject: Re: [OMPI users] Segmentation fault in MPI_Init when
passing pointers allocated in main()
Kernighan and Ritchie's C programming language manual - it goes all the way
back to the original C definition.
On Nov 12, 2013, at 9:15 AM, Alex A. Gran
Hello,
> It seems that argv[argc] should always be NULL according to the
> standard. So the OMPI failure is not actually a bug!
Could you please point to the exact document where this is explicitly
stated?
Otherwise, I'd assume this is a bug.
Kind regards,
Alex Granovsky
-Original Message
> something else?
Yes, this is with regards to collective hang issue.
All the best,
Alex
- Original Message -
From: "Jeff Squyres"
To: "Alex A. Granovsky" ;
Sent: Saturday, December 03, 2011 3:36 PM
Subject: Re: [OMPI users] Program hangs in mpi_bcast
Dear OpenMPI users,
Dear OpenMPI developers,
I would like to start discussion on implementation of collective
operations within OpenMPI. The reason for this is at least twofold.
In recent months, there has been a constantly growing number of messages on
the list from people facing problems with co
Hi Gilbert,
why not use architecture-specific atomic updates when writing to the array?
In that case, you wouldn't need anything special when reading from the array at all.
Moreover, this model looks like a good candidate to be implemented as a
multithreaded application, rather than two separate processes shar
Alex A. Granovsky wrote:
Isn't it evident from the theory of random processes and probability theory
that, in the limit of an infinitely large cluster and parallel process, the
probability of deadlocks with the current implementation is unfortunately
quite a finite quantity and in the limit approaches unity regardless of any p