Re: [OMPI users] Granular locks?

2011-01-05 Thread Alex A. Granovsky
Hi Gilbert,

why not use architecture-specific atomic updates when writing to the array?
In that case you would not need anything special when reading from the array.
Moreover, this model looks like a good candidate to be implemented as a
multithreaded application rather than as two separate processes sharing a
segment of memory.
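
A minimal sketch of what such lock-free element access could look like, assuming
GCC/Clang __atomic builtins and that each array element fits in a single 64-bit
word; the array name and element type are placeholders, not taken from the
original code:

/* Lock-free access to a shared array via atomic loads/stores. */
#include <stdint.h>

static int64_t *shared_array;   /* mapped into both processes, e.g. via shmat() or mmap() */

/* Writer process: publish a new value for element i without taking a lock. */
static void array_put(int64_t i, int64_t value)
{
    __atomic_store_n(&shared_array[i], value, __ATOMIC_RELEASE);
}

/* Reader process: read element i without a lock; it sees either the old or
   the new value, never a torn one. */
static int64_t array_get(int64_t i)
{
    return __atomic_load_n(&shared_array[i], __ATOMIC_ACQUIRE);
}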

regards,
Alex Granovsky

- Original Message -
From: "Gilbert Grosdidier" <gilbert.grosdid...@cern.ch>
To: "Open MPI Users" <us...@open-mpi.org>
Sent: Wednesday, January 05, 2011 11:47 PM
Subject: Re: [OMPI users] Granular locks?


Hi Gijsbert,

  Thank you for this proposal, I think it could be useful for our LQCD
application, at least for further evaluation. How could I get the code, please?

  Thanks in advance for your help,   Best,   G.



On 03/01/2011 22:36, Gijsbert Wiesenekker wrote:
> On Oct 2, 2010, at 10:54 , Gijsbert Wiesenekker wrote:
>
>> On Oct 1, 2010, at 23:24 , Gijsbert Wiesenekker wrote:
>>
>>> I have a large array that is shared between two processes. One process
>>> updates array elements randomly, the other process reads array elements
>>> randomly. Most of the time these writes and reads do not overlap.
>>> The current version of the code uses Linux shared memory with NSEMS
>>> semaphores. When array element i has to be read or updated, semaphore
>>> (i % NSEMS) is used. If NSEMS = 1 the entire array will be locked, which
>>> leads to unnecessary waits because reads and writes do not overlap most of
>>> the time. Performance increases as NSEMS increases, and flattens out at
>>> NSEMS = 32, at which point the code runs twice as fast when compared to
>>> NSEMS = 1.
>>> I want to change the code to use OpenMPI RMA, but MPI_Win_lock locks the
>>> entire array, which is similar to NSEMS = 1. Is there a way to have more
>>> granular locks?
>>>
>>> Gijsbert
>>>
>> Also, is there an MPI_Win_lock equivalent for IPC_NOWAIT?
>>
>> Gijsbert
>>
> FYI, as in my case the performance penalty of using OpenMPI RMA instead of
> shared memory was too large, I have written a couple of wrapper functions
> that use OpenMPI to gracefully allocate and release shared memory:
>
> //mpi_alloc_shm is a collective operation that allocates arg_nrecords records
> //of arg_record_size each in the shared memory segment identified by arg_key,
> //with arg_nsems semaphores to control access.
> //arg_key is the shared memory key.
> //arg_nrecords is the number of records.
> //arg_record_size is the size of a record.
> //arg_default is the default record value. If not equal to NULL, all
> //arg_nrecords records will be initialized to *arg_default.
> //arg_nsems is the number of semaphores that will be used to control access.
> //If record irecord has to be updated or read, semaphore (irecord % arg_nsems)
> //will be used for exclusive access.
> //arg_mpi_id is the mpi_id of the process that will create the shared memory
> //segment. If the mpi_id of the calling process is not equal to arg_mpi_id,
> //the process will not create the segment but will try to open it.
> void mpi_alloc_shm(key_t arg_key, i64_t arg_nrecords, i64_t arg_record_size,
>                    void *arg_default, int arg_nsems, int arg_mpi_id, MPI_Comm comm);
>
> //mpi_shm_put updates record irecord in the shared memory segment identified
> //by shm_key with value *source.
> void mpi_shm_put(key_t shm_key, void *source, i64_t irecord);
>
> //mpi_shm_get tries to read record irecord in the shared memory segment
> //identified by shm_key, using IPC_NOWAIT to request the lock.
> //FALSE is returned if the lock could not be obtained, else TRUE is returned
> //and the record is copied into *dest.
> //As in my case only the creator of the shared memory segment updates it, no
> //lock is used when the creator reads record irecord.
> int  mpi_shm_get(key_t shm_key, i64_t irecord, void *dest);
>
> //mpi_free_shm is a collective operation that deallocates the shared memory
> //segment identified by shm_key.
> void mpi_free_shm(key_t shm_key, MPI_Comm comm);
>
> Please feel free to contact me if you would like to have a copy of the source 
> code of these routines.
>
> Regards,
> Gijsbert






Re: [OMPI users] Granular locks?

2011-01-05 Thread Gilbert Grosdidier

Hi Gijsbert,

 Thank you for this proposal, I think it could be useful for our LQCD
application, at least for further evaluation. How could I get the code, please?

 Thanks in advance for your help,   Best,   G.



On 03/01/2011 22:36, Gijsbert Wiesenekker wrote:

On Oct 2, 2010, at 10:54 , Gijsbert Wiesenekker wrote:


On Oct 1, 2010, at 23:24 , Gijsbert Wiesenekker wrote:


I have a large array that is shared between two processes. One process updates 
array elements randomly, the other process reads array elements randomly. Most 
of the time these writes and reads do not overlap.
The current version of the code uses Linux shared memory with NSEMS semaphores.
When array element i has to be read or updated, semaphore (i % NSEMS) is used.
If NSEMS = 1 the entire array will be locked, which leads to unnecessary waits
because reads and writes do not overlap most of the time. Performance increases
as NSEMS increases, and flattens out at NSEMS = 32, at which point the code
runs twice as fast when compared to NSEMS = 1.
I want to change the code to use OpenMPI RMA, but MPI_Win_lock locks the entire 
array, which is similar to NSEMS = 1. Is there a way to have more granular 
locks?

Gijsbert


Also, is there an MPI_Win_lock equivalent for IPC_NOWAIT?

Gijsbert


FYI, as in my case the performance penalty of using OpenMPI RMA instead of
shared memory was too large, I have written a couple of wrapper functions that
use OpenMPI to gracefully allocate and release shared memory:

//mpi_alloc_shm is a collective operation that allocates arg_nrecords records of
//arg_record_size each in the shared memory segment identified by arg_key, with
//arg_nsems semaphores to control access.
//arg_key is the shared memory key.
//arg_nrecords is the number of records.
//arg_record_size is the size of a record.
//arg_default is the default record value. If not equal to NULL, all arg_nrecords
//records will be initialized to *arg_default.
//arg_nsems is the number of semaphores that will be used to control access. If
//record irecord has to be updated or read, semaphore (irecord % arg_nsems) will
//be used for exclusive access.
//arg_mpi_id is the mpi_id of the process that will create the shared memory
//segment. If the mpi_id of the calling process is not equal to arg_mpi_id, the
//process will not create the segment but will try to open it.
void mpi_alloc_shm(key_t arg_key, i64_t arg_nrecords, i64_t arg_record_size,
                   void *arg_default, int arg_nsems, int arg_mpi_id, MPI_Comm comm);

//mpi_shm_put updates record irecord in the shared memory segment identified by
//shm_key with value *source.
void mpi_shm_put(key_t shm_key, void *source, i64_t irecord);

//mpi_shm_get tries to read record irecord in the shared memory segment
//identified by shm_key, using IPC_NOWAIT to request the lock.
//FALSE is returned if the lock could not be obtained, else TRUE is returned and
//the record is copied into *dest.
//As in my case only the creator of the shared memory segment updates it, no
//lock is used when the creator reads record irecord.
int  mpi_shm_get(key_t shm_key, i64_t irecord, void *dest);

//mpi_free_shm is a collective operation that deallocates the shared memory
//segment identified by shm_key.
void mpi_free_shm(key_t shm_key, MPI_Comm comm);

Please feel free to contact me if you would like to have a copy of the source 
code of these routines.

Regards,
Gijsbert




Re: [OMPI users] Granular locks?

2011-01-03 Thread Gijsbert Wiesenekker

On Oct 2, 2010, at 10:54 , Gijsbert Wiesenekker wrote:

> 
> On Oct 1, 2010, at 23:24 , Gijsbert Wiesenekker wrote:
> 
>> I have a large array that is shared between two processes. One process 
>> updates array elements randomly, the other process reads array elements 
>> randomly. Most of the time these writes and reads do not overlap.
>> The current version of the code uses Linux shared memory with NSEMS
>> semaphores. When array element i has to be read or updated, semaphore
>> (i % NSEMS) is used. If NSEMS = 1 the entire array will be locked, which
>> leads to unnecessary waits because reads and writes do not overlap most of
>> the time. Performance increases as NSEMS increases, and flattens out at
>> NSEMS = 32, at which point the code runs twice as fast when compared to
>> NSEMS = 1.
>> I want to change the code to use OpenMPI RMA, but MPI_Win_lock locks the 
>> entire array, which is similar to NSEMS = 1. Is there a way to have more 
>> granular locks?
>> 
>> Gijsbert
>> 
> 
> Also, is there an MPI_Win_lock equivalent for IPC_NOWAIT?
> 
> Gijsbert
> 

FYI, as in my case the performance penalty of using OpenMPI RMA instead of
shared memory was too large, I have written a couple of wrapper functions that
use OpenMPI to gracefully allocate and release shared memory:

//mpi_alloc_shm is a collective operation that allocates arg_nrecords records of
//arg_record_size each in the shared memory segment identified by arg_key, with
//arg_nsems semaphores to control access.
//arg_key is the shared memory key.
//arg_nrecords is the number of records.
//arg_record_size is the size of a record.
//arg_default is the default record value. If not equal to NULL, all arg_nrecords
//records will be initialized to *arg_default.
//arg_nsems is the number of semaphores that will be used to control access. If
//record irecord has to be updated or read, semaphore (irecord % arg_nsems) will
//be used for exclusive access.
//arg_mpi_id is the mpi_id of the process that will create the shared memory
//segment. If the mpi_id of the calling process is not equal to arg_mpi_id, the
//process will not create the segment but will try to open it.
void mpi_alloc_shm(key_t arg_key, i64_t arg_nrecords, i64_t arg_record_size,
                   void *arg_default, int arg_nsems, int arg_mpi_id, MPI_Comm comm);

//mpi_shm_put updates record irecord in the shared memory segment identified by
//shm_key with value *source.
void mpi_shm_put(key_t shm_key, void *source, i64_t irecord);

//mpi_shm_get tries to read record irecord in the shared memory segment
//identified by shm_key, using IPC_NOWAIT to request the lock.
//FALSE is returned if the lock could not be obtained, else TRUE is returned and
//the record is copied into *dest.
//As in my case only the creator of the shared memory segment updates it, no
//lock is used when the creator reads record irecord.
int  mpi_shm_get(key_t shm_key, i64_t irecord, void *dest);

//mpi_free_shm is a collective operation that deallocates the shared memory
//segment identified by shm_key.
void mpi_free_shm(key_t shm_key, MPI_Comm comm);
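
Purely for illustration, a hypothetical usage sketch of these wrappers; only the
declarations above are given, so the key value, record layout, rank roles and
the i64_t typedef below are assumptions:

#include <mpi.h>
#include <sys/types.h>                      /* key_t */

typedef long long i64_t;                    /* assumed 64-bit integer typedef */

/* Prototypes as declared above. */
void mpi_alloc_shm(key_t arg_key, i64_t arg_nrecords, i64_t arg_record_size,
                   void *arg_default, int arg_nsems, int arg_mpi_id, MPI_Comm comm);
void mpi_shm_put(key_t shm_key, void *source, i64_t irecord);
int  mpi_shm_get(key_t shm_key, i64_t irecord, void *dest);
void mpi_free_shm(key_t shm_key, MPI_Comm comm);

typedef struct { i64_t value; } record_t;   /* made-up record layout */

int main(int argc, char **argv)
{
    int rank;
    record_t def = { 0 }, rec;
    key_t key = 0x1234;                     /* made-up shared memory key */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Rank 0 creates the segment (1,000,000 records, 32 semaphores);
       the other ranks open it. */
    mpi_alloc_shm(key, 1000000, sizeof(record_t), &def, 32, 0, MPI_COMM_WORLD);

    if (rank == 0) {                        /* the creator updates record 42 */
        rec.value = 1;
        mpi_shm_put(key, &rec, 42);
    } else if (mpi_shm_get(key, 42, &rec)) {
        /* got the record without blocking */
    } else {
        /* lock was busy (IPC_NOWAIT): skip or retry later */
    }

    mpi_free_shm(key, MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}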

Please feel free to contact me if you would like to have a copy of the source 
code of these routines.

Regards,
Gijsbert




Re: [OMPI users] Granular locks?

2010-10-04 Thread Barrett, Brian W
On Oct 1, 2010, at 3:24 PM, Gijsbert Wiesenekker wrote:

> I have a large array that is shared between two processes. One process 
> updates array elements randomly, the other process reads array elements 
> randomly. Most of the time these writes and reads do not overlap.
> The current version of the code uses Linux shared memory with NSEMS
> semaphores. When array element i has to be read or updated, semaphore
> (i % NSEMS) is used. If NSEMS = 1 the entire array will be locked, which
> leads to unnecessary waits because reads and writes do not overlap most of
> the time. Performance increases as NSEMS increases, and flattens out at
> NSEMS = 32, at which point the code runs twice as fast when compared to
> NSEMS = 1.
> I want to change the code to use OpenMPI RMA, but MPI_Win_lock locks the 
> entire array, which is similar to NSEMS = 1. Is there a way to have more 
> granular locks?

The MPI standard defines MPI_WIN_LOCK as protecting the entire window, so the
short answer to your question is no.  Depending on your application, it may be
possible to have multiple windows over independent pieces of the data to get
the behavior you want, but that does seem icky.
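
A rough sketch of the multiple-window idea (not from Brian's mail); it assumes
the array is split into NWIN contiguous blocks with one window per block, and
names such as NWIN, NEL and put_element are illustrative:

#include <mpi.h>

#define NWIN 32                  /* number of independent windows  */
#define NEL  (1024 * 1024)       /* total number of array elements */

static MPI_Win  win[NWIN];
static long    *base;            /* local window memory            */

void create_windows(MPI_Comm comm)
{
    long block_len = NEL / NWIN; /* assume NEL is a multiple of NWIN */

    MPI_Alloc_mem(NEL * sizeof(long), MPI_INFO_NULL, &base);
    for (int w = 0; w < NWIN; w++)
        MPI_Win_create(base + w * block_len, block_len * sizeof(long),
                       sizeof(long), MPI_INFO_NULL, comm, &win[w]);
}

/* Update element i on the target rank: only the window holding i is locked,
   so accesses to elements in other blocks are not serialized. */
void put_element(int target, long i, long value)
{
    long block_len = NEL / NWIN;
    int  w   = (int)(i / block_len);
    long off = i % block_len;

    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, target, 0, win[w]);
    MPI_Put(&value, 1, MPI_LONG, target, off, 1, MPI_LONG, win[w]);
    MPI_Win_unlock(target, win[w]);
}

The reader would do the analogous MPI_Win_lock/MPI_Get/MPI_Win_unlock on the
single window containing the element it needs.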

Brian

-- 
  Brian W. Barrett
  Dept. 1423: Scalable System Software
  Sandia National Laboratories





Re: [OMPI users] Granular locks?

2010-10-02 Thread Gijsbert Wiesenekker

On Oct 1, 2010, at 23:24 , Gijsbert Wiesenekker wrote:

> I have a large array that is shared between two processes. One process 
> updates array elements randomly, the other process reads array elements 
> randomly. Most of the time these writes and reads do not overlap.
> The current version of the code uses Linux shared memory with NSEMS
> semaphores. When array element i has to be read or updated, semaphore
> (i % NSEMS) is used. If NSEMS = 1 the entire array will be locked, which
> leads to unnecessary waits because reads and writes do not overlap most of
> the time. Performance increases as NSEMS increases, and flattens out at
> NSEMS = 32, at which point the code runs twice as fast when compared to
> NSEMS = 1.
> I want to change the code to use OpenMPI RMA, but MPI_Win_lock locks the 
> entire array, which is similar to NSEMS = 1. Is there a way to have more 
> granular locks?
> 
> Gijsbert
> 

Also, is there an MPI_Win_lock equivalent for IPC_NOWAIT?

Gijsbert
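
For context, a short sketch of the non-blocking behaviour the question refers
to, using System V semaphores; semid and nsems are assumed to come from the
existing setup, and the function name is illustrative:

#include <errno.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/* Try to take the stripe lock for record irecord without blocking.
   Returns 1 on success, 0 if another process holds it, -1 on error. */
int try_lock(int semid, long irecord, int nsems)
{
    struct sembuf op;

    op.sem_num = (unsigned short)(irecord % nsems);
    op.sem_op  = -1;                /* P operation: acquire          */
    op.sem_flg = IPC_NOWAIT;        /* fail with EAGAIN, don't block */

    if (semop(semid, &op, 1) == -1)
        return (errno == EAGAIN) ? 0 : -1;
    return 1;
}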




[OMPI users] Granular locks?

2010-10-01 Thread Gijsbert Wiesenekker
I have a large array that is shared between two processes. One process updates 
array elements randomly, the other process reads array elements randomly. Most 
of the time these writes and reads do not overlap.
The current version of the code uses Linux shared memory with NSEMS semaphores.
When array element i has to be read or updated, semaphore (i % NSEMS) is used.
If NSEMS = 1 the entire array will be locked, which leads to unnecessary waits
because reads and writes do not overlap most of the time. Performance increases
as NSEMS increases, and flattens out at NSEMS = 32, at which point the code
runs twice as fast when compared to NSEMS = 1.
I want to change the code to use OpenMPI RMA, but MPI_Win_lock locks the entire 
array, which is similar to NSEMS = 1. Is there a way to have more granular 
locks?

Gijsbert
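
For reference, a minimal sketch of the striped-lock scheme described above,
assuming a System V semaphore set of NSEMS semaphores created elsewhere with
semget() and initialized to 1; the helper names are illustrative:

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

#define NSEMS 32

/* Adjust semaphore (i % NSEMS) by delta: -1 locks the stripe, +1 unlocks it. */
static void stripe_op(int semid, long i, short delta)
{
    struct sembuf op;

    op.sem_num = (unsigned short)(i % NSEMS);
    op.sem_op  = delta;
    op.sem_flg = 0;                 /* block until the stripe is free */
    semop(semid, &op, 1);
}

/* Writer: update element i of the shared array under its stripe lock. */
static void write_element(int semid, long *shared_array, long i, long value)
{
    stripe_op(semid, i, -1);        /* lock stripe i % NSEMS */
    shared_array[i] = value;
    stripe_op(semid, i, +1);        /* unlock */
}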