Hi all,
Does anyone know of a relatively portable solution for querying a
given system for the shmctl behavior that I am relying on, or is this
going to be a nightmare? Because, if I am reading this thread
correctly, the presence of shmget and Linux is not sufficient for
determining an
On May 2 2010, Ashley Pittman wrote:
On 2 May 2010, at 04:03, Samuel K. Gutierrez wrote:
As to performance, there should be no difference in use between sys-V
shared memory and file-backed shared memory; the instructions issued and
the MMU flags for the page should both be the same, so the
On 02/05/10 06:49, Ashley Pittman wrote:
> I think you should look into this a little deeper, it
> certainly used to be the case on Linux that setting
> IPC_RMID would also prevent any further processes from
> attaching to the segment.
That certainly appears to be the case in the current master
On 01/05/10 23:03, Samuel K. Gutierrez wrote:
> I call shmctl IPC_RMID immediately after one process has
> attached to the segment because, at least on Linux, this
> only marks the segment for destruction.
That's correct: looking at the kernel code (at least in the
current git master) the
On 2 May 2010, at 04:03, Samuel K. Gutierrez wrote:
> As far as I can tell, calling shmctl IPC_RMID is immediately destroying
> the shared memory segment even though there is at least one process
> attached to it. This is interesting and confusing because Solaris 10's
> behavior description of
Hi Ethan,
Sorry about the lag.
As far as I can tell, calling shmctl IPC_RMID is immediately destroying
the shared memory segment even though there is at least one process
attached to it. This is interesting and confusing because Solaris 10's
behavior description of shmctl IPC_RMID is similar to
Hi Ethan,
Bummer. What does the following command show?
sysctl -a | grep shm
Thanks!
--
Samuel K. Gutierrez
Los Alamos National Laboratory
On Apr 29, 2010, at 1:32 PM, Ethan Mallove wrote:
Hi Samuel,
I'm trying to run off your HG clone, but I'm seeing issues with
c_hello, e.g.,
$
Hi,
Faster component initialization/finalization times is one of the main
motivating factors of this work. The general idea is to get away from
creating a rather large backing file. With respect to module
bandwidth and latency, mmap and sysv seem to be comparable - at least
that is
On Tue, Apr 27, 2010 at 7:55 PM, Samuel K. Gutierrez wrote:
> With Jeff and Ralph's help, I have completed a System V shared memory
> component for Open MPI.
What is the motivation for this work? Are there situations where the
mmap based SM component doesn't work or is slow(er)
Hi,
With Jeff and Ralph's help, I have completed a System V shared memory
component for Open MPI. I have conducted some preliminary tests on
our systems, but would like to get test results from a broader audience.
As it stands, mmap is the default, but System V shared memory can be