Hi Samuel,

I'm trying to run off your HG clone, but I'm seeing issues with
c_hello, e.g.,

  $ mpirun -mca mpi_common_sm sysv --mca btl self,sm,tcp --host 
burl-ct-v440-2,burl-ct-v440-2 -np 2 ./c_hello
  --------------------------------------------------------------------------
  A system call failed during shared memory initialization that should
  not have.  It is likely that your MPI job will now either abort or
  experience performance degradation.

    Local host:  burl-ct-v440-2
    System call: shmat(2)
    Process:     [[43408,1],1]
    Error:       Invalid argument (errno 22)
  --------------------------------------------------------------------------
  ^Cmpirun: killing job...

  $ uname -a
  SunOS burl-ct-v440-2 5.10 Generic_118833-33 sun4u sparc SUNW,Sun-Fire-V440

The same test works fine if I s/sysv/mmap/ in the command line above, i.e.,
if I select the mmap common sm component instead of sysv.
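
In case it helps narrow things down, below is the kind of standalone check I
can run on that node (a little hypothetical test program, not anything from
the tree). It simply walks the same shmget(2)/shmat(2) sequence the sysv
component relies on and reports the errno, so it should show whether the
EINVAL reproduces outside of Open MPI:

  /* Hypothetical standalone check (not from the Open MPI tree): walk the
   * same shmget(2)/shmat(2) calls the sysv component relies on and report
   * the errno, to see whether the failure reproduces outside Open MPI. */
  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/ipc.h>
  #include <sys/shm.h>

  int main(int argc, char **argv)
  {
      /* Segment size is a guess; pass the size the component actually
       * requests as argv[1] to get a closer reproduction. */
      size_t size = (argc > 1) ? (size_t)atoll(argv[1]) : 4 * 1024 * 1024;

      int shmid = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);
      if (shmid < 0) {
          fprintf(stderr, "shmget(%lu) failed: %s (errno %d)\n",
                  (unsigned long)size, strerror(errno), errno);
          return 1;
      }

      void *addr = shmat(shmid, NULL, 0);
      if (addr == (void *)-1) {
          fprintf(stderr, "shmat failed: %s (errno %d)\n",
                  strerror(errno), errno);
      } else {
          printf("attached %lu bytes at %p\n", (unsigned long)size, addr);
          (void)shmdt(addr);
      }

      /* Mark the segment for removal so it does not linger in ipcs(1). */
      (void)shmctl(shmid, IPC_RMID, NULL);
      return (addr == (void *)-1) ? 1 : 0;
  }

If the plain shmat fails the same way there, the problem is presumably on the
system side of that box rather than in the component.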

Regards,
Ethan


On Wed, Apr/28/2010 07:16:12AM, Samuel K. Gutierrez wrote:
>  Hi,
> 
>  Faster component initialization/finalization times are one of the main 
>  motivating factors of this work.  The general idea is to get away from 
>  creating a rather large backing file.  With respect to module bandwidth and 
>  latency, mmap and sysv seem to be comparable - at least that is what my 
>  preliminary tests have shown.  As it stands, I have not come across a  
>  situation where the mmap SM component doesn't work or is slower.
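
(Just so I am sure I follow the backing file point, my rough mental model of
the two setup paths is something like the sketch below. This is only an
illustration of the difference and not the actual common sm code; the file
name and segment size are made up.)

  #include <fcntl.h>
  #include <sys/ipc.h>
  #include <sys/mman.h>
  #include <sys/shm.h>
  #include <sys/stat.h>
  #include <unistd.h>

  #define SM_SIZE (4 * 1024 * 1024)   /* illustrative size only */

  /* mmap path: a file of the full segment size has to exist on the
   * filesystem (the "backing file"), and every peer maps it. */
  static void *attach_mmap(const char *path)
  {
      int fd = open(path, O_RDWR | O_CREAT, 0600);
      if (fd < 0) return NULL;
      if (ftruncate(fd, SM_SIZE) < 0) { close(fd); return NULL; }
      void *p = mmap(NULL, SM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      close(fd);
      return (p == MAP_FAILED) ? NULL : p;
  }

  /* sysv path: the kernel hands back an anonymous segment, so there is no
   * file to create, grow, or clean up on the filesystem. */
  static void *attach_sysv(void)
  {
      int id = shmget(IPC_PRIVATE, SM_SIZE, IPC_CREAT | 0600);
      if (id < 0) return NULL;
      void *p = shmat(id, NULL, 0);
      /* Mark for removal; it is destroyed once the last process detaches. */
      (void)shmctl(id, IPC_RMID, NULL);
      return (p == (void *)-1) ? NULL : p;
  }

  int main(void)
  {
      void *a = attach_mmap("/tmp/sm_backing_file");  /* made-up path */
      void *b = attach_sysv();
      return (a != NULL && b != NULL) ? 0 : 1;
  }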
> 
>  Hope that helps,
> 
>  --
>  Samuel K. Gutierrez
>  Los Alamos National Laboratory
> 
>  On Apr 28, 2010, at 5:35 AM, Bogdan Costescu wrote:
> 
> > On Tue, Apr 27, 2010 at 7:55 PM, Samuel K. Gutierrez <sam...@lanl.gov> 
> > wrote:
> >> With Jeff and Ralph's help, I have completed a System V shared memory
> >> component for Open MPI.
> >
> > What is the motivation for this work? Are there situations where the
> > mmap-based SM component doesn't work or is slow(er)?
> >
> > Kind regards,
> > Bogdan
