Hi Eddie

Open MPI needs to create a temporary directory tree (what we call our "session
directory") where it stores things like the shared memory file. From this
output, it appears that your /tmp directory is "locked" to root access only.
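
Incidentally, errno=13 in the ftruncate failure is EACCES ("permission
denied"), which points the same way. A quick way to confirm from the shell (a
minimal sketch; the locked-down mode in the comment is illustrative, not your
actual output):

    # On a healthy system /tmp is world-writable with the sticky bit set:
    #   drwxrwxrwt ... root root ... /tmp
    # A mode like drwx------ root root would explain the failure.
    ls -ld /tmp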

You have three options for resolving this problem (example commands follow
the list):

(a) you could make /tmp accessible to general users;

(b) you could use the --tmpdir <dir> command line option to point Open MPI at
another directory that is accessible to the user (for example, you could use
a "tmp" directory under the user's home directory); or

(c) you could set the MCA parameter tmpdir_base (for example, via the
OMPI_MCA_tmpdir_base environment variable) to identify a directory we can use
instead of /tmp.
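
Concretely, a minimal sketch of each option (the paths, and bash syntax for
the export, are assumptions on my part):

    # (a) restore the usual world-writable /tmp with the sticky bit (as root):
    chmod 1777 /tmp

    # (b) point mpirun at a user-writable directory:
    mpirun --tmpdir /home/eddie/tmp -np 2 tut01

    # (c) set the MCA parameter through the environment instead:
    export OMPI_MCA_tmpdir_base=/home/eddie/tmp
    mpirun -np 2 tut01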

If you select option (b) or (c), the only requirement is that this location
must be accessible on every node being used. Let me be clear on this: the tmp
directory must not be NFS-mounted and therefore shared across all nodes. Each
node must be able to access a location of the given name, but that location
should be strictly local to each node.
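
If the same path does not already exist locally on every node, you would need
to create it on each one first; a minimal sketch, assuming ssh access and
made-up hostnames and path:

    for host in node1 node2 node3; do
        ssh $host mkdir -p /scratch/eddie/tmp
    done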

Hope that helps
Ralph



On 1/17/07 12:25 AM, "eddie168" <eddie168+ompi_u...@gmail.com> wrote:

> Dear all,
>  
> I have recently installed Open MPI 1.1.2 on an OpenSSI cluster running
> Fedora Core 3. I tested a simple hello world MPI program (attached) and it
> runs OK as root. However, if I run the same program as a normal user, it
> gives the following error:
>  
> [eddie@oceanus:~/home2/mpi_tut]$ mpirun -np 2 tut01
> [oceanus:125089] mca_common_sm_mmap_init: ftruncate failed with errno=13
> [oceanus:125089] mca_mpool_sm_init: unable to create shared memory mapping
> (/tmp/openmpi-sessions-eddie@localhost_0/default-universe/1/shared_mem_pool.localhost)
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>   PML add procs failed
>   --> Returned "Out of resource" (-2) instead of "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** before MPI was initialized
> *** MPI_ERRORS_ARE_FATAL (goodbye)
> [eddie@oceanus:~/home2/mpi_tut]$
> 
> Do I need to give the user certain permissions in order to oversubscribe
> processes?
> 
> Thanks in advance,
> 
> Eddie.

