Oh, I see. Thank you for the information.

The machine has 6GB of RAM and I am creating 4 processes (one per core).
Are you sure this is caused by a lack of resources rather than by some
problem with the network settings? (I want to run the programs only on
my own server.)
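
By your numbers, with 4 ranks the per-rank rule gives 4 x 1GB = 4GB, but
the 16GB whole-system minimum is larger, so 16GB is what applies, and my
6GB falls short either way. I also tried a back-of-the-envelope check on
the 30x30x56x84 lattice from my log. This is only a rough sketch that
counts the gauge links alone (4 links of 3x3 complex-double matrices,
i.e. 576 bytes per site); the real su3imp working set is several times
larger:

    # Rough lower bound on the lattice memory footprint (sketch only;
    # the actual MILC allocation adds CG vectors, momenta, and the
    # R algorithm's extra field copies on top of the gauge links).
    nx, ny, nz, nt = 30, 30, 56, 84       # lattice dimensions from my log
    sites = nx * ny * nz * nt             # 4,233,600 sites
    bytes_per_site = 4 * 9 * 16           # 4 links x 3x3 complex doubles
    gauge_gb = sites * bytes_per_site / 2**30
    print(f"{sites} sites -> ~{gauge_gb:.1f} GB for the gauge links alone")
    # prints ~2.3 GB before any other fields, vs. my 6GB of RAM

So "NODE 0: no room for lattice" does look consistent with running out
of memory rather than with my network settings.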

Is there any way to do this? (I need to run only 4 processes for my project.)

Thank you.

Best,
Kishore Kumar Pusukuri
http://www.cs.ucr.edu/~kishore



On 28 April 2010 17:18, Martin Siegert <sieg...@sfu.ca> wrote:

> How much memory is available on that quad core machine?
> The minimum requirements for MPIM2007 are:
> 16GB of memory for the whole system or 1GB of memory per rank, whichever
> is larger.
> For MPIL2007 you need to use at least 64 processes, and a minimum of 128GB
> (2GB per process) is required.
>
> Cheers,
> Martin
>
> --
> Martin Siegert
> Head, Research Computing
> WestGrid Site Lead
> IT Services                                phone: 778 782-4691
> Simon Fraser University                    fax:   778 782-4242
> Burnaby, British Columbia                  email: sieg...@sfu.ca
> Canada  V5A 1S6
>
> On Wed, Apr 28, 2010 at 05:32:12AM -0500, Jeff Squyres (jsquyres) wrote:
> >
> >    I don't know much about specmpi, but it seems like it is choosing to
> >    abort. Maybe the "no room for lattice" has some meaning...?
> >    -jms
> >    Sent from my PDA. No type good.
> >
> >    _______________________________________________________________________
> >
> >    From: users-boun...@open-mpi.org <users-boun...@open-mpi.org>
> >    To: us...@open-mpi.org <us...@open-mpi.org>
> >    Sent: Wed Apr 28 01:47:01 2010
> >    Subject: [OMPI users] MPI_ABORT was invoked on rank 0 in
> >    communicator MPI_COMM_WORLD with errorcode 0.
> >
> >    Hi,
> >    I am trying to run the SPEC MPI 2007 workload on a quad-core
> >    machine. However, I am getting this error message. I also tried the
> >    hostfile option by specifying "localhost slots=4", but I still get
> >    the following error.
> >    Please help me.
> >    $mpirun  --mca btl tcp,sm,self -np 4 su3imp_base.solaris
> >    SU3 with improved KS action
> >    Microcanonical simulation with refreshing
> >    MIMD version 6
> >    Machine =
> >    R algorithm
> >    type 0 for no prompts  or 1 for prompts
> >    nflavors 2
> >    nx 30
> >    ny 30
> >    nz 56
> >    nt 84
> >    iseed 1234
> >    LAYOUT = Hypercubes, options = EVENFIRST,
> >    NODE 0: no room for lattice
> >    termination: Tue Apr 27 23:41:44 2010
> >    Termination: node 0, status = 1
> >
> >    --------------------------------------------------------------------------
> >    MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> >    with errorcode 0.
> >    NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> >    You may or may not see output from other processes, depending on
> >    exactly when Open MPI kills them.
> >
> >    --------------------------------------------------------------------------
> >
> >    --------------------------------------------------------------------------
> >    mpirun has exited due to process rank 0 with PID 17239 on
> >    node cache-aware exiting without calling "finalize". This may
> >    have caused other processes in the application to be
> >    terminated by signals sent by mpirun (as reported here).
> >    Best,
> >    Kishore Kumar Pusukuri
> >    http://www.cs.ucr.edu/~kishore
> >
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
