[OMPI users] Fwd: mpirun noticed that process rank 5 with PID 0 on node localhost exited on signal 9 (Killed).

2018-09-28 Thread Zeinab Salah
Thank you for your response.
I have attached the results of running memcheck with:

valgrind --tool=memcheck --leak-check=yes -v mpirun -np 8 \
    -mca btl vader,self,tcp -mca btl_tcp_eager_limit 4095 \
    -x LD_LIBRARY_PATH ./chimere.e
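
Note that with valgrind placed in front of mpirun, memcheck examines mpirun
itself rather than the application processes. To check each MPI rank, valgrind
is usually placed in front of the application instead, along the lines of
(same options as above):

mpirun -np 8 -mca btl vader,self,tcp -mca btl_tcp_eager_limit 4095 \
    -x LD_LIBRARY_PATH valgrind --tool=memcheck --leak-check=yes -v ./chimere.e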

On Friday, September 28, 2018 at 7:03 PM, Ralph H Castain <r...@open-mpi.org> wrote:

> Ummm…looks like you have a problem in the input deck for that application.
> Not sure what we can say about it…
>
>
> > On Sep 28, 2018, at 9:47 AM, Zeinab Salah  wrote:
> >
> > Hi everyone,
> > I am using openmpi-3.0.2 and I want to run the CHIMERE model with 8
> > processors, but at the parallel-mode step the run stops with the following
> > error message.
> > Could you please help me?
> > Thank you in advance.
> > Zeinab
> >
> >  +++ CHIMERE RUNNING IN PARALLEL MODE +++
> >   MPI SUB-DOMAINS :
> > rank  izstart  izend  nzcount  imstart imend  nmcount i   j
> > 
> >1   1  14  14   1  22  22   1   1
> >2  15  27  13   1  22  22   2   1
> >3  28  40  13   1  22  22   3   1
> >4  41  53  13   1  22  22   4   1
> >5   1  14  14  23  43  21   1   2
> >6  15  27  13  23  43  21   2   2
> >7  28  40  13  23  43  21   3   2
> >8  41  53  13  23  43  21   4   2
> >  Sub domain dimensions:   14  22
> >
> >  boundary conditions:
> /home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.list
> >3  boundary conditions file(s) found
> >  Opening
> /home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.nc-gas
> >  Opening
> /home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.nc-aer
> >  Opening
> /home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.nc-dust
> >  Opening
> /home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.nc-gas
> >  Opening
> /home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.nc-aer
> >  Opening
> /home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.nc-dust
> > ---
> > Primary job  terminated normally, but 1 process returned
> > a non-zero exit code. Per user-direction, the job has been aborted.
> > ---
> >
> --
> > mpirun noticed that process rank 5 with PID 0 on node localhost exited
> on signal 9 (Killed).
> >
> --
> >
> > real  3m51.733s
> > user  0m5.044s
> > sys   1m8.617s
> > Abnormal termination of step2.sh
> >

[OMPI users] mpirun noticed that process rank 5 with PID 0 on node localhost exited on signal 9 (Killed).

2018-09-28 Thread Zeinab Salah
Hi everyone,
I am using openmpi-3.0.2 and I want to run the CHIMERE model with 8
processors, but at the parallel-mode step the run stops with the following
error message.
Could you please help me?
Thank you in advance.
Zeinab

 +++ CHIMERE RUNNING IN PARALLEL MODE +++
  MPI SUB-DOMAINS :
rank  izstart  izend  nzcount  imstart  imend  nmcount   i   j
   1        1     14       14        1     22       22   1   1
   2       15     27       13        1     22       22   2   1
   3       28     40       13        1     22       22   3   1
   4       41     53       13        1     22       22   4   1
   5        1     14       14       23     43       21   1   2
   6       15     27       13       23     43       21   2   2
   7       28     40       13       23     43       21   3   2
   8       41     53       13       23     43       21   4   2
 Sub domain dimensions:   14  22

 boundary conditions:
/home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.list
   3  boundary conditions file(s) found
 Opening
/home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.nc-gas
 Opening
/home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.nc-aer
 Opening
/home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.nc-dust
 Opening
/home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.nc-gas
 Opening
/home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.nc-aer
 Opening
/home/dream/CHIMERE/chimere2017r4/../BIGFILES/OUTPUTS/Test/../INIBOUN.10/BOUN_CONCS.2009030700_2009030900_Test.nc-dust
---
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
---
--
mpirun noticed that process rank 5 with PID 0 on node localhost exited on
signal 9 (Killed).
--

real 3m51.733s
user 0m5.044s
sys 1m8.617s
Abnormal termination of step2.sh
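
Signal 9 (SIGKILL) is typically delivered from outside the application, often
by the kernel's out-of-memory (OOM) killer when the node runs out of RAM. One
way to check for that, shortly after the failed run, is to look at the kernel
log, e.g.:

dmesg | grep -i -E "killed process|out of memory"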

Re: [OMPI users] --with-mpi-f90-size in openmpi-3.0.2

2018-09-27 Thread Zeinab Salah
Thank you so much for your detailed answers.
I am using gfortran 4.8.3; what should I do, or which Open MPI version is
suitable for this compiler version?

Thanks again
Best wishes
Zeinab

On Thursday, September 27, 2018 at 4:21 PM, Jeff Squyres (jsquyres) via users <users@lists.open-mpi.org> wrote:

> On Sep 27, 2018, at 12:16 AM, Zeinab Salah  wrote:
> >
> > I have a problem running an air quality model, maybe because of the size
> > of the calculations, so I have tried different versions of Open MPI.
> > I want to install openmpi-3.0.2 with the option
> > "--with-mpi-f90-size=medium", but this option is unrecognized in the newer
> > versions. What is the replacement option, or how can we control the size
> > of the MPI F90 module in the new versions?
>
> This option did, indeed, disappear in more recent versions of Open MPI.
>
> The short version is that if you have a "modern" gfortran (i.e., >= v4.9)
> or any other modern fortran compiler, then you will get the full "mpi"
> module.
>
> The "mpi-f90-size" option was only necessary for older gfortran versions
> that had limitations that caused us to make the user choose: do you want
> small, medium, or large?
>
> Meaning: if your gfortran is >= v4.9 or you're using a different fortran
> compiler, don't worry about this option.
>
> > Also, does this option allow numerical models to deal with large data
> > files?
>
> No; this option only had to do with how many MPI API interfaces were in
> the "mpi" module (i.e., if your application invokes "use mpi" instead of
> "include 'mpif.h'").  Even if you chose the "small" size (meaning: many MPI
> APIs were not listed in the "mpi" module), all the MPI APIs would compile
> and link and work at run-time just fine.  The number of interfaces in the
> "mpi" module basically just means how much compile-time checking you get
> when you are compiling your MPI application.
>
> Check out this paper (
> https://www.open-mpi.org/papers/euro-pvmmpi-2005-fortran/) if you care
> about the reasons why.
>
> --
> Jeff Squyres
> jsquy...@cisco.com
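
As a small illustration of the compile-time checking described above, a
minimal sketch (the file name and program are hypothetical, not part of the
CHIMERE sources):

program use_mpi_check
  use mpi              ! explicit interfaces: argument mistakes are caught when compiling
  implicit none
  integer :: ierr, rank
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  ! With "use mpi", a call with a wrong argument count, e.g.
  !   call MPI_Comm_rank(MPI_COMM_WORLD, rank)
  ! is typically rejected by the compiler; with "include 'mpif.h'" it would
  ! compile and only fail (or misbehave) at run time.
  call MPI_Finalize(ierr)
end program use_mpi_check

Built with the Open MPI wrapper compiler, e.g. "mpifort use_mpi_check.f90 -o
use_mpi_check".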

[OMPI users] --with-mpi-f90-size in openmpi-3.0.2

2018-09-26 Thread Zeinab Salah
Hi everyone,
I have a problem running an air quality model, maybe because of the size of
the calculations, so I have tried different versions of Open MPI.
I want to install openmpi-3.0.2 with the option "--with-mpi-f90-size=medium",
but this option is unrecognized in the newer versions. What is the replacement
option, or how can we control the size of the MPI F90 module in the new
versions? Also, does this option allow numerical models to deal with large
data files?

Thank you in advance.
Best regards,
Zeinab
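
Since "--with-mpi-f90-size" no longer exists, a build of openmpi-3.0.2 with a
recent compiler simply omits it; a minimal sketch (the installation prefix is
only an example):

./configure --prefix=$HOME/opt/openmpi-3.0.2
make -j4 all
make install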