Dear Andrew Burns,
Many thanks for providing steps to check my programs. The combined
program is now running in parallel, but the values from one of the programs
are appearing as NaN. One possible reason may be MPI_COMM_WORLD. I am still
trying to sort it out. I have attached the related files and outputs
for your kind suggestions.

Regards
Ashfaq

On Fri, Feb 6, 2015 at 6:35 PM, Burns, Andrew J CTR (US) <
andrew.j.burns35....@mail.mil> wrote:

> Classification: UNCLASSIFIED
> Caveats: NONE
>
> The placement of clminitialize and clmstop feels a little awkward, but it
> doesn't look like it would break the program. If you were calling MPI_Init
> more than once it would throw an error, and if Finalize were called early
> in clmstop the only serial section would be the deallocation.
>
>
>
> One other thought is to ensure that you are properly launching the program
> as multicore.
>
> The command should be similar to the following (where NPROCS is the number
> of cores being used):
>
> mpirun -n NPROCS ./program
>
> If you were to launch the program with simply "./program" it would run
> serially. It would also run serially if you were to call
> "mpirun ./program", since no number of processes is specified.
>
>
>
>
> If the program is properly launched in parallel and then converts to
> serial, you should be able to track down the location where
> this happens by inserting some core polling similar to the following
> pseudocode:
>
> for (i = 0; i < numprocs; ++i)
> {
>   if (i == coreid)
>   {
>     print("core ", coreid, " out of ", numprocs)
>   }
>   MPI_Barrier()
> }
>
>
>
> You will want to check all of the calls inside the main loop to ensure
> that none of them call finalize.
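One quick way to do that scan, assuming the sources are .F/.F90 files in the build directory (the paths here are illustrative), is a case-insensitive grep. The demo below writes a scratch file first so the command can be tried as-is:

```shell
# Write a two-line scratch file containing a stray Finalize call,
# then locate it with a case-insensitive, line-numbered search.
printf 'call foo(ierr)\ncall MPI_Finalize(ierr)\n' > /tmp/scan_demo.F90
grep -in 'mpi_finalize' /tmp/scan_demo.F90

# Against the real tree the scan would look like:
#   grep -rin 'mpi_finalize' *.F *.F90
```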
>
> -Andrew Burns
>
> -----Original Message-----
> From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Muhammad
> Ashfaqur Rahman
> Sent: Friday, February 06, 2015 9:50 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] prob in running two mpi merged program
> (UNCLASSIFIED)
>
> Dear Andrew Burns,
> Many thanks for your correct understanding and descriptive suggestion.
> I have now changed the FLAGS of one program so that it does not take any
> MPI tags, i.e., switched off MPI, and kept the other set open for MPI
> options.
> After that I call MPI_Init at the beginning of the main program
> (aadmn.F here) and MPI_Finalize in the closing routine
> (clmstop.F90 here) at the end.
> After compilation it still runs serially.
> I have attached here the FILES containing MPI calling and Makefile for
> your kind consideration.
>
>
> Regards
> Ashfaq
>
> On Thu, Feb 5, 2015 at 8:25 PM, Burns, Andrew J CTR (US) <
> andrew.j.burns35....@mail.mil> wrote:
>
>
>         Classification: UNCLASSIFIED
>         Caveats: NONE
>
>         Okay, I think I may get what's going on. I think you're calling
>         one MPI-capable program from within another MPI program. What you
>         have to do is assume that the program being called already had
>         MPI_Init called, and that MPI_Finalize will be called after the
>         program returns.
>
>         Example (pseudocode for brevity):
>
>         int main()
>         {
>           MPI_Init();
>
>           int x;
>
>           int p2result = Program2(x, comm);
>
>           MPI_Bcast(p2result, comm);
>
>           MPI_Finalize();
>         }
>
>         int Program2(int x, MPI_Comm comm)
>         {
>           int returnval;
>           MPI_AllReduce(&returnval, x, comm);
>           return returnval;
>         }
>
>
>
>         If the second program were to be:
>
>         int Program2(int x, MPI_Comm comm)
>         {
>           MPI_Init();
>           int returnval;
>           MPI_AllReduce(&returnval, x, comm);
>           return returnval;
>           MPI_Finalize()
>         }
>
>         The program would return to serial when MPI_Finalize is first
> called, potentially throwing several errors.
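A defensive variant of Program2, in the same pseudocode style, uses MPI_Initialized (a standard MPI call) so the routine works whether it is invoked standalone or from a caller that has already set up MPI; shutdown is still left entirely to the caller:

```
int Program2(int x, MPI_Comm comm)
{
  int already_up;
  MPI_Initialized(&already_up);  // sets flag: has MPI_Init run yet?
  if (!already_up)
    MPI_Init();                  // init only if the caller did not

  int returnval;
  MPI_AllReduce(&returnval, x, comm);
  return returnval;              // no MPI_Finalize here: finalizing
                                 // early would drop back to serial
}
```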
>
>         -Andrew Burns
>
>         -----Original Message-----
>         From: users [mailto:users-boun...@open-mpi.org] On Behalf Of
> Muhammad Ashfaqur Rahman
>         Sent: Wednesday, February 04, 2015 3:42 PM
>         To: Open MPI Users
>
>         Subject: Re: [OMPI users] prob in running two mpi merged program
> (UNCLASSIFIED)
>
>         Dear Andrew Burns,
>         Thank you for your ideas. Your guess is partly correct: I am
>         trying to merge two sets of programs into one executable and then
>         run it in MPI.
>         As per your suggestion, I have omitted MPI_Finalize from one set
>         and also commented out the MPI_Barrier in some parts.
>         But it is still serial.
>         For your reference, the Makefile is attached.
>
>
>         Regards
>         Ashfaq
>
>
>         On Tue, Feb 3, 2015 at 6:26 PM, Burns, Andrew J CTR (US) <
> andrew.j.burns35....@mail.mil> wrote:
>
>
>                 Classification: UNCLASSIFIED
>                 Caveats: NONE
>
>                 If I could venture a guess, it sounds like you are trying
> to merge two separate programs into one executable and run
> them in
>         parallel
>                 via MPI.
>
>                 The problem sounds like an issue where your program starts
> in parallel but then changes back to serial while the
> program is
>         still
>                 executing.
>
>                 I can't be entirely sure without looking at the code
> itself.
>
>                 One guess is that MPI_Finalize is in the wrong location.
> Finalize should be called to end the parallel section and
> move the
>         program
>                 back to serial. Typically this means that Finalize will be
> very close to the last line of the program.
>
>                 It may also be that, given the way your program is
>                 structured, execution is effectively serial because only
>                 one core is processing at any given moment. This may be
>                 due to extensive use of barrier or similar functions.
>
>                 Andrew Burns
>                 Lockheed Martin
>                 Software Engineer
>                 410-306-0409
>                 ARL DSRC
>                 andrew.j.bur...@us.army.mil
>                 andrew.j.burns35....@mail.mil
>
>                 -----Original Message-----
>                 From: users [mailto:users-boun...@open-mpi.org] On Behalf
> Of Ralph Castain
>                 Sent: Tuesday, February 03, 2015 9:05 AM
>                 To: Open MPI Users
>                 Subject: Re: [OMPI users] prob in running two mpi merged
> program
>
>                 I'm afraid I don't quite understand what you are saying,
> so let's see if I can clarify. You have two fortran MPI
> programs.
>         You start
>                 one using "mpiexec". You then start the other one as a
> singleton - i.e., you just run "myapp" without using mpiexec.
> The two
>         apps are
>                 attempting to execute an MPI_Connect/accept so they can
> "join".
>
>                 Is that correct? You mention MPICH in your statement about
> one of the procs - are you using MPICH or Open MPI? If
> the
>         latter, which
>                 version are you using?
>
>                 Ralph
>
>
>                 On Mon, Feb 2, 2015 at 11:35 PM, Muhammad Ashfaqur Rahman <
> ashfaq...@gmail.com> wrote:
>
>
>                         Dear All,
>                         Take my greetings. I am new in mpi usage. I have
> problems in parallel run, when two fortran mpi programs are
> merged
>         to one
>                 executable. If these two are separate, then they are
> running parallel.
>
>                         One program has used spmd and another one  has
> used mpich header directly.
>
>                         Other issue is that while trying to run the above
> mentioned merged program in mpi, it's first started with
> separate
>         parallel
>                 instances of same step and then after some steps it
> becomes serial.
>
>                         Please help me in this regards
>
>                         Ashfaq
>                         Ph.D Student
>                         Dept. of Meteorology
>
>                         _______________________________________________
>                         users mailing list
>                         us...@open-mpi.org
>                         Subscription:
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>                         Link to this post:
> http://www.open-mpi.org/community/lists/users/2015/02/26264.php
>
>
>
>
>                 Classification: UNCLASSIFIED
>                 Caveats: NONE
>
>
>
>                 _______________________________________________
>                 users mailing list
>                 us...@open-mpi.org
>                 Subscription:
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>                 Link to this post:
> http://www.open-mpi.org/community/lists/users/2015/02/26266.php
>
>
>
>
>
>         Classification: UNCLASSIFIED
>         Caveats: NONE
>
>
>
>         _______________________________________________
>         users mailing list
>         us...@open-mpi.org
>         Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>         Link to this post:
> http://www.open-mpi.org/community/lists/users/2015/02/26293.php
>
>
>
>
> Classification: UNCLASSIFIED
> Caveats: NONE
>
>
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2015/02/26300.php
>
  *** FILE OPEND for WRITE UNIT =            21  NAME = 
 /home2/ashfaq/merged/run/lbc/rst.t42.811101                                    
                                                 
  *** STEP=            2  TIME=0081/11/01-01:10:00 
 salam:: dyna gat                       NaN                       NaN 
                       NaN                       NaN                       NaN
  *** STEP=            3  TIME=0081/11/01-01:20:00 
 salam:: dyna gat                       NaN                       NaN 
                       NaN                       NaN                       NaN
 [... the same "salam:: dyna gat" NaN output repeats at every 10-minute step
 through  *** STEP=           26  TIME=0081/11/01-05:10:00 ...]
* PACKAGE AADMIN !" admin: main routine
*
*   [HIS] 97/10/31(jkkim)     SNUGCM 2.0
**************************************************************CCC******
        subroutine aadmn          !" GCM main program
*
*   [PARAM]
#ifdef DYNAMIC
      USE zcdim                 !" # of grid point & wave
      USE zddim                 !" NMDIM
      USE zidim                 !" number of output data
      USE zsdim                 !" saving information (including zpdim.F)
      USE lsmpar
      USE lsmtcv
      USE lsmtvv
      USE lsmvar
      USE ocnvar
c KIM [2008/07/11] tokioka constraint
      USE geovv
# ifdef MPI
      USE mpipar                !" parameters for MPI parallelization
      INCLUDE   'mpif.h'        !" user include file for MPI programs
# endif
#else
# include       "zcdim.F"       /* # of grid point & wave */
# include       "zpdim.F"       /* physics etc. */
!# ifdef MPI
!#  error Macro MPI should be set at the same time as DYNAMIC.
!# endif
#endif
#include        "zhdim.F"       /* # of char. in string */
#include        "zccom.F"       /* stand. physical const. */
*
*   [VAR]
* grid value(t) <DYNMCS>  grid value(t+dt)
*
#ifdef DYNAMIC
      REAL, ALLOCATABLE :: GAU(:,:,:)    !" westerly
      REAL, ALLOCATABLE :: GAV(:,:,:)    !" southern wind
      REAL, ALLOCATABLE :: GAT(:,:,:)    !" temperature
      REAL, ALLOCATABLE :: GAPS(:,:)     !" surface pressure
      REAL, ALLOCATABLE :: GAQ(:,:,:,:)  !" humidity etc.
#else
      REAL       GAU   ( IDIM, JDIM, KMAX      )  !" westerly
      REAL       GAV   ( IDIM, JDIM, KMAX      )  !" southern wind
      REAL       GAT   ( IDIM, JDIM, KMAX      )  !" temperature
      REAL       GAPS  ( IDIM, JDIM            )  !" surface pressure
      REAL       GAQ   ( IDIM, JDIM, KMAX, NTR )  !" humidity etc.
#endif
*
* grid value(t-dt) <DYNMCS>  grid value(t)
*
#ifdef DYNAMIC
      REAL, ALLOCATABLE :: GBU(:,:,:)    !" westerly
      REAL, ALLOCATABLE :: GBV(:,:,:)    !" southern wind
      REAL, ALLOCATABLE :: GBT(:,:,:)    !" temperature
      REAL, ALLOCATABLE :: GBPS(:,:)     !" surface pressure
      REAL, ALLOCATABLE :: GBQ(:,:,:,:)  !" humidity etc.
#else
      REAL       GBU   ( IDIM, JDIM, KMAX      )  !" westerly
      REAL       GBV   ( IDIM, JDIM, KMAX      )  !" southern wind
      REAL       GBT   ( IDIM, JDIM, KMAX      )  !" temperature
      REAL       GBPS  ( IDIM, JDIM            )  !" surface pressure
      REAL       GBQ   ( IDIM, JDIM, KMAX, NTR )  !" humidity etc.
#endif
*
* time etc.
*
      REAL       TIME                  !" time
      INTEGER    ISTEP                 !" serial No. of step
      REAL       DELT                  !" time step delta(t)
      LOGICAL    OINIT                 !" initializing time or not
*
*   [ONCE]
      REAL       TSTART                !" start t of calculation
      REAL       TEND                  !" finish t of calculation
*
* value at point
*
#ifdef DYNAMIC
      REAL, ALLOCATABLE :: GDZS(:)     !" surface topography
      REAL, ALLOCATABLE :: ALAT(:)     !" latitude
      REAL, ALLOCATABLE :: DLAT(:)     !" weight of latitude
      REAL, ALLOCATABLE :: ALON(:)     !" longitude
      REAL, ALLOCATABLE :: DLON(:)     !" weight of longitude
      REAL, ALLOCATABLE :: SIG(:)      !" sigma-level (integer)
      REAL, ALLOCATABLE :: SIGM(:)     !" sigma-level (half lev)
      REAL, ALLOCATABLE :: DSIG(:)     !" delta(sigma) (integer)
      REAL, ALLOCATABLE :: DSIGM(:)    !" delta(sigma) (half lev)
#else
      REAL       GDZS  ( IJDIM )       !" surface topography
      REAL       ALAT  ( IJDIM )       !" latitude
      REAL       DLAT  ( IJDIM )       !" weight of latitude
      REAL       ALON  ( IJDIM )       !" longitude
      REAL       DLON  ( IJDIM )       !" weight of longitude
      REAL       SIG   ( KMAX )        !" sigma-level (integer)
      REAL       SIGM  (KMAX+1)        !" sigma-level (half lev)
      REAL       DSIG  ( KMAX )        !" delta(sigma) (integer)
      REAL       DSIGM (KMAX+1)        !" delta(sigma) (half lev)
#endif
*
*   [INTERNAL WORK] 
      INTEGER    IFPAR, JFPAR
      CHARACTER  HTIME*20
      LOGICAL    OQUIT
*
*   [INTERNAL PARM] 
      LOGICAL    OCHECK, OCKALL
      NAMELIST  /NMCHCK/ OCHECK, OCKALL
      DATA       OCHECK, OCKALL / .FALSE., .FALSE. /
#ifdef BENCH
      REAL(KIND=8) :: elapse
# ifdef MPI
      REAL(KIND=8) :: t_start, t_end
# else
      INTEGER :: icount0, icount1, icount_rate, icount_max
# endif
#endif
*
* << SETUP : initial setting >>
*
#ifdef MPI
!      CALL MPI_Init ( ierror )
      CALL MPI_Comm_size ( MPI_COMM_WORLD, nprocs, ierror )
      CALL MPI_Comm_rank ( MPI_COMM_WORLD, myrank, ierror )
      IF ( KIND(dummy) == 4 ) THEN
         MY_REAL  = MPI_REAL
         MY_2REAL = MPI_2REAL
      ELSE IF ( KIND(dummy) == 8 ) THEN
         MY_REAL  = MPI_DOUBLE_PRECISION
         MY_2REAL = MPI_2DOUBLE_PRECISION
      ELSE
         WRITE ( 6,* ) "ERROR: KIND(REAL) = ", KIND(dummy)
         CALL MPI_Finalize ( ierror )
         STOP
      END IF
#endif
#ifdef MPI
      IF ( myrank == 0 ) THEN
#endif
      WRITE ( 6,* ) ' @@@ AADMN: AGCM5.4 MAIN 95/01/30'
#ifdef MPI
      END IF
#endif
      CALL YPREP                !" sys.dep. initialization
*
      CALL REWNML ( IFPAR , JFPAR )
#ifdef MPI
      IF ( myrank == 0 ) THEN
         READ   ( IFPAR, NMCHCK, IOSTAT=iostat )
         lcom(1) = OCHECK
         lcom(2) = OCKALL
      END IF
      CALL MPI_Bcast ( iostat, 1, MPI_LOGICAL, 0,
     &     MPI_COMM_WORLD, ierror )
      icount = 2
      CALL MPI_Bcast ( lcom, icount, MPI_LOGICAL, 0,
     &     MPI_COMM_WORLD, ierror )
      IF ( myrank /= 0 ) THEN
         OCHECK = lcom(1)
         OCKALL = lcom(2)
      END IF
      IF ( iostat > 0 ) THEN
         WRITE (6,*) "READ ERROR: NMCHCK"
         CALL MPI_Finalize ( ierror )
         STOP
      END IF
#else
      READ   ( IFPAR, NMCHCK, END=190 )
#endif
  190 WRITE  ( JFPAR, NMCHCK )
*
      CALL CALNDR               !" set treatment of calendar
      CALL PCONST               !" physical constants
      CALL SETPAR               !" time of experiment etc.
     O         ( TSTART, TEND   )
#ifdef MPI
      ALLOCATE ( my_jmax    (0:nprocs-1) )
      ALLOCATE ( my_kpt   (3,0:nprocs-1), my_lpt   (3,0:nprocs-1) )
      ALLOCATE ( my_kdispl(2,0:nprocs-1), my_ldispl(2,0:nprocs-1) )
      ALLOCATE ( my_jh      (0:nprocs-1), my_jhdispl (0:nprocs-1) )
      ALLOCATE ( scounts    (0:nprocs-1), sdispls    (0:nprocs-1) )
      ALLOCATE ( rcounts    (0:nprocs-1), rdispls    (0:nprocs-1) )
      ALLOCATE ( jwho (jmax), jg2l (jmax), jl2g (jmax,0:nprocs-1) )
      ALLOCATE ( istatus (MPI_STATUS_SIZE) )
      CALL setpar_mpi ( JMAX )  !" set parameters for MPI
#endif
#ifdef DYNAMIC
*
* >>> zcdim
!      IMAX   = 128
!      JMAX   =  64
!      KMAX   =  20
!      NMAX   =  42
!      MINT   =   1
      MMAX   = NMAX
      LMAX   = NMAX
      IDIM   = IMAX+1
      JDIM   = JMAX
      KDIM   = KMAX
      IJDIM  = IDIM*JDIM
      IJSDIM = IDIM
      IJKDIM = IJDIM*KDIM
# ifdef MPI
      jmax_local    = my_jmax(myrank)
      jdim_local    = jmax_local
      jdim_global   = jmax_global
      ijdim_global  = IDIM*jdim_global
      ijkdim_global = ijdim_global*KDIM
      ldim  = MAX ( IDIM, jdim_global, KDIM )
      jkdim = MAX ( JDIM, KDIM )
      ALLOCATE ( gdata1d(ldim), buf1d(ldim) )
# endif
      ALLOCATE ( GAU   ( IDIM, JDIM, KMAX      ),
     &           GAV   ( IDIM, JDIM, KMAX      ),
     &           GAT   ( IDIM, JDIM, KMAX      ),
     &           GAPS  ( IDIM, JDIM            ),
     &           GAQ   ( IDIM, JDIM, KMAX, NTR ) )
      ALLOCATE ( GBU   ( IDIM, JDIM, KMAX      ),
     &           GBV   ( IDIM, JDIM, KMAX      ),
     &           GBT   ( IDIM, JDIM, KMAX      ),
     &           GBPS  ( IDIM, JDIM            ),
     &           GBQ   ( IDIM, JDIM, KMAX, NTR ) )
      ALLOCATE ( GDZS  ( IJDIM ),
     &           ALAT  ( IJDIM ),
     &           DLAT  ( IJDIM ),
     &           ALON  ( IJDIM ),
     &           DLON  ( IJDIM ),
     &           SIG   ( KMAX ),
     &           SIGM  (KMAX+1),
     &           DSIG  ( KMAX ),
     &           DSIGM (KMAX+1) )
c KIM [2008/07/11] tokioka constraint 
* >>> geovv 
      ALLOCATE ( pblh_save   ( IDIM, JDIM  ))
      ALLOCATE ( stdw_save   ( IDIM, JDIM  ))
      ALLOCATE ( wstar_save  ( IDIM, JDIM  ))
      ALLOCATE ( gdsh_save   ( IDIM, JDIM  ))
      ALLOCATE ( gdqf_save   ( IDIM, JDIM  ))
      ALLOCATE ( tkeout_save ( IDIM, JDIM, KMAX ) )

      allocate (ts_save(idim,jdim))
      allocate (qs_save(idim,jdim))

      pblh_save  = 500. 
      stdw_save  = 0.1
      wstar_save = 0.5 
      gdsh_save  = 10.   
      gdqf_save  = 4.0e-5

      ts_save = 0.
      qs_save = 0.

#ifdef BULK
      allocate (omega_save(idim,jdim,kmax))
      omega_save = 0.
      allocate (rino_save(idim,jdim,kmax))
      rino_save = 0.
      allocate (ri_star_save(idim,jdim))
      ri_star_save = 0.
      allocate (cbmf_save(idim,jdim))
      cbmf_save = 0.
#endif

      pblh_save = 300.
      tkeout_save = 0.1

* >>> zddim
      NMDIM = (MMAX/MINT+1)*(2*(NMAX+1)-MMAX)
     &      - (NMAX-LMAX)/MINT*(NMAX-LMAX+1)
#ifdef MPI
      jmxhf_global = jmax_global/2+1
      jmxhf_local  = jmax_local /2+1
#endif
      JMXHF = JMAX/2+1
* >>> zidim
# ifndef OPT_NHDIM
      NHDIM = NHISGK*IMAX*JMAX*KMAX
     &      + NHISG1*IMAX*JMAX
     &      + NHISZK*JMAX*KMAX
     &      + NHISZ1*JMAX
     &      + NHISZ0
# else      
      NHDIM = OPT_NHDIM
# endif
# ifndef OPT_NRDIM
      NRDIM = NRDAT*IJDIM+JDIM*KDIM+5*IJDIM*KMAX
# else      
      NRDIM = OPT_NRDIM
# endif
# ifdef MPI
      NGDWRK = ijdim_global*(KMAX+KPDIM)
# else
      NGDWRK = IJDIM*(KMAX+KPDIM)
# endif
* >>> zsdim
      NGSBUF = IJDIM*(KMAX+1)*MGSBFK+IJDIM*MGSBF1
* >>> lsmpar
!      lsminfo  = 42
      lsmlon   = IMAX
      lsmlat   = JMAX
!      lpt      = 2843
!      kpt      = 6042
!      msub     = 5
      mpt_sm   = lsmlon*msub
      mpt_lm   = kpt
      mpt      = mpt_sm
      numlv_sm = lsmlat
      numlv_lm = 1
      numlv    = numlv_sm
      atmlon   = idim
      atmlat   = jdim
* >>> lsmtcv
      ALLOCATE (
     &     ixy(lpt), jxy(lpt), ijxy(lpt), klnd(kpt),
     &     kvec(lpt,msub) )
      ALLOCATE ( wsg2g(lpt,msub) )
      ALLOCATE ( ivt(kpt), ist(kpt), isc(kpt) )
      ALLOCATE (
     &     watsat(kpt), hksat (kpt), smpsat(kpt),
     &     bch   (kpt), watdry(kpt), watopt(kpt),
     &     csol  (kpt), tksol (kpt), tkdry (kpt),
     &     sand  (kpt), clay  (kpt),
     &     dzsoi(msl,kpt), zsoi(msl,kpt), root(msl,kpt) )
      ALLOCATE ( lati(kpt), long(kpt) )
      ALLOCATE (
     &     begkpt(lsmlat), numkpt(lsmlat),
     &     beglpt(lsmlat), numlpt(lsmlat) )
* >>> lsmtvv
      ALLOCATE (
     &     h2osno(kpt), h2ocan(kpt), h2osoi(msl,kpt),
     &     tv    (kpt), tg    (kpt), tsoi  (msl,kpt),
     &     moz   (kpt), eah   (kpt), soot  (kpt), hsno  (kpt),
     &     fsno  (kpt), fwet  (kpt), htop  (kpt), tlai  (kpt),
     &     tsai  (kpt), elai  (kpt), esai  (kpt), foln  (kpt),
     &     stemb (kpt), rootb (kpt), soilc (kpt), igs   (kpt),
     &     albd  (mband,kpt), albi  (mband,kpt),
     &     albgrd(mband,kpt), albgri(mband,kpt),
     &     fabd  (mband,kpt), fabi  (mband,kpt),
     &     ftdd  (mband,kpt), ftid  (mband,kpt),
     &     ftii  (mband,kpt), fsun  (kpt) )
* >>> lsmvar
      ALLOCATE (
     &     pgcm   (kpt), psrf   (kpt), tgcm   (kpt),
     &     qgcm   (kpt), firgcm (kpt), qprecc (kpt),
     &     qprecl (kpt), ugcm   (kpt), vgcm   (kpt),
     &     hgcm   (kpt), thgcm  (kpt), egcm   (kpt),
     &     rhogcm (kpt), co2gcm (kpt), o2gcm  (kpt),
     &     solad(mband,kpt), solai(mband,kpt) )

      ALLOCATE (
     &     taux (kpt), tauy (kpt), fire (kpt), fsh  (kpt),
     &     fcev (kpt), fgev (kpt), fctr (kpt), trad (kpt),
     &     tsa  (kpt), fpsn (kpt), frm  (kpt), frg  (kpt),
     &     fmicr(kpt), fco2 (kpt), fira (kpt), fgr  (kpt),
     &     fsm  (kpt), tam  (kpt), tah  (kpt), fsa  (kpt),
     &     fsr  (kpt), ndvi (kpt), frmf (kpt), dmi  (kpt) )

      ALLOCATE (
     &     bevap(kpt), btran(kpt), rsw  (kpt), qsoil(kpt),
     &     qvege(kpt), qvegt(kpt), qintr(kpt), qcdew(kpt),
     &     qceva(kpt), qsnow(kpt), qfros(kpt), qsubl(kpt),
     &     qmelt(kpt), qinfl(kpt), qseva(kpt), qtran(kpt),
     &     qdrai(kpt), qover(kpt), qdrip(kpt), qsdew(kpt) )
* >>> ocnvar
      ALLOCATE (
     &     wgocn  (ijdim), tsocn  (ijdim),
     &     wgice  (ijdim), tsice  (ijdim,plevmx),
     &     hsice  (ijdim), hsnow  (ijdim) )
      ALLOCATE (
     &     seaasdir (ijdim), seaaldir (ijdim),
     &     seaasdif (ijdim), seaaldif (ijdim),
     &     sicasdir (ijdim), sicaldir (ijdim),
     &     sicasdif (ijdim), sicaldif (ijdim) )
#endif
*
      CALL SETCOR                      !" value at point
     O         ( ALON  , DLON  ,
     O           ALAT  , DLAT  ,
     O           SIG   , DSIG  ,
     O           SIGM  , DSIGM  )
* 
      CALL SETZS                       !" surface topography
     O         ( GDZS  )
*
      CALL RDSTRT                      !" read initial data
     O         ( GAU   , GAV   , GAT   , GAPS  , GAQ   , 
     O           GBU   , GBV   , GBT   , GBPS  , GBQ   , 
     O           TIME  , OINIT ,
     I           TSTART                                  )
*
      CALL SETTIM                      !" adjust time
     I         ( TIME  )
*
      CALL ACHKV
     I         ( GAU   , GAV   , GAT   , GAPS  , GAQ   , 
     I           'start GA'    , .TRUE.                  )
      CALL ACHKV
     I         ( GBU   , GBV   , GBT   , GBPS  , GBQ   , 
     I           'start GB'    , .TRUE.                  )
*
      ISTEP = 0
*
* << LOOP : <************* main loop >>
*
#ifdef BENCH
# ifdef MPI
      CALL MPI_Barrier ( MPI_COMM_WORLD, ierror )
      t_start = MPI_Wtime()
# else
      CALL SYSTEM_CLOCK ( icount0, icount_rate, icount_max )
# endif
#endif
 5000 CONTINUE
*
         ISTEP = ISTEP + 1
         CALL CPERPO ( .FALSE. )
         CALL CSS2CC ( HTIME, TIME )
         CALL CPERPO ( .TRUE.  )
         WRITE (6,*) ' *** STEP=', ISTEP, ' TIME=', HTIME
*
* << START : start of step >>
*
         CALL TIMSTP            !" control time
     M         ( DELT  , OINIT ,
     I           TIME  , GAU   , GAV   , ISTEP  )
         CALL HISTRT
*
         CALL SETPSF            !" set output Ps
     I         ( GAPS   )     
         CALL AHSTIN            !" standard data output
     I         ( GAU   , GAV   , GAT   , GAPS  , GAQ   )
*
* << DYNMCS : dynamics >>
* 
#ifndef NODYNAMICS
         CALL CLCSTR ( 'DYNMCS' )
         IF ( OCHECK ) THEN
            CALL ACHKV 
     I         ( GAU   , GAV   , GAT   , GAPS  , GAQ   ,
     I           'before DYNMCS', OCKALL                  )
         ENDIF
         CALL DYNMCS                   !" dynamics
     M         ( GAU   , GAV   , GAT   , GAPS  , GAQ   ,
     M           GBU   , GBV   , GBT   , GBPS  , GBQ   ,
     I           DELT  , OINIT ,
     C           GDZS  , ALON  , DLON  , ALAT  , DLAT  ,
     C           SIG   , SIGM  , DSIG  , DSIGM           )
                print*,'salam:: dyna gat',GAT(1:5,1,1)
         CALL CLCEND ( 'DYNMCS' )
#endif
*
* << PHYSCS : physics >>
*
#ifndef NOPHYSICS
         CALL CLCSTR ( 'PHYSCS' )
         IF ( OCHECK ) THEN
            CALL ACHKV 
     I         ( GAU   , GAV   , GAT   , GAPS  , GAQ   ,
     I           'before PHYSCS', OCKALL                 )
         ENDIF
         CALL PHYSCS                   !" physics
     M         ( GAU   , GAV   , GAT   , GAPS  , GAQ   ,
     I           TIME  , DELT*2, DELT  ,
     C           GDZS  , ALON  , ALAT  ,
     C           SIG   , SIGM  , DSIG  , DSIGM          )
         CALL CLCEND ( 'PHYSCS' )
*
#endif
*
         IF ( .NOT. OINIT ) THEN
*
            CALL STPTIM
     M         ( TIME  ,
     I           DELT    )
*
            CALL TFILT               !" time filter
     M         ( GBU   , GBV   , GBT   , GBPS  , GBQ   ,
     I           GAU   , GAV   , GAT   , GAPS  , GAQ    )
*
            CALL HISTRP
            CALL HISTOU( .FALSE. )     !" output data
*
            CALL QUITCK( OQUIT, TIME  )
            IF ( OQUIT ) THEN
               TEND = TIME
            ENDIF
*
            CALL WRRSTR         !" write to restart file
     I         ( GAU   , GAV   , GAT   , GAPS  , GAQ   ,
     I           GBU   , GBV   , GBT   , GBPS  , GBQ   ,
     I           TIME  , TSTART, TEND                   )
*
         ENDIF
*
* << LEND : end of main loop *************> >>
*
      IF ( .NOT. ( TIME .GE. TEND )  ) GOTO  5000
#ifdef BENCH
# ifdef MPI
      CALL MPI_Barrier ( MPI_COMM_WORLD, ierror )
      t_end = MPI_Wtime()
      elapse = t_end - t_start
# else
      CALL SYSTEM_CLOCK ( icount1, icount_rate, icount_max )
      elapse = (icount1-icount0)/dble(icount_rate)
# endif
#endif
*
      CALL HISTOU( .TRUE. )            !" output data
      CALL CLCOUT                      !" output of CPU time
#ifdef BENCH
      WRITE (6,*) '########### ELAPSE TIME SUMMARY ################'
      WRITE (6, '(" ", A16, 1PE15.6)') ' MAIN LOOP  = ', elapse
#endif
      CALL YFINE                       !" sys.dep. final treatment
*
#ifdef MPI
      DEALLOCATE ( my_jmax )
      DEALLOCATE ( my_kpt   , my_lpt     )
      DEALLOCATE ( my_kdispl, my_ldispl  )
      DEALLOCATE ( my_jh    , my_jhdispl )
      DEALLOCATE ( scounts  , sdispls    )
      DEALLOCATE ( rcounts  , rdispls    )
      DEALLOCATE ( jwho, jg2l, jl2g )
      DEALLOCATE ( istatus )
      DEALLOCATE ( gdata1d, buf1d )
#endif
#ifdef DYNAMIC
* >>> zcdim
      DEALLOCATE ( GAU, GAV, GAT, GAPS, GAQ )
      DEALLOCATE ( GBU, GBV, GBT, GBPS, GBQ )
      DEALLOCATE ( GDZS, ALAT, DLAT, ALON, DLON,
     &             SIG, SIGM, DSIG, DSIGM )
* >>> lsmtcv
      DEALLOCATE (
     &     ixy, jxy, ijxy, klnd,
     &     kvec )
      DEALLOCATE ( wsg2g )
      DEALLOCATE ( ivt, ist, isc )
      DEALLOCATE (
     &     watsat, hksat , smpsat,
     &     bch   , watdry, watopt,
     &     csol  , tksol , tkdry ,
     &     sand  , clay  ,
     &     dzsoi , zsoi  , root)
      DEALLOCATE ( lati, long )
      DEALLOCATE (
     &     begkpt, numkpt,
     &     beglpt, numlpt )
* >>> lsmtvv
      DEALLOCATE (
     &     h2osno, h2ocan, h2osoi,
     &     tv    , tg    , tsoi  ,
     &     moz   , eah   , soot  , hsno  ,
     &     fsno  , fwet  , htop  , tlai  ,
     &     tsai  , elai  , esai  , foln  ,
     &     stemb , rootb , soilc , igs   ,
     &     albd  , albi  ,
     &     albgrd, albgri,
     &     fabd  , fabi  ,
     &     ftdd  , ftid  ,
     &     ftii  , fsun  )
* >>> lsmvar
      DEALLOCATE (
     &     pgcm   , psrf   , tgcm   ,
     &     qgcm   , firgcm , qprecc ,
     &     qprecl , ugcm   , vgcm   ,
     &     hgcm   , thgcm  , egcm   ,
     &     rhogcm , co2gcm , o2gcm  ,
     &     solad  , solai  )

      DEALLOCATE (
     &     taux , tauy , fire , fsh  ,
     &     fcev , fgev , fctr , trad ,
     &     tsa  , fpsn , frm  , frg  ,
     &     fmicr, fco2 , fira , fgr  ,
     &     fsm  , tam  , tah  , fsa  ,
     &     fsr  , ndvi , frmf , dmi  )

      DEALLOCATE (
     &     bevap, btran, rsw  , qsoil,
     &     qvege, qvegt, qintr, qcdew,
     &     qceva, qsnow, qfros, qsubl,
     &     qmelt, qinfl, qseva, qtran,
     &     qdrai, qover, qdrip, qsdew )
* >>> ocnvar
      DEALLOCATE (
     &     wgocn, tsocn,
     &     wgice, tsice,
     &     hsice, hsnow )
      DEALLOCATE (
     &     seaasdir, seaaldir,
     &     seaasdif, seaaldif,
     &     sicasdir, sicaldir,
     &     sicasdif, sicaldif )
* >>> geovv
       deallocate(
     &  pblh_save,
     &  tkeout_save ,
     &  stdw_save, wstar_save, gdsh_save,
     &  gdqf_save, ts_save, qs_save
#ifdef BULK
     &  ,omega_save
     &  ,rino_save
     &  ,ri_star_save
     &  ,cbmf_save
#endif
     & )
#endif
#ifdef MPI
      CALL MPI_Finalize( ierror )
#endif
      STOP
      END
***********************************************************************
      SUBROUTINE AHSTIN         !" registration standard history output
     I         ( GDU   , GDV   , GDT   , GDPS  , GDQ   )
*
*   [PARAM] 
#ifdef DYNAMIC
      USE zcdim
#else
# include       "zcdim.F"
#endif
#include        "zpdim.F"       /* physics etc. */
      REAL       GDU   ( IJKDIM )       !" westerly u
      REAL       GDV   ( IJKDIM )       !" southerly wind v
      REAL       GDT   ( IJKDIM )       !" temperature
      REAL       GDPS  ( IJDIM  )       !" surface pressure
      REAL       GDQ   ( IJKDIM, NTR )  !" humidity q
*
      CALL     HISTIN
     I       ( GDU, 'U ', 'u-velocity       ' ,'m/s  ', 'ALEV'   )
      CALL     HISTIN
     I       ( GDV, 'V ', 'v-velocity       ' ,'m/s  ', 'ALEV'   )
      CALL     HISTIN
     I       ( GDT, 'T ', 'temperature      ' ,'K    ', 'ALEV'   )
      CALL     HISTIN
     I       ( GDPS,'PS', 'surface pressure ' ,'mb   ', 'ASFC'   )
      CALL     HISTIN
     I       ( GDQ, 'Q ', 'specific humidity' ,'kg/kg', 'ALEV'   )
      IF ( ITL .GE. 2 ) THEN
         CALL     HISTIN
     I       ( GDQ(1,ITL), 'QL', 'liquid water' ,'kg/kg', 'ALEV' )
      ENDIF
*
      RETURN
      END
*********************************************************************
      SUBROUTINE ACHKV          !" valid range monitor
     I         ( GDU   , GDV   , GDT   , GDPS  , GDQ   ,
     I           HLAB  , OALL                            )
*
*   [PARAM]
#ifdef DYNAMIC
      USE zcdim
#else
# include       "zcdim.F"       /* # of grid point & wave */
#endif
#include        "zpdim.F"       /* physics etc. */
*
*   [INPUT] 
      REAL       GDU   ( IJKDIM )      !" westerly u
      REAL       GDV   ( IJKDIM )      !" southerly wind v
      REAL       GDT   ( IJKDIM )      !" temperature
      REAL       GDPS  ( IJDIM  )      !" surface pressure
      REAL       GDQ   ( IJKDIM, NTR ) !" humidity q
      CHARACTER  HLAB *(*)
      LOGICAL    OALL
*
      CALL CHKVAL(GDU,     IDIM,JDIM,KMAX, -2.E2, 2.E2,'U' ,HLAB,OALL)
      CALL CHKVAL(GDV,     IDIM,JDIM,KMAX, -2.E2, 2.E2,'V' ,HLAB,OALL)
      CALL CHKVAL(GDT,     IDIM,JDIM,KMAX,  1.E2, 4.E2,'T' ,HLAB,OALL)
      CALL CHKVAL(GDPS,    IDIM,JDIM,1   ,  4.E2,11.E2,'PS',HLAB,OALL)
      CALL CHKVAL(GDQ(1,1),IDIM,JDIM,KMAX,-1.E-3,5.E-2,'Q' ,HLAB,OALL)
      IF ( ITL .GE. 2 ) THEN
        CALL CHKVAL(GDQ(1,ITL),IDIM,JDIM,KMAX,
     &                                    -1.E-3,5.E-2,'QL',HLAB,OALL)
      ENDIF
      IF ( OALL ) THEN
         CALL DIAGP ( GDU,      'GDU',  KMAX,'(G13.5)'  )
         CALL DIAGP ( GDV,      'GDV',  KMAX,'(G13.5)'  )
         CALL DIAGP ( GDT,      'GDT',  KMAX,'(G13.5)'  )
         CALL DIAGP ( GDPS,     'GDPS',   1 ,'(G13.5)'  )
         CALL DIAGP ( GDQ,      'GDQ',  KMAX,'(G13.5)'  )
         IF ( ITL .GE. 2 ) THEN
            CALL DIAGP ( GDQ(1,ITL), 'GDQL', KMAX,'(G13.5)'  )
         ENDIF
      ENDIF
*
      RETURN
      END
* PACKAGE DADMN !" dynamics: main routine
*"  [HIS] 95/03/15(numaguti) AGCM5.4.01
**********************************************************************
      SUBROUTINE DYNMCS         !"  dynamics
     M         ( GAU   , GAV   , GAT   , GAPS  , GAQ   ,
     M           GBU   , GBV   , GBT   , GBPS  , GBQ   ,
     I           DELT  , OINIT , 
     C           GDZS  , ALON  , DLON  , ALAT  , DLAT  ,
     C           SIG   , SIGM  , DSIG  , DSIGM           )
*
*"              semi-implicit time integration
*"              implicit diffusion
*
*   [PARAM]
#ifdef DYNAMIC
      USE zcdim                 !" # of grid point & wave
#ifdef BULK
      use geovv
#endif
# ifdef MPI
      USE mpipar                !" parameters for MPI parallelization
      include   'mpif.h'        !" user include file for MPI programs
# endif
#else
# include       "zcdim.F"       /* # of grid point & wave */
#endif
#include        "zpdim.F"       /* physics etc. */
#include        "zhdim.F"       /* # of char. in string */
#include        "zccom.F"       /* stand. physical const. */
*
*   [MODIFY]
* grid value(t) <GENGD>  grid value(t+dt)
*
      REAL       GAU   ( IJDIM, KMAX      )  !" westerly u
      REAL       GAV   ( IJDIM, KMAX      )  !" southerly wind v
      REAL       GAT   ( IJDIM, KMAX      )  !" temperature T
      REAL       GAPS  ( IJDIM            )  !" surface pressure
      REAL       GAQ   ( IJDIM, KMAX, NTR )  !" humidity etc.
*
* grid value(t-dt) <MOVGD>  grid value(t)
*
      REAL       GBU   ( IJDIM, KMAX      )  !" westerly u
      REAL       GBV   ( IJDIM, KMAX      )  !" southerly wind v
      REAL       GBT   ( IJDIM, KMAX      )  !" temperature T
      REAL       GBPS  ( IJDIM            )  !" surface pressure
      REAL       GBQ   ( IJDIM, KMAX, NTR )  !" humidity etc.
*
*   [INPUT]
*
      REAL       DELT                      !" time step delta(t)
      LOGICAL    OINIT                     !" initialized condition ?
      REAL       GDZS  ( IJDIM )           !" surface topography
      REAL       ALON  ( IJDIM )           !" longitude
      REAL       DLON  ( IJDIM )           !" weight of longitude
      REAL       ALAT  ( IJDIM )           !" latitude
      REAL       DLAT  ( IJDIM )           !" weight of latitude
      REAL       SIG   ( KMAX )            !" sigma-level (integer)
      REAL       SIGM  (KMAX+1)            !" sigma-level (half lev)
      REAL       DSIG  ( KMAX )            !" delta(sigma) (integer)
      REAL       DSIGM (KMAX+1)            !" delta(sigma) (half lev)
*
*   [INTERNAL WORK]
      REAL     GTUA  ( IJDIM, KMAX   )     !" tend. of u adv.
      REAL     GTVA  ( IJDIM, KMAX   )     !" tend. merid.adv. V
      REAL     GTKE  ( IJDIM, KMAX   )     !" kinetic energy term KE
      REAL     GTUT  ( IJDIM, KMAX   )     !" zonal adv. of temp.
      REAL     GTVT  ( IJDIM, KMAX   )     !" merid. adv. of temp.
      REAL     GTT   ( IJDIM, KMAX   )     !" tend. term of T
      REAL     GTPI  ( IJDIM         )     !" tend. term of p
      REAL     GSIGD ( IJDIM, KMAX+1 )     !" tendency of sigma
      REAL     GPID  ( IJDIM         )     !" tendency of pi
      REAL     GTUQ  ( IJDIM, KMAX, NTR )  !" zonal advection Uq
      REAL     GTVQ  ( IJDIM, KMAX, NTR )  !" meridional advection Vq
      REAL     GTQ   ( IJDIM, KMAX, NTR )  !" tend. of q
      REAL     GDOMG ( IJDIM, KMAX      )  !" p-velocity
*
      INTEGER    IFPAR, JFPAR, IJ, K, M
      REAL       DELI
*
*   [INTERNAL PARM] 
      LOGICAL    OCHECK, OCKALL
      NAMELIST  /NMCHKD/ OCHECK, OCKALL
      DATA       OCHECK, OCKALL / .FALSE., .FALSE. /
c     SAVE       OCHECK, OCKALL
c
      LOGICAL    OFIRST
      DATA       OFIRST / .TRUE. /
      SAVE       OFIRST
*
      IF ( OFIRST ) THEN
         WRITE ( 6,* ) ' @@@ DADMN: DYNAMICS CONTROL 97/09/29'
         OFIRST = .FALSE.
*
         CALL   REWNML ( IFPAR , JFPAR )
#ifdef MPI
         IF ( myrank == 0 ) THEN
            READ   ( IFPAR, NMCHKD, IOSTAT=iostat )
            lcom(1) = OCHECK
            lcom(2) = OCKALL
         END IF
         CALL MPI_Bcast ( iostat, 1, MPI_INTEGER, 0,
     &        MPI_COMM_WORLD, ierror )
         icount = 2
         CALL MPI_Bcast ( lcom, icount, MPI_LOGICAL, 0,
     &        MPI_COMM_WORLD, ierror )
         IF ( myrank /= 0 ) THEN
            OCHECK = lcom(1)
            OCKALL = lcom(2)
         END IF
         IF ( iostat > 0 ) THEN
            WRITE (6,*) "READ ERROR: NMCHKD"
            CALL MPI_Finalize ( ierror )
            STOP
         END IF
#else
         READ   ( IFPAR, NMCHKD, END=190 )
#endif
  190    WRITE  ( JFPAR, NMCHKD )
      ENDIF
*
      CALL FIXMAS
     I         ( GBPS  , GBQ   , DLON  , DLAT  , DSIG   )
*
* << tendency term >>
*
      CALL DYNTRM
     O         ( GTUA  , GTVA  , GTKE  , GTUT  , GTVT  ,
     O           GTT   , GTPI  , GSIGD , GPID  , GTUQ  ,
     O           GTVQ  , GTQ   , GDOMG ,
     I           GAU   , GAV   , GAT   , GAPS  , GAQ   , 
     C           GDZS  , ALAT  , SIG   , SIGM  , DSIG    ) 
*     
* << time integration >>
*
      DELI = 1.0/(2.0*DELT)
      CALL HISTIF
     I   ( GBU,'DUDYN','dynamics u-tendency','m/s**2', 'ALEV',-DELI)
      CALL HISTIF
     I   ( GBV,'DVDYN','dynamics v-tendency','m/s**2', 'ALEV',-DELI)
      CALL HISTIF
     I   ( GBT,'DTDYN','dynamics T-tendency','K/s',    'ALEV',-DELI)
      CALL HISTIF
     I   ( GBPS,'DPDYN','dynamics Ps-tendency','hPa/s','ASFC',-DELI)
      CALL HISTIF
     I   ( GBQ,'DQDYN','dynamics q-tendency','1/s',    'ALEV',-DELI)
      IF ( ITL .GE. 2 ) THEN
         CALL HISTIF
     I (GBQ(1,1,ITL),'DLDYN','dynamics l-tendency','1/s','ALEV',-DELI)
      ENDIF
*
      CALL DYNSTP
     M         ( GAU   , GAV   , GAT   , GAPS  , GAQ   ,
     M           GBU   , GBV   , GBT   , GBPS  , GBQ   ,
     I           GTUA  , GTVA  , GTKE  ,
     I           GTUT  , GTVT  , GTT   , GTPI  ,
     I           GTUQ  , GTVQ  , GTQ   ,
     I           DELT  , OINIT , 
     C           GDZS  , ALAT  , SIG   , SIGM  , DSIG    )
*
      CALL HISTAD ( GAU    , 'DUDYN',  DELI )
      CALL HISTAD ( GAV    , 'DVDYN',  DELI )
      CALL HISTAD ( GAT    , 'DTDYN',  DELI )
      CALL HISTAD ( GAPS   , 'DPDYN',  DELI )
      CALL HISTAD ( GAQ    , 'DQDYN',  DELI )
      IF ( ITL .GE. 2 ) THEN
         CALL HISTAD ( GAQ(1,1,ITL), 'DLDYN',  DELI )
      ENDIF
*
* << adjust >>
*
      CALL MASFIX
     M         ( GAPS  , GAQ   ,
     I           DELT  , DLON  , DLAT  , DSIG  )
*
      IF ( OCHECK ) THEN
         CALL DCHKT             !" valid range monitor
     I         ( GTUA  , GTVA  , GTKE  ,
     I           GTUT  , GTVT  , GTT   , 
     I           GTPI  , GSIGD , GPID  , 
     I           GTUQ  , GTVQ  , GTQ   , OCKALL )
      ENDIF
*
      CALL HISTIN
     I      ( GSIGD, 'SIGD  ', 'sigma-velocity  ' ,'1/s   ', 'AMLEV' )
      CALL HISTIN
     I      ( GPID , 'PID   ', 'pi-tendency     ' ,'1/s   ', 'ASFC'  )
*
      DO 2100 K = 1, KMAX
         DO 2100 IJ = 1, IJDIM
            GSIGD( IJ,K ) = ( GSIGD( IJ,K ) + GPID( IJ )*SIG( K ) )
     &                     *GAPS( IJ ) 
 2100 CONTINUE 
*
      CALL HISTIN
     I       ( GSIGD, 'OMG   ', 'p-velocity  ' ,'hPa/s', 'AMLEV'  )
*
c     DO K = 1,KMAX
c       gdpmu=amax1(1.,gaps(1)*sigm(k+1))
c       h=(alog(gdpmu         )-alog(gaps(1)*sigm(k)))
c       a=(alog(gdpmu         )-alog(gaps(1)*sig (k)))
c       b=(alog(gaps(1)*sig(k))-alog(gaps(1)*sigm(k)))
c       DO IJ = 1, IJSDIM
c         GDOMG(IJ,K)=GSIGD(IJ,K)*a/h+GSIGD(IJ,K+1)*b/h
c       ENDDO
c     ENDDO
c OMEGA correction 12 NOV. 2001, milee
c KIM [2008/07/11] omega correction
c     DO K=1,KMAX
c       DO IJ=1,IJSDIM
c         GDOMG(IJ,K)=GDOMG(IJ,K)*GAPS(IJ)
c       ENDDO
c     ENDDO
c
      CALL HISTIN
     I       ( GDOMG, 'OMEGA ', 'p-velocity  ' ,'hPa/s', 'ALEV'   )
*
#ifdef BULK
      do k = 1, kmax
        do j = 1, jdim
          do i = 1, idim

            omega_save(i,j,k) = gdomg(idim*(j-1)+i,k)

          enddo
        enddo
      enddo
#endif
      RETURN
      END
**********************************************************************
      SUBROUTINE DCHKT          !" valid range monitor
     I         ( GTUA  , GTVA  , GTKE  ,
     I           GTUT  , GTVT  , GTT   , 
     I           GTPI  , GSIGD , GPID  , 
     I           GTUQ  , GTVQ  , GTQ   , OCKALL )
*
*   [PARAM]
#ifdef DYNAMIC
      USE zcdim                 !" # of grid point & wave
#else
# include       "zcdim.F"       /* # of grid point & wave */
#endif
#include        "zpdim.F"       /* physics etc. */
*
*   [INPUT] 
      REAL     GTUA  ( IJDIM, KMAX   )  !" tend. of u adv.
      REAL     GTVA  ( IJDIM, KMAX   )  !" tend. merid.adv. V
      REAL     GTKE  ( IJDIM, KMAX   )  !" kinetic energy term KE
      REAL     GTUT  ( IJDIM, KMAX   )  !" zonal adv. of temp.
      REAL     GTVT  ( IJDIM, KMAX   )  !" merid. adv. of temp.
      REAL     GTT   ( IJDIM, KMAX   )  !" tend. term of T
      REAL     GTPI  ( IJDIM         )  !" tend. term of p
      REAL     GSIGD ( IJDIM, KMAX+1 )  !" tendency of sigma
      REAL     GPID  ( IJDIM         )  !" tendency of pi
      REAL     GTUQ  ( IJDIM, KMAX, NTR  )  !" zonal advection Uq
      REAL     GTVQ  ( IJDIM, KMAX, NTR  )  !" meridional advection Vq
      REAL     GTQ   ( IJDIM, KMAX, NTR  )  !" tend. of q
      LOGICAL  OCKALL
*
      CALL CHKVAL(GTUA, IDIM,JDIM,KMAX,-1.E0, 1.E0, 'GTUA','D',OCKALL)
      CALL CHKVAL(GTVA, IDIM,JDIM,KMAX,-1.E0, 1.E0, 'GTVA','D',OCKALL)
      CALL CHKVAL(GTKE, IDIM,JDIM,KMAX,-1.E4, 1.E6, 'GTKE','D',OCKALL)
      CALL CHKVAL(GTUT, IDIM,JDIM,KMAX,-1.E4, 1.E4, 'GTUT','D',OCKALL)
      CALL CHKVAL(GTVT, IDIM,JDIM,KMAX,-1.E4, 1.E4, 'GTVT','D',OCKALL)
      CALL CHKVAL(GTT , IDIM,JDIM,KMAX,-1.E-1,1.E-1,'GTDT','D',OCKALL)
      CALL CHKVAL(GTPI, IDIM,JDIM,   1,-1.E-4,1.E-4,'GTPI','D',OCKALL)
      CALL CHKVAL(GSIGD,IDIM,JDIM,KMAX,-1.E-4,1.E-4,'SIGD','D',OCKALL)
      CALL CHKVAL(GPID, IDIM,JDIM,   1,-1.E-4,1.E-4,'GPID','D',OCKALL)
      CALL CHKVAL(GTUQ, IDIM,JDIM,KMAX,-1.E-1,1.E1, 'GTUQ','D',OCKALL)
      CALL CHKVAL(GTVQ, IDIM,JDIM,KMAX,-1.E-1,1.E1, 'GTVQ','D',OCKALL)
      CALL CHKVAL(GTQ , IDIM,JDIM,KMAX,-1.E-4,1.E-4,'GTDQ','D',OCKALL)
*
      RETURN
      END
      subroutine initialize     !" initialize MPI environment
      INCLUDE   'mpif.h'        !" user include file for MPI programs
      INTEGER    ierror         !" MPI error status
*
      CALL MPI_Init ( ierror )
      end subroutine initialize
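
Since the NaN values appeared only after the two programs were merged, and the suspected cause is both components sharing MPI_COMM_WORLD, one possible fix is sketched below in the same fixed-form style as the listing. This is only an illustration, not code from the attached file: the subroutine name INITGRD and the communicator name comm_agcm are hypothetical. It guards MPI_Init with MPI_Initialized (so two merged programs that each carry their own initialize routine cannot call MPI_Init twice), and it duplicates the world communicator so each component's collectives stay separate:

```fortran
      SUBROUTINE INITGRD        !" guarded MPI start-up (sketch only)
      INCLUDE   'mpif.h'        !" user include file for MPI programs
      INTEGER    ierror         !" MPI error status
      LOGICAL    oinit          !" .TRUE. if MPI_Init already ran
      INTEGER    comm_agcm      !" private communicator (hypothetical)
      SAVE       comm_agcm
*
*" Calling MPI_Init a second time is an error in MPI, so test
*" first; only the first component to reach this point initializes.
      CALL MPI_Initialized ( oinit, ierror )
      IF ( .NOT. oinit ) THEN
         CALL MPI_Init ( ierror )
      ENDIF
*
*" Give this component its own duplicate of MPI_COMM_WORLD so its
*" Bcast/Barrier calls cannot pair with the other program's and
*" deliver uninitialized (NaN) buffers.
      CALL MPI_Comm_dup ( MPI_COMM_WORLD, comm_agcm, ierror )
*
      RETURN
      END
```

If this approach is taken, each component would then pass its own duplicated communicator (here comm_agcm) instead of MPI_COMM_WORLD to every MPI_Bcast, MPI_Barrier, and MPI_Finalize-adjacent call inside that component.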

Attachment: program_off.F90