Dear Barry,

    This fixes both my example and my main code.

I am only puzzled by the pairing of DMCreateGlobalVector() with DMRestoreGlobalVector(). In any case, it works both as you suggested and with the matched DMGetGlobalVector()...DMRestoreGlobalVector() pair, without leaving garbage around.

Thank you very much for your time and for the tip!

On a side note, was there a better way to handle this output case? If one wrote the full vector, I guess one would obtain an Nx*Ny*Nz*Ndof data set... Would it be possible to then write an XDMF file to make ParaView see each dof as a separate Nx*Ny*Nz data set?
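
On reflection, XDMF's HyperSlab DataItem looks like it could do exactly that: write the full vector once and let each Attribute select one component. A sketch for the 2D MWE below, assuming (hypothetically) that the full vector is written under the name /U0 and that the HDF5 viewer appends the dof as the last dimension, so the data set is 11 x 11 x 2; the inner 3x3 DataItem lists start, stride, and count:

         <Attribute Name="S" Center="Node" AttributeType="Scalar">
           <DataItem ItemType="HyperSlab" Dimensions="11 11 1">
             <DataItem Format="XML" Dimensions="3 3">
               0 0 0
               1 1 1
               11 11 1
             </DataItem>
             <DataItem Format="HDF" Precision="8" Dimensions="11 11 2">solutionSC.hdf5:/U0</DataItem>
           </DataItem>
         </Attribute>

For the second dof, the start row would become 0 0 1.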

    Matteo

On 23/07/21 01:25, Barry Smith wrote:
    I have run your code and looked at the output in MATLAB, and then looked at your code.

    It seems you expect the output to be independent of the number of MPI ranks? It will not be, because you have written

   ierr = VecGetSubVector(U0,is[0],&uField); CHKERRQ(ierr);
   ierr = PetscObjectSetName((PetscObject) uField, "S"); CHKERRQ(ierr);
   ierr = VecView(uField,viewer); CHKERRQ(ierr);
   ierr = VecRestoreSubVector(U0,is[0],&uField); CHKERRQ(ierr);
The subvector you create loses the DMDA context (the information that the vector is associated with a 2d array sliced up among the MPI ranks), since you are just taking a part of the vector out via VecGetSubVector(), which only sees an IS and so has no way to connect the new subvector to a DMDA of dimension 2 (with a single field) and automatically do the reordering to natural ordering, as happens when a vector directly associated with a DMDA is viewed to a file.
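
   For contrast, viewing the full vector directly does go through the DMDA-aware viewer, so it is written in natural ordering on any number of ranks; only the extracted subvector loses that association. A sketch, reusing U0 and viewer from your code:

   ierr = PetscObjectSetName((PetscObject) U0, "U0"); CHKERRQ(ierr);
   ierr = VecView(U0,viewer); CHKERRQ(ierr); /* natural ordering, but all dofs in one data set */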

In order to get the behavior you hope for, you need to have your subvector be associated with the DMDA that comes out as the final argument of DMCreateFieldDecomposition(). There are multiple ways you can do this, none of them particularly "clear", because it appears no one has bothered to include in the DM API a clear way to transfer vectors and parts of vectors between DMs and sub-DMs (that is, DMs with the same topology as the original DM but fewer or more "fields").

   I would suggest using

    ierr = DMCreateGlobalVector(daField[0], &uField); CHKERRQ(ierr);
    ierr = VecStrideGather(U0,0,uField,INSERT_VALUES); CHKERRQ(ierr);
    ierr = PetscObjectSetName((PetscObject) uField, "S"); CHKERRQ(ierr);
    ierr = VecView(uField,viewer); CHKERRQ(ierr);
    ierr = DMRestoreGlobalVector(daField[0], &uField); CHKERRQ(ierr);

   For the second field, you would pass 1 instead of 0 as the component argument to VecStrideGather(), and use daField[1].
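
   Spelled out, a sketch of that second-field block, here with the matched DMGetGlobalVector()/DMRestoreGlobalVector() pair (either pairing works, as noted in the reply above):

    ierr = DMGetGlobalVector(daField[1], &uField); CHKERRQ(ierr);
    ierr = VecStrideGather(U0,1,uField,INSERT_VALUES); CHKERRQ(ierr);
    ierr = PetscObjectSetName((PetscObject) uField, "C"); CHKERRQ(ierr);
    ierr = VecView(uField,viewer); CHKERRQ(ierr);
    ierr = DMRestoreGlobalVector(daField[1], &uField); CHKERRQ(ierr);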

    This will be fine for DMDA but I cannot say if it is appropriate for all types of DMs in all circumstances.

Barry



On Jul 15, 2021, at 10:44 AM, Matteo Semplice <[email protected]> 
wrote:

Hi.

When I write (with the HDF5 viewer) a vector associated with a DMDA with 1 dof, the output is independent of the number of CPUs used.

However, for a DMDA with dof=2, the output seems to be correct when I run on 1 or 2 CPUs, but is scrambled when I run with 4 CPUs. Judging from the ranges of the data, each field gets written to the correct part, and it's the data within the field that is scrambled. Here's my MWE:

#include <petscversion.h>
#include <petscdmda.h>
#include <petscviewer.h>
#include <petscsys.h>
#include <petscviewerhdf5.h>

static char help[] = "Writes a dof=2 DMDA vector to HDF5 field by field.\n";

int main(int argc, char **argv) {

   PetscErrorCode ierr;
   ierr = PetscInitialize(&argc,&argv,(char*)0,help); CHKERRQ(ierr);
   PetscInt Nx=11;
   PetscInt Ny=11;
   PetscScalar dx = 1.0 / (Nx-1);
   PetscScalar dy = 1.0 / (Ny-1);
   DM dmda;
   ierr = DMDACreate2d(PETSC_COMM_WORLD,
                       DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,
                       DMDA_STENCIL_STAR,
                       Nx,Ny, //global dim
                       PETSC_DECIDE,PETSC_DECIDE, //n proc on each dim
                       2,1, //dof, stencil width
                       NULL, NULL, //n nodes per direction on each cpu
                       &dmda);      CHKERRQ(ierr);
   ierr = DMSetFromOptions(dmda); CHKERRQ(ierr);
   ierr = DMSetUp(dmda); CHKERRQ(ierr);
   ierr = DMDASetUniformCoordinates(dmda, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0); CHKERRQ(ierr);
   ierr = DMDASetFieldName(dmda,0,"s"); CHKERRQ(ierr);
   ierr = DMDASetFieldName(dmda,1,"c"); CHKERRQ(ierr);
   DMDALocalInfo daInfo;
   ierr = DMDAGetLocalInfo(dmda,&daInfo); CHKERRQ(ierr);
   IS *is;
   DM *daField;
   ierr = DMCreateFieldDecomposition(dmda,NULL, NULL, &is, &daField); CHKERRQ(ierr);
   Vec U0;
   ierr = DMCreateGlobalVector(dmda,&U0); CHKERRQ(ierr);

   //Initial data
   typedef struct{ PetscScalar s,c;} data_type;
   data_type **u;
   ierr = DMDAVecGetArray(dmda,U0,&u); CHKERRQ(ierr);
   for (PetscInt j=daInfo.ys; j<daInfo.ys+daInfo.ym; j++){
     PetscScalar y = j*dy;
     for (PetscInt i=daInfo.xs; i<daInfo.xs+daInfo.xm; i++){
       PetscScalar x = i*dx;
       u[j][i].s = x+2.*y;
       u[j][i].c = 10. + 2.*x*x+y*y;
     }
   }
   ierr = DMDAVecRestoreArray(dmda,U0,&u); CHKERRQ(ierr);

   PetscViewer viewer;
   ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD,"solutionSC.hdf5",FILE_MODE_WRITE,&viewer); CHKERRQ(ierr);
   Vec uField;
   ierr = VecGetSubVector(U0,is[0],&uField); CHKERRQ(ierr);
   ierr = PetscObjectSetName((PetscObject) uField, "S"); CHKERRQ(ierr);
   ierr = VecView(uField,viewer); CHKERRQ(ierr);
   ierr = VecRestoreSubVector(U0,is[0],&uField); CHKERRQ(ierr);
   ierr = VecGetSubVector(U0,is[1],&uField); CHKERRQ(ierr);
   ierr = PetscObjectSetName((PetscObject) uField, "C"); CHKERRQ(ierr);
   ierr = VecView(uField,viewer); CHKERRQ(ierr);
   ierr = VecRestoreSubVector(U0,is[1],&uField); CHKERRQ(ierr);
   ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr);

   ierr = VecDestroy(&U0); CHKERRQ(ierr);
   ierr = ISDestroy(&is[0]); CHKERRQ(ierr);
   ierr = ISDestroy(&is[1]); CHKERRQ(ierr);
   ierr = PetscFree(is); CHKERRQ(ierr);
   ierr = DMDestroy(&daField[0]); CHKERRQ(ierr);
   ierr = DMDestroy(&daField[1]); CHKERRQ(ierr);
   ierr = PetscFree(daField); CHKERRQ(ierr);
   ierr = DMDestroy(&dmda); CHKERRQ(ierr);
   ierr = PetscFinalize();
   return ierr;
}

and my XDMF file:

<?xml version="1.0" ?>
<Xdmf xmlns:xi="http://www.w3.org/2001/XInclude" Version="2.0">
   <Domain>
     <Grid GridType="Collection" CollectionType="Temporal">
       <Time TimeType="List">
         <DataItem Dimensions="1">1.0</DataItem>
       </Time>
       <Grid GridType="Uniform" Name="domain">
         <Topology TopologyType="2DCoRectMesh" Dimensions="11 11"/>
         <Geometry GeometryType="ORIGIN_DXDY">
           <DataItem Format="XML" NumberType="Float" Dimensions="2">0.0 
0.0</DataItem>
           <DataItem Format="XML" NumberType="Float" Dimensions="2">0.1 
0.1</DataItem>
         </Geometry>
         <Attribute Name="S" Center="Node" AttributeType="Scalar">
           <DataItem Format="HDF" Precision="8" Dimensions="11 
11">solutionSC.hdf5:/S</DataItem>
         </Attribute>
         <Attribute Name="C" Center="Node" AttributeType="Scalar">
           <DataItem Format="HDF" Precision="8" Dimensions="11 
11">solutionSC.hdf5:/C</DataItem>
         </Attribute>
       </Grid>
     </Grid>
   </Domain>
</Xdmf>

Steps to reproduce: run the code and open the XDMF file with ParaView. If the code was run with 1, 2, or 3 CPUs, the data are correct (except that the xy plane has become the yz plane), but with 4 CPUs the data are scrambled.

Does anyone have any insight?

(I am using Petsc Release Version 3.14.2, but I can compile a newer one if you 
think it's important.)

Best

     Matteo
