You need to cut and paste and send the entire error message: "not working" 
makes it very difficult for us to know what has gone wrong.
Based on the code fragment you sent, I guess one of your problems is that the 
viewer communicator is not the same as the matrix communicator. Since the 
matrix lives on 16 processes (I am guessing on PETSC_COMM_WORLD), the viewer 
communicator must be the same (also PETSC_COMM_WORLD).
The simplest code you can do is

> PetscViewerASCIIOpen(PETSC_COMM_WORLD,"stdout",&viewer);
> MatView(impOP,viewer);

  but you can get the same effect with the command-line option -mat_view and 
not write any code at all (the less code you have to write the better).
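For reference, a run using that option might look like the following (the executable name ./myapp and the exact format suffix are placeholders/assumptions; check your PETSc version's viewer-option syntax):

```shell
# Print the assembled matrix from all 16 processes with no viewer code;
# -mat_view triggers a MatView() on PETSC_COMM_WORLD after assembly.
mpiexec -n 16 ./myapp -mat_view

# To see only storage/sparsity information instead of every entry,
# request the ascii_info format through the option's viewer specification:
mpiexec -n 16 ./myapp -mat_view ::ascii_info
```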

  Barry


> On Jun 19, 2015, at 10:42 PM, Longyin Cui <[email protected]> wrote:
> 
> Hi dear whoever reads this:
> 
> I have a quick question:
> After matrix assembly, suppose I have matrix A. Assuming I used 16 
> processors, if I want each processor to print out its local contents of 
> A, how do I proceed? (I simply want to know how the matrix is stored, from 
> generation to communication to solving, so I can display it throughout and 
> get a better understanding.)
> 
> I read the examples, and I tried things like the code below and many other 
> variations from the examples, but it still is not working.
>        PetscViewer viewer;
>        PetscMPIInt my_rank;
>        MPI_Comm_rank(PETSC_COMM_WORLD,&my_rank);   
>        PetscPrintf(MPI_COMM_SELF,"[%d] rank\n",my_rank);
>        PetscViewerASCIIOpen(MPI_COMM_SELF,NULL,&viewer);
>        PetscViewerPushFormat(viewer,PETSC_VIEWER_ASCII_INFO);
>        MatView(impOP,viewer);
> 
> Please give me some hints.
> 
> Thank you so very much!
> 
> 
> Longyin Cui (or you know me as Eric);
> Student from C.S. division;
> Cell: 7407047169;
> return 0;
> 
