I see. I will have a look. Regarding memory, there would be no problem putting 
everything on process 0, I believe. If I can't figure out how to proceed with 
your routine, I will go back to my initial attempt with MPI_Allgather.

Thx

Timothee

Sent from my iPhone

> On 2015/08/29, at 12:40, Barry Smith <[email protected]> wrote:
> 
> 
>  I wrote a routine DMDAGetRay() that pulls a one-dimensional slice out of a 2D 
> DMDA and puts it on process 0. It uses AOApplicationToPetsc(), so it is not 
> truly scalable, but perhaps you could take a look at it. Since you say "(I have 
> lots of points in phi, so this can happen easily)", it may be OK for you to 
> just stick the 2D slice on process 0 and then save it?
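> 
> Just to sketch the put-it-on-process-0 idea in code (this uses 
> VecScatterCreateToZero() rather than DMDAGetRay() itself; X2D stands for 
> whatever global Vec holds your slice):
> 
>   Vec            Xseq
>   VecScatter     ctx
> 
>   ! gather the (small) slice vector onto process 0 as a sequential Vec
>   call VecScatterCreateToZero(X2D,ctx,Xseq,ierr)
>   call VecScatterBegin(ctx,X2D,Xseq,INSERT_VALUES,SCATTER_FORWARD,ierr)
>   call VecScatterEnd(ctx,X2D,Xseq,INSERT_VALUES,SCATTER_FORWARD,ierr)
>   ! ... process 0 now has the whole slice in Xseq and can write it out ...
>   call VecScatterDestroy(ctx,ierr)
>   call VecDestroy(Xseq,ierr)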
> 
>  Barry
> 
> Without using AOApplicationToPetsc() or something similar, yes, it is in 
> general a nightmare.
> 
> 
>> On Aug 28, 2015, at 10:15 PM, Timothée Nicolas <[email protected]> 
>> wrote:
>> 
>> Hi,
>> 
>> I have been thinking about this problem for several hours and can't find an 
>> efficient solution; however, I imagine this must be possible somehow with 
>> PETSc.
>> 
>> My problem is the following:
>> 
>> I work in 3D (R, Z, Phi), which makes my data quite heavy, and I don't want 
>> to save all the data of all my fields, even just once in a while. Instead, I 
>> would like to save to a binary file a slice at a given angle, say phi=0.
>> 
>> As I could not find whether this is natively possible in PETSc, I considered 
>> creating a second, 2D DMDA, on which I can create 2D vectors and view them 
>> with the binary viewer (the saving step itself is sketched after the code 
>> below). So far so good. However, upon creating the 2D DMDA, the processor 
>> distribution naturally does not correspond to that of the 3D DMDA. So I was 
>> considering creating global arrays, filling them with the data of the 3D 
>> array at phi=0, then doing an MPI_Allgather to give the information to all 
>> the processors, so that each can read the array and fill the 2D PETSc Vec 
>> with it. The code would be something along the lines of:
>> 
>>  PetscScalar, pointer :: gX2D(:,:,:)
>>  PetscScalar, pointer :: gX(:,:,:,:)
>>  ! LocalArray is locally filled
>>  ! It is transmitted to GlobalArray via MPI_Allgather
>>  real(8)          :: LocalArray(user%dof,user%mr,user%mz)
>>  real(8)          :: GlobalArray(user%dof,user%mr,user%mz)
>> 
>>  call DMDAVecGetArrayF90(da_phi0,X2D,gX2D,ierr)
>>  call DMDAVecGetArrayF90(da,X,gX,ierr)
>> 
>>  do k = user%phis,user%phie
>>     do j = user%zs,user%ze
>>        do i = user%rs,user%re
>>           do l=1,user%dof
>>              if (k.eq.phi_print) then
>>                 ! Numbering obtained with DMDAVecGetArrayF90 differs from
>>                 ! the usual Fortran convention (indices are zero-based)
>>                 LocalArray(l,i,j) = gX(l-1,i-1,j-1,k-1)
>>              end if
>>           end do
>>        end do
>>     end do
>>  end do
>> 
>>  nvals = user%dof*user%rm*user%zm
>> 
>>  call MPI_Allgather(LocalArray(1,user%rs,user%zs),             &
>>       &             nvals,MPI_DOUBLE_PRECISION,                &
>>       &             GlobalArray,                               &
>>       &             nvals,MPI_DOUBLE_PRECISION,MPI_COMM_WORLD,ierr)
>> 
>>  do j = zs2D,ze2D
>>     do i = rs2D,re2D
>>        do l=1,user%dof
>>           gX2D(l-1,i-1,j-1) = GlobalArray(l,i,j)
>>        end do
>>     end do
>>  end do
>> 
>>  call DMDAVecRestoreArrayF90(da_phi0,X2D,gX2D,ierr)
>>  call DMDAVecRestoreArrayF90(da,X,gX,ierr)
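>> 
>> The saving step I have in mind would then be roughly the following (the file 
>> name is just an example, and da_phi0/X2D are assumed to live on 
>> PETSC_COMM_WORLD):
>> 
>>  PetscViewer      viewer
>> 
>>  ! dump the 2D slice vector to a PETSc binary file
>>  call PetscViewerBinaryOpen(PETSC_COMM_WORLD,'slice_phi0.bin',       &
>>       &                     FILE_MODE_WRITE,viewer,ierr)
>>  call VecView(X2D,viewer,ierr)
>>  call PetscViewerDestroy(viewer,ierr)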
>> 
>> The problem is that MPI_Allgather is not at all that simple. Exchanging 
>> array information is much more complicated than I had anticipated! See this 
>> long post on Stack Overflow:
>> 
>> http://stackoverflow.com/questions/17508647/sending-2d-arrays-in-fortran-with-mpi-gather
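>> 
>> For reference, the bookkeeping such a gather onto rank 0 would need looks 
>> roughly like this (sendbuf/recvbuf are illustrative names; sendbuf is assumed 
>> to already hold, contiguously, the phi=phi_print values this process owns, 
>> possibly none):
>> 
>>  integer              :: nprocs, p, nvals
>>  integer, allocatable :: recvcounts(:), displs(:)
>>  real(8), allocatable :: sendbuf(:)   ! local slice values, packed contiguously
>>  real(8), allocatable :: recvbuf(:)   ! only significant on rank 0
>> 
>>  call MPI_Comm_size(MPI_COMM_WORLD,nprocs,ierr)
>>  allocate(recvcounts(nprocs),displs(nprocs))
>> 
>>  ! every process announces how many values it contributes
>>  nvals = size(sendbuf)
>>  call MPI_Allgather(nvals,1,MPI_INTEGER,recvcounts,1,MPI_INTEGER,   &
>>       &             MPI_COMM_WORLD,ierr)
>>  displs(1) = 0
>>  do p = 2,nprocs
>>     displs(p) = displs(p-1) + recvcounts(p-1)
>>  end do
>>  allocate(recvbuf(sum(recvcounts)))
>> 
>>  ! variable-sized gather onto rank 0; the blocks arrive in rank order and
>>  ! still have to be unpacked into (dof,r,z) ordering afterwards
>>  call MPI_Gatherv(sendbuf,nvals,MPI_DOUBLE_PRECISION,               &
>>       &           recvbuf,recvcounts,displs,MPI_DOUBLE_PRECISION,   &
>>       &           0,MPI_COMM_WORLD,ierr)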
>> 
>> I could probably get it to work eventually, but it's pretty complicated, and 
>> I was wondering whether there is a simpler alternative I am not seeing. 
>> Besides, I am concerned about what happens if the number of processors is so 
>> large that the 2D Vec ends up with fewer than 2 points per processor (I have 
>> lots of points in phi, so this can happen easily); PETSc would then complain.
>> 
>> Does anybody have ideas ?
>> 
>> Best
>> 
>> Timothée
> 
