MPI_Comm_size(PETSC_COMM_WORLD,&Np);

   Use Np and MPI_COMM_NULL. Note that you should be using PETSc from 
https://bitbucket.org/petsc/petsc

   Barry

Yes, the phrasing is a little confusing.
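Roughly, the suggested call looks like the sketch below. It assumes the 3.4-era MatGetRedundantMatrix() signature (nsubcomm, subcomm, mlocal_red, reuse flag); the choice mlocal_red = M, i.e. keeping all global rows in each copy, is an assumption, not something stated in the thread:

```c
Mat            C;       /* parallel MATMPIAIJ matrix, assembled elsewhere */
Mat            C_seq;   /* one full sequential copy per process */
PetscMPIInt    Np;
PetscInt       M;
PetscErrorCode ierr;

ierr = MPI_Comm_size(PETSC_COMM_WORLD,&Np);CHKERRQ(ierr);
ierr = MatGetSize(C,&M,NULL);CHKERRQ(ierr);
/* nsubcomm = Np with subcomm = MPI_COMM_NULL gives every process its own
   sequential copy; mlocal_red = M keeps all M rows in that copy */
ierr = MatGetRedundantMatrix(C,Np,MPI_COMM_NULL,M,MAT_INITIAL_MATRIX,&C_seq);CHKERRQ(ierr);
```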

On Sep 6, 2013, at 5:32 PM, Lukasz Kaczmarczyk 
<[email protected]> wrote:

> On 6 Sep 2013, at 23:00, Barry Smith <[email protected]> wrote:
> 
>> 
>> I would use MatGetRedundantMatrix()
>> 
>> 
>> On Sep 6, 2013, at 4:52 PM, Matthew Knepley <[email protected]> wrote:
>> 
>>> On Fri, Sep 6, 2013 at 4:25 PM, Lukasz Kaczmarczyk 
>>> <[email protected]> wrote:
>>> Hello,
>>> 
>>> I am solving a system of equations generated by the finite element 
>>> method.
>>> 
>>> I apply a projection matrix to the stiffness matrix K,
>>> P = I - C^T [(C C^T)^-1] C,
>>> where C is a non-square matrix.
>>> 
>>> The resulting stiffness matrix K' has the form
>>> K' = P^T K P,
>>> and with that at hand I solve the problem K'*x = f'.
>>> 
>>> I managed to build a shell matrix in which I use a sub-KSP solver to get 
>>> the solution of (C C^T)*b = C*x, i.e. b = (C C^T)^-1 * C*x. Using a 
>>> penalised matrix as the preconditioner, K_prec = alpha*C^T C + K, where 
>>> alpha is the penalty parameter, I can get the solution efficiently.
>>> 
>>> Now I would like to avoid the penalty parameter. To do that I will need 
>>> to apply the projection to each individual finite element matrix before 
>>> it is assembled into K. That is no problem; using scattering it can be 
>>> done.
>>> 
>>> The problem is the solution of (C C^T)*b = C*x. The C and C C^T matrices 
>>> are parallel, but since I have parallelised the assembly functions, the 
>>> problem (C C^T)*b = C*x needs to be solved on each processor 
>>> independently, without communication. That is not a problem in itself, 
>>> but to do it I need to transform the C and C C^T matrices from MATMPIAIJ 
>>> to MATSEQAIJ.
>>> 
>>> I think this might be what you want:
>>> 
>>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetSubMatrices.html
>>> 
>>>  Thanks,
>>> 
>>>     Matt
>>> 
>>> I know that MatConvert will not do it. I wonder if there is any other 
>>> way, e.g. to assemble the matrix C as a serial matrix from the very 
>>> beginning.
>>> 
>>> Regards,
>>> Lukasz
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> -- 
>>> What most experimenters take for granted before they begin their 
>>> experiments is infinitely more interesting than any results to which their 
>>> experiments lead.
>>> -- Norbert Wiener
>> 
> 
> Thanks Matt and Barry,
> 
> Sorry for taking your time; I should have found that myself. 
> How to use MatGetSubMatrices is clear. 
> 
> I have a feeling that MatGetRedundantMatrix could be more efficient. 
> However, please excuse my ignorance, I have two questions about 
> MatGetRedundantMatrix:
> 
> *) Should this be equal to the number of processes in the communicator 
> group, or, in my case, 1, since I would like to use PETSC_COMM_SELF?
> nsubcomm      - the number of subcommunicators (= number of redundant 
> parallel or sequential matrices) 
> subcomm       - MPI communicator split from the communicator where mat 
> resides in
> 
> *) Could it be equal to the total number of rows of the MPIAIJ matrix? 
> What if this number is smaller; are only the first mlocal_red rows stored 
> in the redundant matrix?
> mlocal_red    - number of local rows of the redundant matrix
> 
> 
> Regards,
> Lukasz 
> 
> 
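As a quick sanity check on the projection from the first message: a minimal NumPy sketch, reading the message's CTC as C C^T so that the dimensions work out; the sizes m and n are arbitrary assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 8                      # C is a non-square m x n constraint matrix
C = rng.standard_normal((m, n))  # assumed full row rank (true for random C)

# Projector onto the null space of C: P = I - C^T (C C^T)^{-1} C
P = np.eye(n) - C.T @ np.linalg.solve(C @ C.T, C)

# P is idempotent and annihilates the constraints
assert np.allclose(P @ P, P)
assert np.allclose(C @ P, 0)

# The projected stiffness matrix K' = P^T K P stays symmetric for symmetric K
A = rng.standard_normal((n, n))
K = A + A.T
Kp = P.T @ K @ P
assert np.allclose(Kp, Kp.T)
```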
