Re: [deal.II] get the partition of the system matrix A associated with the unconstrained dofs

2022-08-22 Thread Wolfgang Bangerth

On 8/22/22 10:08, Simon Wiesheier wrote:


As stated, what I tried is to use operator= as in
  LAPACKFullMatrix new_matrix = my_system_matrix;
However, this gives the error message
"error: conversion from ‘dealii::SparseMatrix’ to non-scalar
type ‘dealii::LAPACKFullMatrix’ requested

    LAPACKFullMatrix new_matrix = tangent_matrix"


There doesn't appear to be a copy constructor from SparseMatrix (which is
what the compiler is looking for here), but there is a copy operator. Just
write

  LAPACKFullMatrix new_matrix;
  new_matrix = tangent_matrix;

Even better, of course, would be if you wrote a patch to add the copy
constructor!
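
Spelled out toward the inverse you are after, the whole sequence might look
like this (a sketch only, assuming tangent_matrix is a SparseMatrix<double>):

  LAPACKFullMatrix<double> new_matrix;
  new_matrix = tangent_matrix;  // copy operator from SparseMatrix
  new_matrix.invert();          // in-place inverse via LAPACK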

Best
 W.



Re: [deal.II] get the partition of the system matrix A associated with the unconstrained dofs

2022-08-22 Thread Simon Wiesheier
Thanks for your input.
In the meantime, I replaced the matrix multiplication
res = A^{-1} * B
by solving p linear systems
A * res[j] = B[j],   j = 1,...,p,
where p is the number of columns of the matrix B and B[j] denotes its j-th
column.
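
In code, that column-by-column approach looks roughly as follows (a sketch
only; the choice of SparseDirectUMFPACK is my assumption, and A, B, res are
the matrices from above, with B and res stored as FullMatrix<double>):

  SparseDirectUMFPACK A_direct;
  A_direct.initialize(A);                    // factorize A once
  Vector<double> b_col(A.m()), x_col(A.m());
  for (unsigned int j = 0; j < B.n(); ++j)   // one solve per column of B
    {
      for (unsigned int i = 0; i < B.m(); ++i)
        b_col(i) = B(i, j);
      A_direct.vmult(x_col, b_col);          // x_col = A^{-1} b_col
      for (unsigned int i = 0; i < x_col.size(); ++i)
        res(i, j) = x_col(i);
    }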

" That's one way to go. FullMatrix::gauss_jordan() also computes the
inverse of a matrix."

As stated, what I tried is to use operator= as in
  LAPACKFullMatrix new_matrix = my_system_matrix;
However, this gives the error message
"error: conversion from ‘dealii::SparseMatrix’ to non-scalar type
‘dealii::LAPACKFullMatrix’ requested
   LAPACKFullMatrix new_matrix = tangent_matrix"

How can I fix this?

Best
Simon


Am Mo., 22. Aug. 2022 um 08:45 Uhr schrieb Wolfgang Bangerth <
bange...@colostate.edu>:

> On 8/19/22 13:14, Simon Wiesheier wrote:
> >
> > I also need the system matrix A for a second purpose, namely
> > to compute a matrix multiplication:
> > res = A^{-1} * B ,
> > where B is another matrix.
> > -To be more precise, I need the inverse of the 19x19 submatrix
> > corresponding to the unconstrained DoFs only -- not the inverse of the
> > full system matrix.
>
> Right. But the inverse of the 19x19 matrix is the 19x19 subblock of the
> inverse of the 20x20 big matrix. That's because after zeroing out the row
> and column, you have a block diagonal matrix, and the inverse of such a
> matrix consists of the inverses of the individual blocks.
>
> > I could not find a function which computes the inverse of a sparse
> > matrix directly (without solving a linear system).
> > What I tried is
> > LAPACKFullMatrix new_matrix = my_system_matrix,
> > then calling the invert function.
> > But I am not sure if this is the right way to go.
>
> That's one way to go. FullMatrix::gauss_jordan() also computes the
> inverse of a matrix.
>
>
> > -Also, after calling constraints.distribute_local_to_global(),
> > does it make sense at all to compute an inverse matrix, given that some
> > rows and columns were set to zero?
>
> Yes -- as mentioned above, you should consider the resulting matrix as
> block diagonal.
>
> Best
>   W.



Re: [deal.II] Re: MPI, synchronize processes

2022-08-22 Thread Wolfgang Bangerth

On 8/22/22 09:55, Uclus Heis wrote:
Would it also be a possible solution to export my testvec as it is right
now (which contains the global solution), but instead of exporting with
all the processes, call the print function only from one process?


Yes. But that again runs into the same issue mentioned before: if you
have a large number of processes (say, 1000), then you have one process
doing a lot of work (1000x as much as necessary) and 999 doing nothing.
This is bound to take a long time.
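
If you do gate the output on one process, the check itself is a one-liner
(sketch; 'mpi_communicator' stands for whatever communicator your program
uses):

  if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
    testvec.print(outloop, 9, true, false);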


Best
 W.



Re: [deal.II] Re: MPI, synchronize processes

2022-08-22 Thread Uclus Heis
Dear Wolfgang,

Thank you very much for the suggestion.
Would it also be a possible solution to export my testvec as it is right now
(which contains the global solution), but instead of exporting with all the
processes, call the print function only from one process?

Thank you

El El lun, 22 ago 2022 a las 16:51, Wolfgang Bangerth <
bange...@colostate.edu> escribió:

> On 8/21/22 04:29, Uclus Heis wrote:
> > testvec.print(outloop, 9, true, false);
> >
> > It is clear that the problem I have now is that I am exporting the
> > completely_distributed_solution and that is not what I want.
> > Could you please inform me how to obtain the locally owned solution? I
> > cannot find a way to obtain it.
>
> I don't know what data type you use for testvec, but it seems like this
> vector is not aware of the partitioning and as a consequence it just
> outputs everything it knows. You need to write the loop yourself, as in
> something along the lines of
>    for (auto i : locally_owned_dofs)
>      outloop << testvec(i);
> or similar.
>
> Best
>   W.



Re: [deal.II] Re: MPI, synchronize processes

2022-08-22 Thread Wolfgang Bangerth

On 8/21/22 04:29, Uclus Heis wrote:

testvec.print(outloop, 9, true, false);

It is clear that the problem I have now is that I am exporting the 
completely_distributed_solution and that is not what I want.
Could you please inform me how to obtain the locally owned solution? I
cannot find a way to obtain it.


I don't know what data type you use for testvec, but it seems like this 
vector is not aware of the partitioning and as a consequence it just 
outputs everything it knows. You need to write the loop yourself, as in 
something along the lines of

  for (auto i : locally_owned_dofs)
    outloop << testvec(i);

or similar.
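
Slightly more complete, and still only a sketch (it assumes
'locally_owned_dofs' is the IndexSet of locally owned DoFs and that each
rank writes its own file):

  std::ofstream outloop(
    "solution-" +
    std::to_string(Utilities::MPI::this_mpi_process(mpi_communicator)) +
    ".txt");
  for (const auto i : locally_owned_dofs)
    outloop << i << ' ' << testvec(i) << '\n';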

Best
 W.



Re: [deal.II] Memory error from utilities.cc

2022-08-22 Thread Wolfgang Bangerth

On 8/20/22 12:56, Raghunandan Pratoori wrote:


        for (unsigned int i = 0; i < dim; ++i)
          for (unsigned int j = 0; j < dim; ++j)
            {
              local_history_values_at_qpoints[i][j].reinit(qf_cell.size());
              local_history_fe_values[i][j].reinit(history_fe.dofs_per_cell);
              history_field_strain[i][j].reinit(history_dof_handler.n_dofs());
            }


This does not look crazy, unless you have a large number of degrees of 
freedom. How large is history_dof_handler.n_dofs() in your case?
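
(For a rough scale estimate, assuming there are dim x dim such fields as the
[i][j] indices suggest: each reinit allocates a Vector<double> of length
history_dof_handler.n_dofs(), i.e. in 3d about 9 * n_dofs * 8 bytes for
history_field_strain alone -- roughly 7 GB once n_dofs reaches 10^8.)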


Best
 W.



Re: [deal.II] get the partition of the system matrix A associated with the unconstrained dofs

2022-08-22 Thread Wolfgang Bangerth

On 8/19/22 13:14, Simon Wiesheier wrote:


I also need the system matrix A for a second purpose, namely
to compute a matrix multiplication:
res = A^{-1} * B ,
where B is another matrix.
-To be more precise, I need the inverse of the 19x19 submatrix
corresponding to the unconstrained DoFs only -- not the inverse of the 
full system matrix.


Right. But the inverse of the 19x19 matrix is the 19x19 subblock of the
inverse of the 20x20 big matrix. That's because after zeroing out the row
and column, you have a block diagonal matrix, and the inverse of such a
matrix consists of the inverses of the individual blocks.
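
In formulas: with A_uu the 19x19 block of the unconstrained DoFs and d the
nonzero diagonal entry left in the constrained row (a sketch of the
argument, independent of the particular value written there),

  A = \begin{pmatrix} A_{uu} & 0 \\ 0 & d \end{pmatrix}
  \quad\Longrightarrow\quad
  A^{-1} = \begin{pmatrix} A_{uu}^{-1} & 0 \\ 0 & 1/d \end{pmatrix},

so the block of A^{-1} belonging to the unconstrained DoFs is exactly
A_uu^{-1}.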


I could not find a function which computes the inverse of a sparse 
matrix directly (without solving a linear system).

What I tried is
  LAPACKFullMatrix new_matrix = my_system_matrix;
then calling the invert function.
But I am not sure if this is the right way to go.


That's one way to go. FullMatrix::gauss_jordan() also computes the 
inverse of a matrix.




-Also, after calling constraints.distribute_local_to_global(),
does it make sense at all to compute an inverse matrix, given that some 
rows and columns were set to zero?


Yes -- as mentioned above, you should consider the resulting matrix as 
block diagonal.


Best
 W.




[deal.II] Re: Solving the linear system of equations using PETSc BlockSparseMatrix

2022-08-22 Thread Bruno Turcksin
Hi,

If you search for "block solver" here 
https://dealii.org/developer/doxygen/deal.II/Tutorial.html, you will see 
all the tutorials that use block solvers. I think that only deal.II's own 
solvers support BlockSparseMatrix directly.
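
As a sketch of that route (names follow the K, Q, R of your post; deal.II's
iterative solvers are templated on the vector type, so they can run on
PETSc block vectors -- in practice you would replace PreconditionIdentity
by a real block preconditioner as in step-20 or step-22):

  SolverControl solver_control(1000, 1e-12 * R.l2_norm());
  SolverGMRES<PETScWrappers::MPI::BlockVector> solver(solver_control);
  solver.solve(K, Q, R, PreconditionIdentity());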

Best,

Bruno

On Monday, August 22, 2022 at 9:02:28 AM UTC-4 masou...@gmail.com wrote:

> Dear All,
>
> The following system of equations:
> K Q = R,
> where
> [image: Screenshot from 2022-08-22 13-45-45.png]
> was solved using a BlockSparseMatrix for the tangent matrix K, namely by:
>
> SparseDirectUMFPACK A_direct; 
> A_direct.initialize(K); 
> A_direct.vmult(Q_stp, R);
>
> Now, I'm trying to run my code with MPI using PETSc/Trilinos, but the
> solver does not accept the PETSc/Trilinos BlockSparseMatrix. How do we
> solve such a general system of equations?
>



Re: [deal.II] Issue encountered while solving Step-40 in 1 dimension

2022-08-22 Thread Daniel Arndt
Syed,

Yes, you should be able to use parallel::shared::Triangulation instead.
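
For example (a sketch; constructor arguments beyond the communicator are
left at their defaults):

  parallel::shared::Triangulation<1> triangulation(mpi_communicator);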

Best,
Daniel

On Sat, Aug 20, 2022 at 5:25 AM syed ansari  wrote:

> Thanks Daniel for your quick reply. Is it possible to solve the
> same problem with parallel::shared::Triangulation for dim == 1?
>
> On Fri, 19 Aug 2022, 8:18 pm Daniel Arndt,  wrote:
>
>> Syed,
>>
>> parallel::distributed::Triangulation is just not implemented for dim==1
>> so you can't run step-40 for the one-dimensional case.
>>
>> Best,
>> Daniel
>>
>> On Fri, Aug 19, 2022 at 7:07 AM syed ansari  wrote:
>>
>>> Dear all,
>>> I was trying to run step-40 in 1 dimension and encountered
>>> the error corresponding to MeshSmoothing in the constructor. The details of
>>> the error are as follows:
>>> 
>>> An error occurred in line <3455> of file
>>> 
>>> in function
>>> dealii::parallel::distributed::Triangulation<1,
>>> spacedim>::Triangulation(ompi_communicator_t* const&, typename
>>> dealii::Triangulation<1, spacedim>::MeshSmoothing,
>>> dealii::parallel::distributed::Triangulation<1, spacedim>::Settings) [with
>>> int spacedim = 1; MPI_Comm = ompi_communicator_t*; typename
>>> dealii::Triangulation<1, spacedim>::MeshSmoothing =
>>> dealii::Triangulation<1, 1>::MeshSmoothing]
>>> The violated condition was:
>>> false
>>> Additional information:
>>> You are trying to use functionality in deal.II that is currently not
>>> implemented. In many cases, this indicates that there simply didn't
>>> appear much of a need for it, or that the author of the original code
>>> did not have the time to implement a particular case. If you hit this
>>> exception, it is therefore worth the time to look into the code to
>>> find out whether you may be able to implement the missing
>>> functionality. If you do, please consider providing a patch to the
>>> deal.II development sources (see the deal.II website on how to
>>> contribute).
>>>
>>> Stacktrace:
>>> ---
>>> #0  /home/syed/dealii-candi/deal.II-v9.3.2/lib/libdeal_II.g.so.9.3.2:
>>> dealii::parallel::distributed::Triangulation<1,
>>> 1>::Triangulation(ompi_communicator_t* const&, dealii::Triangulation<1,
>>> 1>::MeshSmoothing, dealii::parallel::distributed::Triangulation<1,
>>> 1>::Settings)
>>> #1  ./step-40: Step40::LaplaceProblem<1>::LaplaceProblem()
>>> #2  ./step-40: main
>>> 
>>>
>>> Calling MPI_Abort now.
>>> To break execution in a GDB session, execute 'break MPI_Abort' before
>>> running. You can also put the following into your ~/.gdbinit:
>>>   set breakpoint pending on
>>>   break MPI_Abort
>>>   set breakpoint pending auto
>>>

[deal.II] Solving the linear system of equations using PETSc BlockSparseMatrix

2022-08-22 Thread Masoud Ahmadi
Dear All,

The following system of equations:
K Q = R,
where
[image: Screenshot from 2022-08-22 13-45-45.png]
was solved using a BlockSparseMatrix for the tangent matrix K, namely by:

SparseDirectUMFPACK A_direct; 
A_direct.initialize(K); 
A_direct.vmult(Q_stp, R);

Now, I'm trying to run my code with MPI using PETSc/Trilinos, but the
solver does not accept the PETSc/Trilinos BlockSparseMatrix. How do we solve
such a general system of equations?
