Re: [deal.II] Providing read access to a distributed matrix to more than one thread on the same process

2024-05-24 Thread Kyle Schwiebert
I do actually have one question here that may be relevant. Whenever I am 
checking things in gdb, it claims I have twice as many threads running as I 
requested via MultithreadInfo::set_max_threads(). Is this possibly germane 
to my issue here, and what is the cause? As is common, my CPU has two 
logical cores per physical core, but the CPU utilization suggests that only 
one thread per core is ever active at any given time.

On Friday, May 24, 2024 at 7:28:05 PM UTC-4 Kyle Schwiebert wrote:

> [quoted message trimmed; the full message appears next in this digest]

Re: [deal.II] Providing read access to a distributed matrix to more than one thread on the same process

2024-05-24 Thread Kyle Schwiebert
Dr. Bangerth,

Thank you very much for the help. My problem size is not so large as to 
make copying the matrix impossible, but it is still undesirable and would 
require a significant rewrite of my codes--and I plan to eventually scale 
so this is not a long term solution.

I also am not sure MPI_THREAD_MULTIPLE works anyway. I attempted an 
alternative approach: Protecting all calls to objects with a shared 
communicator by a mutex, and I ran into an odd issue: The 
TrilinosWrappers::MPI::Vector::Vector() constructor seems to be causing 
segfaults--I do not see how this could be the user's fault since there are 
no input parameters. This happens even when I run with only one thread 
unless I switch back to MPI_THREAD_SERIALIZED. I was able to verify that 
the MPI implementation I have is able to provide MPI_THREAD_MULTIPLE. As I 
am somewhat new to MPI I think I'm at the end of what I can try.

Thank you again for your time in addressing my question.

Regards,
Kyle

On Tuesday, May 21, 2024 at 11:42:12 AM UTC-4 Wolfgang Bangerth wrote:

>
> Kyle:
>
> > I have a question to which I think the answer may be "no" but I thought I
> > would ask. I'll just ask and then explain the "why?" at the end in case
> > there is a better workaround from the outset.
> > 
> > I am initializing MPI myself with MPI_THREAD_MULTIPLE so that threads can
> > each call MPI functions without interfering. To the extent possible each
> > thread has its own copy of MPI_COMM_WORLD so that simultaneous calls do
> > not get convoluted. However, I have a matrix of type
> > TrilinosWrappers::SparseMatrix which BOTH threads need simultaneous
> > access to. Since you must give one and only one MPI_Comm object in the
> > constructor, these sorts of conflicts are inevitable.
> > 
> > For obvious reasons I would not like to require a copy of this matrix for
> > each thread. The other obvious solution is a mutex on the matrix, but
> > this could easily get costly as both threads are calling matrix.vmult(...)
> > in an iterative solver. I thus have two questions:
> > 
> > 1) Is initializing MPI with MPI_THREAD_MULTIPLE going to break the
> > deal.II internals for some reason and I should just not investigate this
> > further?
>
> This should work. deal.II uses MPI_THREAD_SERIALIZED internally.
>
>
> > 2) I think the best solution, if possible, would be to get pointers to
> > the internal data of my matrix which I can then associate with different
> > MPI_Comm objects. Is this possible?
>
> No. You should never try to use anything but the public interfaces of
> classes. Everything else is bound to break things in unpredictable ways
> sooner or later. Probably sooner.
>
>
> > Why am I doing this?
> > This is a bit of a simplification, but imagine that I am solving a linear
> > deferred correction problem. This means at each time step I solve
> > A . x_1 = b_1 and A . x_2 = b_2. Let us assume that the matrix A does not
> > have a well-known preconditioner which scales nicely with the number of
> > processors. Then instead of using 2n processors on each linear system in
> > series, we could instead use n processors on each linear system
> > simultaneously and expect this to be faster. I hope this makes sense.
>
> Yes, this makes sense. But you should not expect to be able to solve
> multiple linear systems at the same time over the same communicator. Each
> step in a linear solver (vector dot products, matrix-vector products, etc.)
> consists of multiple MPI messages where processes wait for data to be sent
> from other processes. If you have multiple solves running on the same
> process, you will receive messages in unpredictable orders that may or may
> not belong to the current solve. Nothing good can come out of this.
>
> But if the linear solve is the bottleneck, you can always build the matrix
> multiple times (or create copies), with different (sub-)communicators and
> run one solve on each communicator.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/f00c06f7-6f0a-4b4d-ace7-eec24624bffbn%40googlegroups.com.


[deal.II] Providing read access to a distributed matrix to more than one thread on the same process

2024-05-21 Thread Kyle Schwiebert
Hello,

I have a question to which I think the answer may be "no" but I thought I 
would ask. I'll just ask and then explain the "why?" at the end in case 
there is a better workaround from the outset.

I am initializing MPI myself with MPI_THREAD_MULTIPLE so that threads can 
each call MPI functions without interfering. To the extent possible each 
thread has its own copy of MPI_COMM_WORLD so that simultaneous calls do not 
get convoluted. However, I have a matrix of type 
TrilinosWrappers::SparseMatrix which BOTH threads need simultaneous access 
to. Since you must give one and only one MPI_Comm object in the 
constructor, these sorts of conflicts are inevitable.

For obvious reasons I would not like to require a copy of this matrix for 
each thread. The other obvious solution is a mutex on the matrix, but this 
could easily get costly as both threads are calling matrix.vmult(...) in an 
iterative solver. I thus have two questions:

1) Is initializing MPI with MPI_THREAD_MULTIPLE going to break the deal.II 
internals for some reason and I should just not investigate this further?

2) I think the best solution, if possible, would be to get pointers to the 
internal data of my matrix which I can then associate with different 
MPI_Comm objects. Is this possible?

Why am I doing this?
This is a bit of a simplification, but imagine that I am solving a linear 
deferred correction problem. This means at each time step I solve A . x_1 = 
b_1 and A . x_2 = b_2. Let us assume that the matrix A does not have a 
well-known preconditioner which scales nicely with the number of 
processors. Then instead of using 2n processors on each linear system in 
series, we could instead use n processors on each linear system 
simultaneously and expect this to be faster. I hope this makes sense.

Thanks for any tips you can offer.

Regards,
Kyle



Re: [deal.II] Possible typo in step 26 tutorial

2024-03-22 Thread Kyle Schwiebert
Daniel,

Thanks. I wasn't sure if that was the procedure for such a small change. 
Went ahead and did this.

-Kyle

On Friday, March 22, 2024 at 10:36:43 AM UTC-4 d.arnd...@gmail.com wrote:

> Kyle,
>
> Why don't you just submit a pull request for this fix?
>
> Best,
> Daniel
>
> On Fri, Mar 22, 2024 at 9:10 AM Kyle Schwiebert  wrote:
>
>> Hello,
>>
>> I'm sorry for not just mentioning this on GitHub. If I'm honest, I wasn't 
>> sure of the procedure for such a small change. I think there is a typo in 
>> the step-26 tutorial program: 
>> https://www.dealii.org/current/doxygen/deal.II/step_26.html#WhatcouldpossiblygowrongVerifyingwhetherthecodeiscorrect
>> Regarding the first test case in "What could possibly go wrong?", it seems 
>> that the coefficient on a(t) should be +1 rather than -1 (or the initial 
>> condition should be a(0) = -1).
>>
>> Regards,
>> Kyle
>>
>



[deal.II] Possible typo in step 26 tutorial

2024-03-22 Thread Kyle Schwiebert
Hello,

I'm sorry for not just mentioning this on GitHub. If I'm honest, I wasn't 
sure of the procedure for such a small change. I think there is a typo in 
the step-26 tutorial program: 
https://www.dealii.org/current/doxygen/deal.II/step_26.html#WhatcouldpossiblygowrongVerifyingwhetherthecodeiscorrect
Regarding the first test case in "What could possibly go wrong?", it seems 
that the coefficient on a(t) should be +1 rather than -1 (or the initial 
condition should be a(0) = -1).

Regards,
Kyle



Re: [deal.II] How can I reconfigure deal.IIConfig.cmake?

2024-03-15 Thread Kyle Schwiebert
Alexander,

Thank you. That makes a lot of sense. I've been thinking it's about time to 
update to some version of 9.5 anyway, and unlike when I first got deal.II, I 
think I can now handle building from source as you suggest. I'll give it a 
go. If it were possible to apt-get install the newest version of deal.II, 
9.5.1, but force it to use clang, I would probably just do that, but despite 
looking all day I have been unable to learn much about how that might be 
done.

Any other approach will not work, as you pointed out: there are issues with 
the ABI, and even if I compile my project with clang it is a big problem to 
link against a deal.II that was compiled with gcc.

Regards,
Kyle

On Friday, March 15, 2024 at 3:46:05 PM UTC-4 Alexander Kiselyov wrote:

> Hi Kyle,
>
> Disclaimer: I'm not an expert in C++, but have some experience in 
> maintaining packages, including ones of C++ software.
>
> First, it's a bad idea to patch *Config.cmake files; they are meant to 
> accurately describe the currently installed software. This is especially 
> important because GCC and Clang ABIs aren't compatible, generally speaking. 
> Bear 
> in mind that C++ doesn't have any formal ABI standard, only best-effort 
> compatibility within a single compiler vendor. I'd really advise against 
> such experiments compared to a one-time deal.II rebuild with clang, even if 
> it takes a couple of hours. You'll likely need to rebuild deal.II 
> dependencies like MPI with clang, again because of potential ABI 
> incompatibilities.
>
> Concerning warnings and other flags, you can enable additional flags (e.g. 
> `-fdiagnostics-color=always`) in CMakeLists.txt of your project using 
> `target_compile_options` function after calling `DEAL_II_SETUP_TARGET`. 
> However, deal.II should select supported flags during its build with the 
> same compiler.
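[Editor's note: a sketch of the flag mechanism described above; the target name mysim and the extra flag are illustrative, not from the thread.]

```cmake
FIND_PACKAGE(deal.II 9.4.0 REQUIRED)
DEAL_II_INITIALIZE_CACHED_VARIABLES()
PROJECT(mysim)

ADD_EXECUTABLE(mysim main.cpp)
DEAL_II_SETUP_TARGET(mysim)

# Extra options go after DEAL_II_SETUP_TARGET so they are appended to the
# flags deal.II configures for the target:
TARGET_COMPILE_OPTIONS(mysim PRIVATE -fdiagnostics-color=always)
```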
>
> Finally, some practical advice about building deal.II yourself. Obviously 
> it's very convenient to build in parallel (the `-j ` option to `cmake 
> --build` or `make`), but to be safe you need at least 2 GB of free 
> RAM per build process.
>
> Hope that helps.
>
> Best regards,
> Alexander
>
> On Fri, 2024-03-15 at 08:41 -0700, Kyle Schwiebert wrote:
>
> I decided to switch to compiling my user codes with clang instead of 
> gcc, and I can successfully get the build to use clang with either
>
> cmake . -DDEAL_II_CXX_COMPILER=clang
> or
> cmake . -DCMAKE_CXX_COMPILER=clang
>
> but I get a bunch of warnings like "warning: unknown warning option 
> '-Wno-placement-new' [-Wunknown-warning-option]"
>
> I would like to get the default clang flags that deal.II would have set up 
> on installation. If it is relevant, I am running deal.II on ubuntu 20.04 
> through WSL1. I installed the back port of 9.4.0 with "sudo apt-get ..." I 
> know that I could just edit the file, but I'm worried this would mess up 
> something unrelated and I think I would benefit from using the flags the 
> deal.ii devs think wise for clang.
>
> Thank you for any insight you can offer.
>
> Regards,
> Kyle
>



[deal.II] How can I reconfigure deal.IIConfig.cmake?

2024-03-15 Thread Kyle Schwiebert
I decided to switch to compiling my user codes with clang instead of gcc, 
and I can successfully get the build to use clang with either

cmake . -DDEAL_II_CXX_COMPILER=clang
or
cmake . -DCMAKE_CXX_COMPILER=clang

but I get a bunch of warnings like "warning: unknown warning option 
'-Wno-placement-new' [-Wunknown-warning-option]"

I would like to get the default clang flags that deal.II would have set up 
on installation. If it is relevant, I am running deal.II on Ubuntu 20.04 
through WSL1. I installed the backport of 9.4.0 with "sudo apt-get ..." I 
know that I could just edit the file, but I'm worried this would mess up 
something unrelated, and I think I would benefit from using the flags the 
deal.II devs think wise for clang.

Thank you for any insight you can offer.

Regards,
Kyle



Re: [deal.II] What do the AffineConstraints::distribute_local_to_global(...) functions do on a matrix/vector level?

2023-07-23 Thread Kyle Schwiebert
Luca,

Thank you very much for the reply. That is all very very helpful.

I see what you are saying about the hanging node constraints. I am using a
mesh from gmsh directly (without any sort of refinement), which I am pretty
sure is conforming. My code does include a call to
make_hanging_node_constraints just to be safe, but commenting that out does
not seem to cause any difference in behavior.

It is very helpful that you mention time-dependent constraints, which I
will certainly need to implement sooner rather than later (but the BCs in
the current problem are actually constant). I actually believe the scheme I
listed above does not totally work in that case. If we have x_i = a(t),
then RHS_bc(j) must change from having the contribution -M(j,i)a(t) to
-M(j,i)a(t + dt). Unfortunately, I do not see a way to do that other than
to rebuild that static part of the matrix in order to call
distribute_local_to_global(...,false) again. Something like multiplying
each entry by a factor does not work since some entries in RHS_bc could
contain information from more than one constrained node.

Perhaps then I should look into using distribute_local_to_global(...,true)
and let the time_dependent assembly handle the boundary conditions.

I assume that the call to distribute_local_to_global(...,true) instead does:
1) Sets M(i,i) = L and RHS(i) = L*a.
2) Zeros out row i except for the diagonal entry.

The extra trick here is that since x_i is not being set to zero prior to
the solve step, if you are reusing the preconditioner across time steps,
you need to control L so it is always the exact same, but that is not too
difficult in this easy situation. It is harder if one is using periodic
or other constraints which constrain one DoF to others (rather than to a
constant): there does not seem to be an easy way to get access to which DoFs
those are for a given constrained DoF. This would inhibit efficient manual
setting of the scaling of the rows. The current code does not have
time-dependent or periodic BCs, so these considerations are perhaps for a
later date for me to consider.

I found a major issue in an unrelated part of my code, so it is mostly
working now, but I have a feeling there is still at least a minor
issue with how I am handling the BCs.

Regards,
Kyle

On Fri, Jul 21, 2023 at 8:54 PM Wolfgang Bangerth 
wrote:

> On 7/21/23 04:48, Luca Heltai wrote:
> > Boundary conditions and hanging nodes have a very delicate interplay.
>
> As just another data point, step-26 re-builds the matrix in every time
> step
> just so that we can apply boundary conditions and hanging node constraints
> from scratch.
>
> Best
>   W.
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



[deal.II] What do the AffineConstraints::distribute_local_to_global(...) functions do on a matrix/vector level?

2023-07-20 Thread Kyle Schwiebert
Hello all,

Thank you for taking the time to look at my question. First, I'll ask a 
couple of basic questions about the built-in functions, and then I'll give 
a few details of why I ask.

Does the inhomogeneity-handling call distribute_local_to_global(...,false) 
do the following:

Say that we have a constraint x_i = a. I would guess that the function 
takes the following steps: (matrix is M, RHS is right hand side)
1) Set M(i,i) = L and set RHS(i) = 0, where L is somehow "nice" for the 
matrix's spectrum.
2) Set the rest of the entries of the ith row and column of M = 0. Perhaps 
technically we don't bother to zero the column since we plan to call 
AffineConstraints::set_zero.
3) For each DoF j which is coupled to DoF i and is not itself constrained, 
we do RHS(j) -= a*M(j,i).

Now considering the same constraint with instead a call to 
distribute_local_to_global(local_rhs,dof_inds,global_rhs) I would guess 
that we just set RHS(i) = 0 and do nothing else special.

Most importantly, is the above correct?

If so, I am having trouble understanding where I might be going wrong. I'm 
solving a time dependent coupled problem with a decoupled solver. By 
decoupled solver I mean that information from the second domain only 
appears in the right hand side. To make matters one step more complex, one 
term in my matrix is time-dependent and so I do not recompute the whole 
matrix. My procedure is this:

1) Assemble the static part of the matrix with a zero right hand side. This 
matrix will never be touched again. The vector, RHS_bc, which will also 
never be touched again contains no nonzero entries except for those created 
by the call to distribute_local_to_global(...,false).

2) I then assemble the time-dependent terms into a separate matrix and 
vector, RHS_td, using distribute_local_to_global(...,false) and add the 
RHS_bc to RHS_td.

3) I then assemble the coupled terms from the other solver into RHS_c with 
distribute_local_to_global(local_rhs,dof_inds,RHS_c).

4) I then add RHS_c to RHS_td and solve (being careful to call 
constraints.set_zero(solution) before the solve and 
constraints.distribute(solution) after the solve). I am of course using a 
custom class similar to the linear operator classes to bundle the vmult() 
outputs of the static and time-dependent matrices.

However, I am seeing blowup in my solution and I'm seeing some indication 
that the inhomogeneous boundary conditions are to blame. Note that I also 
tried calling constraints.set_zero(RHS_c) in step 4 above, and as expected 
it made no difference (since presumably constrained entries are already zero).

Thank you again for taking the time to engage with my question and in 
advance for any tips you are able to offer.

-Kyle



[deal.II] Trouble with including deal.II functions into user-written library

2023-06-07 Thread Kyle Schwiebert
Hello,

I'm just beginning to combine several files from my work into a more 
cohesive whole. To do this I'm trying to modularize the code and am thus 
taking a first stab at linking a user-defined library to deal.II.

I have the following in my CMakeLists.txt:
CMAKE_MINIMUM_REQUIRED(VERSION 3.16.3)

FIND_PACKAGE(deal.II 9.2.0 REQUIRED)

DEAL_II_INITIALIZE_CACHED_VARIABLES()

PROJECT(myproject)

ADD_LIBRARY(les ./Helper/BlockHelpers.cpp ./Solvers/direct_solver.cpp 
./Models/ADM.cpp)
DEAL_II_SETUP_TARGET(les)

ADD_EXECUTABLE(channel_flow ./Test_Cases/turbulent_channel.cpp)
DEAL_II_SETUP_TARGET(channel_flow)
target_link_libraries(les)

This seems to match the CMake documentation example. However, upon running 
'make les' I immediately get the following error while it's compiling 
BlockHelpers.cpp:

error: expected class-name before ‘{’ token

regarding this class definition in the associated header, BlockHelpers.hpp:
template< typename MatrixType, typename VectorType>
class SystemMatrixWrapper : public Subscriptor
{
//class def
...
}

As far as I can tell, this error is caused by a problem linking to the 
deal.II files, so that the compiler does not recognize Subscriptor as a 
class. When I comment out ': public Subscriptor' this error disappears.

The file does include the subscriptor class with the line:
#include 

about which the compiler/preprocessor does not complain.

How can I resolve this issue? Any advice would be most appreciated. I 
suspect there is an issue with my CMakeLists.txt file.
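[Editor's note: two observations, offered as hypotheses rather than a verified diagnosis. First, deal.II places its classes in namespace dealii, so unless the header opens that namespace the base class must be written dealii::Subscriptor — an unqualified name alone can produce "expected class-name" errors. Second, the final target_link_libraries(les) call names no target to link les *into*; presumably it was meant to link the executable against the library, as sketched below.]

```cmake
CMAKE_MINIMUM_REQUIRED(VERSION 3.16.3)

FIND_PACKAGE(deal.II 9.2.0 REQUIRED)
DEAL_II_INITIALIZE_CACHED_VARIABLES()
PROJECT(myproject)

ADD_LIBRARY(les ./Helper/BlockHelpers.cpp ./Solvers/direct_solver.cpp
            ./Models/ADM.cpp)
DEAL_II_SETUP_TARGET(les)

ADD_EXECUTABLE(channel_flow ./Test_Cases/turbulent_channel.cpp)
DEAL_II_SETUP_TARGET(channel_flow)
# Link the executable against the user library:
TARGET_LINK_LIBRARIES(channel_flow les)
```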



[deal.II] Is barycenter refinement for simplices implemented?

2023-04-24 Thread Kyle Schwiebert
I'm sorry to ask such a basic question here, as it feels like I should be 
able to answer this for myself, but having a look at some relevant git and 
source code, I could not find the answer. I also could not find anything in 
the documentation and it seems the question is also unaddressed on this 
forum:

How does global refinement work in the case of simplices, and in 
particular, is it barycenter refinement? I suspect it is not, since 
iterated barycenter refinement eventually produces a very bad mesh. If it 
is not the default, is it implemented at all? Or is this a case of one 
being better off importing a mesh from another program?

Nonetheless, barycenter refinement seems crucial for using Scott-Vogelius 
finite element spaces in either 2D or 3D. All the other necessary 
components (FE_SimplexP and FE_SimplexDGP) are already present.



Re: [deal.II] Unexpected GMRES behavior

2022-06-07 Thread Kyle Schwiebert
Timo,

Thank you for the very helpful advice. You're right about right
preconditioning: As soon as I made the switch it became clear that the true
residual is not decreasing at all.

I have two brief follow-up questions if you happen to have the time. Do you
have a reference or other suggestions for how to "leave the null space
alone" in the case of constant pressure and periodic BCs? You guessed
correctly that constant pressures are in the null space. I suppose this
means something like: the average of the pressure vector must be
preserved by the preconditioner.

I understand your comment about AffineConstraints::distribute() in the case
of dirichlet boundary conditions. My problem has homogeneous dirichlet
conditions and periodic conditions. I don't understand why in that case
AffineConstraints::distribute() should change a right hand side vector
created with the full linear system call to
AffineConstraints::distribute_local_to_global() in the case of periodic
boundary conditions. In both cases, the correct RHS vector entry should be
zero, right? So does this mean I'm imposing some constraints I don't mean
to?

Regards,
Kyle

On Sat, Jun 4, 2022 at 10:08 AM Timo Heister  wrote:

> Hi Kyle,
>
> > I'm solving a nonlinear, time-dependent problem with a combination of
> periodic and Dirichlet boundary conditions.
>
> Is your linear system singular (has a nullspace) like constant
> pressures or due to periodicity? If yes, that can cause an issue if
> your preconditioner does not exactly leave this nullspace alone.
> > // Restart after 30 gmres iterations, using left preconditioning.
> > SolverGMRES<BlockVector<double>>::AdditionalData
> > additional_data(30, false);
>
> I always recommend using right preconditioning, because the reported
> residuals are otherwise preconditioned residuals and depend on the
> preconditioner itself.
>
> > Additionally, and most surprisingly, even though convergence is
> indicated, the solver does not seem to be converging to the desired
> tolerance.
>
> This is typical with left preconditioning.
>
> > As a side question, can anyone explain why the RHS vector changes when I
> apply the affine constraints object to it, even though I assembled the
> matrix with
>  
> constraints.distribute_local_to_global(local_matrix,local_rhs,local_dof_indices,system_matrix,system_rhs);
>
> distribute() sets constrained values to the correct value (for example
> non-homogeneous Dirichlet conditions).
>
>
> Best,
> Timo
>
> --
> Timo Heister
> http://www.math.clemson.edu/~heister/
>



[deal.II] Unexpected GMRES behavior

2022-06-02 Thread Kyle Schwiebert
Hello all,

I'm experiencing an interesting issue with GMRES (and perhaps to some 
extent UMFPACK).

I'm solving a nonlinear, time-dependent problem with a combination of 
periodic and Dirichlet boundary conditions.

I solve the system with GMRES using a homemade preconditioner class 
(declared with class myPreconditioner : public Subscriptor):


// Maximum 1 iterations, relative tolerance of 1e-8.
SolverControl solver_control(1, system_rhs.l2_norm() * 1e-8, true);

// Restart after 30 gmres iterations, using left preconditioning.
SolverGMRES<BlockVector<double>>::AdditionalData additional_data(30, false);
SolverGMRES<BlockVector<double>> GMRES_solver(solver_control, additional_data);

GMRES_solver.solve(system_matrix, NSw_current_solution, system_rhs,
                   preconditioner_with_memory_save);
constraints.distribute(NSw_current_solution);

The solution seems to be blowing up across linear solves within each 
nonlinear solve. This does NOT happen when I use UMFPACK, so it is unlikely 
(unless I'm missing something) that either the nonlinear solver choice 
(Newton's method) or initial guess is the issue.

Additionally, and most surprisingly, even though convergence is indicated, 
the solver does not seem to be converging to the desired tolerance. Even 
more shocking, the old solve function seems to have the same problem.

As a side question, can anyone explain why the RHS vector changes when I 
apply the affine constraints object to it, even though I assembled the 
matrix with
 
constraints.distribute_local_to_global(local_matrix,local_rhs,local_dof_indices,system_matrix,system_rhs);
 

The poor solver behavior is the same either way, so I don't think this is 
related. I am using the following diagnostic code to track down what's wrong:

std::cout << "Magnitude of RHS vector is " << system_rhs.l2_norm() << std::endl;
constraints.distribute(system_rhs);
BlockVector<double> tmp0(NSw_current_solution);
solve(stepFlag);
BlockVector<double> tmp(NSw_current_solution);
system_matrix.vmult(tmp, NSw_current_solution);
tmp -= system_rhs;
std::cout << "Computed residual of UMFPACK solution is " << tmp.l2_norm() << std::endl;
NSw_current_solution = tmp0;
std::cout << "Magnitude of RHS vector is " << system_rhs.l2_norm() << std::endl;
solve_memory_conservation(init, stepFlag);
tmp -= NSw_current_solution;
std::cout << "L2 norm of difference between solution methods is " << tmp.l2_norm() << std::endl;
system_matrix.vmult(tmp, NSw_current_solution);
tmp -= system_rhs;
std::cout << "Computed residual of GMRES solution is " << tmp.l2_norm() << std::endl;
std::cout << "Magnitude of RHS vector is " << system_rhs.l2_norm() << std::endl;

Above, solve uses UMFPACK and solve_memory_conservation uses GMRES. Here is 
the relevant output for the first solve:

Magnitude of RHS vector is 8895.38
Computed residual of UMFPACK solution is 16761.5
Magnitude of RHS vector is 9341.73
The solver took 118 iterations.
L2 norm of difference between solution methods is 16583.4
Computed residual of GMRES solution is 3.96261e+06

Thanks for any help you can provide.



[deal.II] Trouble with vector valued matrix assembly optimizations

2022-02-18 Thread Kyle Schwiebert
Hello all,

I am solving what is essentially a Navier-Stokes system with deal.II. 
However, when I attempt to implement the optimizations suggested in the 
"Handling vector-valued problems" module, I am not getting the same matrix 
when I assemble in the optimized fashion.

My understanding thus far: Given that my finite element space is just a 
stack of primitive finite elements (Q_k for velocity, P_{k-1} discontinuous 
for pressure), I can recover the nonzero component of the current shape 
function with something like:

component_i = fe.system_to_component_index(i).first;

I also understand that mass and Laplace matrices are zero except when the 
components are equal.

Thus I cannot figure out why this code does not work properly. All of the 
other usual infrastructure is there such as a loop over the cells and 
quadrature points, multiplying by JxW[q], and so on.

for (unsigned int i = 0; i < dofs_per_cell; ++i){
    // We don't need to even do anything if the component
    // doesn't correspond to a velocity dof.
    const unsigned int component_i = fe.system_to_component_index(i).first;

    for (unsigned int j = 0; j < dofs_per_cell; ++j){
        const unsigned int component_j = fe.system_to_component_index(j).first;

        if (component_i < 3 && component_j < 3){

            // Just do some checking...
            if (component_i != component_j){
                if ((1.0 / dt) * phi_u[j] * phi_u[i]
                    + (1.0 / dt) * delta2 * scalar_product(grad_phi_u[j], grad_phi_u[i])
                    + 0.5 * viscosity * scalar_product(grad_phi_u[j], grad_phi_u[i]) != 0.0){
                    std::cout << "components " << (int)component_i << " and " << (int)component_j << '\n';
                    std::cout << "(" << phi_u[i][0] << "," << phi_u[i][1] << "," << phi_u[i][2] << ")\n";
                    std::cout << "(" << phi_u[j][0] << "," << phi_u[j][1] << "," << phi_u[j][2] << ")\n";
                    std::cout << "dot product is " << phi_u[i] * phi_u[j] << std::endl;
                }
            }
            local_matrix(i, j) += (
                (component_i != component_j) ? 0.0 : (
                    (1.0/dt) * phi_u[j] * phi_u[i]
                    + (1.0/dt) * delta2 * scalar_product(grad_phi_u[j], grad_phi_u[i])
                    + 0.5 * viscosity * scalar_product(grad_phi_u[j], grad_phi_u[i])
                )

 and so on with other terms which I have not changed in my attempts to 
optimize.

My confusion is amplified by the block which I have labeled "Just do some 
checking..." There is no output coming from those print statements, which 
seems to support my understanding of what is happening.

Any help you can offer would be appreciated.





Re: [deal.II] Reusing an LU factorization in time-dependent problems, building on part of a matrix

2021-12-04 Thread Kyle Schwiebert
Hello,

Thank you for the reply. I totally get what you are saying about the block 
structure existing purely as a tool for solvers such as Schur complements. 
However, after some thought, I wouldn't know how to do some of the "tricks" 
I'm performing when it comes to matrix assembly outside of a block context.

Before I said that my system is a 4 by 4 block system (some blocks having 
dim components, others being scalar). However, only one block changes with 
time. Thus I can save lots of time in assembly using 

system_matrix.block(0,0) = 0;

Followed by only assembling the corresponding block and skipping all other 
calculations with an if statement. I think the assembly function would 
still work if system_matrix were not a block matrix, but I haven't been 
able to discover an alternative for setting just one part of the matrix to 
zero. Unfortunately, forming only part of the matrix is such a massive 
speedup that it would be painful to have to drop that optimization. Could 
you suggest an alternative?

Regards,
Kyle

On Monday, November 29, 2021 at 11:46:32 PM UTC-5 Wolfgang Bangerth wrote:

>
> > I wasn't sure if I should start a new thread as this is a very related
> > and basic question. Earlier I got the impression that switching from an
> > exact LU (UMFPACK) to an ILU would be a good first try when UMFPACK
> > started taking too much memory for my problem. However, it appears that
> > the SparseILU class does not play nicely with BlockSparseMatrix.
>
> That is correct. But if you want to just build a preconditioner for the 
> entire 
> matrix, then there is also no reason to substructure the matrix into 
> individual blocks -- you might as well work with one matrix and one vector 
> that correspond to all variables at once.
>
> Block matrices/vectors are a tool to build preconditioners that work on 
> specific blocks of a matrix. If that's not your goal, there is no need to 
> use 
> this tool.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



Re: [deal.II] Reusing an LU factorization in time-dependent problems, building on part of a matrix

2021-11-25 Thread Kyle Schwiebert
Hello all,

I wasn't sure if I should start a new thread as this is a very related and 
basic question. Earlier I got the impression that switching from an exact LU 
(UMFPACK) to an ILU would be a good first try when UMFPACK started taking too 
much memory for my problem. However, it appears that the SparseILU class does 
not play nicely with BlockSparseMatrix. I know there are probably better 
options out there, but I am not yet to the point of needing a highly 
optimized code with an optimal preconditioner. Are there any simple 
workarounds that would allow me to compute an incomplete LU decomposition 
of a BlockSparseMatrix interpreted as one big matrix? I hope I'm not 
missing anything too obvious: It doesn't look like BlockSparseMatrix is a 
derived class of SparseMatrix. It also doesn't look like there is an easy 
way to convert a BlockSparseMatrix to SparseMatrix. At first I thought that 
I could just apply SparseILU to each block (make a BlockSparseMatrix with 
each block being of type SparseILU), but I do not think that is equivalent, 
and perhaps may not even be very helpful at all.

Thank you.

On Monday, November 15, 2021 at 6:13:26 PM UTC-5 Wolfgang Bangerth wrote:

>
> Kyle:
>
> > Additionally, due to that structure, where most of the matrix does not 
> > change, I could probably solve much more efficiently too. Currently I am 
> > using UMFPACK. Figuring out and implementing a nice preconditioner is 
> > another project for another time. However, what I'm hoping is possible 
> > is some sort of scheme like:
> > 
> > 1) Solve the system directly once, store the LU decomposition and how 
> > long it took to solve.
> > 2) Switch to GMRES (nonsymmetric problem) and precondition the next 
> > solve with the precomputed LU decomposition. Time the computation.
> > 3) If that solve took less time than the direct one, go back to
> > step 2 for the next solve. If it took longer, go to step 1.
>
> Yes, that scheme makes sense. The SparseDirectUMFPACK::factorize() and 
> SparseDirectUMFPACK::solve() functions make that possible where you 
> would call the solve() function that only takes vector(s) as argument.
>
>
> > I understand every part of this except for how to store and reuse the 
> > LU. From what (little) I understand, UMFPACK does literally store the 
> > entire LU. How can I get access to it directly?
>
> You don't. UMFPACK stores it internally, and solve() uses it.
>
>
> > If I can't for whatever 
> > reason, I was thinking about using a similar setup with an ILU 
> > decomposition. However, I have the same problem there and am back to 
> > hoping that someone can tell me how to get access to such a thing. Note 
> > that I would like to avoid making a copy of the whole system matrix--as 
> > I notice many of the preconditioner types are pass by reference, I can't 
> > change it and then use the old values.
>
> The situation is actually the same with ILUs: There are separate 
> functions that *compute* a factorization and that *use* a factorization.
>
> Best
> W.
>
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>



[deal.II] Reusing an LU factorization in time-dependent problems, building on part of a matrix

2021-11-13 Thread Kyle Schwiebert
Hello all,

I am coding up a large time-dependent flow problem with many variables 
coupled together. The matrix is composed of 16 blocks, some of which are 
zero. However, only one of the blocks will change between time steps. Thus 
it seems I am wasting a lot of time reassembling the whole matrix. Can 
someone provide a reference to how I can assemble just one block of a 
BlockSparseMatrix? I know I can do it the obvious way--write a separate 
assembly routine, make a new matrix, fe_system, dof_handler and so on. Is 
there a better way?

Additionally, due to that structure, where most of the matrix does not 
change, I could probably solve much more efficiently too. Currently I am 
using UMFPACK. Figuring out and implementing a nice preconditioner is 
another project for another time. However, what I'm hoping is possible is 
some sort of scheme like:

1) Solve the system directly once, store the LU decomposition and how long 
it took to solve.
2) Switch to GMRES (nonsymmetric problem) and precondition the next solve 
with the precomputed LU decomposition. Time the computation.
3) If that solve took less time than the direct one, go back to 
step 2 for the next solve. If it took longer, go to step 1.

I understand every part of this except for how to store and reuse the LU. 
From what (little) I understand, UMFPACK does literally store the entire 
LU. How can I get access to it directly? If I can't for whatever reason, I 
was thinking about using a similar setup with an ILU decomposition. 
However, I have the same problem there and am back to hoping that someone 
can tell me how to get access to such a thing. Note that I would like to 
avoid making a copy of the whole system matrix--as I notice many of the 
preconditioner types are pass by reference, I can't change it and then use 
the old values.

Thank you for any help that you can give. A quick copy-paste to a relevant 
example and a couple words about where to find the information would be 
more than sufficient help! I was not using the correct search terms to find 
such examples.



[deal.II] Evaluating a solution at a vertex, step-13

2021-10-20 Thread Kyle Schwiebert
Hello,

I'm having trouble resolving the following compiler error:

error: ‘const class dealii::DoFCellAccessor, 
false>’ has no member named ‘vertex_indices’; did you mean ‘vertex_index’?

I don't understand the difference between the following code and the double 
for-loop of step13::PointValueEvaluation::operator()

for (const auto &cell : dof_handler.active_cell_iterators())
{
 // Loop over all vertices in the current cell.
 for (const auto vertex : cell->vertex_indices()){
   // Do some computation with the solution at the vertex.

Indeed, the documentation does not list a member function 
"vertex_indices()" for the active cell iterator class. However, then I do 
not understand why the above line works in step-13. Does it have anything 
to do with my solution vector being a BlockVector<double> rather than 
Vector<double>? Perhaps (maybe even likely) I have some misunderstanding of 
what -> is doing. Later in the code the following line compiles just fine 
even though vertex() is ALSO not a documented member function of the active 
cell iterator class:

Point<3> currentVertex = cell->vertex(vertex);

I clearly have some grave misunderstanding I haven't been able to sort out 
on my own with the documentation and examples. Can someone help me figure 
out what is going on here?

Quick note: I'm aware that evaluation at the vertices is risky in the case 
of DG methods, but I am using Q2 FEs.

Regards,
Kyle



[deal.II] Time dependent boundary conditions

2021-10-01 Thread Kyle Schwiebert
I am having trouble understanding the class FunctionTime, under what 
circumstances it is needed, and how it is implemented.

Consider Step-26 in the tutorial. There is no variable anywhere defined 
with class FunctionTime. However, the function get_time() is still used, 
seemingly to get the current value of the time variable. However, when I do 
similarly, it seems that the variable created in my BoundaryValues function 
is always zero. I am using the line:

const double time = this->get_time();

This is not surprising to me: If we don't have a class FunctionTime time 
variable, I would expect get_time to either error or output zero. However, 
while in my function get_time() always returns zero, it apparently does 
not do so in step-26. Can someone help me understand this?

Thank you,
Kyle



Re: [deal.II] Calculating an integral over the boundary

2021-09-28 Thread Kyle Schwiebert
Thank you for the help. I think that has it pretty much sorted out for me. 
One additional question: On this test problem people also like to know the 
pressure drop across the obstacle. This basically involves calculating the 
pressure at two points in the mesh and finding their difference. I see in 
step 13 this can be done by hoping that the points you need to evaluate are 
vertices of the mesh. Is there a more general solution--or a way to ensure 
a certain point is in the mesh? FYI I'm using the very helpful builtin 
meshing scheme for this problem: GridGenerator::channel_with_cylinder()

Thanks,
Kyle

On Monday, September 27, 2021 at 6:24:57 PM UTC-4 Wolfgang Bangerth wrote:

> On 9/27/21 1:43 PM, Kyle Schwiebert wrote:
> > I'm trying to replicate the simulation described in this paper 
> > <https://wias-berlin.de/people/john/ELECTRONIC_PAPERS/Joh04.IJNMF.pdf>.
>  
>
> > How could I compute the integrals in equations 4 and 5? I'm sure it's 
> > covered in the tutorial, but I couldn't find it after searching in 
> > several different steps in the tutorial. Could someone please point me 
> > to an example in the tutorials where this type of computation with a 
> > computed solution? The only examples I could find seem to only discuss 
> > computing L2, or H1 errors with a special function that avoid manually 
> > computing the integral. Is the easiest thing to do as is done in the 
> > matrix assembly?
>
> Kyle,
> in essence you just have to loop over all cells and integrate up the 
> quantity you have in the formula for a given choice of v_l and v_d. This 
> works in a similar way to this example here where we compute the angular 
> momentum:
>
>
> https://github.com/geodynamics/aspect/blob/master/source/simulator/nullspace.cc#L357-L459
>
>
> > Finally, how would one initialize the vectors v_l and v_d? Could it be 
> > done easily with an appropriate AffineConstraints object?
>
> As long as you satisfy the boundary conditions, it's your choice. So you 
> could, for example, start with a zero vector, evaluate boundary values 
> as indicated on the boundary in question, and then just go through the 
> constraints you get as a result and apply them to your zero vector (via 
> AffineConstraints::distribute(), applied to the zero vector). That would 
> probably be what I'd do.
>
> Or you just take v_l/v_d as functions that are analytically described 
> (i.e., that are not finite element functions to begin with).
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>



[deal.II] Calculating an integral over the boundary

2021-09-27 Thread Kyle Schwiebert
I'm trying to replicate the simulation described in this paper: 
https://wias-berlin.de/people/john/ELECTRONIC_PAPERS/Joh04.IJNMF.pdf. How 
could I compute the integrals in equations 4 and 5? I'm sure it's covered 
in the tutorial, but I couldn't find it after searching in several 
different steps in the tutorial. Could someone please point me to an 
example in the tutorials where this type of computation is done with a 
computed solution? The only examples I could find seem to only discuss 
computing L2 or H1 errors with a special function that avoids manually computing the 
integral. Is the easiest thing to do as is done in the matrix assembly?

Finally, how would one initialize the vectors v_l and v_d? Could it be done 
easily with an appropriate AffineConstraints object?

Thanks in advance to anyone who can help.
