[deal.II] Distance between points across periodic boundary

2023-08-17 Thread Lucas Myers
Hi everyone,

I have a problem in which I must determine whether points lie in a 
neighborhood of some base point (e.g. within some L2 distance). In the bulk 
this is simple, but I want to impose periodic boundary conditions on some 
of the edges. In that case, a base point close to a periodic boundary may 
have points on the other side of that boundary which are within its 
neighborhood. Is it possible, in general, to calculate the distance (L2 or 
otherwise) between a base point and another point through a periodic 
boundary? Is this even a well-posed question?

Some additional points: 
* I'm aware of how to do this for a hyper-rectangle (just translate the 
base point by the domain dimensions), but I wonder if this can be done more 
generally.
* In finding points in a neighborhood of the base point, I traverse cells 
neighbor-by-neighbor. This gives me a cell on the base-point-side of the 
periodic boundary which is in the neighborhood of the base point, and also 
a connected subdomain between the base point cell and the periodic boundary 
cell. I'm not sure if this helps, but it could simplify the algorithm.
* I am also interested in knowing whether the base point is within some 
distance of a subdomain bounding box, particularly subdomains that are 
connected through a periodic boundary. For this false positives are okay, 
but false negatives are not.

Any help on this is very much appreciated, I'm quite stuck on the general 
case.

- Lucas

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/85bb9bf2-2b77-41cb-894a-b6ffecfae1c7n%40googlegroups.com.


Re: [deal.II] Determine location of a degree of freedom

2023-07-28 Thread Lucas Myers
Hey no worries, I have also been caught posting questions that were in the
FAQ.

- Lucas

On Fri, Jul 28, 2023, 4:59 PM Sean Johnson  wrote:

> I'm so sorry that it was in the FAQ.
>
> Thanks for the fast response. I will try this immediately. Thanks again
>
> On Friday, July 28, 2023 at 3:56:47 PM UTC-6 lucasm...@gmail.com wrote:
>
>> Hi Sean,
>>
>> Is this what you're looking for?
>>
>>
>> https://github.com/dealii/dealii/wiki/Frequently-Asked-Questions#how-to-get-the-mapped-position-of-support-points-of-my-element
>>
>> - Lucas
>>
>> On Fri, Jul 28, 2023, 4:51 PM Sean Johnson  wrote:
>>
>>> Is there a way to find out the "coordinates" of a degree of freedom? I
>>> want to strongly enforce that all degrees of freedom on a boundary be 0
>>> in a DG dof_handler. I know this is not normal for DG; it is done to
>>> avoid an edge that otherwise contains a singularity. However, the
>>> degrees of freedom are not logically on the boundary, and as far as I
>>> understand they aren't even on the faces of their respective cells but
>>> in the interior. I can determine the cells that contain degrees of
>>> freedom that are physically on this outer boundary but not "logically"
>>> there within deal.II. I now just need to be able to differentiate
>>> between the degrees of freedom within the cell so that I can set the
>>> constraints.
>>>
>>> Thanks for any time spent answering this,
>>> Sean Johnson
>>>


Re: [deal.II] Determine location of a degree of freedom

2023-07-28 Thread Lucas Myers
Hi Sean,

Is this what you're looking for?

https://github.com/dealii/dealii/wiki/Frequently-Asked-Questions#how-to-get-the-mapped-position-of-support-points-of-my-element

- Lucas
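For the archives, the FAQ recipe boils down to roughly the following (a sketch written from memory against the deal.II API, not compiled here; `on_singular_edge` is a hypothetical user-supplied geometric predicate):

```cpp
// Only works for elements whose shape functions have support points
// (FE_Q, FE_DGQ, ...).
std::map<dealii::types::global_dof_index, dealii::Point<dim>> support_points;
dealii::DoFTools::map_dofs_to_support_points(mapping, dof_handler, support_points);

for (const auto &[dof, point] : support_points)
  if (on_singular_edge(point)) // hypothetical predicate on the physical location
    constraints.add_line(dof); // constrain this DoF to zero
```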

On Fri, Jul 28, 2023, 4:51 PM Sean Johnson  wrote:

> Is there a way to find out the "coordinates" of a degree of freedom? I want
> to strongly enforce that all degrees of freedom on a boundary be 0 in a DG
> dof_handler. I know this is not normal for DG; it is done to avoid an edge
> that otherwise contains a singularity. However, the degrees of freedom are
> not logically on the boundary, and as far as I understand they aren't even
> on the faces of their respective cells but in the interior. I can determine
> the cells that contain degrees of freedom that are physically on this outer
> boundary but not "logically" there within deal.II. I now just need to be
> able to differentiate between the degrees of freedom within the cell so
> that I can set the constraints.
>
> Thanks for any time spent answering this,
> Sean Johnson
>


Re: [deal.II] Integrate difference isn't picking on the

2023-07-22 Thread Lucas Myers
I haven't tried this with deal.II yet, but I've recently started using the
mold linker for programs I write which have many compilation units (link to
GitHub here: https://github.com/rui314/mold). If you're only changing one
line then probably only one compilation unit is being recompiled, so the
bulk of the wait time is probably being taken by the linker.

Note that, to get a significant speedup, you'll have to recompile with all
of your cores so the linking can be parallelized. I've found that using all
cores to compile sometimes causes make to get killed because it uses too
much memory, but if most things are compiled and you're just on the linking
stage you don't really run into this.

Alternatively, I think some other folks use the gold or lld linkers (in
case you can't get mold to work).

- Lucas
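If it helps anyone trying this, two common ways to switch linkers are wrapping the build in `mold -run`, or pointing the compiler driver at the linker with `-fuse-ld` (supported by clang and recent GCC; these are the usual invocations, but check your compiler's documentation):

```shell
# Option 1: run the build under mold, which substitutes itself for the linker
mold -run make -j4

# Option 2: add the flag at CMake configure time (gold and lld work the same
# way, e.g. -fuse-ld=gold or -fuse-ld=lld)
cmake -DCMAKE_EXE_LINKER_FLAGS="-fuse-ld=mold" \
      -DCMAKE_SHARED_LINKER_FLAGS="-fuse-ld=mold" .
```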


On Sat, Jul 22, 2023, 11:37 AM Abbas Ballout 
wrote:

> If I change a single line in deal I'll have to wait a good amount of time
> for it to compile.
> Do you all go through this? All I have is a 4 cores and 16 GBs of RAM on
> my Laptop.
>
> On Monday, July 17, 2023 at 9:48:00 AM UTC+2 Abbas Ballout wrote:
>
>> YES. Would take me a while but I ll do it.
>> Be prepared for many questions.
>>
>> On Monday, July 17, 2023 at 2:44:01 AM UTC+2 Wolfgang Bangerth wrote:
>>
>>> On 7/16/23 07:22, Abbas Ballout wrote:
>>> > I get this error when I call the
>>> exact_solution_vector_function.gradient(p1):
>>> > !ERROR Message:
>>> > /An error occurred in line <135> of file
>>> >
>>> 
>>> > in function dealii::Tensor<1, dim, Number> dealii::Function<dim,
>>> > RangeNumberType>::gradient(const dealii::Point<dim> &, unsigned int)
>>> > const [with int dim = 2; RangeNumberType = double]
>>> > The violated condition was:
>>> > false
>>> > Additional information:
>>> > You (or a place in the library) are trying to call a function that
>>> is
>>> > declared as a virtual function in a base class but that has not
>>> been
>>> > overridden in your derived class./
>>> > /
>>> > /
>>> > Calling value  works fine but not gradient even though I have
>>> overridden both
>>> > functions.
>>>
>>> You do in your own class, but VectorFunctionFromTensorFunction does not.
>>> That's not by design, but probably simply because nobody has ever had a
>>> need
>>> for this. Would you like to try and write a patch that adds
>>> VectorFunctionFromTensorFunction::gradient()? You could model it on
>>> VectorFunctionFromTensorFunction::value().
>>>
>>> Best
>>> Wolfgang
>>>
>>> --
>>> 
>>> Wolfgang Bangerth email: bang...@colostate.edu
>>> www: http://www.math.colostate.edu/~bangerth/
>>>
>>>



Re: [deal.II] Weak scaling running out of memory

2023-06-07 Thread Lucas Myers
Hi Martin,

The problem that you suggest was indeed what was happening. Thanks so much
for the help! I imagine this helped me avoid a long debugging process.

As an aside, is this something that would qualify for an error message?
Something like putting an Assert at the top of the functions which operate
on an AffineConstraints object via a DoFHandler: it would query whether the
triangulation is distributed and whether the constraints had been
initialized with locally_relevant_dofs, and stop the program if not.
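For concreteness, such a guard might look roughly like the following (a sketch only; whether `AffineConstraints::get_local_lines()` is the right thing to compare against is an assumption on my part):

```cpp
// Hypothetical guard: only meaningful for distributed triangulations.
if (dynamic_cast<const dealii::parallel::DistributedTriangulationBase<dim> *>(
      &dof_handler.get_triangulation()) != nullptr)
  Assert(constraints.get_local_lines() == locally_relevant_dofs,
         dealii::ExcMessage("On a distributed triangulation, initialize the "
                            "AffineConstraints object with the locally "
                            "relevant DoFs."));
```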

If so, I can file a bug report and try to make it happen.

- Lucas

On Wed, Jun 7, 2023 at 2:54 PM Martin Kronbichler <
kronbichler.mar...@gmail.com> wrote:

> Dear Lucas,
>
> Without seeing your code, it is difficult to nail down the issue. But by
> far the most common mistake that lead to this type of problem in my codes
> is that I forgot to initialize an AffineConstraints object with an index
> set for the locally relevant DoFs, i.e., I was missing this line:
> https://github.com/dealii/dealii/blob/ea23d6bb90739b6bd2f7af96a2b9b73bb10c7298/examples/step-40/step-40.cc#L294
>
> In general, deal.II has been used for far larger problems than this, so
> most data structures should be scalable. But of course, there are many
> places that might be problematic, and as a library deal.II might have many
> things that have not been tested at scale. I found it useful to add lines
> like
> https://github.com/kronbichler/multigrid/blob/6b43f32b4758a169af5b4bb54546ad279d6fee9f/poisson_dg/program.cc#L245-L247
> with the 'print_time' function given here
> https://github.com/kronbichler/multigrid/blob/6b43f32b4758a169af5b4bb54546ad279d6fee9f/common/laplace_operator.h#L38-L52
> (the name is not really good and only done on a side branch, the motivation
> is of course to print some statistics) at various places in the code, in
> order to identify the problem. With measurements of the actual memory
> consumption you are often able to identify the problem on a smaller scale,
> even though it might take 2-3 attempts to have the timers in the right
> spots.
>
> Best,
> Martin
> On 07.06.23 21:04, Lucas Myers wrote:
>
> Hi everyone,
>
> I'm trying to run a scaling analysis on my code, but when I make the
> system too large I get a (Killed) error, which I strongly suspect has to do
> with memory issues. I'm unsure why this is happening, because I am
> increasing resources proportional to the system size.
>
> Details: I'm trying to get a sense for weak scaling in 3D, so for this I
> use a subdivided hyper-rectangle. Since the configuration is nearly
> constant in the third dimension, I use p1 = (-20, -20, -5) and p2 = (20,
> 20, 5) for my defining points. To try to keep the number of DoFs per core
> constant, I do runs with the following sets of parameters (so 5 runs total):
>
> hyper-rectangle subdivisions: (4, 4, 1), (4, 4, 2), (4, 8, 2), (4, 4, 1),
> (4, 4, 2)
> global refines: 5, 5, 5, 6, 6
> # cores: 128, 256, 512, 1024, 2048
>
> Each node is 128 cores, and so doubling the number of cores also doubles
> the amount of available memory. However, it seems that the memory is
> running out even before starting to assemble the system (so it couldn't be
> the solver that is causing this problem). Are there any data structures in
> deal.II which might scale poorly (memory-wise) in this scenario? And also
> are there any nice ways of figuring out what is eating all the memory?
>
> - Lucas

[deal.II] Weak scaling running out of memory

2023-06-07 Thread Lucas Myers
Hi everyone,

I'm trying to run a scaling analysis on my code, but when I make the system 
too large I get a (Killed) error, which I strongly suspect has to do with 
memory issues. I'm unsure why this is happening, because I am increasing 
resources proportional to the system size. 

Details: I'm trying to get a sense for weak scaling in 3D, so for this I 
use a subdivided hyper-rectangle. Since the configuration is nearly 
constant in the third dimension, I use p1 = (-20, -20, -5) and p2 = (20, 
20, 5) for my defining points. To try to keep the number of DoFs per core 
constant, I do runs with the following sets of parameters (so 5 runs total):

hyper-rectangle subdivisions: (4, 4, 1), (4, 4, 2), (4, 8, 2), (4, 4, 1), 
(4, 4, 2)
global refines: 5, 5, 5, 6, 6
# cores: 128, 256, 512, 1024, 2048

Each node is 128 cores, and so doubling the number of cores also doubles 
the amount of available memory. However, it seems that the memory is 
running out even before starting to assemble the system (so it couldn't be 
the solver that is causing this problem). Are there any data structures in 
deal.II which might scale poorly (memory-wise) in this scenario? And also 
are there any nice ways of figuring out what is eating all the memory?

- Lucas



Re: [deal.II] Error in using FESystem usage

2023-05-16 Thread Lucas Myers
Hi Vinayak,

If you do not manually initialize member variables in a constructor, they
are automatically initialized with their default constructors (i.e. the
constructor which takes no arguments). However, FESystem's default
constructor is explicitly deleted, so an FESystem cannot be constructed
without passing in some arguments. To get your code to compile, you'll need
to call a non-default constructor for your FESystem member variable in the
member-initializer list of Network's constructor.

See the FESystem documentation or step-8 (
https://www.dealii.org/current/doxygen/deal.II/step_8.html#ElasticProblemElasticProblemconstructor)
for an example of how to properly construct an FESystem object.


On Tue, May 16, 2023, 9:12 AM Vinayak Vijay  wrote:

> Hello,
>
> I am trying to initialise FESystem in a class called "Network" (definition
> present in a header file), however when i am getting the following error:
>
> In constructor ‘Network::Network(const string&)’:
> /home/Network.cpp:21:57: error: use of deleted function
> ‘dealii::FESystem<dim, spacedim>::FESystem() [with int dim = 1; int
> spacedim = 3]’
>21 | Network::Network(const std::string _file)
>   | ^
> In file included from /home/Network.h:16,
>  from /home/Network.cpp:1:
> /home/user/spack/opt/spack/linux-ubuntu20.04-icelake/gcc-9.4.0/dealii-9.4.2-4haiezd5dty7b537inhxmeebwxllx4gs/include/deal.II/fe/fe_system.h:215:3:
> note: declared here
>   215 |   FESystem() = delete;
>
> I am not able to figure out the error. Can someone help?
>
> Thanks
> Vinayak
>



[deal.II] Internal deal.II error when instantiating TrilinosWrappers::block_operator

2023-04-27 Thread Lucas Myers
Hi folks,

I'm trying to use the block_operator technique on Trilinos vectors and 
matrices, but I can't quite get it to compile. For simplicity, the relevant 
lines of code are:
```
#include 
#include 

using vec = dealii::LinearAlgebraTrilinos::MPI::Vector;
using block_vec = dealii::LinearAlgebraTrilinos::MPI::BlockVector;
const auto B = dealii::TrilinosWrappers::linear_operator(system_matrix.block(0, 0));
const auto P_inv = dealii::TrilinosWrappers::block_operator<1, 1, 
block_vec>({B});
```
where system_matrix is of type 
`dealii::LinearAlgebraTrilinos::MPI::BlockSparseMatrix`.

The first error that comes up looks like:

`deal.II/lac/block_linear_operator.h:679:37: error: 
‘populate_linear_operator_functions’ was not declared in this scope; did 
you mean 
‘dealii::internal::BlockLinearOperatorImplementation::populate_linear_operator_functions’?`

Is this a bug in deal.II? That would be my first inclination, given that I 
cannot see how it would be template-related.

There are a few other errors that come up, so I'm attaching the error 
message in a text file. 

Any tips on debugging this? I'm finding the compiler output basically 
inscrutable. Any help is appreciated, so sorry for all the spam!

- Lucas

In file included from 
/home/lucas/Application-Data/dealii/install/dealii-9.4/include/deal.II/lac/linear_operator_tools.h:25,
 from 
/home/lucas/Documents/research/phase-field-crystals/src/phase_field_crystal_systems/phase_field_crystal_system_mpi.cpp:35:
/home/lucas/Application-Data/dealii/install/dealii-9.4/include/deal.II/lac/block_linear_operator.h:
 In instantiation of ‘dealii::BlockLinearOperator 
dealii::block_operator(const 
std::array, n>, m>&) [with 
long unsigned int m = 1; long unsigned int n = 1; Range = 
dealii::TrilinosWrappers::MPI::BlockVector; Domain = 
dealii::TrilinosWrappers::MPI::BlockVector; BlockPayload = 
dealii::TrilinosWrappers::internal::BlockLinearOperatorImplementation::TrilinosBlockPayload;
 typename BlockPayload::BlockType = 
dealii::TrilinosWrappers::internal::LinearOperatorImplementation::TrilinosPayload;
 typename Domain::BlockType = dealii::TrilinosWrappers::MPI::Vector; typename 
Range::BlockType = dealii::TrilinosWrappers::MPI::Vector]’:
/home/lucas/Application-Data/dealii/install/dealii-9.4/include/deal.II/lac/trilinos_linear_operator.h:228:69:
   required from ‘dealii::BlockLinearOperator
 > dealii::TrilinosWrappers::block_operator(const 
std::array,
 n>, m>&) [with long unsigned int m = 1; long unsigned int n = 1; Range = 
dealii::TrilinosWrappers::MPI::BlockVector; Domain = 
dealii::TrilinosWrappers::MPI::BlockVector; typename Domain::BlockType = 
dealii::TrilinosWrappers::MPI::Vector; typename Range::BlockType = 
dealii::TrilinosWrappers::MPI::Vector]’
/home/lucas/Documents/research/phase-field-crystals/src/phase_field_crystal_systems/phase_field_crystal_system_mpi.cpp:574:40:
   required from ‘void PhaseFieldCrystalSystemMPI::solve_and_update() 
[with int dim = 2]’
/home/lucas/Documents/research/phase-field-crystals/src/phase_field_crystal_systems/phase_field_crystal_system_mpi.cpp:693:16:
   required from here
/home/lucas/Application-Data/dealii/install/dealii-9.4/include/deal.II/lac/block_linear_operator.h:679:37:
 error: ‘populate_linear_operator_functions’ was not declared in this scope; 
did you mean 
‘dealii::internal::BlockLinearOperatorImplementation::populate_linear_operator_functions’?
  679 |   populate_linear_operator_functions(return_op);
  |   ~~^~~
  |   
dealii::internal::BlockLinearOperatorImplementation::populate_linear_operator_functions
/home/lucas/Application-Data/dealii/install/dealii-9.4/include/deal.II/lac/block_linear_operator.h:359:5:
 note: 
‘dealii::internal::BlockLinearOperatorImplementation::populate_linear_operator_functions’
 declared here
  359 | populate_linear_operator_functions(
  | ^~
/home/lucas/Application-Data/dealii/install/dealii-9.4/include/deal.II/lac/block_linear_operator.h:
 In instantiation of ‘dealii::BlockLinearOperator::BlockLinearOperator(const BlockPayload&) [with Range = 
dealii::TrilinosWrappers::MPI::BlockVector; Domain = 
dealii::TrilinosWrappers::MPI::BlockVector; BlockPayload = 
dealii::TrilinosWrappers::internal::BlockLinearOperatorImplementation::TrilinosBlockPayload]’:
/home/lucas/Application-Data/dealii/install/dealii-9.4/include/deal.II/lac/block_linear_operator.h:666:52:
   required 

[deal.II] Preconditioners not lowering the number of GMRES iterations

2023-04-25 Thread Lucas Myers
Hi everyone,

TL;DR:
I'm trying to precondition a system which can be solved with GMRES (with 
iteration counts that grow with the problem size), but the standard 
preconditioners either increase the number of iterations or cause the 
solver not to converge.

Details:
I'm trying to solve a phase-field crystal system, whose time evolution 
equation I'm attaching as `pfc_time_evolution.png`. By introducing 
auxiliary variables (`auxiliary_variables.png`), it can be written as a 
second-order coupled equation (`reduced_pfc_time_evolution.png`). This can 
be discretized in time and linearized such that the linear operator looks 
like `block_continuous_operator.png` with dt the timestep and theta a 
time-stepping discretization parameter. Finally, this can be discretized in 
space so that the block-form of the relevant linear operator is given by 
`block_discrete_operator.png`. The M blocks are mass matrices and the L 
blocks are Laplacian operators. The rest are some sum of Laplacians, mixed 
mass matrices, and nonlinear terms which couple to the fields.

As a first pass, I'm trying to solve the entire matrix in one go, and this 
works fine using the GMRES method. However, the number of GMRES iterations 
increases as the number of DoFs increases, so I would like to precondition 
to mitigate this. It's my understanding that the AMG preconditioner is a 
good black box for most problems, but when I try to use it the GMRES solver 
does not converge (I've set the maximum number of iterations to n_dofs). 
Additionally, a simple Jacobi preconditioner increases the number of 
iterations relative to no preconditioner for two different grid sizes. 
Finally, the ILU preconditioner decreases the number of GMRES iterations 
for a small grid, but then fails to converge for a large grid. What could 
be going on here?

This is for the entire matrix but, as an aside, if it is helpful to somehow 
use the block structure of the matrix to precondition (it's 3x3, so I can't 
use a Schur complement, I think), I'd be interested in any tips.

Thanks so much for any help!

- Lucas



Re: [deal.II] Integrate on local subsets of domain on distributed triangulation

2023-04-25 Thread Lucas Myers
Hi Wolfgang,

This is extremely helpful, thanks so much!

- Lucas

On Tuesday, April 25, 2023 at 11:24:31 AM UTC-5 Wolfgang Bangerth wrote:

> On 4/25/23 09:44, Lucas Myers wrote:
> > 
> > Re: FEPointEvaluation and RemotePointEvaluation, I think I actually need 
> > more than that because I need the quadrature points on each of the cells 
> > that I'm querying, so I can just do the standard procedure with FEValues 
> > and the corresponding Quadrature object.
>
> In that case, the way I would implement this is that every process makes 
> a list of circles: points at which it wants to evaluate the convolution, 
> and the radius of the convolution.
>
> In a second step, you then determine which of these circles require 
> communication. This isn't actually entirely trivial, but a good 
> criterion is probably whether a circle has overlap with an artificial 
> cell. So:
> std::set<Circle> all_circles = ...;
> std::set<Circle> circles_requiring_communication;
> for (cell = ...)
>   if (cell->is_artificial())
>     for (const Circle &c : all_circles)
>       if (c not already in circles_requiring_communication)
>         if (c overlaps with cell)
>           circles_requiring_communication.insert(c);
>
> In a third step, you can either send all of these circles to all other 
> processes (via Utilities::MPI::all_gather()), or you can be a bit 
> smarter and first get a bounding box for each of the other processes' 
> locally owned cells (see the function in GridTools for that); in that 
> latter case, you'd only send a circle to a process if it overlaps with 
> that other process's bounding box, using one of the ConsensusAlgorithms 
> functions.
>
> At this point, each process has a list of circles that overlap with its 
> own subdomain. Now you work on that by looping over all of its locally 
> owned cells, on each cell query all circles, and do what you need to do. 
> This results in a list of contributions that need to be sent back to the 
> original process requesting that information. This could probably be 
> done nicely using the request-answer system of the ConsensusAlgorithms 
> functions.
>
> You'd need to be careful with cases where a process gets sent a circle 
> that has overlap with its bounding box, but not with any of its cells. 
> In that case, the reply is simply a zero contribution to the convolution 
> integral.
>
> I hope this makes sense!
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>



Re: [deal.II] Integrate on local subsets of domain on distributed triangulation

2023-04-25 Thread Lucas Myers
Hi Wolfgang,

Thanks for the help! Point taken about neighboring processes and 
communication latency.

Re: FEPointEvaluation and RemotePointEvaluation, I think I actually need 
more than that because I need the quadrature points on each of the cells 
that I'm querying, so I can just do the standard procedure with FEValues 
and the corresponding Quadrature object.

I'll think harder about how to avoid latency issues.

- Lucas
On Monday, April 24, 2023 at 5:08:20 PM UTC-5 Wolfgang Bangerth wrote:

>
> > I'd like to coarse-grain a finite element field, where the 
> > coarse-graining procedure is just convolving the field with a Gaussian 
> > function. More concretely, I'd like to do the following:
> >
> > For a regular Triangulation, we can calculate  at every quadrature 
> > point by iterating through the entire Triangulation and calculating the 
> > integrand at all of the quadrature points. In fact, because there is a 
> > Gaussian in distance in the integrand, we only have to look at cells a 
> > distance ~3a_0 away.
> >
> > However, for a distributed triangulation it could be the case that 
> > cells 3a_0 away may be on a different MPI process. My first question 
> > is: is there a way to extend the number of ghost cells along the 
> > boundary such that they are some total distance away from the 
> > locally-owned cells on the edge of the domain?
>
> p4est (our parallel mesh engine) actually allows for "thicker" ghost 
> layers, but we never tried that in deal.II and I doubt that it would 
> work without much trouble. Either way, p4est allows having a ghost layer 
> of N cells width, whereas you need one that has a specific Euclidean 
> distance from the locally owned cells -- so these are different concepts 
> anyway.
>
>
>
> > And if not, is there a nice utility to allow me to access neighboring 
> > cells on other domains (albeit at the cost of some MPI messages)?
>
> Check out the FEPointEvaluation and Utilities::MPI::RemotePointEvaluation 
> classes. I think they will do what you need.
>
>
> > Otherwise, is there a good way to do this? My guess would be to do all 
> > of the local integrations first, and make a vector of cells which end 
> > up butting against the boundary. Then one could use the `ghost_owners` 
> > function to send information about the edge cells to the adjacent 
> > subdomains, which would then send information back. However, that seems 
> > a little in the weeds of using MPI and would also require some 
> > error-prone bookkeeping, which I would like to avoid if possible.
>
> Yes, and there's a bug in your algorithm as well: Just because the ghost 
> cells are owned by a specific set of processes doesn't mean that the 
> cell you need to evaluate the solution on is actually owned by one of 
> the neighboring processes.
>
> Secondly, what you *really* want to do is first assess which points you 
> need information on from other processes, send those points around, and 
> while you are waiting for these messages to be delivered and answered, 
> work on your local cells -- not the other way around: you want to hide 
> communication latency behind local work.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
>



[deal.II] Re: Integrate on local subsets of domain on distributed triangulation

2023-04-24 Thread Lucas Myers
Ack, I realize the pasted screenshot of the relevant equation has 
disappeared! I've included it as an attachment to this reply.

- Lucas

On Saturday, April 22, 2023 at 2:49:07 PM UTC-5 Lucas Myers wrote:

> Hi folks,
>
> I'd like to coarse-grain a finite element field, where the coarse-graining 
> procedure is just convolving the field with a Gaussian function. More 
> concretely, I'd like to do the following:
>
>
> For a regular Triangulation, we can calculate  at every quadrature 
> point by iterating through the entire Triangulation and calculating the 
> integrand at all of the quadrature points. In fact, because there is a 
> Gaussian in distance in the integrand, we only have to look at cells a 
> distance ~3a_0 away.
>
> However, for a distributed triangulation it could be the case that cells 
> 3a_0 away may be on a different MPI process. My first question is: is there 
> a way to extend the number of ghost cells along the boundary such that they 
> are some total distance away from the locally-owned cells on the edge of 
> the domain? And if not, is there a nice utility to allow me to access 
> neighboring cells on other domains (albeit, at the cost of some MPI 
> messages)? 
>
> Otherwise, is there a good way to do this? My guess would be to do all of 
> the local integrations first, and make a vector of cells which end up 
> butting against the boundary. Then one could use the `ghost_owners` 
> function to send information about the edge cells to the adjacent 
> subdomains, which would then send information back. However that seems a 
> little in the weeds of using MPI and also would require some error-prone 
> bookkeeping, which I would like to avoid if possible.
>
> Thanks so much for any help!
> - Lucas
>



[deal.II] Integrate on local subsets of domain on distributed triangulation

2023-04-22 Thread Lucas Myers
Hi folks,

I'd like to coarse-grain a finite element field, where the coarse-graining 
procedure is just convolving the field with a Gaussian function. More 
concretely, I'd like to do the following:


For a regular Triangulation, we can calculate  at every quadrature point 
by iterating through the entire Triangulation and calculating the integrand 
at all of the quadrature points. In fact, because there is a Gaussian in 
distance in the integrand, we only have to look at cells a distance ~3a_0 
away.

However, for a distributed triangulation it could be the case that cells 
3a_0 away may be on a different MPI process. My first question is: is there 
a way to extend the number of ghost cells along the boundary such that they 
are some total distance away from the locally-owned cells on the edge of 
the domain? And if not, is there a nice utility to allow me to access 
neighboring cells on other domains (albeit, at the cost of some MPI 
messages)? 

Otherwise, is there a good way to do this? My guess would be to do all of 
the local integrations first, and make a vector of cells which end up 
butting against the boundary. Then one could use the `ghost_owners` 
function to send information about the edge cells to the adjacent 
subdomains, which would then send information back. However that seems a 
little in the weeds of using MPI and also would require some error-prone 
bookkeeping, which I would like to avoid if possible.

Thanks so much for any help!
- Lucas



Re: [deal.II] Biharmonic equation with periodic BCs

2023-03-22 Thread Lucas Myers
Hi Wolfgang,

Thanks for the feedback! I have also not found any literature dealing with 
this periodic boundary conditions problem -- I doubt I will try to get to 
the bottom of it experimentally unless I run into technical issues solving 
my particular problem. With both the lifting method and interior penalty 
method, it appears that you need knowledge of adjacent cells in assembly; I 
suspect this will be difficult (or at least annoying) with periodic 
boundaries if it's not already implemented in deal.II.

For the triharmonic equation, this is the only thing I've been able to 
find; it seems quite similar to step-82.

In any case, I will try the method of introducing auxiliary variables (so 
that it's just a bunch of coupled Laplace equations).

Thanks again for the help!

- Lucas

On Monday, March 20, 2023 at 3:35:30 PM UTC-5 Wolfgang Bangerth wrote:

>
> Lucas:
>
> > these two tutorials, I didn't see anything about periodic boundary 
> > conditions, however. Two potential ways I can think to enforce periodic 
> > boundary conditions would be (in 1D, for notational simplicity):
> > 
> > 1. u(0) = u(L), du/dx (0) = du/dx (L)
> > 2. u(0) = u(L), d^2 u/dx^2 (0) = d^2 u/dx^2 (L)
> > 
> > For the former case, it appears that you would have to extend the method 
> > from step-47 or step-82 to accommodate these conditions somehow. In the 
> > latter case, it seems feasible that you could write it as two coupled 
> > Laplace equations (supposing something like what is explained in the 
> > step-47 note holds here).
> > 
> > My question is: is that right? And if so, how do the solutions gotten by 
> > imposing these boundary conditions differ? For an analytic solution, it 
> > seems like these would be equivalent (periodic boundary conditions 
> > assert periodicity of derivatives at all orders, yes?) so I'm not sure 
> > what to make of it.
>
> Like you, I suspect that you can do it either way to achieve 
> periodicity, and that you can choose the way that is most convenient for 
> your formulation. But I am not certain, nor do I know literature that 
> would have covered the question. It is, however, something that can be 
> experimentally determined via the method of manufactured solutions.
>
> I do think that it is not crazy to go with the assumption that either 
> approach mentioned above will work.
>
>
> > Just as an aside, the actual problem that I'm trying to solve involves a 
> > triharmonic operator. I suspect the situation is similar (although of 
> > course the methods in the tutorials wouldn't immediately generalize), 
> > but if there's something qualitatively different that's obvious, I'd 
> > like to know about that.
>
> I don't think there is a good reason to believe that the approaches for 
> the biharmonic and triharmonic equations should be substantially different.
>
> I do think that it would be fun to have your triharmonic program be part 
> of the code gallery one day!
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>



[deal.II] Biharmonic equation with periodic BCs

2023-03-20 Thread Lucas Myers
Hi folks,

Given the explanations in step-47 and step-82, it appears that the 
particular boundary conditions associated with the biharmonic equation 
can make it relatively easy to solve (e.g. as two coupled Laplace 
equations) or really difficult (having to use the interior penalty 
method or the lifting operator method). In skimming these two tutorials, 
I didn't see anything about periodic boundary conditions, however. Two 
potential ways I can think of to enforce periodic boundary conditions 
would be (in 1D, for notational simplicity):

   1. u(0) = u(L), du/dx (0) = du/dx (L)
   2. u(0) = u(L), d^2 u/dx^2 (0) = d^2 u/dx^2 (L)

For the former case, it appears that you would have to extend the method 
from step-47 or step-82 to accommodate these conditions somehow. In the 
latter case, it seems feasible that you could write it as two coupled 
Laplace equations (supposing something like what is explained in the 
step-47 note holds here). 
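[Editorial note: for concreteness, the "two coupled Laplace equations" route alluded to above can be written out as follows (a sketch, in the 1D notation of this thread):]

```latex
% Auxiliary-variable splitting of the 1D biharmonic problem
% u'''' = f on (0, L):
\[
  u'''' = f
  \quad\Longleftrightarrow\quad
  \begin{cases}
    u'' = v, & u(0) = u(L),\\
    v'' = f, & v(0) = v(L),
  \end{cases}
\]
% which imposes periodicity of u and of u'' = v, i.e. option 2 above.
```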

My question is: is that right? And if so, how do the solutions gotten by 
imposing these boundary conditions differ? For an analytic solution, it 
seems like these would be equivalent (periodic boundary conditions assert 
periodicity of derivatives at all orders, yes?) so I'm not sure what to 
make of it. 

Just as an aside, the actual problem that I'm trying to solve involves a 
triharmonic operator. I suspect the situation is similar (although of 
course the methods in the tutorials wouldn't immediately generalize), but 
if there's something qualitatively different that's obvious, I'd like to 
know about that.

Any help is appreciated!

Thanks,
Lucas



[deal.II] Status of hybrid shared/distributed parallelism

2023-03-16 Thread Lucas Myers
Hi folks,

I'm wondering if there's somewhere I can look to get a broad overview of 
the different parallelization schemes available in deal.II for various 
pieces of the library, and maybe what people's experiences have been. As I 
understand it, any matrix/vector assembly can be done with a 
shared-parallel scheme with the threads framework, but I'm less sure about 
solvers (which are typically the bottleneck in my applications). To relay 
my understanding so far (and ask some specific questions):

Matrix-based AMG methods:

   - PETSc and Trilinos Epetra only use MPI distributed parallelism
   - Trilinos Tpetra with MueLu is hybrid, but requires Kokkos. Do the 
   Tpetra wrappers have associated solvers? And how do they work with Kokkos 
   via CUDA?

Matrix-based GMG methods:

   - Uses deal.II's own distributed vector and wraps a regular deal.II 
   solver (although perhaps with a Trilinos smoother). Are any of the 
   solvers hybrid-parallelized? And are there deal.II-specific smoothers to 
   avoid copying to Epetra vectors?

Matrix-free GMG methods:

   - Uses deal.II's distributed vector and wraps a regular solver. 
   Additionally, the matrix multiplication is done via a matrix-free 
   operator. Is it possible (or easy) to write a custom matrix-free operator 
   which takes advantage of shared parallelism? And will the solver take 
   advantage of shared-memory parallelism when (say) adding vectors?

Matrix-free GMG methods with hp-adaptivity:

   - The only example that I've seen is step-75, and that uses Trilinos AMG 
   for the coarse solver, which I think is limited to MPI distributed 
   parallelism. Could the coarse solver be adapted to use shared parallelism, 
   and are the other aspects of this solver able to be parallelized?

As a final question: which of these aspects are eligible for handling by 
CUDA, MPI-3.0 shared memory, and Kokkos? And have folks found a significant 
speed-up by implementing any of these tools in their code?

Thanks so much for any insight,
Lucas



[deal.II] Re: Get DoFHandler::cell_iterator from Triangulation::cell_iterator

2023-02-02 Thread Lucas Myers
Oops, this is in the FAQ:

https://github.com/dealii/dealii/wiki/Frequently-Asked-Questions#can-i-convert-triangulation-cell-iterators-to-dofhandler-cell-iterators

My bad!

- Lucas
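[Editorial note: for anyone who lands here later, the FAQ's recipe boils down to constructing the DoFHandler iterator from the triangulation cell's level and index. A fragment, not a complete program; `tria_cell` stands for the Triangulation<dim>::cell_iterator returned by distributed_compute_point_locations, and `dof_handler` is built on the same triangulation:]

```
typename dealii::DoFHandler<dim>::cell_iterator
  dof_cell(&dof_handler.get_triangulation(),
           tria_cell->level(),
           tria_cell->index(),
           &dof_handler);
// dof_cell can now be handed to FEValues::reinit() and used with
// FEValues::get_function_values().
```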

On Thursday, February 2, 2023 at 3:53:07 PM UTC-6 Lucas Myers wrote:

> Hi everyone,
>
> I'm trying to use the `distributed_compute_point_locations` functions to 
> compute a finite element function's values at specific points. However, the 
> cells that this function returns are of type Triangulation::cell_iterator, 
> and in order to use the `FEValues::get_function_values` function I need to 
> call `reinit` with a cell that is of type `DoFHandler::cell_iterator`. I 
> have a DoFHandler handy and it's linked with the Triangulation, but is 
> there any easy way to use that combination of things to get the 
> corresponding `DoFHandler::cell_iterator`?
>
> - Lucas
>



[deal.II] Get DoFHandler::cell_iterator from Triangulation::cell_iterator

2023-02-02 Thread Lucas Myers
Hi everyone,

I'm trying to use the `distributed_compute_point_locations` functions to 
compute a finite element function's values at specific points. However, the 
cells that this function returns are of type Triangulation::cell_iterator, 
and in order to use the `FEValues::get_function_values` function I need to 
call `reinit` with a cell that is of type `DoFHandler::cell_iterator`. I 
have a DoFHandler handy and it's linked with the Triangulation, but is 
there any easy way to use that combination of things to get the 
corresponding `DoFHandler::cell_iterator`?

- Lucas



[deal.II] Mechanisms for deleting cells/edges/vertices in some region

2022-09-26 Thread Lucas Myers
Hi folks,

I'm trying to construct a rectangular grid with features at arbitrary 
locations. The most straightforward way of doing this that I can think of 
is to generate a rectangular hypercube, delete the cells/edges/vertices 
around feature locations, and then attach other triangulations (e.g. a 
hyper_cube_with_cylindrical_hole) at the vacant locations. 

Is there a simple-ish utility for removing parts of the domain in this way? 
And does the process differ for a distributed triangulation? I've been 
perusing the documentation for Triangulation, GridTools, and GridGenerator 
and haven't been able to find anything.
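[Editorial note: one candidate, offered as a hedged sketch rather than a confirmed answer, is GridGenerator::create_triangulation_with_removed_cells, which takes a set of cells to delete. It operates on a serial Triangulation, so for a distributed mesh the cutting would have to happen on the coarse serial mesh before it is handed to the parallel triangulation. `feature_location` and `feature_radius` below are made-up names:]

```
// Collect the cells whose centers lie near the feature location, then
// build a new triangulation with those cells removed.
std::set<typename dealii::Triangulation<2>::active_cell_iterator> cells_to_remove;
for (const auto &cell : tria.active_cell_iterators())
  if (cell->center().distance(feature_location) < feature_radius)
    cells_to_remove.insert(cell);

dealii::Triangulation<2> tria_with_hole;
dealii::GridGenerator::create_triangulation_with_removed_cells(
  tria, cells_to_remove, tria_with_hole);
// Afterwards, GridGenerator::merge_triangulations() can attach the
// replacement patch, provided the boundary vertices match up.
```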

Thanks much,
Lucas



Re: [deal.II] constraints.distribute does not seem to be enforcing Dirichlet conditions

2022-08-31 Thread Lucas Myers
Thanks for the help! I printed out the number of constraints after each of 
your suggestions, and after setting the hanging node constraints the number 
of constraints is 0 (as it should be, I'm working with a 1D mesh that has 
been globally refined some number of times), but after interpolating the 
boundary values the number of constraints is only 1. Given that dim = 1 and 
I'm trying to enforce Dirichlet boundaries, I reckon there should be 2 
constraints. Running the std::map version of interpolate_boundary_values, I 
still only get 1 constraint. 

So my question is: is this a bug? I'm trying to step through it in GDB, but 
it hangs when I try to step into this function 
<https://github.com/dealii/dealii/blob/5ade4a135e8b8d73985a2cace4d41d010257c8e3/include/deal.II/numerics/vector_tools_boundary.templates.h#L410>, 
so I'm a little bit at a loss as to how to continue debugging.

- Lucas

On Tuesday, August 30, 2022 at 3:48:06 PM UTC-5 Wolfgang Bangerth wrote:

> On 8/30/22 13:56, Lucas Myers wrote:
> > 
> > To enforce these boundary conditions, I initialize constraints with:
> > ```
> > constraints.clear();
> > dealii::DoFTools::make_hanging_node_constraints(dof_handler, constraints);
> >
> > dealii::VectorTools::interpolate_boundary_values(
> >     dof_handler,
> >     0,
> >     dealii::Functions::ZeroFunction<dim>(1),
> >     constraints);
> > constraints.close();
> > ```
> > I use the `distribute_local_to_global` in the assembly function, and 
> then once 
> > I've solved the linear system (directly, with UMFPACK), I use 
> > `constraints.distribute`. Not sure what else I need to do to enforce 
> those 
> > boundary conditions.
> > 
> > I'm attaching the code (<300 lines including white space) in case that 
> is 
> > helpful. Any help is appreciated!
>
> Rather than trying to debug this myself, let me ask some questions:
> * Right after the call you show above, are the necessary constraints in 
> your 
> 'constraint' object?
> * Right before you call 'distribute_local_to_blobal()' are the necessary 
> constraints still in the 'constraints' object?
> * Right before you call 'distribute()' after you solve the linear system, 
> are 
> the necessary constraints still in the 'constraints' object?
> * Right after you call 'distribute()', are the entries in the solution 
> vector 
> that correspond to boundary nodes correct?
>
> If the answer to any of the questions above is 'no', then you've got a 
> spot to 
> further debug. (All of this might be easiest to check on a very coarse 
> mesh, 
> say a 2x2 mesh.) Right now, all you see is that the *end result* is wrong. 
> You've got to narrow things down to a smaller part of the code, and the 
> questions above should help you with it!
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



[deal.II] constraints.distribute does not seem to be enforcing Dirichlet conditions

2022-08-30 Thread Lucas Myers
Hi everyone,

I'm trying to solve a nonlinear ODE with Dirichlet conditions using 
Newton's method. For this I give an initial guess for the solution ( y = 
(1/2) x ) and then update each Newton step with an update that ought to be 
zero at the boundaries (hence preserving the boundary conditions of the 
initial guess). However, for some reason my update is taking on nonzero 
values at the boundaries.

To enforce these boundary conditions, I initialize constraints with:
```
constraints.clear();
dealii::DoFTools::make_hanging_node_constraints(dof_handler, constraints);
 
dealii::VectorTools::interpolate_boundary_values(
    dof_handler,
    0,
    dealii::Functions::ZeroFunction<dim>(1),
    constraints);
constraints.close();
```
I use the `distribute_local_to_global` in the assembly function, and then 
once I've solved the linear system (directly, with UMFPACK), I use 
`constraints.distribute`. Not sure what else I need to do to enforce those 
boundary conditions.

I'm attaching the code (<300 lines including white space) in case that is 
helpful. Any help is appreciated!

- Lucas

#ifndef DZYALOSHINSKII_SYSTEM_HPP
#define DZYALOSHINSKII_SYSTEM_HPP

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 


#include 

class DzyaloshinskiiSystem
{
public:
DzyaloshinskiiSystem(double eps_, unsigned int degree);

void make_grid(unsigned int n_refines);
void setup_system();
void assemble_system();
void solve_and_update(double newton_step);
bool run_newton_method(double tol, 
   unsigned int max_iter, 
   double newton_step = 1.0);
void output_solution(std::string filename);

private:
static constexpr int dim = 1;

dealii::Triangulation<dim> tria;
dealii::FE_Q<dim> fe;
dealii::DoFHandler<dim> dof_handler;
dealii::AffineConstraints<double> constraints;

dealii::SparsityPattern sparsity_pattern;
dealii::SparseMatrix<double> system_matrix;

dealii::Vector<double> solution;
dealii::Vector<double> system_rhs;

double eps;
};

#endif
#include "LiquidCrystalSystems/DzyaloshinskiiSystem.hpp"

#include 
#include 

int main(int ac, char* av[])
{
double eps = 0.5;
unsigned int degree = 1;
unsigned int n_refines = 8;
double tol = 1e-8;
unsigned int max_iter = 1;
std::string initial_filename("initial_dzyaloshinskii_solution.vtu");
std::string filename("dzyaloshinskii_solution_");
filename += std::to_string(eps);
filename += std::string(".vtu");

DzyaloshinskiiSystem dzyaloshinskii_system(eps, degree);
dzyaloshinskii_system.make_grid(n_refines);
dzyaloshinskii_system.setup_system();
dzyaloshinskii_system.output_solution(initial_filename);
bool converged = dzyaloshinskii_system.run_newton_method(tol, max_iter);
dzyaloshinskii_system.output_solution(filename);

std::cout << converged << "\n";

return 0;
}
#include "DzyaloshinskiiSystem.hpp"

#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 
#include 

#include 
#include 
#include 

DzyaloshinskiiSystem::DzyaloshinskiiSystem(double eps_, unsigned int degree)
: fe(degree)
, dof_handler(tria)
, eps(eps_)
{}



void DzyaloshinskiiSystem::make_grid(unsigned int n_refines)
{
double left_endpoint = 0.0;
double right_endpoint = M_PI;
dealii::GridGenerator::hyper_cube(tria, left_endpoint, right_endpoint);
tria.refine_global(n_refines);
}



void DzyaloshinskiiSystem::setup_system()
{
dof_handler.distribute_dofs(fe);

solution.reinit(dof_handler.n_dofs());
system_rhs.reinit(dof_handler.n_dofs());

{
dealii::Table<2, double> polynomial_exponents(2, dim);
std::vector<double> polynomial_coefficients(2);
polynomial_exponents[0][0] = 1.0;
polynomial_exponents[1][0] = 1.0;
polynomial_coefficients[0] = 0;
polynomial_coefficients[1] = 0.5;

dealii::Functions::Polynomial<dim> 
initial_configuration(polynomial_exponents, 
  polynomial_coefficients);

constraints.clear();
dealii::DoFTools::make_hanging_node_constraints(dof_handler, 
constraints);
 

[deal.II] Run solver independently in different subdomains

2022-08-24 Thread Lucas Myers
Hi everyone,

I've got a diffusion-ish equation where I'd like to do the following: 

   1. Partition my domain into n subdomains -- for concreteness, say the 
   domain is a rectangle and my partitions are two (completely separated) 
   circles along with the rest of the rectangle.
   2. Relax the system on each of the subdomains as if they were an 
   isolated system (say, with pure Dirichlet or pure Neumann BCs).
   3. Eliminate the partitions and evolve the system altogether without any 
   of the synthetically-imposed BCs.

As a first guess I suspect I can just mark each of the subdomains with a 
material_id or a user_index or something, and then only iterate over the 
cells from a particular subdomain during the assembly and solve normally. 
However, I am unsure how to mark the appropriate faces on the outside of 
the subdomains as boundary faces. 

Has anyone dealt with this kind of problem before? Does it seem like this 
is the right direction to go? And how can I mark faces as belonging to a 
boundary, even if the cells are internal to the domain?

Any help is appreciated.

- Lucas
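[Editorial note: faces strictly inside the domain cannot carry a boundary_id, but during assembly a subdomain interface can be detected by comparing material ids across a face. A hedged sketch, assuming each subdomain was tagged earlier via cell->set_material_id() and the mesh is uniformly refined so that neighbors meet face-to-face:]

```
// While assembling on a cell, a face acts as a "boundary" face if it
// lies on the true boundary or if the neighbor belongs to a different
// subdomain.
for (const unsigned int f : cell->face_indices())
  {
    const bool synthetic_boundary =
      cell->at_boundary(f) ||
      (cell->neighbor(f)->material_id() != cell->material_id());
    if (synthetic_boundary)
      {
        // Apply Neumann face terms here, or record the face's DoFs so
        // Dirichlet values can be imposed via an AffineConstraints object.
      }
  }
```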



[deal.II] Suggestions for finding local minima on distributed, adaptively-refined meshes

2022-07-15 Thread Lucas Myers
Hi folks,

I suspect this is a simple task, but I can't find anything which does it 
outright. I'd like to find the local minima associated with some quantity 
derived from a finite element field. The features that I'm looking at are 
reasonably well-behaved, in that they're smooth except exactly at the 
minimum (where they're continuous but not differentiable), and of 
predictable size. 

For a uniform mesh I think a reasonable (naive) approach would be to 
calculate the derived quantity at the quadrature points of each cell, then 
go through each cell and check using the CellAccessor::neighbor() function 
(possibly in a recursive manner, until we reach some distance \delta x from 
the original cell) whether that cell contains the minimum of all its 
neighbors (or neighbors-of-neighbors or whatever). However, for an 
adaptively refined mesh, the cell returned by neighbor() might be further 
refined, in which case some of the child-cells border the original cell, 
and some do not. One related question is: how do I access just those child 
cells which border the original cell? What if those child cells also have 
children? And are there other complications of this sort which might arise 
from trying to iterate through neighbors of different refinement?
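[Editorial note: for the refinement question specifically, deal.II's CellAccessor already distinguishes these cases. A fragment, not a complete program; this is the usual pattern, but treat it as a sketch. Because a mesh can only differ by one refinement level across a face, the finer neighbors are always the active children reached one subface down:]

```
// Visit every neighbor that actually shares (part of) a face with
// `cell`, including finer neighbors on an adaptively refined mesh.
for (const unsigned int f : cell->face_indices())
  if (!cell->at_boundary(f))
    {
      if (cell->face(f)->has_children())
        {
          // The neighbor is more refined: visit only those children
          // that actually border `cell`, one per subface.
          for (unsigned int sf = 0; sf < cell->face(f)->n_children(); ++sf)
            {
              const auto fine_neighbor =
                cell->neighbor_child_on_subface(f, sf);
              // fine_neighbor is active and touches `cell` across face f.
            }
        }
      else
        {
          // Neighbor is on the same refinement level or coarser.
          const auto neighbor = cell->neighbor(f);
        }
    }
```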

As an additional complication, I'd like to be able to do this process on 
distributed meshes, and I suspect that the features may be cut up by the 
mesh partition given that there is a lot of refinement happening in those 
regions. Is there any way I can easily do some kind of point-to-point 
communications to check whether the local minimum has been split along a 
mesh partition? 

This seems like the kind of thing that is common enough to have already 
been implemented, probably in a more sophisticated way, but I'm not sure 
where to look. Any help is appreciated!

- Lucas



[deal.II] Deleted move constructors and inability to change objects post-initialization

2022-06-23 Thread Lucas Myers
Hi everyone,

Is there any way to initialize a Simulation object (akin to the StepX 
objects found in the tutorials) in a method and then return it via 
std::move()? Or, barring that, is there a way to reinitialize an FESystem 
member variable to have a different degree? 

In the former case I run into the problem that both the DoFHandler and 
SparseMatrix objects have no move constructor defined, and in the latter 
case that FESystem has no `reinit` function nor any way (that I know of) of 
reassigning the FESystem member variable to a new FESystem instance. Are 
there fundamental reasons why these operations are not implemented, or is 
this just an uncommon use case?
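One common workaround is to construct the object on the heap and hand back a smart pointer: moving a `std::unique_ptr` moves only the pointer, so the contained type never needs a move constructor. This is a plain-C++ illustration; the `Simulation` class below is a stand-in with an invented `degree` member, not the actual deal.II types:

```cpp
#include <cassert>
#include <memory>

// Stand-in for a class whose members (like DoFHandler or SparseMatrix)
// make it neither movable nor copyable.
class Simulation
{
public:
    explicit Simulation(unsigned int degree) : degree_(degree) {}
    Simulation(const Simulation &) = delete;
    Simulation(Simulation &&)      = delete;

    unsigned int degree() const { return degree_; }

private:
    unsigned int degree_;
};

// Factory: read the degree first (e.g. from a serialization archive), then
// construct the object in place on the heap. Returning std::unique_ptr moves
// only the pointer, so Simulation's deleted move constructor is never needed.
std::unique_ptr<Simulation> make_simulation(unsigned int deserialized_degree)
{
    return std::make_unique<Simulation>(deserialized_degree);
}
```

The same pattern covers the deserialization use case: deserialize the FE degree inside the factory, construct the `FESystem` with it, and never reassign it afterwards.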

For context I'm trying to deserialize a Simulation object in some method, 
but that either requires deserializing the degree and then constructing the 
object (in the method), or constructing the object outside the method, then 
changing the fe degree within the deserialize method.

Hope the question's clear, and thanks so much for the help,
Lucas



Re: [deal.II] Multigrid in 3D questions

2022-03-07 Thread Lucas Myers
Hi Bruno,

No, unfortunately I have not played with the parameter values. Hypre has a 
set of recommended parameter values for 3D problems, but I think some are 
not exposed to the deal.II interface so I have avoided digging into the 
source.

I chose PETSc somewhat arbitrarily, so if you are more familiar with the 
Trilinos equivalents I can switch to that. Glancing at the documentation it 
appears that all of the Trilinos parameters are manipulable via the deal.II 
interface using the Teuchos::ParameterList object -- do you know if this is 
true? If so, it might be a better option.

With regards to actually choosing parameters: Is there some (reasonably) 
systematic way to tune them? It looks like the ML documentation gives some 
tools to get started on profiling with the different parameters (section 
6.6) but I'm unsure whether the utility methods are necessarily accessible 
via the deal.II interface. Additionally, it looks like there are quite a 
few parameter values to choose. Do you just futz with each one individually 
and look at the speed? Or is there some particular order or hierarchy to 
choosing the parameters? Is there some manner in which I can analyze my 
actual problem to optimize some choices? And finally, do you have 
recommendations for 3D problems rather than 2D?
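For what it's worth, the ML/AMG parameters that deal.II exposes directly live in `TrilinosWrappers::PreconditionAMG::AdditionalData`. A sketch of the knobs to try first is below; this is a fragment (it assumes an assembled `system_matrix` and a `dof_handler` already exist), the specific values are guesses to be tuned, and I haven't verified the field list against every deal.II version:

```cpp
#include <deal.II/lac/trilinos_precondition.h>

dealii::TrilinosWrappers::PreconditionAMG                 preconditioner;
dealii::TrilinosWrappers::PreconditionAMG::AdditionalData data;

data.elliptic              = true;  // the problem is elliptic(ish)
data.n_cycles              = 1;     // V-cycles per preconditioner application
data.smoother_sweeps       = 2;     // often worth increasing in 3D
data.aggregation_threshold = 0.02;  // larger values coarsen more aggressively;
                                    // values around 0.02-0.05 are commonly
                                    // suggested for 3D
data.output_details        = true;  // let ML report what it is doing

// For a vector-valued problem the near-null space (constant modes) matters:
// dealii::DoFTools::extract_constant_modes(dof_handler,
//                                          dealii::ComponentMask(),
//                                          data.constant_modes);

preconditioner.initialize(system_matrix, data);
```

Timing each linear solve while varying one parameter at a time (smoother sweeps, then aggregation threshold, then cycle count) is the closest thing to a systematic tuning procedure I know of.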

Thanks so much for the help

- Lucas
On Friday, March 4, 2022 at 4:59:22 PM UTC-6 blais...@gmail.com wrote:

> Maybe I can add something there.
> Although it's clearly not ideal to use the AMG preconditioner on the whole 
> matrix, it can work quite well sometimes.
> Have you played around with the smoother and the coarsener you use or are 
> you using the default values? I have found that you need to tune the AMG 
> preconditioner parameters a bit to get decent performance. My experience is limited 
> to the Trilinos AMG preconditioner (and not the MueLU one), but if you 
> define your constant modes correctly and use adequate smoother and 
> coarsener, it's really not all that bad.
> Feel free to reach out if you feel I could help you with this.
> Best
> Bruno
>
> On Friday, March 4, 2022 at 4:13:51 p.m. UTC-5 Wolfgang Bangerth wrote:
>
>>
>> Lucas, 
>> it's hard to tell why this might be so slow. In general, I *think* what 
>> you are doing is apply the AMG to the entire, coupled system? That is 
>> unusual, though maybe not strictly wrong. In practice, however, we 
>> almost always apply multigrid preconditioners only to the elliptic 
>> blocks of coupled systems. You can see that in the preconditioner that 
>> is discussed in the "Possibilities for extensions" section of step-22, 
>> where we need to solve an elliptic problem as part of a preconditioner 
>> for Stokes, and this is the approach that is also taken in a number of 
>> other related programs (31, 32, 56). The experience of the community is 
>> that applying multigrid (geometric or algebraic) to the entire coupled 
>> problem is just fraught with so many problems that it is hard to get 
>> right and harder yet to efficiently implement. 
>>
>> Best 
>> W. 
>>
>>
>> On 3/3/22 10:22, Lucas Myers wrote: 
>> > 
>> > Hi folks, 
>> > 
>> > I'm trying to solve a 3D problem in parallel using an AMG 
>> > preconditioner, but the performance is bad. I'm wondering if I can get 
>> > some advice from someone who has experience choosing and tuning 
>> > multigrid preconditioners, particularly for problems in 3D. 
>> > 
>> > For context: 
>> > 
>> > * The problem is vector-valued (5 components) and elliptic(ish) 
>> > * I use GMRES w/BoomerAMG preconditioner for asymmetric matrices, 
>> > everything is default 
>> > * In 2D, scaling is good even down to 5,000 DoFs per processor, and 
>> > number of iterations is independent of problem size 
>> > * In 3D the scaling is bad. Adding more processors after about 50,000 
>> > DoFs/processor actually slows the program down and sometimes gives 
>> > memory errors. 
>> > * Taking away the preconditioner in 3D gives a ~20x speedup at 16 
>> > processors, and strong scaling is linear. However, the number of 
>> > iterations increases with problem size. 
>> > 
>> > Given all that, I have some questions: 
>> > 
>> > 1. Why might it be that the memory and wall-time scaling is so bad in 
>> 3D? 
>> > 2. Are there any examples lying around deal.II of folks using multigrid 
>> > for 3D problems in parallel? All the tutorials that I looked at were 
>> > in 2D, including the 2 examples used in the distributed computing 
>> paper. 
>> > 3. Might there be an easy 

[deal.II] Multigrid in 3D questions

2022-03-03 Thread Lucas Myers
Hi folks,

I'm trying to solve a 3D problem in parallel using an AMG preconditioner, 
but the performance is bad. I'm wondering if I can get some advice from 
someone who has experience choosing and tuning multigrid preconditioners, 
particularly for problems in 3D.

For context:

   - The problem is vector-valued (5 components) and elliptic(ish)
   - I use GMRES w/BoomerAMG preconditioner for asymmetric matrices, 
   everything is default
   - In 2D, scaling is good even down to 5,000 DoFs per processor, and 
   number of iterations is independent of problem size
   - In 3D the scaling is bad. Adding more processors after about 50,000 
   DoFs/processor actually slows the program down and sometimes gives memory 
   errors.
   - Taking away the preconditioner in 3D gives a ~20x speedup at 16 
   processors, and strong scaling is linear. However, the number of iterations 
   increases with problem size.
   
Given all that, I have some questions:

   1. Why might it be that the memory and wall-time scaling is so bad in 3D?
   2. Are there any examples lying around deal.II of folks using multigrid 
   for 3D problems in parallel? All the tutorials that I looked at were in 2D, 
   including the 2 examples used in the distributed computing paper.
   3. Might there be an easy way to fix it while still using BoomerAMG? I 
   know the Hypre documentation gives some parameter recommendations, but 
   I'm not so sure (1) how to set those 
   options via deal.II (I think they are not available via the AdditionalData 
   interface), or (2) whether those will work. Does anyone have experience 
   with this?
   4. Might the Trilinos AMG preconditioner work any better for this 
   problem by default? And if not, is there a systematic way to tune it 
   (particularly using a deal.II interface) to work better for a 3D problem?
   5. Might the in-house GMG methods work better? And if so, do the 
   matrix-free methods stand a chance of performing better even if I have to 
   use some complicated functions in the matrix assembly (for my weak form I 
   have a 5-component nonlinear function of my solution vector which has to be 
   inverted via Newton's method across my domain).

If more context on the particular problem would be helpful I can certainly 
give details. Mostly I'm looking for intuition, general suggestions, or 
pointers to good references. Any help is much appreciated.

Kind regards,
Lucas



Re: [deal.II] Re: get_function_values giving different results depending on the number of MPI ranks

2022-01-31 Thread Lucas Myers
Done. Issue is documented here 
<https://github.com/dealii/dealii/issues/13314#issue-1119883623>, for 
anyone wandering the mailing list.

- Lucas

On Monday, January 31, 2022 at 10:16:42 AM UTC-6 Wolfgang Bangerth wrote:

> On 1/28/22 7:50 PM, Lucas Myers wrote:
> > **
> > 
> > Whoops, this one was indeed a silly error! The problem had nothing to do 
> with 
> > the `get_function_values` function. Rather, it only showed up there 
> because 
> > that is the only place that I use ghost elements. These weren't being 
> updated 
> > because (as clearly stated in the ghost vector documentation 
> > <https://www.dealii.org/current/doxygen/deal.II/DEALGlossary.html#GlossGhostedVector>)
> > one should not write to a ghosted vector, except by a copy assignment 
> from a 
> > completely distributed (non-ghost) vector. I fixed the problem by 
> copying the 
> > `locally_relevant_solution` vector into a 
> `completely_distributed_solution` 
> > vector, adding the update there, and then copying back over to the 
> > `locally_relevant_solution` vector.
> > 
> > As an aside, is there a way to throw a warning/error if one tries to 
> write to 
> > a ghosted vector with anything except a completely distributed vector? 
> If 
> > possible, it might save someone some frustration...
>
> Lucas -- I'm glad to hear you found the error. Reading through your email 
> when 
> it came in first, I was thinking of exactly the situation you ultimately 
> found 
> to be responsible: ghost values not properly updated.
>
> As for the warning/error: Yes, there should ideally be one. Do you think 
> you 
> can come up with as short a program that illustrates the mistake and that 
> should, but does not, produce an error? Then create a github issue that 
> documents this? Thanks in advance!
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



[deal.II] Re: get_function_values giving different results depending on the number of MPI ranks

2022-01-28 Thread Lucas Myers
Whoops, this one was indeed a silly error! The problem had nothing to do 
with the `get_function_values` function. Rather, it only showed up there 
because that is the only place that I use ghost elements. These weren't 
being updated because (as clearly stated in the ghost vector documentation 
<https://www.dealii.org/current/doxygen/deal.II/DEALGlossary.html#GlossGhostedVector>)
 
one should not write to a ghosted vector, except by a copy assignment from 
a completely distributed (non-ghost) vector. I fixed the problem by copying 
the `locally_relevant_solution` vector into a 
`completely_distributed_solution` vector, adding the update there, and then 
copying back over to the `locally_relevant_solution` vector.
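For readers who land here later, the read-modify-write pattern described above looks roughly like the fragment below (assuming PETSc wrapper vectors initialized as in the attached example; `completely_distributed_solution` and `completely_distributed_update` are illustrative names, and this exact snippet has not been compiled):

```cpp
// Ghosted vectors are read-only: route every write through a fully
// distributed (non-ghosted) vector, then copy back. The copy back is what
// refreshes the ghost entries on every rank.
completely_distributed_solution = locally_relevant_solution;      // drop ghosts
completely_distributed_solution.add(1.0,
                                    completely_distributed_update); // Newton step
locally_relevant_solution = completely_distributed_solution;      // ghosts updated
```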

As an aside, is there a way to throw a warning/error if one tries to write 
to a ghosted vector with anything except a completely distributed vector? 
If possible, it might save someone some frustration...

In any case, so sorry for all the spam.

Kind regards,
Lucas

On Wednesday, January 26, 2022 at 4:31:21 PM UTC-6 Lucas Myers wrote:

> Hi everyone,
>
> I've been trying to write a parallel distributed solution to a nonlinear 
> PDE using Newton's method. It runs fine on one rank, but when I use two the 
> solver fails to converge on the second iteration. I've tracked the 
> difference (between one rank and two) down to the `get_function_values` 
> function evaluating the previous solution at quadrature points differently. 
> This seems to happen only near the boundaries of regions owned by different 
> ranks. I figure this is some sort of synchronization error, but I cannot 
> figure out how to fix it. I believe that I am calling `compress` when 
> necessary, and call `update_ghost_values` prior to calling 
> `get_function_values`. However, I am still getting the difference.
>
> To help with debugging, I've tried to write a program which mimics the 
> behavior in as few lines as possible (~150 with comments and whitespace). I 
> attach it here in case it helps identify my (hopefully silly) error. 
>
> Any help is much appreciated.
>
> Kind regards,
> Lucas
>



[deal.II] get_function_values giving different results depending on the number of MPI ranks

2022-01-26 Thread Lucas Myers
Hi everyone,

I've been trying to write a parallel distributed solution to a nonlinear 
PDE using Newton's method. It runs fine on one rank, but when I use two the 
solver fails to converge on the second iteration. I've tracked the 
difference (between one rank and two) down to the `get_function_values` 
function evaluating the previous solution at quadrature points differently. 
This seems to happen only near the boundaries of regions owned by different 
ranks. I figure this is some sort of synchronization error, but I cannot 
figure out how to fix it. I believe that I am calling `compress` when 
necessary, and call `update_ghost_values` prior to calling 
`get_function_values`. However, I am still getting the difference.

To help with debugging, I've tried to write a program which mimics the 
behavior in as few lines as possible (~150 with comments and whitespace). I 
attach it here in case it helps identify my (hopefully silly) error. 

Any help is much appreciated.

Kind regards,
Lucas

// NOTE: the original header names were lost when this attachment was
// archived; the include list below is reconstructed from the code's usage.
#include <deal.II/base/function.h>
#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/lac/generic_linear_algebra.h>
#include <deal.II/lac/vector.h>

namespace LA = dealii::LinearAlgebraPETSc;

#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/numerics/vector_tools.h>

#include <iostream>
#include <string>

const int dim = 2;
const int vec_dim = 2;

const double left = -1.0;
const double right = 1.0;
const int num_refines = 2;

template <int dim>
class StepFunction : public dealii::Function<dim>
{
public:
    // vec_dim components are needed so that interpolation against the
    // vector-valued FESystem below is well-defined
    StepFunction() : dealii::Function<dim>(vec_dim) {}

    virtual void vector_value(const dealii::Point<dim> &p,
                              dealii::Vector<double> &values) const override
    {
        if (p[1] > 0)
        {
            for (unsigned int i = 0; i < values.size(); i++)
                values[i] = static_cast<double>(i);
        }
        else
        {
            for (unsigned int i = 0; i < values.size(); i++)
                values[i] = -static_cast<double>(i);
        }
    }
};


int main(int ac, char *av[])
{
dealii::Utilities::MPI::MPI_InitFinalize mpi_initialization(ac, av, 1);

// Set up mpi objects, output rank and number of ranks for process
MPI_Comm mpi_communicator(MPI_COMM_WORLD);
int rank = 0;
int num_ranks = 0;
MPI_Comm_rank(mpi_communicator, &rank);
MPI_Comm_size(mpi_communicator, &num_ranks);

std::cout << "rank: " << std::to_string(rank) << "\n";
std::cout << "num_ranks: " << std::to_string(num_ranks) << "\n\n";

// Declare other necessary finite element objects
dealii::parallel::distributed::Triangulation<dim>
triangulation(mpi_communicator,
  typename dealii::Triangulation<dim>::MeshSmoothing(
  dealii::Triangulation<dim>::smoothing_on_refinement |
  dealii::Triangulation<dim>::smoothing_on_coarsening));
dealii::DoFHandler<dim> dof_handler(triangulation);
dealii::FESystem<dim> fe(dealii::FE_Q<dim>(1), vec_dim);

dealii::IndexSet locally_owned_dofs;
dealii::IndexSet locally_relevant_dofs;
LA::MPI::Vector locally_relevant_solution;
LA::MPI::Vector locally_relevant_update;
LA::MPI::Vector locally_owned_solution;

// Generate grid
dealii::GridGenerator::hyper_cube(triangulation, left, right);
triangulation.refine_global(num_refines);

// Initialize dofs and vectors
dof_handler.distribute_dofs(fe);
locally_owned_dofs = dof_handler.locally_owned_dofs();
dealii::DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);
locally_relevant_solution.reinit(locally_owned_dofs,
 locally_relevant_dofs,
 mpi_communicator);
locally_relevant_update.reinit(locally_owned_dofs,
   locally_relevant_dofs,
   mpi_communicator);
locally_owned_solution.reinit(locally_owned_dofs,
  mpi_communicator);

// Interpolate everything with StepFunction function
dealii::VectorTools::interpolate(dof_handler,
 StepFunction<dim>(),
 locally_owned_solution);
// compress here because interpolating inserts elements into the vector
locally_owned_solution.compress(dealii::VectorOperation::insert);


locally_relevant_solution = locally_owned_solution;
locally_relevant_update = locally_owned_solution;

// compress here because elements have been assigned to solution & update
locally_relevant_solution.compress(dealii::VectorOperation::insert);

Re: [deal.II] Re: Error loading petsc shared library in step-40 when distributing cell matrix and rhs separately

2022-01-20 Thread Lucas Myers
Matthias,

Under /usr the find command doesn't get anything. In fact, if I search for 
"*petsc*" it only finds the dealii header files.

In $HOME it finds:

/home/lucas/Application-Data/petsc/src/binding/petsc4py/src/libpetsc4py.h
/home/lucas/Application-Data/petsc/src/binding/petsc4py/src/libpetsc4py.c
/home/lucas/Application-Data/petsc/src/binding/petsc4py/src/libpetsc4py
/home/lucas/Application-Data/petsc/src/binding/petsc4py/src/libpetsc4py/libpetsc4py.pyx
/home/lucas/Application-Data/petsc/x86_64/lib/libpetsc.so
/home/lucas/Application-Data/petsc/x86_64/lib/libpetsc.so.3.16
/home/lucas/Application-Data/petsc/x86_64/lib/libpetsc.so.3.16.3

and then an old petsc directory in my trash (~/.local/share/Trash/). I 
deleted that and recompiled + relinked the file and the problem persists.

Wolfgang,

I would want to do that, but I cannot seem to find a second installation of 
PETSc. Additionally, I'm almost certain that I only have one installation 
of p4est and Hypre. Further, my LD_LIBRARY_PATH is empty except for the 
singular path to PETSc that Bruno recommended. So in the case that I want 
to clear everything I'm not sure how to reliably uninstall all of these 
packages, including any sort of cache that cmake contains on where to find 
these shared libraries.
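For anyone untangling a similar duplicate-library mystery, a few standard Linux commands narrow down which PETSc a binary will actually load (illustrative fragment; `./step-40` and the paths are examples, not the exact files from this thread):

```shell
# Which shared libraries does the binary request, and where do they resolve?
ldd ./step-40 | grep -i petsc

# Is a search path baked into the executable or into libdeal_II.so?
readelf -d ./step-40 | grep -E 'RPATH|RUNPATH'
readelf -d /usr/local/lib/libdeal_II.so | grep -E 'RPATH|RUNPATH'

# What does the dynamic linker cache know about? (populated by ldconfig)
ldconfig -p | grep -i petsc

# Trace library resolution at startup (very verbose):
LD_DEBUG=libs ./step-40 2>&1 | grep -i petsc
```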

So sorry for the trouble everyone, and thanks for the help.

- Lucas
On Thursday, January 20, 2022 at 2:56:25 PM UTC-6 Matthias Maier wrote:

> Can you find any petsc library under /usr or /usr/local ?
>
> $ find /usr -iname "libpetsc*"
>
> Similarly, would you mind checking how many petsc installation you have
> in your home directory?
>
> $ find $HOME -iname "libpetsc*"
>
> Best,
> Matthias
>



[deal.II] Re: Error loading petsc shared library in step-40 when distributing cell matrix and rhs separately

2022-01-20 Thread Lucas Myers
Bruno,

Yes, I'm using bash -- just the default terminal on Ubuntu 20.04. 

I've attached the output from `make VERBOSE=1` in the 
`gcc_linker_debug.txt`. 

As an added twist, when I try to compile with clang 13 (just step-40, not 
deal.II -- that's still compiled with gcc) it tries to look for a bunch of 
libraries twice: hypre, p4est, and petsc. I've also attached the output of 
`make VERBOSE=1` from the clang compilation. Note that there's a bunch of 
extraneous project stuff unrelated to the bug in both these make files.

Thanks for being patient.

- Lucas

On Thursday, January 20, 2022 at 2:20:12 PM UTC-6 bruno.t...@gmail.com 
wrote:

> Lucas,
>
> On Thursday, January 20, 2022 at 3:04:04 PM UTC-5 lucasm...@gmail.com 
> wrote:
>
>> Bruno,
>>
>> You're saying I just need to export the path before running the 
>> executable? If that is the case, it is still not working for whatever 
>> reason (still giving the linker error).
>>
>  
> Are you using bash or do you use a different shell?
>  
>
>>
>> As for two PETSc installations: I vaguely recall trying to install it to 
>> root and then quickly trying to undo it after reading the warning in the 
>> PETSc manual. I'm not sure if that caused the problem, but do you have any 
>> tips on sussing it out? That is, how do I find where in the compilation 
>> process the executable is being linked to PETSc twice? And how does cmake 
>> know that there could be two (possibly deleted) versions of PETSc? It seems 
>> like the deal.II installation knows where to find the PETSc shared library 
>> given the output of ldd (and the variables set in the detailed.log).
>>
>  
> Honestly I am not sure how to debug that. Usually it's pretty obvious that 
> there are two different versions of the library but here it looks like the 
> same library is present twice. CMake has a bunch of default paths that it 
> will search. If you install it to root, it's probably in a path that CMake 
> will search. Try `make VERBOSE=1` maybe we'll see something useful.
>
> Best,
>
> Bruno
>  
>
>> A final bit of information: the linker error only happens when I call the 
>> `distribute_local_to_global` function that has calling signature 
>> `(local_vector, local_dof_indices, global_vector)`. Both the mixed and 
>> matrix equivalent work fine, even when called together. 
>>
>> Let me know if you have any ideas about where to get started.
>>
>> - Lucas
>>
>> On Thursday, January 20, 2022 at 12:55:27 PM UTC-6 bruno.t...@gmail.com 
>> wrote:
>>
>>> Lucas,
>>>
>>> There is something strange. When you separate the function somehow your 
>>> executable depends on PETSc twice and for one of these two dependencies it 
>>> cannot find PETSc. For the other PETSc object, the code is using  
>>> /home/lucas/Application-Data/petsc/x86_64/lib/libpetsc.so.3.16 
>>> This usually happens if you have two different versions of a library 
>>> installed and CMake tries to use both. Did you install PETSc twice? Since 
>>> you only care about debugging the rest of your code, an easy and dirty way 
>>> to make the code run is to type in your terminal 
>>>
>>> export LD_LIBRARY_PATH=/home/lucas/Application-Data/petsc/x86_64/lib:$LD_LIBRARY_PATH
>>>
>>> This doesn't fix the problem that there are two PETSc objects but at 
>>> least it should run.
>>>
>>> Best,
>>>
>>> Bruno
>>>
>>> On Thursday, January 20, 2022 at 1:10:06 PM UTC-5 lucasm...@gmail.com 
>>> wrote:
>>>
 Bruno,

 Thanks for the quick response!

 Yes, I do get the error at runtime and I am running from the terminal. 
 The call in both cases is `mpirun -np 6 ./install/bin/examples/step-40`. 
 The output from ldd is in the attached file.

 - Lucas

 On Thursday, January 20, 2022 at 7:54:21 AM UTC-6 bruno.t...@gmail.com 
 wrote:

> Lucas,
>
> Just to make sure, you get the error message at runtime right? Are you 
> using the same terminal in both cases? Can you show the output of ldd on 
> your executable and libdeal_II.so.
>
> Best,
>
> Bruno
>
> On Wednesday, January 19, 2022 at 3:27:40 PM UTC-5 lucasm...@gmail.com 
> wrote:
>
>> Hi folks,
>>
>> For debugging purposes, I need to use the 
>> `distribute_local_to_global` function which can distribute vectors and 
>> matrices separately (this guy in the docs).
>> However, when I try to use this one instead of the function which 
>> simultaneously distributes a vector and a matrix (this guy in the docs) 
>> it throws an `error while loading shared libraries: libpetsc.so.3.16: 
>> cannot open shared object file: No such file or directory` error. 
>>
>> For an example of the behavior, I just replaced the following 

[deal.II] Re: Error loading petsc shared library in step-40 when distributing cell matrix and rhs separately

2022-01-20 Thread Lucas Myers
Bruno,

You're saying I just need to export the path before running the executable? 
If that is the case, it is still not working for whatever reason (still 
giving the linker error).

As for two PETSc installations: I vaguely recall trying to install it to 
root and then quickly trying to undo it after reading the warning in the 
PETSc manual. I'm not sure if that caused the problem, but do you have any 
tips on sussing it out? That is, how do I find where in the compilation 
process the executable is being linked to PETSc twice? And how does cmake 
know that there could be two (possibly deleted) versions of PETSc? It seems 
like the deal.II installation knows where to find the PETSc shared library 
given the output of ldd (and the variables set in the detailed.log).

A final bit of information: the linker error only happens when I call the 
`distribute_local_to_global` function that has calling signature 
`(local_vector, local_dof_indices, global_vector)`. Both the mixed and 
matrix equivalent work fine, even when called together. 

Let me know if you have any ideas about where to get started.

- Lucas

On Thursday, January 20, 2022 at 12:55:27 PM UTC-6 bruno.t...@gmail.com 
wrote:

> Lucas,
>
> There is something strange. When you separate the function somehow your 
> executable depends on PETSc twice and for one of these two dependencies it 
> cannot find PETSc. For the other PETSc object, the code is using  
> /home/lucas/Application-Data/petsc/x86_64/lib/libpetsc.so.3.16 
> This usually happens if you have two different versions of a library 
> installed and CMake tries to use both. Did you install PETSc twice? Since 
> you only care about debugging the rest of your code, an easy and dirty way 
> to make the code run is to type in your terminal 
>
> export LD_LIBRARY_PATH=/home/lucas/Application-Data/petsc/x86_64/lib:$LD_LIBRARY_PATH
>
> This doesn't fix the problem that there are two PETSc objects but at least 
> it should run.
>
> Best,
>
> Bruno
>
> On Thursday, January 20, 2022 at 1:10:06 PM UTC-5 lucasm...@gmail.com 
> wrote:
>
>> Bruno,
>>
>> Thanks for the quick response!
>>
>> Yes, I do get the error at runtime and I am running from the terminal. 
>> The call in both cases is `mpirun -np 6 ./install/bin/examples/step-40`. 
>> The output from ldd is in the attached file.
>>
>> - Lucas
>>
>> On Thursday, January 20, 2022 at 7:54:21 AM UTC-6 bruno.t...@gmail.com 
>> wrote:
>>
>>> Lucas,
>>>
>>> Just to make sure, you get the error message at runtime right? Are you 
>>> using the same terminal in both cases? Can you show the output of ldd on 
>>> your executable and libdeal_II.so.
>>>
>>> Best,
>>>
>>> Bruno
>>>
>>> On Wednesday, January 19, 2022 at 3:27:40 PM UTC-5 lucasm...@gmail.com 
>>> wrote:
>>>
 Hi folks,

 For debugging purposes, I need to use the `distribute_local_to_global` 
 function which can distribute vectors and matrices separately (this 
 guy in the docs).
 However, when I try to use this one instead of the function which 
 simultaneously distributes a vector and a matrix (this guy in the docs) 
 it throws an `error while loading shared libraries: libpetsc.so.3.16: 
 cannot open shared object file: No such file or directory` error. 

 For an example of the behavior, I just replaced the following on line 
 416 of step-40:

 // constraints.distribute_local_to_global(cell_matrix,
 //                                        cell_rhs,
 //                                        local_dof_indices,
 //                                        system_matrix,
 //                                        system_rhs);
 constraints.distribute_local_to_global(cell_matrix,
                                        local_dof_indices,
                                        system_matrix);
 constraints.distribute_local_to_global(cell_rhs,
                                        local_dof_indices,
                                        system_rhs);

 This is sort of bizarre to me because (1) it doesn't throw the error in 
 the original tutorial case and (2) CMake ought to take care of the PETSc 
 dependency through the deal.II target. I've attached my detailed.log file 
 in case it's relevant.

 Any help figuring out the problem is appreciated!

 - Lucas

>>>


[deal.II] Re: Error loading petsc shared library in step-40 when distributing cell matrix and rhs separately

2022-01-20 Thread Lucas Myers
Bruno,

Thanks for the quick response!

Yes, I do get the error at runtime and I am running from the terminal. The 
call in both cases is `mpirun -np 6 ./install/bin/examples/step-40`. The 
output from ldd is in the attached file.

- Lucas

On Thursday, January 20, 2022 at 7:54:21 AM UTC-6 bruno.t...@gmail.com 
wrote:

> Lucas,
>
> Just to make sure, you get the error message at runtime right? Are you 
> using the same terminal in both cases? Can you show the output of ldd on 
> your executable and liddealii.so.
>
> Best,
>
> Bruno
>
> On Wednesday, January 19, 2022 at 3:27:40 PM UTC-5 lucasm...@gmail.com 
> wrote:
>
>> Hi folks,
>>
>> For debugging purposes, I need to use the `distribute_local_to_global` 
>> function which can distribute vectors and matrices separately (this guy 
>> in the docs).
>>
>> However, when I try to use this one instead of the function which 
>> simultaneously distributes a vector and a matrix (this guy in the docs), 
>> it throws an `error while loading shared libraries: libpetsc.so.3.16: 
>> cannot open shared object file: No such file or directory` error. 
>>
>> For an example of the behavior, I just replaced the following on line 416 
>> of step-40:
>>
>> // constraints.distribute_local_to_global(cell_matrix,
>> //                                        cell_rhs,
>> //                                        local_dof_indices,
>> //                                        system_matrix,
>> //                                        system_rhs);
>> constraints.distribute_local_to_global(cell_matrix,
>>                                        local_dof_indices,
>>                                        system_matrix);
>> constraints.distribute_local_to_global(cell_rhs,
>>                                        local_dof_indices,
>>                                        system_rhs);
>>
>> This is sort of bizarre to me because (1) it doesn't throw the error in 
>> the original tutorial case and (2) CMake ought to take care of the PETSc 
>> dependency through the deal.II target. I've attached my detailed.log file 
>> in case it's relevant.
>>
>> Any help figuring out the problem is appreciated!
>>
>> - Lucas
>>
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/270907ce-6faf-43cb-981c-03334ec0816dn%40googlegroups.com.
Output from ldd for separate `distribute_local_to_global` calls:

linux-vdso.so.1 (0x7ffc97aa7000)
libdeal_II.so.10.0.0-pre => /usr/local/lib/libdeal_II.so.10.0.0-pre 
(0x7fbd25f0c000)
libmpi_cxx.so.40 => /lib/x86_64-linux-gnu/libmpi_cxx.so.40 (0x7fbd25eee000)
libopen-pal.so.40 => /lib/x86_64-linux-gnu/libopen-pal.so.40 
(0x7fbd25e4)
libpetsc.so.3.16 => not found
libmpi.so.40 => /lib/x86_64-linux-gnu/libmpi.so.40 (0x7fbd25d1b000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x7fbd25b39000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7fbd259e8000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x7fbd259cd000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7fbd257db000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x7fbd257b8000)
libp4est.so.0 => /home/lucas/Application-Data/p4est/FAST/lib/libp4est.so.0 
(0x7fbd256f4000)
libsc.so.0 => /home/lucas/Application-Data/p4est/FAST/lib/libsc.so.0 
(0x7fbd256bd000)
libcudart.so.11.0 => /usr/local/cuda/targets/x86_64-linux/lib/libcudart.so.11.0 
(0x7fbd2541b000)
libcusparse.so.11 => /usr/local/cuda/targets/x86_64-linux/lib/libcusparse.so.11 
(0x7fbd1703)
libcusolver.so.11 => /usr/local/cuda/targets/x86_64-linux/lib/libcusolver.so.11 
(0x7fbd09d67000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x7fbd09d4b000)
libboost_iostreams.so.1.76.0 => /usr/local/lib/libboost_iostreams.so.1.76.0 
(0x7fbd09d33000)
libboost_serialization.so.1.76.0 => 
/usr/local/lib/libboost_serialization.so.1.76.0 (0x7fbd09cec000)
libpetsc.so.3.16 => 
/home/lucas/Application-Data/petsc/x86_64/lib/libpetsc.so.3.16 
(0x7fbd07336000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7fbd0733)
liblapack.so.3 => /lib/x86_64-linux-gnu/liblapack.so.3 

[deal.II] Error loading petsc shared library in step-40 when distributing cell matrix and rhs separately

2022-01-19 Thread Lucas Myers
Hi folks,

For debugging purposes, I need to use the `distribute_local_to_global` 
function which can distribute vectors and matrices separately (this guy in 
the docs).

However, when I try to use this one instead of the function which 
simultaneously distributes a vector and a matrix (this guy in the docs), 
it throws an `error while loading shared libraries: libpetsc.so.3.16: 
cannot open shared object file: No such file or directory` error. 

For an example of the behavior, I just replaced the following on line 416 
of step-40:

// constraints.distribute_local_to_global(cell_matrix,
//                                        cell_rhs,
//                                        local_dof_indices,
//                                        system_matrix,
//                                        system_rhs);
constraints.distribute_local_to_global(cell_matrix,
                                       local_dof_indices,
                                       system_matrix);
constraints.distribute_local_to_global(cell_rhs,
                                       local_dof_indices,
                                       system_rhs);

This is sort of bizarre to me because (1) it doesn't throw the error in the 
original tutorial case and (2) CMake ought to take care of the PETSc 
dependency through the deal.II target. I've attached my detailed.log file 
in case it's relevant.

Any help figuring out the problem is appreciated!

- Lucas

###
#
#  deal.II configuration:
#CMAKE_BUILD_TYPE:   DebugRelease
#BUILD_SHARED_LIBS:  ON
#CMAKE_INSTALL_PREFIX:   /usr/local
#CMAKE_SOURCE_DIR:   /home/lucas/Application-Data/dealii
#(version 10.0.0-pre, shortrev 24e77b90af)
#CMAKE_BINARY_DIR:   /home/lucas/Application-Data/dealii/build
#CMAKE_CXX_COMPILER: GNU 9.3.0 on platform Linux x86_64
#/usr/bin/c++
#C++ language standard:  C++14
#CMAKE_C_COMPILER:   /usr/bin/cc
#CMAKE_GENERATOR:Unix Makefiles
#
#  Base configuration (prior to feature configuration):
#DEAL_II_CXX_FLAGS:-fPIC -Wall -Wextra -Wmissing-braces -Woverloaded-virtual -Wpointer-arith -Wsign-compare -Wsuggest-override -Wswitch -Wsynth -Wwrite-strings -Wno-placement-new -Wno-deprecated-declarations -Wno-literal-suffix -Wno-psabi -Wno-class-memaccess -fopenmp-simd
#DEAL_II_CXX_FLAGS_RELEASE:-O2 -funroll-loops -funroll-all-loops -fstrict-aliasing -Wno-unused-local-typedefs
#DEAL_II_CXX_FLAGS_DEBUG:  -O0 -ggdb -Wa,--compress-debug-sections
#DEAL_II_LINKER_FLAGS: -Wl,--as-needed -rdynamic -fuse-ld=gold -lpthread -fopenmp
#DEAL_II_LINKER_FLAGS_RELEASE: 
#DEAL_II_LINKER_FLAGS_DEBUG:   -ggdb
#DEAL_II_DEFINITIONS:  
#DEAL_II_DEFINITIONS_RELEASE:  NDEBUG
#DEAL_II_DEFINITIONS_DEBUG:DEBUG
#DEAL_II_USER_DEFINITIONS: 
#DEAL_II_USER_DEFINITIONS_REL: NDEBUG
#DEAL_II_USER_DEFINITIONS_DEB: DEBUG
#DEAL_II_INCLUDE_DIRS  
#DEAL_II_USER_INCLUDE_DIRS:
#DEAL_II_BUNDLED_INCLUDE_DIRS: 
#DEAL_II_LIBRARIES:
#DEAL_II_LIBRARIES_RELEASE:
#DEAL_II_LIBRARIES_DEBUG:  
#DEAL_II_VECTORIZATION_WIDTH_IN_BITS: 0
#DEAL_II_HAVE_CXX14
#
#  Configured Features (DEAL_II_ALLOW_BUNDLED = ON, DEAL_II_ALLOW_AUTODETECTION = ON):
#  ( DEAL_II_WITH_64BIT_INDICES = OFF )
#  ( DEAL_II_WITH_ADOLC = OFF )
#  ( DEAL_II_WITH_ARBORX = OFF )
#  ( DEAL_II_WITH_ARPACK = OFF )
#  ( DEAL_II_WITH_ASSIMP = OFF )
#DEAL_II_WITH_BOOST set up with external dependencies
#BOOST_VERSION = 1.76.0
#BOOST_CXX_FLAGS = -Wno-unused-local-typedefs
#BOOST_DEFINITIONS = BOOST_NO_AUTO_PTR
#BOOST_USER_DEFINITIONS = BOOST_NO_AUTO_PTR
#BOOST_INCLUDE_DIRS = /usr/local/include
#BOOST_USER_INCLUDE_DIRS = 

[deal.II] Re: Boost version problem

2021-10-27 Thread Lucas Myers
Hi folks,

I figured out the problem and am posting it here for sake of posterity:

When one installs dealii with a bundled library (e.g. Boost), the header 
files of the bundled version of that library are copied to 
`${DEAL_II_PATH}/${DEAL_II_INCLUDE_RELDIR}/deal.II/bundled`. This include 
directory is also passed in as a flag to the compiler by cmake whenever 
find_package() is used to find dealii. Additionally, it's not possible (so 
far as I can tell) to pass in the standard system include directories (e.g. 
/usr/local/include) and have cmake generate an include compiler flag which 
takes precedence over other include compiler flags. That is, if I use 
`include_directories("/usr/local/include")`, cmake does not actually 
include a `-I/usr/local/include` compiler flag in the Makefile. Hence, the 
bundled version will always take precedence over the external version, even 
if dealii is reconfigured and reinstalled with external dependencies. 

My solution was just to manually remove the `boost` folder from 
`${DEAL_II_PATH}/${DEAL_II_INCLUDE_RELDIR}/deal.II/bundled`. Not sure if 
there is a way to fix this during re-installation of dealii in case someone 
wants to reinstall with a different configuration...

- Lucas

On Monday, October 25, 2021 at 10:33:07 AM UTC-5 Lucas Myers wrote:

> Hi Jean-Paul,
>
> Sorry to bring this back up again, but I'm running into it after 
> installing dealii on a new machine -- last time the solution was just to 
> delete the entire dealii installation and do a clean reinstall. That's 
> annoying, so I want to avoid if possible.
>
> To reiterate: I first installed dealii with the packaged Boost version 
> (1.70). Then, I installed the newest workable Boost version (1.76) and 
> reinstalled. The installation didn't give any errors, and the detailed.log 
> file says that dealii was set up with external dependencies, using Boost 
> version (1.76). However, when I try to build anything using the standard 
> cmake files included in the examples, the deal.II/base/config.h file 
> complains that I have the wrong boost version because when dealii includes 
> boost, the include directory points to the packaged boost instead of the 
> external boost.
>
> Following your suggestion, I tried to pass in the -DBOOST_DIR flag when 
> cmaking the examples, but it says that the variable is unused -- I suspect 
> you meant to pass that into cmake when building and configuring the initial 
> dealii installation. I've also attached the requested files, but I think 
> that the initial installation worked so I'm not sure what you'll find there.
>
> One question to maybe get started: in the context of building one of the 
> examples (say step-1), how does dealii know where to look for boost when it 
> calls the deal.II/base/config.h file? I reckon it tells cmake where to look 
> in the "FINDPACKAGE", "DEAL_II_INITIALIZE_CACHED_VARIABLES", or the 
> "DEAL_II_INVOKE_AUTOPILOT" macros, but I don't know enough about cmake to 
> know where to look.
>
> Let me know if you know the answer to my question, or a workaround, and 
> thanks so much for any help.
>
> Kind regards,
> Lucas
>
> On Thursday, August 19, 2021 at 1:55:48 PM UTC-5 Jean-Paul Pelteret wrote:
>
>> Hi Lucas,
>>
>> The documentation that describes how to configure CMake options for 
>> external libraries is here: 
>> https://dealii.org/developer/users/cmake_dealii.html#configureext . You 
>> should just need to pass cmake the flag -DBOOST_DIR= 
>> (with the correct path added, of course. From your first post, I think that 
>> this would just be /usr). To be safe, you probably want to delete the 
>> CMakeCache.txt before reconfiguring. You should also take a look at this 
>> section: 
>> https://dealii.org/developer/users/cmake_dealii.html#buildinformation . 
>> There you can see that when an external version of boost is detected and 
>> can be used, this status should be reflected at the end of the 
>> configuration process, e.g.
>>
>> # Configured Features (DEAL_II_ALLOW_BUNDLED = ON, 
>> DEAL_II_ALLOW_AUTODETECTION = ON): 
>> # DEAL_II_WITH_BOOST set up with external dependencies 
>> # [...] 
>>
>> If you pass the boost directory to CMake and then don't see a message 
>> like that, then it doesn't necessarily mean that the boost path wasn't 
>> found -- it could also mean that this specific installation of boost was 
>> not found to be usable (i.e. it didn't pass the checks that are executed 
>> during the configuration process). To understand why that could be, you'd 
>> have to provide us with the detailed.log file, along with two log files 
>> that CMake generates, namely /CMakeFiles/CMakeError.log and 
>>

[deal.II] dealii not compiling because of boost geometry problem

2021-10-20 Thread Lucas Myers
Hi everyone,

I'm trying to install dealii on Ubuntu 20.04. For this, I manually 
installed the latest Boost library first (Boost 1.77.0) as described here.
It looks like dealii has found this external boost library, and is set up 
to install dealii with external dependencies on boost. However, when I try 
to install dealii I get a big error message with the following highlighted 
in red:

"error: static assertion failed: This operation is not or not yet 
implemented."
"BOOST_GEOMETRY_STATIC_ASSERT" 
"error: ‘apply’ is not a member of 
‘boost::geometry::dispatch::distance, dealii::Point<1>, 
boost::geometry::strategies::distance::detail::comparable
 
>, boost::geometry::point_tag, boost::geometry::point_tag, 
boost::geometry::strategy_tag_distance_point_point, false>’"

Installing with the bundled boost library seems to work, but I would really 
rather have the external boost library to use.

Any suggestions on what the problem might be? Could it be that I installed 
boost incorrectly? Is there any other information that I can give to help 
with debugging?

Kind regards,
Lucas



[deal.II] Re: Boost version problem

2021-08-19 Thread Lucas Myers
Hi folks,

This problem is back, and I'm not sure what to do about it. To solve the 
problem, I updated boost to the newest version (1.76.0), and then 
reinstalled the newest dealii release (9.3.1). I configured it to use 
external dependency for boost so that it configured and installed with 
boost version 1.76.0. It was working for a while for some reason, but then 
it broke again after updating cmake to the newest version. 

I reinstalled the newest dealii version with external dependency for boost, 
and it gives the error that it was giving before. Again, I think the 
problem is that on line 506 of /include/deal.II/base/config.h there is a 
#include which is pointing to the bundled boost version 
as opposed to the externally installed one. How do I get this + all other 
include directives to point to the proper boost installation? I think this 
is a cmake problem, but I do not know how to tell the compiler where the 
proper boost installation is.

Any help is appreciated.

Kind regards,
Lucas

On Tuesday, August 10, 2021 at 4:49:28 PM UTC-5 Lucas Myers wrote:

> Update: the problem is that, when I installed the newest version of 
> dealii, the boost version that I had separately installed was 1.67.0. 
> Instead of using the version bundled in dealii (version 1.70.0), dealii 
> configured with the separately installed version. However, whenever it does 
> the compilation check, it #includes the bundled version of boost. I'm not 
> sure why it does that, because when I run a loose source file (e.g. 
> `boost_test.cpp` in my home directory) with boost included, it uses the 
> separately installed version. 
>
> In any case, I think the fix will be to uninstall boost, and then 
> re-install dealii so that it uses its bundled version. Just wanted to make 
> a note about it because I think this is unintended behavior. 
>
> - Lucas
>
> On Tuesday, August 10, 2021 at 3:46:17 PM UTC-5 Lucas Myers wrote:
>
>> Hi folks,
>>
>> I'm getting the error "The version number of boost that you are compiling 
>> with does not match the version number of boost found during deal.II's 
>> configuration step." when I try to compile the first tutorial program. It 
>> is thrown at /usr/local/include/deal.II/base/config.h:508:17
>>
>> This is confusing to me, because when I go into 
>> /usr/include/boost/version.hpp, it states that my BOOST_VERSION is 106700. 
>> Further, in /usr/local/include/deal.II/base/config.h, it says that my 
>> DEAL_II_BOOST_VERSION_MAJOR is 1, DEAL_II_BOOST_VERSION_MINOR is 67, and 
>> DEAL_II_BOOST_VERSION_SUBMINOR is 0 which makes my 
>> DEAL_II_BOOST_VERSION_GTE 106700, the same as what boost says it ought to 
>> be.
>>
>> For context, I had installed dealii version 9.2.0 earlier, but had 
>> problems when I tried to update it to include CUDA (I think there was some 
>> API change with CUDA 11). I then installed dealii version 9.3.1 (without 
>> doing anything to the other version). Now none of the example files in 
>> version 9.3.1 will compile.
>>
>> Any help is appreciated, thanks so much
>>
>> Kind regards,
>> Lucas
>>
>



[deal.II] Re: Boost version problem

2021-08-10 Thread Lucas Myers
Update: the problem is that, when I installed the newest version of dealii, 
the boost version that I had separately installed was 1.67.0. Instead of 
using the version bundled in dealii (version 1.70.0), dealii configured 
with the separately installed version. However, whenever it does the 
compilation check, it #includes the bundled version of boost. I'm not sure 
why it does that, because when I run a loose source file (e.g. 
`boost_test.cpp` in my home directory) with boost included, it uses the 
separately installed version. 

In any case, I think the fix will be to uninstall boost, and then 
re-install dealii so that it uses its bundled version. Just wanted to make 
a note about it because I think this is unintended behavior. 

- Lucas

On Tuesday, August 10, 2021 at 3:46:17 PM UTC-5 Lucas Myers wrote:

> Hi folks,
>
> I'm getting the error "The version number of boost that you are compiling 
> with does not match the version number of boost found during deal.II's 
> configuration step." when I try to compile the first tutorial program. It 
> is thrown at /usr/local/include/deal.II/base/config.h:508:17
>
> This is confusing to me, because when I go into 
> /usr/include/boost/version.hpp, it states that my BOOST_VERSION is 106700. 
> Further, in /usr/local/include/deal.II/base/config.h, it says that my 
> DEAL_II_BOOST_VERSION_MAJOR is 1, DEAL_II_BOOST_VERSION_MINOR is 67, and 
> DEAL_II_BOOST_VERSION_SUBMINOR is 0 which makes my 
> DEAL_II_BOOST_VERSION_GTE 106700, the same as what boost says it ought to 
> be.
>
> For context, I had installed dealii version 9.2.0 earlier, but had 
> problems when I tried to update it to include CUDA (I think there was some 
> API change with CUDA 11). I then installed dealii version 9.3.1 (without 
> doing anything to the other version). Now none of the example files in 
> version 9.3.1 will compile.
>
> Any help is appreciated, thanks so much
>
> Kind regards,
> Lucas
>



[deal.II] Boost version problem

2021-08-10 Thread Lucas Myers
Hi folks,

I'm getting the error "The version number of boost that you are compiling 
with does not match the version number of boost found during deal.II's 
configuration step." when I try to compile the first tutorial program. It 
is thrown at /usr/local/include/deal.II/base/config.h:508:17

This is confusing to me, because when I go into 
/usr/include/boost/version.hpp, it states that my BOOST_VERSION is 106700. 
Further, in /usr/local/include/deal.II/base/config.h, it says that my 
DEAL_II_BOOST_VERSION_MAJOR is 1, DEAL_II_BOOST_VERSION_MINOR is 67, and 
DEAL_II_BOOST_VERSION_SUBMINOR is 0 which makes my 
DEAL_II_BOOST_VERSION_GTE 106700, the same as what boost says it ought to 
be.
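[As an aside on the numbers above: Boost's version.hpp defines BOOST_VERSION 
as major * 100000 + minor * 100 + patch, so 106700 is exactly 1.67.0. A tiny 
sketch of the decoding in plain C++ (no Boost needed; the function name is 
illustrative, not part of Boost or deal.II):]

```cpp
#include <tuple>

// Decode a BOOST_VERSION-style integer into (major, minor, patch).
// Boost defines BOOST_VERSION as major * 100000 + minor * 100 + patch,
// e.g. 106700 -> 1.67.0 and 107600 -> 1.76.0.
constexpr std::tuple<int, int, int> decode_boost_version(int v)
{
  return {v / 100000, (v / 100) % 1000, v % 100};
}
```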

For context, I had installed dealii version 9.2.0 earlier, but had problems 
when I tried to update it to include CUDA (I think there was some API 
change with CUDA 11). I then installed dealii version 9.3.1 (without doing 
anything to the other version). Now none of the example files in version 
9.3.1 will compile.

Any help is appreciated, thanks so much

Kind regards,
Lucas



[deal.II] Copying field evaluated at quadrature points to device via CUDA

2021-08-09 Thread Lucas Myers
Hi folks,

As I understand it, the standard way of looping through a finite element 
problem is to iterate through (1) each cell in the mesh, (2) each 
quadrature point in the cell, (3) the ith shape functions, and (4) the jth 
shape functions. Given that, would it be possible to copy a finite element 
field evaluated at all of the quadrature points into an (n_cells x 
n_q_points_per_cell) array/vector/matrix/whatever? 
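[For what it's worth, one way such a flat buffer could be laid out, as a 
minimal plain-C++ sketch (this is not deal.II API; `flat_index` and 
`make_buffer` are illustrative names): map the (cell, quadrature point) pair 
row-major, so the same index arithmetic can be reused after copying the raw 
array to the device:]

```cpp
#include <cstddef>
#include <vector>

// Map a (cell, quadrature point) pair to a unique row-major flat index.
// Using the same mapping on host and device keeps indexing consistent.
inline std::size_t flat_index(std::size_t cell_index,
                              std::size_t q_index,
                              std::size_t n_q_points_per_cell)
{
  return cell_index * n_q_points_per_cell + q_index;
}

// Fill a flat buffer of size n_cells * n_q_points_per_cell by visiting
// cells and quadrature points in a fixed order. As long as a second pass
// visits them in the same order (iteration over an unchanged mesh is
// deterministic), the processed values can be indexed the same way.
std::vector<double> make_buffer(std::size_t n_cells,
                                std::size_t n_q_points_per_cell)
{
  std::vector<double> values(n_cells * n_q_points_per_cell, 0.0);
  for (std::size_t c = 0; c < n_cells; ++c)
    for (std::size_t q = 0; q < n_q_points_per_cell; ++q)
      values[flat_index(c, q, n_q_points_per_cell)] =
        static_cast<double>(c) + 0.1 * static_cast<double>(q);
  return values;
}
```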

My purpose for doing this would be to copy this array over to a GPU device 
in order to perform a (somewhat complicated) function inversion in parallel 
on all of the quadrature point values, and then pass the inverted points 
back to the host. I have two questions regarding this business:

   1. If, after I have iterated through all cells/quadrature points and 
   processed my data, I iterate *again* through the mesh/q_points/shape 
   functions in order to fill in my matrix as in a typical FE problem, will 
   the iteration go back through in the same way as the first time? That is, 
   will I be able to index the processed data in the same way that I indexed 
   the initial data that I copied over? This might be obvious, but I ask 
   because the cell iterators are a seemingly complicated object that I don't 
   understand fully yet.
   2. Is there a better way to do this process using already-written dealii 
   objects (e.g. in the CUDAWrappers namespace)? I ask because I'm having a 
   little bit of trouble parsing the step-64 comments, particularly regarding 
   the management of different CUDA memory locations (e.g. how often does each 
   object access global memory?). 

In any case, thank you for any help and for any general tips on the problem.

Kind regards,
Lucas



[deal.II] Re: Able to use umfpack in debug configuration, but not in release configuration

2021-07-21 Thread Lucas Myers
Hi folks,

Excuse me, the problem was on my end -- when using cmake4eclipse, you have 
to specify that you want to use the "CMAKE_EXPORT_COMPILE_COMMANDS parser" 
as the provider in the "Preprocessor Includes, Paths, Macros, etc." tab for 
your project for *both* the Release and Debug configurations.

Given that the problem was a silly mistake on my end, ought I delete the 
forum post?

Thanks,
Lucas

On Wednesday, July 21, 2021 at 3:57:12 PM UTC-5 Lucas Myers wrote:

> Hi all,
>
> I've used the #include <deal.II/lac/sparse_direct.h> command in a program 
> in order to use UMFPack. I'm able to compile and run everything fine in 
> Debug mode (provided by Eclipse via the cmake4eclipse add-on). However, 
> when I try to switch to Release mode to optimize the program, it gives me 
> the following warning:
>
> /usr/local/include/deal.II/lac/sparse_direct.h:31:12: fatal error: 
> umfpack.h: No such file or directory
>  #  include <umfpack.h>
> ^~~
>
> This is confusing to me because, given that I can use UMFPack in Debug 
> mode, I must've installed it properly during the dealii installation.
>
> Let me know if you have any tips on how to fix, or if you need more 
> information.
>
> Thanks so much for any help,
> Lucas
>



[deal.II] Able to use umfpack in debug configuration, but not in release configuration

2021-07-21 Thread Lucas Myers
Hi all,

I've used the #include <deal.II/lac/sparse_direct.h> command in a program 
in order to use UMFPack. I'm able to compile and run everything fine in 
Debug mode (provided by Eclipse via the cmake4eclipse add-on). However, 
when I try to switch to Release mode to optimize the program, it gives me 
the following warning:

/usr/local/include/deal.II/lac/sparse_direct.h:31:12: fatal error: 
umfpack.h: No such file or directory
 #  include <umfpack.h>
^~~

This is confusing to me because, given that I can use UMFPack in Debug 
mode, I must've installed it properly during the dealii installation.

Let me know if you have any tips on how to fix, or if you need more 
information.

Thanks so much for any help,
Lucas



Re: [deal.II] Standard way of applying a function to a finite element vector

2021-07-13 Thread Lucas Myers
Hi Wolfgang,

Thanks so much for the response, this is exactly what I was looking for! A 
few follow-up questions:

   1. For the projection operation that you describe, are you specifically 
   talking about this implementation 
   <https://www.dealii.org/current/doxygen/deal.II/namespaceVectorTools.html#a51a13e948e1296dbfdfc297167e4fd5a> 
   of the projection() function (the [5/7] implementation)? I ask because it 
   seems like the others require a Function object for which we need the 
   FEFieldFunction class, rather than just something that eats a quadrature & 
   cell index and gives back the value at the given quadrature point. If so, 
   am I reading it correctly that this is only for a scalar-valued finite 
   element function? How would we do this with vector-valued functions?
   2. With respect to accuracy (that is, neglecting speed of computation or 
   ease of implementation) are either of these methods better? My guess would 
   be that projection is more (the most?) accurate in an L2 sense, while 
   interpolation is more accurate in some other sense.
   3. Is this kind of operation described in any of the tutorial steps, do 
   you know?
   4. Is this how folks typically do post-processing? Really what I would 
   like to do is extract eigenvectors corresponding to the largest eigenvalues 
   from my 3x3 tensor solution data and plot those as a vector field. Would 
   this be something that is typically done with scripts in a visualization 
   software (e.g. Paraview, Visit), or is that done in the C++ source? Just 
   trying to get a sense for best practices...
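[On point 4, a minimal sketch of the eigenvector extraction in plain C++ 
(this is not deal.II API, and deal.II's SymmetricTensor documentation may 
already provide eigenvalue/eigenvector helpers worth checking first): power 
iteration converges to the eigenvector of the dominant, i.e. 
largest-magnitude, eigenvalue of a symmetric 3x3 tensor:]

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// Power iteration: repeatedly apply A and renormalize. For a symmetric
// tensor this converges to the unit eigenvector of the eigenvalue of
// largest magnitude (assuming the start vector is not orthogonal to it).
Vec3 largest_eigenvector(const Mat3 &A, int iterations = 100)
{
  Vec3 v{1.0, 1.0, 1.0}; // arbitrary starting vector
  for (int it = 0; it < iterations; ++it)
  {
    Vec3 w{0.0, 0.0, 0.0};
    for (int i = 0; i < 3; ++i)
      for (int j = 0; j < 3; ++j)
        w[i] += A[i][j] * v[j];
    const double norm = std::sqrt(w[0]*w[0] + w[1]*w[1] + w[2]*w[2]);
    for (int i = 0; i < 3; ++i)
      v[i] = w[i] / norm;
  }
  return v; // unit eigenvector for the dominant eigenvalue
}
```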

Again, thanks so much for the help!

- Lucas

On Monday, July 12, 2021 at 11:03:02 PM UTC-5 Wolfgang Bangerth wrote:

> On 7/12/21 5:56 PM, Lucas Myers wrote:
> > 
> > I'd like to apply a method to a finite element function (that is, 
> something 
> > that looks like the solution vectors) in order to get another finite 
> element 
> > function. For example, suppose my finite element function is a symmetric 
> > rank-2 tensor in type, and I would like a finite element function which 
> is a 
> > vector of its eigenvalues. I haven't been able to find anything in the 
> > tutorials which do this (though maybe I'm skimming too much).
> > 
> > My naive approach would be to (i) construct a dealii Function which 
> evaluates 
> > some finite element vector (function?) at a given point and then (ii) 
> use the 
> > Interpolation or Projection functions from the VectorTools namespace. 
> However, 
> > I am not sure whether Interpolation or Projection would be a better 
> idea. 
> > Further, I don't know how to evaluate a finite element function at an 
> > arbitrary point -- only at quadrature points via the FEValues class. 
> Also I 
> > would be unsure how to build that into a Function class.
> > 
> > Let me know if there's a better way to do this -- it seems like 
> something 
> > that's probably already implemented.
>
> Lucas,
> do I understand right that what you want is something like
> v_h = f(u_h)
> where f(u) is some function of the solution? If so, you're right that that 
> cannot hold pointwise if u_h is a finite element function and v_h should 
> also 
> be a finite element function unless f has a special structure and the 
> spaces 
> for u_h and v_h match. So you need
> v_h = I_h f(u_h) -- interpolation
> or
> v_h = Pi_h f(u_h) -- projection
> Which of these two you choose typically depends on the application you 
> have. 
> The projection only requires you to apply f at quadrature points, and so 
> that's going to be easy. For I_h, you need to evaluate f(u) at the *nodal* 
> points of the space for v_h. That too is not complicated but you probably 
> want 
> to write the interpolation function yourself, after looking at how this is 
> implemented in VectorTools::interpolate.
>
> The cheap way to do the latter (but also the slow way) is to use the 
> FEFieldFunction class to evaluate u_h at arbitrary points. You'd then 
> write 
> your own class derived from Function that gets a point as argument, calls 
> FEFieldFunction to evaluate u_h(x), computes f(u_h(x)), and returns that 
> value. This kind of function you can then provide to 
> VectorTools::interpolate. 
> The problem is that FEFieldFunction is expensive. If you don't care about 
> speed, this is the way to go. If you do for whatever reason care about 
> speed, 
> go with the approach in the previous paragraph :-)
>
> Best
> W.
>
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>


[deal.II] Standard way of applying a function to a finite element vector

2021-07-12 Thread Lucas Myers
Hi all,

I'd like to apply a function to a finite element function (that is, something 
that looks like the solution vectors) in order to get another finite 
element function. For example, suppose my finite element function is 
symmetric rank-2 tensor valued, and I would like a finite element function 
which is a vector of its eigenvalues. I haven't been able to find anything 
in the tutorials that does this (though maybe I'm skimming too much). 

My naive approach would be to (i) construct a dealii Function which 
evaluates some finite element vector (function?) at a given point and then 
(ii) use the Interpolation or Projection functions from the VectorTools 
namespace. However, I am not sure whether Interpolation or Projection would 
be a better idea. Further, I don't know how to evaluate a finite element 
function at an arbitrary point -- only at quadrature points via the 
FEValues class. Also I would be unsure how to build that into a Function 
class.

Let me know if there's a better way to do this -- it seems like something 
that's probably already implemented.

Thanks so much,
Lucas



Re: [deal.II] 5x5 Matrix Inversion Problem

2021-06-22 Thread Lucas Myers
Hi David,

Thanks so much, this is exactly what I was looking for! For some reason I 
was trying to use the square bracket notation. Is there any reason why the 
(i, j) syntax is used rather than [i, j]? Is it just a restriction of C++?

A few follow-up questions about the LAPACKFullMatrix class: is there 
anywhere that I can find more information about the class besides the 
reference documentation here 
<https://www.dealii.org/current/doxygen/deal.II/namespaceLAPACKSupport.html>? 
In particular, I'm wondering how to deal with the state attribute. It looks 
like the compute_lu_factorization() method sets the state to lu 
(reasonable), but I'd like to repopulate the same matrix (meaning it has to 
be in the matrix state, I think). For now I'm using reinit() because that 
seems to change the state back to matrix, but I worry that it's expensive 
to keep reinitializing the matrix every Newton's method iteration. Is there 
a way to avoid reinitializing it? And is it bad programming practice to 
*not* reinitialize it?
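The repopulate-then-refactorize pattern being described might look roughly like the following (an untested sketch against the documented LAPACKFullMatrix interface; whether the per-iteration reinit() is actually a bottleneck is exactly the open question here):

```cpp
// Sketch only -- assumes deal.II headers and the documented
// LAPACKFullMatrix API; each Newton iteration refills and
// refactorizes the same object.
#include <deal.II/lac/lapack_full_matrix.h>
#include <deal.II/lac/vector.h>

void newton_step(dealii::LAPACKFullMatrix<double>& J,
                 dealii::Vector<double>&           rhs)
{
    J.reinit(5, 5);  // returns the object to the "matrix" state
    // ... fill J(i, j) and rhs(i) for the current iterate ...
    J.compute_lu_factorization();  // state becomes "lu"
    J.solve(rhs);                  // rhs is overwritten with the solution
}
```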

Again, thanks so much for the help.

Kind regards,
Lucas



On Monday, June 21, 2021 at 1:08:50 PM UTC-5 Wells, David wrote:

> Hi Lucas,
>
> You should be able to use syntax like
>
> A(i, j) = 5;
>
> to assign values to a FullMatrix - it's what we do in most of the tutorial 
> programs. TableIndices are somewhat hard to use (you are right) - they are 
> intended to be used as a generic interface to Table<N, T> where N is not 
> known until compile time. Once you populate the matrix, you should be able 
> to do what you mentioned before with the Gauss-Jordan algorithm to get the 
> inverse.
>
> That being said - do you need the inverse itself, or the action of the 
> inverse? If you only ever need the action of the inverse a much better 
> approach would be to use LAPACKFullMatrix and factorize it (via 
> LAPACKFullMatrix::compute_lu_factorization() and LAPACKFullMatrix::solve()) 
> since there is a significant amount of roundoff error accumulation in 
> computing inverses.
>
> The Solver classes are more than you need here - they are optimized for 
> large sparse problems, not small dense ones.
>
> Best,
> David Wells
>
>
> ------
> *From:* dea...@googlegroups.com  on behalf of 
> Lucas Myers 
> *Sent:* Monday, June 21, 2021 1:50 PM
> *To:* deal.II User Group 
> *Subject:* [deal.II] 5x5 Matrix Inversion Problem 
>  
> Hi Everyone, 
>
> I have a problem wherein one of the terms in my PDE is a 5-component 
> vector function \Lambda of the 5-component solution vector Q. However, I 
> only have an explicit formula for Q as a function of \Lambda and so I am 
> trying to write a numerical inversion class (as a `dealii::Function`) to 
> get \Lambda as a function of Q. 
>
> I am using Newton's method, and so I need to solve a 5x5 matrix equation. 
> I was planning on just inverting the Jacobian which I have implemented as a 
> `dealii::Tensor`, but it looks like the `invert` function is only 
> implemented up to dimension 3 (?). It looks like the `dealii::FullMatrix` 
> class can do this via a Gauss Jordan method, but I was having some trouble 
> using the `FullMatrix` class directly. In particular, to read and write 
> individual elements I need to create a `dealii::TableIndices` object, which 
> seems tedious and makes me think it's meant to be used internally. 
>
> Given all this, I am looking for advice at any level: is this a reasonable 
> way to numerically get \Lambda as a function of Q? Is it recommended to use 
> a `dealii::Tensor` object as opposed to a `dealii::FullMatrix` object? And 
> if I am supposed to use the latter, is there a nicer way to index it than 
> with the `dealii::TableIndices` object? Lastly is it a good idea to be 
> using an `invert` operation to solve the Newton's method linear equation? I 
> know that the `Eigen` package recommends against that here 
> <https://eigen.tuxfamily.org/dox/group__TutorialLinearAlgebra.html>, and 
> instead recommends one of its solution methods. I haven't seen any of those 
> here besides one of the `Solver` classes, but those seem like a lot of 
> overhead for a 5x5 matrix (I'm going to have to do this operation for every 
> degree of freedom, so I want to minimize overhead). I also don't really 
> want to use the Eigen package, just because (1) I want to avoid unnecessary 
> value-copying and (2) I'd like to not have to try to interface multiple 
> packages.
>
> Any help is appreciated, and let me know if you need more information 
> about the problem.
>
> Kind regards,
> Lucas
>

[deal.II] 5x5 Matrix Inversion Problem

2021-06-21 Thread Lucas Myers
Hi Everyone,

I have a problem wherein one of the terms in my PDE is a 5-component vector 
function \Lambda of the 5-component solution vector Q. However, I only have 
an explicit formula for Q as a function of \Lambda and so I am trying to 
write a numerical inversion class (as a `dealii::Function`) to get \Lambda 
as a function of Q. 

I am using Newton's method, and so I need to solve a 5x5 matrix equation. I 
was planning on just inverting the Jacobian which I have implemented as a 
`dealii::Tensor`, but it looks like the `invert` function is only 
implemented up to dimension 3 (?). It looks like the `dealii::FullMatrix` 
class can do this via a Gauss Jordan method, but I was having some trouble 
using the `FullMatrix` class directly. In particular, to read and write 
individual elements I need to create a `dealii::TableIndices` object, which 
seems tedious and makes me think it's meant to be used internally. 

Given all this, I am looking for advice at any level: is this a reasonable 
way to numerically get \Lambda as a function of Q? Is it recommended to use 
a `dealii::Tensor` object as opposed to a `dealii::FullMatrix` object? And 
if I am supposed to use the latter, is there a nicer way to index it than 
with the `dealii::TableIndices` object? Lastly is it a good idea to be 
using an `invert` operation to solve the Newton's method linear equation? I 
know that the `Eigen` package recommends against that here 
<https://eigen.tuxfamily.org/dox/group__TutorialLinearAlgebra.html>, and 
instead recommends one of its solution methods. I haven't seen any of those 
here besides one of the `Solver` classes, but those seem like a lot of 
overhead for a 5x5 matrix (I'm going to have to do this operation for every 
degree of freedom, so I want to minimize overhead). I also don't really 
want to use the Eigen package, just because (1) I want to avoid unnecessary 
value-copying and (2) I'd like to not have to try to interface multiple 
packages.

Any help is appreciated, and let me know if you need more information about 
the problem.

Kind regards,
Lucas



[deal.II] Compiling a dealii project without cmake

2021-06-17 Thread Lucas Myers
Hi Everyone,

This is sort of a silly question, but how feasible is it to compile and 
link a dealii project without cmake? I'm pretty new to using C++ so I'm 
trying to get a handle on the most basic way to do things.

I wrote a little file `dealii_function_test.cpp` to test out the Function, 
Point, Vector, and FullMatrix classes, as well as a simple accompanying 
Makefile. The former just creates objects of those types and then prints 
the contents. The latter searches through `src` and `include` directories 
in my own project directory. I can post the text to these files if it would 
be helpful.

When I try to compile I get a lot of "undefined reference to" errors 
pointing towards dealii functions. I can post a file with the error output 
if that would be helpful. Do I just need to include a reference to the 
relevant dealii object files? And if so, what do those look like/where 
might they be? In my `/usr/local/lib` directory I have some dealii 
libraries (named some combination of libdeal_II, so, g, and 9.2.0). Do I 
need to explicitly reference those? And if so, why does make not know to 
look there?
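Undefined references at link time usually mean the deal.II shared library is missing from the link line; make does not search /usr/local/lib unless told to. A minimal sketch of the kind of flags involved, assuming the /usr/local installation and version 9.2.0 mentioned above (the supported route remains the cmake-based setup):

```shell
# Hypothetical compile/link line -- adjust the include path, library path,
# and C++ standard to match your installation (deal.II 9.2.0 assumed).
g++ -std=c++17 -I/usr/local/include dealii_function_test.cpp \
    -L/usr/local/lib -ldeal_II -Wl,-rpath,/usr/local/lib \
    -o dealii_function_test
```

Here `-ldeal_II` picks up libdeal_II.so (the `.g` variants are the debug builds), and the rpath entry lets the binary find the library at run time.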

For reference I'm running WSL2 on Windows 10 with a Debian distro. 

Thanks so much,
Lucas
