Re: [deal.II] Does deal.II support importing grids from an OpenFOAM solver

2023-07-15 Thread vachan potluri
I don't know of a direct technique, but you can first use foamToVTK to
convert the OpenFOAM mesh to VTK and then import the VTK file into deal.II.
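
Roughly, the import side could look like this (a minimal sketch, assuming
a 3d mesh converted by foamToVTK into a hypothetical file mesh.vtk):

#include <deal.II/grid/grid_in.h>
#include <deal.II/grid/tria.h>

#include <fstream>

using namespace dealii;

int main()
{
  Triangulation<3> tria;
  GridIn<3> grid_in;
  grid_in.attach_triangulation(tria);
  std::ifstream in("mesh.vtk"); // output of foamToVTK (name is hypothetical)
  grid_in.read_vtk(in);
}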

Vachan

On Sat, 15 Jul, 2023, 16:47 ztdep...@gmail.com wrote:

> I want to couple the mesh adaptivity of deal.II with an OpenFOAM solver.
> Could you please give me some suggestions?
>



Re: [deal.II] Calculate cell center distance from a boundary

2022-02-17 Thread vachan potluri
>
> Hello,
> Here is the PR https://github.com/dealii/dealii/pull/13394 that adds the
> new wrappers for ArborX
> Best,
> Bruno


Thank you very much! Didn't expect it to come so fast :) !



Re: [deal.II] Calculate cell center distance from a boundary

2022-02-10 Thread vachan potluri
Dear Dr. Wolfgang,

Thank you very much for the kind reply.


> This is a very difficult operation to do even in sequential computations
> unless you have an analytical description of the boundary. That's because
> in
> principle you would have to compare the current position with all points
> (or
> at least all vertices) on the boundary -- which is very expensive to do if
> you
> had to do it for more than just a few points. The situation does not get
> better if you are in parallel, because then you don't even know all of the
> boundary vertices.

Completely realise and agree.

The only efficient way to do this sort of operation is to solve an eikonal
> equation in which the solution function equals the distance to the
> boundary.
> You can't solve it exactly, and so whatever distance you get is going to
> be a
> finite-dimensional approximation of the exact distance function.

I have got a basic idea of the equation from Wikipedia. Can you kindly also
point me to any references which describe numerical solution techniques for
it? I have no background in mathematics, so I have difficulty understanding
high-level content.
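
For reference, the equation being discussed, with d the wall distance, is

\[
  |\nabla d(\mathbf{x})| = 1 \ \text{in } \Omega,
  \qquad
  d(\mathbf{x}) = 0 \ \text{on } \Gamma_{\text{wall}},
\]

whose solution d equals the distance to the wall boundary.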

Thanks again!

On Fri, 11 Feb 2022 at 10:22, Wolfgang Bangerth wrote:

> On 2/10/22 21:24, vachanpo...@gmail.com wrote:
> >
> > Is there a way to get the shortest distance from cell center to a given
> > boundary in p::d::Triangulation? What I really want is the wall normal
> > distance. Any other suggestions are also welcome.
>
> This is a very difficult operation to do even in sequential computations
> unless you have an analytical description of the boundary. That's because
> in
> principle you would have to compare the current position with all points
> (or
> at least all vertices) on the boundary -- which is very expensive to do if
> you
> had to do it for more than just a few points. The situation does not get
> better if you are in parallel, because then you don't even know all of the
> boundary vertices.
>
> The only efficient way to do this sort of operation is to solve an eikonal
> equation in which the solution function equals the distance to the
> boundary.
> You can't solve it exactly, and so whatever distance you get is going to
> be a
> finite-dimensional approximation of the exact distance function.
>
> Best
>   W.
>
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>



Re: [deal.II] Small suggestion to improve GridIn::read_unv()

2021-10-04 Thread vachan potluri
>
> Yes, this makes sense. A patch would be welcome!

Please have a look at pr12787.

It really is a complete nightmare and my preference would be if that
> file format was banned from existence by the QAnon high council. My second
> choice would be if people/Salome exported meshes in other, better
> described
> and easier to read formats.

I hope they realise this and make the necessary changes soon :).
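
For anyone skimming this thread, the "practically safe" skip that Wolfgang
describes below could look roughly like this (a sketch only, assuming the
-1 and 2411 tokens each appear alone, possibly whitespace-padded, on their
lines):

#include <istream>
#include <string>

// returns true if the stream is positioned just past a "2411" marker that
// immediately follows a lone "-1" line (not a *correct* UNV parse)
bool skip_to_section_2411(std::istream &in)
{
  auto trim = [](const std::string &s) {
    const auto b = s.find_first_not_of(" \t\r");
    const auto e = s.find_last_not_of(" \t\r");
    return (b == std::string::npos) ? std::string() : s.substr(b, e - b + 1);
  };

  std::string line, prev;
  while (std::getline(in, line))
    {
      const std::string cur = trim(line);
      if (prev == "-1" && cur == "2411")
        return true;
      prev = cur;
    }
  return false;
}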

On Thu, 30 Sept 2021 at 17:57, Wolfgang Bangerth wrote:

> On 9/30/21 6:02 AM, vachanpo...@gmail.com wrote:
> >
> > I found this thread which highlights one small issue in importing a unv
> file
> > (generated by
> > Salome): https://groups.google.com/g/dealii/c/iWnLv4v0V1M/m/38jm5Ck8WXYJ
> .
> >
> > If I understand correctly, dealii expects the file to start with "-1"
> and then
> > have the section "2411", instead of all the other stuff that Salome
> prints.
> > This issue still persists in dealii-9.3.0 (and Salome 9.7).
> >
> > Is there any specific reason why we simply can not ignore all the lines
> > between "-1" and "2411"?
> >
> > I am willing to submit a patch if the developers feel this not wrong and
> would
> > be useful.
>
> Yes, this makes sense. A patch would be welcome!
>
> (Fundamentally, the reason why UNV is such a terrible file format is
> because
> it is essentially a memory dump of a program built decades ago. It has a
> large
> number of "sections" -- 2411 being one of them -- each of which one has to
> understand because there are no begin- and end-markers for each section.
> You
> can't just read the file until you find the number 2411: if you wanted to
> do
> it right, you'd have to correctly read the previous section until it is
> over,
> at which point you can inspect the next number and see whether it is 2411,
> indicating the 2411 section. In practice, you are probably safe if you
> skip
> forward to a place where a new -1 alone on one line is followed by 2411
> alone
> on one line, but while this works, this is not the *correct* way to read
> these
> files. It really is a complete nightmare and my preference would be if
> that
> file format was banned from existence by the QAnon high council. My second
> choice would be if people/Salome exported meshes in other, better
> described
> and easier to read formats.)
>
> Best
>   W.
>
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>



Re: [deal.II] Use a coarse grid solution as initial condition for a finer grid

2021-08-03 Thread vachan potluri
Peter,

Thanks very much for taking time and providing the resources. I will go
through them and get back if I have any more questions.

I also have a simple related question. Can I skip all these steps if the
triangulation, FE and number of processors of the solution being
transferred are kept constant? This special case is like restarting a
completed simulation. In that case (triangulation, FE, and processors kept
constant), are the partitioning and dof numbering always going to be the
same? I am not doing any mesh refinement (either in dealii or outside).

If yes, then I suppose loading the solution transfer and deserializing it
will do the job without having to worry about "evaluating" the solution?
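
If it does, the restart path I have in mind would look roughly like this (a
sketch only, with placeholder names; assumes the triangulation is rebuilt
identically, and that the vector passed to prepare_for_serialization() is
ghosted while the one passed to deserialize() is fully distributed):

#include <deal.II/distributed/solution_transfer.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe.h>

using namespace dealii;

template <int dim, typename VectorType>
void checkpoint(const DoFHandler<dim> &dof_handler,
                parallel::distributed::Triangulation<dim> &tria,
                const VectorType &ghosted_solution)
{
  parallel::distributed::SolutionTransfer<dim, VectorType> st(dof_handler);
  st.prepare_for_serialization(ghosted_solution);
  tria.save("restart.mesh");
}

template <int dim, typename VectorType>
void restart(DoFHandler<dim> &dof_handler,
             parallel::distributed::Triangulation<dim> &tria,
             const FiniteElement<dim> &fe,
             VectorType &distributed_solution)
{
  tria.load("restart.mesh");
  dof_handler.distribute_dofs(fe);
  parallel::distributed::SolutionTransfer<dim, VectorType> st(dof_handler);
  st.deserialize(distributed_solution);
}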

On Tue, 3 Aug 2021 at 14:09, 'peterrum' via deal.II User Group <dealii@googlegroups.com> wrote:

> Hi Vachan,
>
> I don't think you need to use RPE directly. My guess is that you can use
> VectorTools::point_values(), which internally uses RPE (see
> https://github.com/dealii/dealii/blob/e413cbf8ac77606cf4a04e1e2fa75232c08533b4/include/deal.II/numerics/vector_tools_evaluate.h#L230-L343).
> Here you could collect the support points from the new DoFHandler, let
> VT::point_values() do the work and finally set the values at the support
> points.
>
> Further resources are:
> - the test folder
> https://github.com/dealii/dealii/tree/master/tests/remote_point_evaluation
> (in particular
> https://github.com/dealii/dealii/blob/master/tests/remote_point_evaluation/vector_tools_evaluate_at_points_01.cc;
> here, the solution is interpolated between non-matching grids)
> - the usage of RPE and VT::point_values() in DataOutResample (
> https://github.com/dealii/dealii/blob/master/source/numerics/data_out_resample.cc
> )
> - the usage in the two-phase solver adaflo (
> https://github.com/kronbichler/adaflo/blob/master/include/adaflo/sharp_interface_util.h
> )
>
> PM
>
> On Tuesday, 3 August 2021 at 09:03:02 UTC+2 vachanpo...@gmail.com wrote:
>
>> Dr. Wolfgang,
>>
>> Thank you for still taking interest in this thread :)
>>
>> This becomes a very difficult data transfer problem because you want to
>>> evaluate the solution at a point that the current process may know
>>> nothing
>>> about. In essence, you will have to ask all of the other processes about
>>> who
>>> owns a part of the mesh on which a given interpolation point is located,
>>> and
>>> then that process has to send you the value of the function at that
>>> point. You
>>> have to do that for all points. This is going to be an expensive
>>> operation.
>>> You might want to check out the Utilities::MPI::RemotePointEvaluation
>>> class in
>>> the latest (9.3) release for help with this.
>>
>> I understand that the approach I had mentioned is inefficient. I only
>> need to do this once, to start a simulation, so I thought it might be okay.
>>
>> I had a look at Utilities::MPI::RemotePointEvaluation. This is how I
>> think it can be used (correct me if wrong)
>>
>>1. Call reinit() with the dof locations, along with the corresponding
>>triangulation and mapping, where I want to evaluate FEFieldFunction (I
>>suppose the triangulation and mapping here are not of the 
>> FEFieldFunction?).
>>2. Call evaluate_and_process() by passing in the FEFieldFunction.
>>
>> I have a few questions.
>>
>>1. What is the "buffer" argument in evaluate_and_process()?
>>2. The documentation for this function says get_point_ptrs() must be
>>used to "process" the output in case the point-cell map is not one-one. I
>>will surely encounter such cases. How can I use the data returned by
>>get_point_ptrs() and how exactly should I "process" the output?
>>
>> I couldn't find this used in any examples. Any clarification would be
>> of great help.
>>
>> On Tue, 3 Aug 2021 at 04:47, Wolfgang Bangerth wrote:
>>
>>> On 7/5/21 11:04 PM, vachan potluri wrote:
>>> >
>>> > So is your fine mesh a refinement of the coarse one? If not, you
>>> may want to
>>> > look at FEFieldFunction.
>>> >
>>> > Yes, it is. But the "refinement" is done by the meshing
>>> software, outside
>>> > dealii. Is there any simplification possible in such a case?
>>>
>>> Not really. If deal.II has no knowledge about the relationship between
>>> cells
>>> on the two meshes, then it is in essence the interpolation from one
>>> unstructured grid to another unstructured grid.
>>>
>

Re: [deal.II] Use a coarse grid solution as initial condition for a finer grid

2021-08-03 Thread vachan potluri
Dr. Wolfgang,

Thank you for still taking interest in this thread :)

This becomes a very difficult data transfer problem because you want to
> evaluate the solution at a point that the current process may know nothing
> about. In essence, you will have to ask all of the other processes about
> who
> owns a part of the mesh on which a given interpolation point is located,
> and
> then that process has to send you the value of the function at that point.
> You
> have to do that for all points. This is going to be an expensive operation.
> You might want to check out the Utilities::MPI::RemotePointEvaluation
> class in
> the latest (9.3) release for help with this.

I understand that the approach I had mentioned is inefficient. I only need
to do this once, to start a simulation, so I thought it might be okay.

I had a look at Utilities::MPI::RemotePointEvaluation. This is how I think
it can be used (correct me if wrong); a rough sketch follows the list:

   1. Call reinit() with the dof locations, along with the corresponding
   triangulation and mapping, where I want to evaluate FEFieldFunction (I
   suppose the triangulation and mapping here are not of the FEFieldFunction?).
   2. Call evaluate_and_process() by passing in the FEFieldFunction.
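
A rough sketch of what I mean (a sketch only; dim, the mappings,
DoFHandlers and vectors are placeholder names, and the solution is assumed
scalar):

#include <deal.II/base/mpi_remote_point_evaluation.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/numerics/vector_tools_evaluate.h>

using namespace dealii;

// transfer a scalar coarse-grid solution to the support points of a
// fine-grid DoFHandler
template <int dim, typename VectorType>
void transfer(const Mapping<dim>    &mapping_coarse,
              const DoFHandler<dim> &dof_coarse,
              const VectorType      &solution_coarse,
              const Mapping<dim>    &mapping_fine,
              const DoFHandler<dim> &dof_fine,
              VectorType            &solution_fine)
{
  // collect the support-point locations of the target DoFHandler
  std::map<types::global_dof_index, Point<dim>> support_points;
  DoFTools::map_dofs_to_support_points(mapping_fine, dof_fine, support_points);

  std::vector<types::global_dof_index> indices;
  std::vector<Point<dim>> points;
  for (const auto &pair : support_points)
    {
      indices.push_back(pair.first);
      points.push_back(pair.second);
    }

  // RPE (inside point_values) figures out which process/cell owns each point
  Utilities::MPI::RemotePointEvaluation<dim> rpe;
  const std::vector<double> values =
    VectorTools::point_values<1>(mapping_coarse, dof_coarse,
                                 solution_coarse, points, rpe);

  // set the values at the support points of the new DoFHandler
  for (unsigned int i = 0; i < indices.size(); ++i)
    solution_fine[indices[i]] = values[i];
  solution_fine.compress(VectorOperation::insert);
}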

I have a few questions.

   1. What is the "buffer" argument in evaluate_and_process()?
   2. The documentation for this function says get_point_ptrs() must be
   used to "process" the output in case the point-cell map is not one-one. I
   will surely encounter such cases. How can I use the data returned by
   get_point_ptrs() and how exactly should I "process" the output?

I couldn't find this used in any examples. Any clarification would be of
great help.

On Tue, 3 Aug 2021 at 04:47, Wolfgang Bangerth wrote:

> On 7/5/21 11:04 PM, vachan potluri wrote:
> >
> > So is your fine mesh a refinement of the coarse one? If not, you may
> want to
> > look at FEFieldFunction.
> >
> > Yes, it is. But the "refinement" is done by the meshing
> software, outside
> > dealii. Is there any simplification possible in such a case?
>
> Not really. If deal.II has no knowledge about the relationship between
> cells
> on the two meshes, then it is in essence the interpolation from one
> unstructured grid to another unstructured grid.
>
>
> > Otherwise, I think FEFieldFunction would be a safe choice. Since I want
> to use
> > it with p::d::Triangulation, will setting all dofs as relevant do the
> job?
> > This way the partitioning can also be different.
>
> This becomes a very difficult data transfer problem because you want to
> evaluate the solution at a point that the current process may know nothing
> about. In essence, you will have to ask all of the other processes about
> who
> owns a part of the mesh on which a given interpolation point is located,
> and
> then that process has to send you the value of the function at that point.
> You
> have to do that for all points. This is going to be an expensive operation.
>
> You might want to check out the Utilities::MPI::RemotePointEvaluation
> class in
> the latest (9.3) release for help with this.
>
> Best
>   W.
>
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>



Re: [deal.II] Unexpected data output with cell data vector

2021-07-09 Thread vachan potluri
>
> You want to use cell->active_cell_index() as the index into the vector. The
> vector should have
>triangulation.n_active_cells()
> as its size. This corresponds to the *local* number of active cells,
> including
> ghost and artificial cells (for which vector entries are then just
> ignored). I
> think step-47 shows this (whether in parallel or not doesn't matter in this
> regard).


Thank you, that clarifies my doubts.
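
For the record, the working recipe then looks like this (a short sketch
reusing the names from my MWE):

// one entry per *local* active cell, indexed by active_cell_index();
// entries of ghost/artificial cells are simply ignored by DataOut
Vector<double> cell_data(triang.n_active_cells());
for(const auto &cell : dof_handler.active_cell_iterators())
  if(cell->is_locally_owned())
    cell_data[cell->active_cell_index()] =
      gh_vec[cell->global_active_cell_index()];
data_out.add_data_vector(cell_data, "cell_data");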

On Sat, 10 Jul 2021 at 08:58, Wolfgang Bangerth wrote:

> On 7/7/21 12:08 AM, vachanpo...@gmail.com wrote:
> >
> > Vector<double> temp_vec(gh_vec); // for data output
> > for(const auto &cell : dof_handler.active_cell_iterators()){
> >    if(!(cell->is_locally_owned())) continue;
> >    temp_vec[cell->index()] = gh_vec[cell->global_active_cell_index()];
> > }
> > data_out.add_data_vector(temp_vec, "vector");
> >
> > ... then the output looks as expected. So here I have manually set the
> entries
> > of the temporary vector using cell->index(), rather than letting the
> > constructor do the job. For 1d meshes both seem to produce the same
> output.
> >
> > What is the correct procedure? What kind of cell index does DataOut use
> > internally? Any clarification would be greatly appreciated!
>
> You want to use cell->active_cell_index() as the index into the vector.
> The
> vector should have
>triangulation.n_active_cells()
> as its size. This corresponds to the *local* number of active cells,
> including
> ghost and artificial cells (for which vector entries are then just
> ignored). I
> think step-47 shows this (whether in parallel or not doesn't matter in
> this
> regard).
>
> Best
>   W.
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>



Re: [deal.II] Use a coarse grid solution as initial condition for a finer grid

2021-07-05 Thread vachan potluri
Dr. Wolfgang,

Thank you for the reply.

So is your fine mesh a refinement of the coarse one? If not, you may want to
> look at FEFieldFunction.

Yes, it is. But the "refinement" is done by the meshing software, outside
dealii. Is there any simplification possible in such a case?

Otherwise, I think FEFieldFunction would be a safe choice. Since I want to
use it with p::d::Triangulation, will setting all dofs as relevant do the
job? This way the partitioning can also be different.

You would have to attach manifolds to the triangulation object that is being
> reconstructed, before reconstruction.

Ok, noted.

Thank you.



Re: [deal.II] dealii 9.3.0 make install fails at "Generating mpi.inst" with Invalid instantiation list: missing 'for'

2021-06-23 Thread vachan potluri
I had noticed that this make error occurs if the file being expanded
doesn't start with the prefix 'for'. The following snippet is from line 453
onwards of dealii/cmake/scripts/expand_instantiations.cc:
if (!has_prefix(whole_file, "for"))
  {
    std::cerr << "Invalid instantiation list: missing 'for'" << std::endl;
    std::exit(1);
  }

Now, I also noticed that dealii/source/base/mpi.inst.in has all the
template function instantiations enclosed within #ifndef DOXYGEN ...
#endif; in fact, this is the only instantiation file which does so. I
thought this could be the issue and commented out these two lines (#ifndef
and #endif). Then the installation went past this point (not complete yet
though).

I have two questions based on what I saw so far.

   1. Why are the MPI function instantiations declared only when Doxygen
   is not configured?
   2. If I understand correctly, expand_instantiations.cc also has the
   following code snippet to pass preprocessor macros through as-is (line
   445). Why does this not work when the line #ifndef DOXYGEN is added to
   mpi.inst.in?

// output preprocessor defines as is:
if (has_prefix(whole_file, "#"))
  {
    std::cout << get_substring_with_delim(whole_file, "\n") << '\n';
    skip_space(whole_file);
    continue;
  }

Thanks,
Vachan



Re: [deal.II] A data structure for distributed storage of some cell "average"

2021-06-14 Thread vachan potluri
>
> The problem is a mismatch of expectations. Like in many other places, when
> you
> pass a cell-based vector to DataOut, it assumes that on every process, the
> vector has one entry for each active cell of the triangulation -- i.e., on
> every process it is a *local* vector -- rather than a distributed vector
> with
> one entry for each globally active cell. In other words, for cell-based
> vectors, we ignore the fact that the computation might be parallel.


Thank you for the response, looks like I didn't do my homework properly :P.
Copying the MPI vector into a Vector<double> of size triang.n_active_cells()
and adding this vector instead to DataOut works.

Thanks again!

On Tue, 15 Jun 2021 at 04:24, Wolfgang Bangerth wrote:

> On 6/11/21 12:09 AM, vachan potluri wrote:
> >
> > I am having an issue in using DataOut for such vector in a parallel
> process. I
> > am attaching a MWE which captures my problem. I am encountering a
> segmentation
> > fault (signal 11).
>
> The problem is a mismatch of expectations. Like in many other places, when
> you
> pass a cell-based vector to DataOut, it assumes that on every process, the
> vector has one entry for each active cell of the triangulation -- i.e., on
> every process it is a *local* vector -- rather than a distributed vector
> with
> one entry for each globally active cell. In other words, for cell-based
> vectors, we ignore the fact that the computation might be parallel.
>
> Best
>   W.
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>



Re: [deal.II] A data structure for distributed storage of some cell "average"

2021-06-11 Thread vachan potluri
Hello,

I am having an issue in using DataOut for such a vector in a parallel
program. I am attaching an MWE which captures my problem. I am encountering
a segmentation fault (signal 11).

#include <deal.II/base/mpi.h>
#include <deal.II/base/partitioner.h>
#include <deal.II/base/utilities.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/fe/mapping_q_generic.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/lac/generic_linear_algebra.h>
#include <deal.II/numerics/data_out.h>

#include <fstream>

using namespace dealii;
namespace LA
{
using namespace ::dealii::LinearAlgebraPETSc;
}

int main(int argc, char **argv)
{
Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

constexpr int dim = 3;

MPI_Comm mpi_comm(MPI_COMM_WORLD);

parallel::distributed::Triangulation<dim> triang(mpi_comm);
GridGenerator::subdivided_hyper_cube(triang, 5);
const std::shared_ptr<const Utilities::MPI::Partitioner> cell_partitioner =
triang.global_active_cell_index_partitioner().lock();
LA::MPI::Vector vec(cell_partitioner->locally_owned_range(), mpi_comm);
LA::MPI::Vector gh_vec(
cell_partitioner->locally_owned_range(),
cell_partitioner->ghost_indices(),
mpi_comm
);

DoFHandler<dim> dof_handler(triang);
FE_DGQ<dim> fe(2);
dof_handler.distribute_dofs(fe);

for(const auto &cell : dof_handler.active_cell_iterators()){
if(!cell->is_locally_owned()) continue;

vec[cell->global_active_cell_index()] = cell->global_active_cell_index();
}
vec.compress(VectorOperation::insert);

gh_vec = vec;

DataOut<dim> data_out;
DataOutBase::VtkFlags flags;
flags.write_higher_order_cells = true;
data_out.set_flags(flags);

data_out.attach_dof_handler(dof_handler);

data_out.add_data_vector(gh_vec, "x");

data_out.build_patches(
MappingQGeneric<dim>(2),
2,
DataOut<dim>::CurvedCellRegion::curved_inner_cells
);

std::ofstream proc_file(
"output" + Utilities::int_to_string(
Utilities::MPI::this_mpi_process(mpi_comm),
2
) + ".vtu"
);
data_out.write_vtu(proc_file);
proc_file.close();

if(Utilities::MPI::this_mpi_process(mpi_comm) == 0){
std::vector<std::string> filenames;
// (the archive truncated the original here; a plausible completion
// writing the .pvtu master record:)
for(unsigned int i=0; i<Utilities::MPI::n_mpi_processes(mpi_comm); i++)
filenames.emplace_back(
"output" + Utilities::int_to_string(i, 2) + ".vtu");
std::ofstream master_file("output.pvtu");
data_out.write_pvtu_record(master_file, filenames);
}
}

On Tue, 8 Jun 2021, vachan potluri wrote:

> Thank you :).
>
> On Tue, 8 Jun, 2021, 19:24 Wolfgang Bangerth wrote:
>
>> On 6/8/21 4:18 AM, vachanpo...@gmail.com wrote:
>> > If I want to add such a vector to DataOut, will the regular
>> > DataOut::add_data_vector() work? Or is something else required to be
>> done?
>>
>> Yes, DataOut::add_data_vector() can take two kinds of vectors:
>> * Ones that have as many entries as there are DoFs
>> * Ones that have as many entries as there are active cells
>>
>> Best
>>   W.
>>
>> --
>> 
>> Wolfgang Bangerth  email: bange...@colostate.edu
>> www: http://www.math.colostate.edu/~bangerth/
>>
>



Re: [deal.II] A data structure for distributed storage of some cell "average"

2021-06-08 Thread vachan potluri
Thank you :).

On Tue, 8 Jun, 2021, 19:24 Wolfgang Bangerth wrote:

> On 6/8/21 4:18 AM, vachanpo...@gmail.com wrote:
> > If I want to add such a vector to DataOut, will the regular
> > DataOut::add_data_vector() work? Or is something else required to be
> done?
>
> Yes, DataOut::add_data_vector() can take two kinds of vectors:
> * Ones that have as many entries as there are DoFs
> * Ones that have as many entries as there are active cells
>
> Best
>   W.
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>



Re: [deal.II] Compiling deal.II with GCC version 9.3.0 results in missing C++11 features error

2021-06-07 Thread vachan potluri
Alex,

I think this is a problem related to the cluster's OS. On a Cray XC50, I
had to explicitly set the link type to dynamic, because by default Cray
links statically:

export XTPE_LINK_TYPE=dynamic
export CRAYPE_LINK_TYPE=dynamic

You can try and see if anything similar is happening.

Regards,
Vachan

On Mon, 7 Jun 2021 at 22:34, Wolfgang Bangerth wrote:

> On 6/7/21 10:35 AM, Alex Cumberworth wrote:
> >
> > make[2]: *** No rule to make target
> > '/home/ipausers/cumberworth/lib/libsacado.a', needed by
> > 'lib/libdeal_II.g.so.9.3.0'.  Stop.
> > make[1]: *** [CMakeFiles/Makefile2:3238:
> source/CMakeFiles/deal_II.g.dir/all]
> > Error 2
> > make: *** [Makefile:149: all] Error 2
> >
> > There is libsacado.so in the directory I specified for trilinos
> libraries. I
> > configured both trilinos and deal.ii to build shared libraries. I also
> have set
> >
> > DEAL_II_PREFER_STATIC_LIBS   OFF
> >
> > I'm not really sure why it seems to only look for the static version of
> the
> > library.
>
> I don't think that the deal.II configuration specifies this. Check the
> files
> under lib/cmake in your Trilinos installation. For example, I find there
> the
> following information that deal.II simply imports:
>
> trilinos-12.8.1-mpi/lib> grep -r libsacado *
> cmake/Sacado/SacadoTargets-release.cmake:  IMPORTED_LOCATION_RELEASE
> "${_IMPORT_PREFIX}/lib/libsacado.so.12.8.1"
> cmake/Sacado/SacadoTargets-release.cmake:  IMPORTED_SONAME_RELEASE
> "libsacado.so.12"
> cmake/Sacado/SacadoTargets-release.cmake:list(APPEND
> _IMPORT_CHECK_FILES_FOR_sacado "${_IMPORT_PREFIX}/lib/libsacado.so.12.8.1"
> )
> Binary file libsacado.so matches
> Binary file libsacado.so.12 matches
> Binary file libsacado.so.12.8.1 matches
>
> Best
>   W.
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>



Re: [deal.II] Compiling deal.II with GCC version 9.3.0 results in missing C++11 features error

2021-06-03 Thread vachan potluri
Hi Alex,

I previously ran into a lot of issues when I tried to install dealii on our
institute's cluster. The OS was different though and I had problems with
PETSc.

I don't know if this helps, but this is the relevant section of the
dealii-9.2.0/cmake/modules/FindTRILINOS.cmake file which searches for
Epetra:

# Look for Epetra_config.h - we'll query it to determine MPI and 64bit
# indices support:
#
DEAL_II_FIND_FILE(EPETRA_CONFIG_H Epetra_config.h
  HINTS ${Trilinos_INCLUDE_DIRS}
  NO_DEFAULT_PATH NO_CMAKE_ENVIRONMENT_PATH NO_CMAKE_PATH
  NO_SYSTEM_ENVIRONMENT_PATH NO_CMAKE_SYSTEM_PATH NO_CMAKE_FIND_ROOT_PATH
  )

Notice the hint: it is Trilinos_INCLUDE_DIRS and not TRILINOS_INCLUDE_DIRS.
If your dealii version also has Trilinos in lowercase, then maybe making
this change will do the job. You may even try manually adding the full path
in the cmake module file as a hint.

Hope this helps
Vachan

On Thu, 3 Jun 2021 at 15:07, Alex Cumberworth <alexandercumberwo...@gmail.com> wrote:

> Hello,
>
> The file does exist and is readable. If I set a manual include flag it is
> able to find it:
>
> CMAKE_CXX_FLAGS_DEBUGRELEASE
> -I/opt/ohpc/pub/libs/gnu9/openmpi4/trilinos/13.0.0/include
>
> then it is able to get past this point. From the output in my previous
> message, it seems that cmake is not looking in the right place for these
> header files, and I have no idea how to set this properly:
>
> --   TRILINOS_INCLUDE_DIRS:
> /include;/opt/ohpc/pub/libs/gnu9/openmpi4/hdf5/1.10.6/include;/usr/include;/opt/ohpc/pub/libs/gnu9/openmpi4/boost/1.73.0/include
> --   TRILINOS_USER_INCLUDE_DIRS:
> /include;/opt/ohpc/pub/libs/gnu9/openmpi4/hdf5/1.10.6/include;/usr/include;/opt/ohpc/pub/libs/gnu9/openmpi4/boost/1.73.0/include
>
> There are further issues past this point, but perhaps if I understand the
> problem here that will help with the later issues.
>
> Best,
> Alex
>
> On Tuesday, June 1, 2021 at 9:02:12 p.m. UTC+2 Wolfgang Bangerth wrote:
>
>>
>> Alex,
>>
>> > I have also tried
>> >
>> > cmake -DTRILINOS_DIR=/opt/ohpc/pub/libs/gnu9/openmpi4/trilinos/13.0.0
>> ..
>> >
>> > It is also unable to find the epetra header file.
>>
>> I can not tell why that would be so (but you should be able to find out
>> by
>> searching for the place in CMakeFiles/CMakeErrors.log where it shows you
>> the
>> command that was executed to find that file). It may be that the file
>> just
>> doesn't exist. It may be that it's not readable. It may be that the
>> compiler
>> finds an error in it.
>>
>> If you can't figure out how to use that installation, why not install
>> Trilinos
>> yourself in your home directory and take it from there?
>>
>> Best
>> W.
>>
>>
>> --
>> 
>> Wolfgang Bangerth email: bang...@colostate.edu
>> www: http://www.math.colostate.edu/~bangerth/
>>



Re: [deal.II] Ordering of polynomials in FE_DGQLegendre<3>

2021-05-25 Thread vachan potluri
Wolfgang,


> It doesn't have to be. For example, for FE_Q, we also build on
> TensorProductPolynomials but the ordering is not lexicographic. So it would
> still be of interest to document the order of shape functions if you end up
> finding out what it is!


Noted. So I have verified this with the following code.

#include <deal.II/base/point.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/fe/fe_dgq.h>

#include <cmath>
#include <iostream>
#include <vector>

using namespace dealii;

int main(int argc, char **argv)
{
const int degree = 4;
const int n_poly1 = degree+1;

const FE_DGQLegendre<1> fe1(degree);
const FE_DGQLegendre<3> fe3(degree);
const QGauss<3> quad(degree+1);
const int n_qp = quad.size();
const std::vector<Point<3>>& points = quad.get_points();
const std::vector<double>& weights = quad.get_weights();

// hypothesis being verified: the 3d shape function with lexicographic
// index i + j*n_poly1 + k*n_poly1^2 equals the tensor product of the 1d
// shape functions i, j and k in the x, y and z directions
for(int k=0; k<n_poly1; k++){
for(int j=0; j<n_poly1; j++){
for(int i=0; i<n_poly1; i++){
const int index_3d = i + j*n_poly1 + k*n_poly1*n_poly1;
double error = 0;
for(int q=0; q<n_qp; q++){
error += weights[q]*std::pow(
fe3.shape_value(index_3d, points[q]) - (
fe1.shape_value(i, Point<1>(points[q][0]))*
fe1.shape_value(j, Point<1>(points[q][1]))*
fe1.shape_value(k, Point<1>(points[q][2]))
),
2
);
}
std::cout << "Tensor indices: " << i << " " << j << " " << k << ", "
<< "3d index: " << index_3d << ", error: " << error << "\n";
}
}
}
}
And the output shows all errors to be 0! If this is ok, I will create a pr
with a patch to the documentation shortly.

I don't recall whether there is an easy way to achieve what you are looking
> for, but some elements definitely do something like what you are trying to
> achieve. For example, the difference between FE_Q and FE_DGQ is, in
> essence,
> just a change of basis where the basis functions are permuted. Similarly,
> the
> difference between FE_Q and FE_QHierarchical is similar to the nodal ->
> modal
> change you are interested in. Finally, there is also the case of the FE_Q
> constructor that receives a Quadrature object as argument and that then
> computes a basis change.
> You might want to look into how all of these are implemented. Most of this
> kind of functionality exists in some kind of helper function that might be
> useful to you.


Thanks a lot for providing these details. For now, I have done this by
myself, but I will keep this in mind.
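
For completeness, the change of basis I implemented can be written as
(standard linear algebra; V is the generalized Vandermonde matrix of the
Legendre basis evaluated at the Lagrange nodes x_i):

\[
  u_h(x) = \sum_i u_i\,\phi_i(x) = \sum_j c_j\,\psi_j(x),
  \qquad
  c = V^{-1}u, \quad V_{ij} = \psi_j(x_i),
\]

with \phi_i the Lagrange (nodal) and \psi_j the Legendre (modal) basis
functions.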

Thanks very much!

On Tue, 25 May 2021 at 19:35, Wolfgang Bangerth wrote:

>
> Vachan
>
> > Thanks a lot for your reply. What I actually need is a change of basis
> from
> > Lagrange polynomials (nodal) to Legendre polynomials (modal). I then
> want to
> > know the coefficients of certain modes.
> >
> > So, if there is any straightforward way to do this in dealii, I would
> proceed
> > with that. I, could not find any such functions and hence planned on
> doing it
> > manually, using the shape functions. However, even using any inbuilt
> functions
> > would only address my issue partly if the ordering is not clear.
>
> I don't recall whether there is an easy way to achieve what you are
> looking
> for, but some elements definitely do something like what you are trying to
> achieve. For example, the difference between FE_Q and FE_DGQ is, in
> essence,
> just a change of basis where the basis functions are permuted. Similarly,
> the
> difference between FE_Q and FE_QHierarchical is similar to the nodal ->
> modal
> change you are interested in. Finally, there is also the case of the FE_Q
> constructor that receives a Quadrature object as argument and that then
> computes a basis change.
>
> You might want to look into how all of these are implemented. Most of this
> kind of functionality exists in some kind of helper function that might be
> useful to you.
>
> Best
>   W.
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>



[deal.II] Re: Installation on cray XC50 | linking to petsc, lapack and blas libraries with different names

2020-02-14 Thread vachan potluri
Here is a summary of the installation process on a Cray XC50.

I have configured deal.II with MPI, LAPACK, SCALAPACK, PETSc and p4est. Our 
system didn't have p4est, so I started by installing it. All Cray libraries 
are in /opt/cray/pe/lib64/ on our system.

*Installing p4est*
1. Download source files and setup script from here 
.
2. By default, the setup script searches for mpicxx compilers. Instead, 
explicitly specify the Cray compilers. The configure command will look as 
follows.
"$SRCDIR/configure" CXX=/opt/cray/pe/craype/2.5.13/bin/CC \
CC=/opt/cray/pe/craype/2.5.13/bin/cc \
F77=/opt/cray/pe/craype/2.5.13/bin/ftn \
FC=/opt/cray/pe/craype/2.5.13/bin/ftn \
--enable-mpi --enable-shared \
--disable-vtk-binary --without-blas \
--prefix="$INSTALL_DEBUG" CFLAGS="$CFLAGS_DEBUG" \
CPPFLAGS="-DSC_LOG_PRIORITY=SC_LP_ESSENTIAL" \
"$@" > config.output || bdie "Error in configure"
Make this change for both FAST and DEBUG versions.
3. By default, Cray assumes static linking. Change this:
export XTPE_LINK_TYPE=dynamic
export CRAYPE_LINK_TYPE=dynamic
This will be required in subsequent steps too.
4. The generated makefile uses flags corresponding to GNU compilers. Swap 
the module PrgEnv-cray for PrgEnv-gnu.

*Configuring with LAPACK and SCALAPACK*
1. For LAPACK, deal.II's find module calls cmake's corresponding find 
module. For this to work on Cray systems, cmake version >=3.16 is required, 
so I installed a new version in my home directory and used that cmake 
version. See this and this.
2. In Cray environments, LAPACK libraries are linked directly by the Cray 
compiler without requiring any other flags, so the _lapack_libraries 
variable in deal.II's FindLAPACK.cmake will be empty. This is okay, so set 
it as OPTIONAL at the end of this file.
3. For SCALAPACK, the library name in FindLAPACK.cmake should be changed to 
sci_gnu_61_mpi_mp (or whatever the name of the libsci library is on your 
system), since on Cray, SCALAPACK is part of this library.

*Configuring with MPI and PETSc*
1. For MPI, simply specify the compilers explicitly.
2. For PETSc, the library name must be changed to craypetsc_gnu_real-64 
(depending on your system).
3. The additional libraries PETSc interfaces to are read from the linker 
line of $PETSC_DIR/lib/petsc/conf/petscvariables. Make a copy of this file 
and modify the linker line so that the library names are correct (if they 
are not already, as was the case for me). Change the hint to the 
petscvariables file in FindPETSC.cmake.
4. Also, add the correct hint to these library paths in the following 
portion of the aforementioned file.
DEAL_II_FIND_LIBRARY(PETSC_LIBRARY_${_token}
  NAMES ${_token}
  #HINTS ${_hints}
  HINTS ${_hints} ${CMAKE_PREFIX_PATH}
  )
In my case, I set CMAKE_PREFIX_PATH in the configure script to 
/opt/cray/pe/lib64.
5. If your system has PETSc libraries with ".so.mpi" extensions, you 
must enable finding those in dealii-9.1.1/CMakeLists.txt (the topmost one):
SET(CMAKE_FIND_LIBRARY_SUFFIXES ${CMAKE_FIND_LIBRARY_SUFFIXES}
  ".so.0" ".so.5" ".so.mpi31.2" ".so.mpi31.4" ".so.mpi31.5" 
  ".so.mpi31.6" ".so.mpi31.12"
  )
6. If you are using 64-bit versions of PETSc libraries, you must enable 
this for deal.II too (see below).



You must unload the atp module before configuring (see here). For 
cross-compilation (see here), you can just add 
-DCMAKE_SYSTEM_NAME=CrayLinuxEnvironment without requiring a Toolchain file 
in newer cmake versions. The configure script is:

cmake_new=~/bin/cmake-3.16.4/usr/local/bin/cmake # from bashrc, shell 
scripts can't use aliases
$cmake_new -DCMAKE_INSTALL_PREFIX=~/bin/dealii-9.1.1/ \
-DWITH_64BIT_INDICES=ON \
-DCMAKE_PREFIX_PATH=/opt/cray/pe/lib64 \
-DWITH_MPI=ON \
-DMPI_DIR=/opt/cray/pe/mpt/default/gni/mpich-gnu/5.1/ \

-DMPI_CXX_INCLUDE_PATH=/opt/cray/pe/mpt/default/gni/mpich-gnu/5.1/include/ \
-DCMAKE_CXX_COMPILER=/opt/cray/pe/craype/2.5.13/bin/CC \
-DCMAKE_C_COMPILER=/opt/cray/pe/craype/2.5.13/bin/cc \
-DCMAKE_Fortran_COMPILER=/opt/cray/pe/craype/2.5.13/bin/ftn 
\
-DWITH_BLAS=ON \
-DWITH_LAPACK=ON \
-DWITH_SCALAPACK=ON \
-DWITH_PETSC=ON \
-DWITH_P4EST=ON -DP4EST_DIR=~/bin/p4est-2.2/ \
-DCMAKE_SYSTEM_NAME=CrayLinuxEnvironment \
~/source/dealii-9.1.1


[deal.II] Re: Installation on cray XC50 | linking to petsc, lapack and blas libraries with different names

2020-02-13 Thread vachan potluri
It is working!

The mistake I made was to open an interactive job and run the executables 
through bash. When I instead submitted a job and executed them using aprun 
(Cray's equivalent of mpirun), they ran successfully. I tested step-1, 
step-18 and my own code too. The installation tests will probably not run 
though, since they are actually makefile targets.

I apologise for being irresponsible and hasty in the previous couple of 
messages. I thank everyone for helping me and hearing me out when I was all 
by myself. I will also post a summary of the installation process.



[deal.II] Re: Installation on cray XC50 | linking to petsc, lapack and blas libraries with different names

2020-02-12 Thread vachan potluri
I have found a few reports of glibc version 2.28 causing such behaviour 
(e.g. see here). It might be possible that /lib64/ld-linux-x86-64.so.2 on 
our system "links" to this version of glibc. But it actually is a static 
library:
$ ldd -v ld-linux-x86-64.so.2
statically linked
So there is probably no way to ascertain this. If it in fact is so (linked 
to glibc 2.28), then I don't think there is any way I can get it working. 
With a simple code from here, I have found that my compiler links to glibc 
version 2.22 both at compile and run time. So there is no issue with the 
compiler.
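
For reference, the kind of check I mean looks like this (a minimal sketch; 
gnu_get_libc_version() is glibc's API from <gnu/libc-version.h>):

#include <cstdio>
#include <gnu/libc-version.h>

int main()
{
  // version the program was compiled against:
  std::printf("compile-time glibc: %d.%d\n", __GLIBC__, __GLIBC_MINOR__);
  // version actually loaded at run time:
  std::printf("run-time glibc: %s\n", gnu_get_libc_version());
}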



[deal.II] Re: Installation on cray XC50 | linking to petsc, lapack and blas libraries with different names

2020-02-12 Thread vachan potluri
This is the full backtrace with gdb.
(gdb) bt
#0  __static_initialization_and_destruction_0 (__initialize_p=1, 
__priority=65535)
at 
/home/ComptGasDynLab/vachanpotluri/source/dealii-9.1.1/source/numerics/time_dependent.cc:1196
#1  0x7fffec1aa6f8 in _GLOBAL__sub_I_time_dependent.cc(void) () at 
/home/ComptGasDynLab/vachanpotluri/source/dealii-9.1.1/source/numerics/time_dependent.cc:1275
#2  0x77deacba in call_init.part () from /lib64/ld-linux-x86-64.so.2
#3  0x77deada3 in _dl_init () from /lib64/ld-linux-x86-64.so.2
#4  0x77ddd22a in _dl_start_user () from /lib64/ld-linux-x86-64.so.2
#5  0x0001 in ?? ()
#6  0x7fff729e in ?? ()
#7  0x in ?? ()
Unfortunately, gdb is probably not configured properly. Cray has its own 
debuggers, most of them GUIs (and hence unusable here), and all of them 
require submitting a job interactively, which I am currently unable to do. 
I will post the backtrace with one of those when the queue is empty.



[deal.II] Re: Installation on cray XC50 | linking to petsc, lapack and blas libraries with different names

2020-02-11 Thread vachan potluri
Step-1 aborts with Illegal Instruction (core dumped). The error message gdb 
prints is the following.
Program received signal SIGILL, Illegal instruction.
__static_initialization_and_destruction_0 (__initialize_p=1, __priority=65535)
    at /home/ComptGasDynLab/vachanpotluri/source/dealii-9.1.1/source/numerics/time_dependent.cc:1196
1196            std::make_pair(0U, 0.)));
When I backtrace the error, it leads to this.
template <int dim>
typename 
TimeStepBase_Tria_Flags::RefinementFlags<dim>::CorrectionRelaxations
  TimeStepBase_Tria_Flags::RefinementFlags<dim>::default_correction_relaxations(
    1, // one element, denoting the first and all subsequent sweeps
    std::vector<std::pair<unsigned int, double>>(
      1, // one element, denoting the upper bound for the following relaxation
      std::make_pair(0U, 0.)));
Not just step-1: step.debug, affinity.debug and mpi.debug (and possibly 
other debug tests) also terminate with the same error and backtrace. Can 
someone explain why this is happening?



[deal.II] Re: Installation on cray XC50 | linking to petsc, lapack and blas libraries with different names

2020-02-10 Thread vachan potluri
Ok. After installing a newer cmake version and making _lapack_libraries 
OPTIONAL, the LAPACK configuration has gone fine. For PETSc, I did something 
dirty. I figured that FindPETSC.cmake searches for libraries in a file 
named petscvariables. I made my own copy of the petscvariables file and 
modified the linker line in it. I changed the path hints to find this file 
and then the rest was as expected.

The installation was successful (I had to unload the atp module, see the 
trailing discussion here). I did a cross compilation (see here). Instead of 
using a Toolchain file, I used the option 
-DCMAKE_SYSTEM_NAME=CrayLinuxEnvironment as mentioned here.

The complete cmake invocation is as below.

cmake_new=~/bin/cmake-3.16.4/usr/local/bin/cmake
$cmake_new -DCMAKE_INSTALL_PREFIX=~/bin/dealii-9.1.1/ \
-DWITH_64BIT_INDICES=ON \
-DCMAKE_PREFIX_PATH=/opt/cray/pe/lib64 \
-DWITH_MPI=ON \
-DMPI_DIR=/opt/cray/pe/mpt/default/gni/mpich-gnu/5.1/ \

-DMPI_CXX_INCLUDE_PATH=/opt/cray/pe/mpt/default/gni/mpich-gnu/5.1/include/ \
-DCMAKE_CXX_COMPILER=/opt/cray/pe/craype/2.5.13/bin/CC \
-DCMAKE_C_COMPILER=/opt/cray/pe/craype/2.5.13/bin/cc \
-DCMAKE_Fortran_COMPILER=/opt/cray/pe/craype/2.5.13/bin/ftn 
\
-DWITH_BLAS=ON \
-DWITH_LAPACK=ON \
-DWITH_SCALAPACK=ON \
-DWITH_PETSC=ON \
-DWITH_P4EST=ON -DP4EST_DIR=~/bin/p4est-2.2/ \
-DCMAKE_SYSTEM_NAME=CrayLinuxEnvironment \
~/source/dealii-9.1.1

However, make test shows all tests failing with the error
/bin/sh: .: command not found.
When I navigated to tests/quick_tests and individually ran the tests, the 
output was as follows.
$ ./lapack.debug 
Illegal instruction

Can anyone help me with this issue? Is this because I have cross-compiled? 
The make test instruction was run on a login node. If so, is there a way to 
check my installation on the login node itself without having to submit a 
job?



[deal.II] Re: Installation on cray XC50 | linking to petsc, lapack and blas libraries with different names

2020-02-09 Thread vachan potluri

>
> If only an old version is the problem, I would just go ahead and download 
> and compile a recent version myself. I never had any issues with that and 
> should be quite simple.

I did this. Indeed the cmake output now prints
A library with LAPACK API found.
 However, the lapack configuration fails:
-- Include 
/home/ComptGasDynLab/vachanpotluri/source/dealii-9.1.1/cmake/configure/configure_1_lapack.cmake
-- Lapack dir /usr/lib64
-- A library with LAPACK API found.
-- Performing Test craypetsc_gnu_53_real-64_LIBRARY
-- Performing Test craypetsc_gnu_53_real-64_LIBRARY - Success
-- Performing Test rca_LIBRARY
-- Performing Test rca_LIBRARY - Success
-- Performing Test AtpSigHandler_LIBRARY
-- Performing Test AtpSigHandler_LIBRARY - Success
-- Performing Test AtpSigHCommData_LIBRARY
-- Performing Test AtpSigHCommData_LIBRARY - Success
-- Performing Test sci_gnu_61_mpi_LIBRARY
-- Performing Test sci_gnu_61_mpi_LIBRARY - Success
-- Performing Test sci_gnu_61_LIBRARY
-- Performing Test sci_gnu_61_LIBRARY - Success
-- Performing Test mpich_gnu_51_LIBRARY
-- Performing Test mpich_gnu_51_LIBRARY - Success
-- Performing Test mpichf90_gnu_51_LIBRARY
-- Performing Test mpichf90_gnu_51_LIBRARY - Success
-- Performing Test gfortran_LIBRARY
-- Performing Test gfortran_LIBRARY - Success
-- Performing Test quadmath_LIBRARY
-- Performing Test quadmath_LIBRARY - Success
-- Performing Test pthread_LIBRARY
-- Performing Test pthread_LIBRARY - Success
-- Performing Test m_LIBRARY
-- Performing Test m_LIBRARY - Success
-- Performing Test gomp_LIBRARY
-- Performing Test gomp_LIBRARY - Success
-- Performing Test gcc_s_LIBRARY
-- Performing Test gcc_s_LIBRARY - Success
-- Performing Test gcc_LIBRARY
-- Performing Test gcc_LIBRARY - Success
-- Performing Test c_LIBRARY
-- Performing Test c_LIBRARY - Success
--   LAPACK_LIBRARIES: *** Required variable "_lapack_libraries" empty ***
--   LAPACK_LINKER_FLAGS: 
--   LAPACK_INCLUDE_DIRS: 
--   LAPACK_USER_INCLUDE_DIRS: 
-- Could NOT find LAPACK
-- DEAL_II_WITH_LAPACK has unmet external dependencies.

I had a look at cmake's own version of FindLAPACK.cmake. It explicitly
mentions that for the Cray programming environment, the variable
LAPACK_LIBRARIES is set empty.
# On compilers that implicitly link LAPACK (such as ftn, cc, and CC on Cray 
HPC machines)
# we used a placeholder for empty LAPACK_LIBRARIES to get through our logic 
above.
if (LAPACK_LIBRARIES STREQUAL 
"LAPACK_LIBRARIES-PLACEHOLDER-FOR-EMPTY-LIBRARIES")
  set(LAPACK_LIBRARIES "")
endif()
And this is what causes the error. So, in deal.II's FindLAPACK.cmake, can I
set _lapack_libraries as OPTIONAL? Or is there a cleaner way to tackle this?



[deal.II] Re: deal.II installation on cray XC50 giving MPI_VERSION=0.0

2020-02-07 Thread vachan potluri
Dear Prof. Bangerth,

> Can you attach it to a reply? It would be interesting to see why the
> version detection didn't work. (Although I see that cmake complains that
> it can't find the file, so that is probably the issue. I don't know why
> it can't find the file...)

I really appreciate and value your involvement in this thread. I have 
attached mpi.h with this mail. I want to mention that I added this line in 
FindMPI.cmake 
just before DEAL_II_FIND_FILE(MPI_MPI_H ...): 
MESSAGE(STATUS "Searching for mpi.h in ${MPI_CXX_INCLUDE_PATH}")
and found the corresponding output by cmake to be as follows.
-- Include 
/home/ComptGasDynLab/vachanpotluri/source/dealii-9.1.1/cmake/configure/configure_1_mpi.cmake
-- Searching for mpi.h in 
-- Found MPI_MPI_H
--   MPI_VERSION: 3.1
--   MPI_LIBRARIES: 
--   MPI_INCLUDE_DIRS: 
--   MPI_USER_INCLUDE_DIRS: 
--   MPI_CXX_FLAGS: 
--   MPI_LINKER_FLAGS: 
-- Found MPI
-- DEAL_II_WITH_MPI successfully set up with external dependencies.
Notice that the path is not printed, the variable is empty! Probably the 
problem is not with reading the header but with the variable being changed 
somewhere else.

> It's still possible that everything links correctly. What happens if you
> run cmake and then compile? Does it work?

I have not tried compiling the source because there are some other things 
also that I wanted to figure out. I have started a new thread for this 
purpose here https://groups.google.com/forum/#!topic/dealii/MCYyPrZNyjg. I 
request you to have a look at this one too.

Thank you

/* -*- Mode: C; c-basic-offset:4 ; -*- */
/*  
 *  (C) 2001 by Argonne National Laboratory.
 *  See COPYRIGHT in top-level directory.
 */
/* src/include/mpi.h.  Generated from mpi.h.in by configure. */
#include "cray_version.h"
#ifndef MPI_INCLUDED
#define MPI_INCLUDED

/* user include file for MPI programs */

/* Keep C++ compilers from getting confused */
#if defined(__cplusplus)
extern "C" {
#endif

#define NO_TAGS_WITH_MODIFIERS 1
#undef MPICH_DEFINE_ATTR_TYPE_TYPES
#if defined(__has_attribute)
#  if __has_attribute(pointer_with_type_tag) && \
  __has_attribute(type_tag_for_datatype) && \
  !defined(NO_TAGS_WITH_MODIFIERS) &&\
  !defined(MPICH_NO_ATTR_TYPE_TAGS)
#define MPICH_DEFINE_ATTR_TYPE_TYPES 1
#define MPICH_ATTR_POINTER_WITH_TYPE_TAG(buffer_idx, type_idx)  __attribute__((pointer_with_type_tag(MPI,buffer_idx,type_idx)))
#define MPICH_ATTR_TYPE_TAG(type)   __attribute__((type_tag_for_datatype(MPI,type)))
#define MPICH_ATTR_TYPE_TAG_LAYOUT_COMPATIBLE(type) __attribute__((type_tag_for_datatype(MPI,type,layout_compatible)))
#define MPICH_ATTR_TYPE_TAG_MUST_BE_NULL()  __attribute__((type_tag_for_datatype(MPI,void,must_be_null)))
#include 
#  endif
#endif

#if !defined(MPICH_ATTR_POINTER_WITH_TYPE_TAG)
#  define MPICH_ATTR_POINTER_WITH_TYPE_TAG(buffer_idx, type_idx)
#  define MPICH_ATTR_TYPE_TAG(type)
#  define MPICH_ATTR_TYPE_TAG_LAYOUT_COMPATIBLE(type)
#  define MPICH_ATTR_TYPE_TAG_MUST_BE_NULL()
#endif

#if !defined(INT8_C)
/* stdint.h was not included, see if we can get it */
#  if defined(__cplusplus)
#    if __cplusplus >= 201103
#      include <cstdint>
#    endif
#  endif
#endif

#if !defined(INT8_C)
/* stdint.h was not included, see if we can get it */
#  if defined(__STDC_VERSION__)
#    if __STDC_VERSION__ >= 199901
#      include <stdint.h>
#    endif
#  endif
#endif

#if defined(INT8_C)
/* stdint.h was included, so we can annotate these types */
#  define MPICH_ATTR_TYPE_TAG_STDINT(type) MPICH_ATTR_TYPE_TAG(type)
#else
#  define MPICH_ATTR_TYPE_TAG_STDINT(type)
#endif

#ifdef __STDC_VERSION__ 
#if __STDC_VERSION__ >= 199901
#  define MPICH_ATTR_TYPE_TAG_C99(type) MPICH_ATTR_TYPE_TAG(type)
#else
#  define MPICH_ATTR_TYPE_TAG_C99(type)
#endif
#else 
#  define MPICH_ATTR_TYPE_TAG_C99(type)
#endif

#if defined(__cplusplus)
#  define MPICH_ATTR_TYPE_TAG_CXX(type) MPICH_ATTR_TYPE_TAG(type)
#else
#  define MPICH_ATTR_TYPE_TAG_CXX(type)
#endif


/* Define some null objects */
#define MPI_COMM_NULL  ((MPI_Comm)0x0400)
#define MPI_OP_NULL((MPI_Op)0x1800)
#define MPI_GROUP_NULL ((MPI_Group)0x0800)
#define MPI_DATATYPE_NULL  ((MPI_Datatype)0x0c00)
#define MPI_REQUEST_NULL   ((MPI_Request)0x2c00)
#define MPI_ERRHANDLER_NULL ((MPI_Errhandler)0x1400)
#define MPI_MESSAGE_NULL   ((MPI_Message)0x2c00)
#define MPI_MESSAGE_NO_PROC 

[deal.II] Installation on cray XC50 | linking to petsc, lapack and blas libraries with different names

2020-02-07 Thread vachan potluri

Hello,

I am trying to install deal.II on a cray XC50 machine. I had posted a 
question related to MPI 
here https://groups.google.com/forum/#!topic/dealii/EJm6ePrI81w.

1. Configuring with MPI was "successful" with the following cmake 
invocation.
cmake -DCMAKE_INSTALL_PREFIX=~/bin/dealii-9.1.1/ \
    -DWITH_64BIT_INDICES=ON \
    -DCMAKE_PREFIX_PATH=/opt/cray/pe/lib64 \
    -DWITH_MPI=ON \
    -DMPI_DIR=/opt/cray/pe/mpt/default/gni/mpich-gnu/5.1/ \
    -DMPI_CXX_INCLUDE_PATH=/opt/cray/pe/mpt/default/gni/mpich-gnu/5.1/include/ \
    -DCMAKE_CXX_COMPILER=/opt/cray/pe/craype/2.5.13/bin/CC \
    -DCMAKE_C_COMPILER=/opt/cray/pe/craype/2.5.13/bin/cc \
    -DCMAKE_Fortran_COMPILER=/opt/cray/pe/craype/2.5.13/bin/ftn \
    -DWITH_BLAS=OFF \
    -DWITH_LAPACK=OFF \
    -DWITH_SCALAPACK=OFF \
    -DWITH_PETSC=OFF \
    -DWITH_P4EST=OFF -DP4EST_DIR=~/bin/p4est-2.2/ \
    ~/source/dealii-9.1.1

2. Configuring with PETSC was also successful, but I had to make the 
following change in cmake/modules/FindPETSC.cmake because the petsc 
libraries had a different name.
DEAL_II_FIND_LIBRARY(PETSC_LIBRARY
  #NAMES petsc libpetsc
  NAMES craypetsc_gnu_real-64 craypetsc_gnu_53_real-64
  HINTS ${PETSC_DIR} ${PETSC_DIR}/${PETSC_ARCH}
  PATH_SUFFIXES lib${LIB_SUFFIX} lib64 lib
  )
With the -DWITH_PETSC=ON option, configuring with PETSc was also "successful",
but the libraries optionally used by PETSc were not detected, as shown in
this snippet of the output:
-- Include 
/home/ComptGasDynLab/vachanpotluri/source/dealii-9.1.1/cmake/configure/configure_3_petsc.cmake
-- Found PETSC_LIBRARY
-- Found PETSC_INCLUDE_DIR_ARCH
-- Found PETSC_INCLUDE_DIR_COMMON
-- Found PETSC_PETSCVARIABLES
-- PETSC_LIBRARY_superlu_dist-64 not found! Call:
-- FIND_LIBRARY(PETSC_LIBRARY_superlu_dist-64 NAMES superlu_dist-64 
HINTS /opt/cray/pe/petsc/3.7.6.2/real/GNU64/5.3/x86_64/lib 
/opt/cray/pe/hdf5-parallel/1.10.1.1/GNU/5.1/lib)
-- PETSC_LIBRARY_parmetis-64 not found! Call:
-- FIND_LIBRARY(PETSC_LIBRARY_parmetis-64 NAMES parmetis-64 HINTS 
/opt/cray/pe/petsc/3.7.6.2/real/GNU64/5.3/x86_64/lib 
/opt/cray/pe/hdf5-parallel/1.10.1.1/GNU/5.1/lib)
-- PETSC_LIBRARY_metis-64 not found! Call:
-- FIND_LIBRARY(PETSC_LIBRARY_metis-64 NAMES metis-64 HINTS 
/opt/cray/pe/petsc/3.7.6.2/real/GNU64/5.3/x86_64/lib 
/opt/cray/pe/hdf5-parallel/1.10.1.1/GNU/5.1/lib)
-- PETSC_LIBRARY_HYPRE-64 not found! Call:
-- FIND_LIBRARY(PETSC_LIBRARY_HYPRE-64 NAMES HYPRE-64 HINTS 
/opt/cray/pe/petsc/3.7.6.2/real/GNU64/5.3/x86_64/lib 
/opt/cray/pe/hdf5-parallel/1.10.1.1/GNU/5.1/lib)
-- PETSC_LIBRARY_sci_gnu_mpi_mp not found! Call:
-- FIND_LIBRARY(PETSC_LIBRARY_sci_gnu_mpi_mp NAMES sci_gnu_mpi_mp HINTS 
/opt/cray/pe/petsc/3.7.6.2/real/GNU64/5.3/x86_64/lib 
/opt/cray/pe/hdf5-parallel/1.10.1.1/GNU/5.1/lib)
-- PETSC_LIBRARY_sci_gnu_mp not found! Call:
-- FIND_LIBRARY(PETSC_LIBRARY_sci_gnu_mp NAMES sci_gnu_mp HINTS 
/opt/cray/pe/petsc/3.7.6.2/real/GNU64/5.3/x86_64/lib 
/opt/cray/pe/hdf5-parallel/1.10.1.1/GNU/5.1/lib)
-- PETSC_LIBRARY_ptscotch-64 not found! Call:
-- FIND_LIBRARY(PETSC_LIBRARY_ptscotch-64 NAMES ptscotch-64 HINTS 
/opt/cray/pe/petsc/3.7.6.2/real/GNU64/5.3/x86_64/lib 
/opt/cray/pe/hdf5-parallel/1.10.1.1/GNU/5.1/lib)
-- PETSC_LIBRARY_scotch-64 not found! Call:
-- FIND_LIBRARY(PETSC_LIBRARY_scotch-64 NAMES scotch-64 HINTS 
/opt/cray/pe/petsc/3.7.6.2/real/GNU64/5.3/x86_64/lib 
/opt/cray/pe/hdf5-parallel/1.10.1.1/GNU/5.1/lib)
-- PETSC_LIBRARY_ptscotcherr-64 not found! Call:
-- FIND_LIBRARY(PETSC_LIBRARY_ptscotcherr-64 NAMES ptscotcherr-64 HINTS 
/opt/cray/pe/petsc/3.7.6.2/real/GNU64/5.3/x86_64/lib 
/opt/cray/pe/hdf5-parallel/1.10.1.1/GNU/5.1/lib)
-- PETSC_LIBRARY_scotcherr-64 not found! Call:
-- FIND_LIBRARY(PETSC_LIBRARY_scotcherr-64 NAMES scotcherr-64 HINTS 
/opt/cray/pe/petsc/3.7.6.2/real/GNU64/5.3/x86_64/lib 
/opt/cray/pe/hdf5-parallel/1.10.1.1/GNU/5.1/lib)
-- Found PETSC_LIBRARY_hdf5_parallel
-- Found PETSC_LIBRARY_z
-- Performing Test PETSC_LIBRARY_dl
-- Performing Test PETSC_LIBRARY_dl - Success
--   PETSC_VERSION: 3.7.6.0
--   PETSC_LIBRARIES: 
/opt/cray/pe/petsc/3.7.6.2/real/GNU64/5.3/x86_64/lib/libcraypetsc_gnu_real-64.so;/opt/cray/pe/hdf5-parallel/1.10.1.1/GNU/5.1/lib/libhdf5_parallel.so;/usr/lib64/libz.so;dl;dl
--   PETSC_INCLUDE_DIRS: 
/opt/cray/pe/petsc/3.7.6.2/real/GNU64/5.3/x86_64/include;/opt/cray/pe/petsc/3.7.6.2/real/GNU64/5.3/x86_64/include;${PETSC_DIR}/include;${PETSC_DIR}/include;/opt/cray/pe/hdf5-parallel/1.10.1.1/GNU/5.1/include
--   PETSC_USER_INCLUDE_DIRS: 
/opt/cray/pe/petsc/3.7.6.2/real/GNU64/5.3/x86_64/include;/opt/cray/pe/petsc/3.7.6.2/real/GNU64/5.3/x86_64/include;${PETSC_DIR}/include;${PETSC_DIR}/include;/opt/cray/pe/hdf5-parallel/1.10.1.1/GNU/5.1/include
-- Found PETSC
-- DEAL_II_WITH_PETSC successfully set up with external dependencies.
The reason is that these libraries also have different names. For example, 
the GNU compiled metis-64 is named 

[deal.II] Re: deal.II installation on cray XC50 giving MPI_VERSION=0.0

2020-02-06 Thread vachan potluri

>
>  If you know to which standard the MPI installation is conforming, you 
> could try to set it via
> cmake -DMPI_VERSION=... 
> yourself.

The MPI version is 3.1. But will this be of use? After all, the include 
paths, linker flags and library variables will still be blank.


> But separately, we try to obtain the MPI version from the file mpi.h via
> the following cmake code in cmake/modules/FindMPI.cmake

I tried this. The mpi.h file is present in 
/opt/cray/pe/mpt/default/gni/mpich-gnu/5.1/include. So I modified my cmake 
invocation as follows.
cmake -DCMAKE_INSTALL_PREFIX=~/bin/dealii-9.1.1 \
    -DPREFIX_PATH=/opt/cray/pe \
    -DWITH_MPI=ON \
    -DMPI_DIR=/opt/cray/pe/mpt/default/gni/mpich-gnu/5.1 \
    -DMPI_CXX_INCLUDE_PATH=/opt/cray/pe/mpt/default/gni/mpich-gnu/5.1/include \
    -DCMAKE_CXX_COMPILER=/opt/cray/pe/craype/2.5.13/bin/CC \
    -DCMAKE_C_COMPILER=/opt/cray/pe/craype/2.5.13/bin/cc \
    -DCMAKE_Fortran_COMPILER=/opt/cray/pe/craype/2.5.13/bin/ftn \
    -DWITH_LAPACK=OFF -DLAPACK_DIR=/opt/cray/pe/libsci/17.12.1/GNU/6.1/x86_64 \
    -DWITH_PETSC=OFF -DPETSC_DIR=$PETSC_DIR -DPETSC_ARCH=$PETSC_ARCH \
    -DWITH_P4EST=OFF -DP4EST_DIR=~/bin/p4est-2.2 \
    ~/source/dealii-9.1.1

The output still shows this message:
-- Include 
/home/ComptGasDynLab/vachanpotluri/source/dealii-9.1.1/cmake/configure/configure_1_mpi.cmake
-- MPI_MPI_H not found! Call:
-- FIND_FILE(MPI_MPI_H NAMES mpi.h HINTS)
--   MPI_VERSION: 0.0
--   MPI_LIBRARIES: 
--   MPI_INCLUDE_DIRS: 
--   MPI_USER_INCLUDE_DIRS: 
--   MPI_CXX_FLAGS: 
--   MPI_LINKER_FLAGS: 
-- Found MPI
-- DEAL_II_WITH_MPI successfully set up with external dependencies.

and the detailed log still shows blanks:
#DEAL_II_WITH_MPI set up with external dependencies
#MPI_VERSION = 0.0
#MPI_DIR = /opt/cray/pe/mpt/default/gni/mpich-gnu/5.1
#MPI_C_COMPILER = /opt/cray/pe/craype/2.5.13/bin/cc
#MPI_CXX_COMPILER = /opt/cray/pe/craype/2.5.13/bin/CC
#MPI_Fortran_COMPILER = /opt/cray/pe/craype/2.5.13/bin/ftn
#MPI_CXX_FLAGS = 
#MPI_LINKER_FLAGS = 
#MPI_INCLUDE_DIRS = 
#MPI_USER_INCLUDE_DIRS = 
#MPI_LIBRARIES = 

Even after giving the MPI_CXX_INCLUDE_PATH hint, why is cmake not able to 
detect the version? Am I missing something?



[deal.II] deal.II installation on cray XC50 giving MPI_VERSION=0.0

2020-02-05 Thread vachan potluri
Hello,

I am trying to install deal.II on a cray XC50 supercomputer.

cmake -DCMAKE_INSTALL_PREFIX=~/bin/dealii-9.1.1 \
-DPREFIX_PATH=/opt/cray/pe \
-DCMAKE_CXX_COMPILER=/opt/cray/pe/craype/2.5.13/bin/CC \
-DWITH_MPI=ON \
-DWITH_PETSC=OFF -DPETSC_DIR=$PETSC_DIR -DPETSC_ARCH=$PETSC_ARCH \
-DWITH_P4EST=ON -DP4EST_DIR=~/bin/p4est-2.2 \
~/source/dealii-9.1.1

I have attached detailed.log and summary.log. Although the configuring 
exits without errors, I can see in detailed.log that MPI_VERSION was not 
detected correctly. The compilers were correctly detected. All other 
variables are just blanks. The relevant snippet of detailed.log is as 
follows:

#DEAL_II_WITH_MPI set up with external dependencies
#MPI_VERSION = 0.0
#MPI_C_COMPILER = /opt/cray/pe/craype/2.5.13/bin/cc
#MPI_CXX_COMPILER = /opt/cray/pe/craype/2.5.13/bin/CC
#MPI_Fortran_COMPILER = /opt/cray/pe/craype/2.5.13/bin/ftn
#MPI_CXX_FLAGS = 
#MPI_LINKER_FLAGS = 
#MPI_INCLUDE_DIRS = 
#MPI_USER_INCLUDE_DIRS = 
#MPI_LIBRARIES = 

Is this an issue? How can I fix it? I had loaded the cray-mpich module before
invoking cmake and switched from PrgEnv-cray to PrgEnv-gnu.

###
#
#  deal.II configuration:
#CMAKE_BUILD_TYPE:   DebugRelease
#BUILD_SHARED_LIBS:  ON
#CMAKE_INSTALL_PREFIX:   
/home/ComptGasDynLab/vachanpotluri/bin/dealii-9.1.1
#CMAKE_SOURCE_DIR:   
/home/ComptGasDynLab/vachanpotluri/source/dealii-9.1.1
#(version 9.1.1)
#CMAKE_BINARY_DIR:   
/home/ComptGasDynLab/vachanpotluri/build/dealii-9.1.1
#CMAKE_CXX_COMPILER: GNU 7.2.0 on platform Linux x86_64
#/opt/cray/pe/craype/2.5.13/bin/CC
#CMAKE_C_COMPILER:   /opt/cray/pe/craype/2.5.13/bin/cc
#CMAKE_Fortran_COMPILER: /opt/cray/pe/craype/2.5.13/bin/ftn
#CMAKE_GENERATOR:Unix Makefiles
#
#  Base configuration (prior to feature configuration):
#DEAL_II_CXX_FLAGS:-pedantic -fPIC -Wall -Wextra 
-Woverloaded-virtual -Wpointer-arith -Wsign-compare -Wsuggest-override -Wswitch 
-Wsynth -Wwrite-strings -Wno-placement-new -Wno-deprecated-declarations 
-Wno-literal-suffix -Wno-psabi -fopenmp-simd -std=c++17
#DEAL_II_CXX_FLAGS_RELEASE:-O2 -funroll-loops -funroll-all-loops 
-fstrict-aliasing -Wno-unused-local-typedefs
#DEAL_II_CXX_FLAGS_DEBUG:  -O0 -ggdb -Wa,--compress-debug-sections
#DEAL_II_LINKER_FLAGS: -Wl,--as-needed -rdynamic -fuse-ld=gold
#DEAL_II_LINKER_FLAGS_RELEASE: 
#DEAL_II_LINKER_FLAGS_DEBUG:   -ggdb
#DEAL_II_DEFINITIONS:  
#DEAL_II_DEFINITIONS_RELEASE:  
#DEAL_II_DEFINITIONS_DEBUG:DEBUG
#DEAL_II_USER_DEFINITIONS: 
#DEAL_II_USER_DEFINITIONS_REL: 
#DEAL_II_USER_DEFINITIONS_DEB: DEBUG
#DEAL_II_INCLUDE_DIRS  
#DEAL_II_USER_INCLUDE_DIRS:
#DEAL_II_BUNDLED_INCLUDE_DIRS: 
#DEAL_II_LIBRARIES:
#DEAL_II_LIBRARIES_RELEASE:
#DEAL_II_LIBRARIES_DEBUG:  
#DEAL_II_COMPILER_VECTORIZATION_LEVEL: 0
#
#  Configured Features (DEAL_II_ALLOW_BUNDLED = ON, DEAL_II_ALLOW_AUTODETECTION 
= ON):
#  ( DEAL_II_WITH_64BIT_INDICES = OFF )
#  ( DEAL_II_WITH_ADOLC = OFF )
#  ( DEAL_II_WITH_ARPACK = OFF )
#  ( DEAL_II_WITH_ASSIMP = OFF )
#DEAL_II_WITH_BOOST set up with bundled packages
#BOOST_CXX_FLAGS = -Wno-unused-local-typedefs
#BOOST_DEFINITIONS = BOOST_NO_AUTO_PTR
#BOOST_USER_DEFINITIONS = BOOST_NO_AUTO_PTR
#BOOST_BUNDLED_INCLUDE_DIRS = 
/home/ComptGasDynLab/vachanpotluri/source/dealii-9.1.1/bundled/boost-1.62.0/include
#BOOST_LIBRARIES = rt
#DEAL_II_WITH_COMPLEX_VALUES = ON
#  ( DEAL_II_WITH_CUDA = OFF )
#DEAL_II_WITH_CXX14 = ON
#DEAL_II_WITH_CXX17 = ON
#  ( DEAL_II_WITH_GINKGO = OFF )
#  ( DEAL_II_WITH_GMSH = OFF )
#  ( DEAL_II_WITH_GSL = OFF )
#  ( DEAL_II_WITH_HDF5 = OFF )
#  ( DEAL_II_WITH_LAPACK = OFF )
#  ( DEAL_II_WITH_METIS = OFF )
#DEAL_II_WITH_MPI set up with external dependencies
#MPI_VERSION = 0.0
#MPI_C_COMPILER = /opt/cray/pe/craype/2.5.13/bin/cc
#MPI_CXX_COMPILER = /opt/cray/pe/craype/2.5.13/bin/CC

[deal.II] Re: Is a call to compress() required after scale()?

2019-11-24 Thread vachan potluri
I was able to reproduce this behaviour with the following code (also 
attached); the CMakeLists file is also attached. The code hangs after 
printing 'Scaled variable 0'.

Let me mention that I have used a different algorithm to obtain the locally
relevant dofs, rather than directly using the function from DoFTools. My
algorithm is as follows:

Loop over owned interior cells
    Loop over faces
        If the neighbor cell is ghost:
            Add all the neighbor's dofs on this face to relevant dofs

With this algorithm, the relevant dofs are not all of a ghost cell's dofs, but
only those lying on a subdomain interface. This is implemented in lines 49-69
of the file main.cc. I verified that this algorithm works correctly for a
small mesh, so I don't presume this is wrong.

#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/geometry_info.h>
#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/base/utilities.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>

#include <deal.II/fe/fe_dgq.h>
#include <deal.II/fe/fe_face.h>
#include <deal.II/fe/mapping_q1.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/lac/generic_linear_algebra.h>

#include <array>
#include <iostream>
#include <vector>

/**
 * See README file for details
 */
using namespace dealii;
namespace LA
{
using namespace ::LinearAlgebraPETSc;
}

int main(int argc, char **argv)
{
    Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

    parallel::distributed::Triangulation<2> triang(MPI_COMM_WORLD);
    GridGenerator::hyper_cube(triang);
    triang.refine_global(5);
    const MappingQ1<2> mapping;
    const unsigned int degree = 1;
    FE_DGQ<2> fe(degree);
    FE_FaceQ<2> fe_face(degree);
    // first cell-local dof on each face, and the increment between
    // consecutive dofs on a face, for the lexicographic ordering of FE_DGQ
    const std::array<unsigned int, GeometryInfo<2>::faces_per_cell>
        face_first_dof{0, degree, 0, (degree+1)*degree};
    const std::array<unsigned int, GeometryInfo<2>::faces_per_cell>
        face_dof_increment{degree+1, degree+1, 1, 1};

    DoFHandler<2> dof_handler(triang);
    dof_handler.distribute_dofs(fe);
    IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
    IndexSet locally_relevant_dofs;

    locally_relevant_dofs = locally_owned_dofs; // initialise with owned dofs
    unsigned int face_id, face_id_neighbor, i; // face ids wrt owner and neighbor
    std::vector<types::global_dof_index> dof_ids_neighbor(fe.dofs_per_cell);
    for(auto &cell : dof_handler.active_cell_iterators()){
        if(!(cell->is_locally_owned())) continue;
        for(face_id=0; face_id<GeometryInfo<2>::faces_per_cell; face_id++){
            if(cell->face(face_id)->at_boundary()) continue;
            if(cell->neighbor(face_id)->is_ghost()){
                // current face lies at subdomain interface
                // add dofs on this face (wrt neighbor) to locally relevant dofs
                cell->neighbor(face_id)->get_dof_indices(dof_ids_neighbor);

                face_id_neighbor = cell->neighbor_of_neighbor(face_id);
                for(i=0; i<fe_face.dofs_per_face; i++){
                    locally_relevant_dofs.add_index(dof_ids_neighbor[
                        face_first_dof[face_id_neighbor] +
                        i*face_dof_increment[face_id_neighbor]]);
                }
            }
        } // loop over faces
    } // loop over owned cells

    ConditionalOStream pcout(std::cout,
        Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0);
    std::array<LA::MPI::Vector, 4> vecs;
    std::array<LA::MPI::Vector, 4> gh_vecs;
    LA::MPI::Vector scaler;
    for(unsigned int var=0; var<4; var++){
        vecs[var].reinit(locally_owned_dofs, MPI_COMM_WORLD);
        gh_vecs[var].reinit(locally_owned_dofs,
            locally_relevant_dofs, MPI_COMM_WORLD);
    }
    scaler.reinit(locally_owned_dofs, MPI_COMM_WORLD);

    for(types::global_dof_index i : locally_owned_dofs){
        scaler[i] = 1.0*i;
    }
    std::vector<types::global_dof_index> dof_ids(fe.dofs_per_cell);

    // setting ops
    for(auto &cell : dof_handler.active_cell_iterators()){
        if(!(cell->is_locally_owned())) continue;

        cell->get_dof_indices(dof_ids);

        pcout << "\tCell " << cell->index() << "\n";
        pcout << "\t\tSetting\n";
        for(unsigned int var=0; var<4; var++){
            for(types::global_dof_index i : dof_ids){
                vecs[var][i] = 1.0*i;
            }
            vecs[var].compress(VectorOperation::insert);
        }

        // addition ops
        pcout << "\t\tAdding\n";
        for(unsigned int var=0; var<4; var++){
            for(types::global_dof_index i : dof_ids){
                vecs[var][i] += 1.0*i;
            }
            vecs[var].compress(VectorOperation::add);
        }

        // more ops
        pcout << "\t\tMore additions\n";
        for(unsigned int var=0; var<4; var++){
            for(types::global_dof_index i : dof_ids){
                vecs[var][i] += -5.0*i;
            }
            vecs[var].compress(VectorOperation::add);
        }
    } // loop over owned cells
    // scaling and communicating
    pcout << "Scaling and communicating\n";
    for(unsigned int var=0; var<4; var++){
        vecs[var].scale(scaler);
        pcout << "Scaled variable " << var << "\n";
        gh_vecs[var] = vecs[var];
        pcout << "Communicated variable " << var << "\n";
    }
    pcout << "Completed all\n";

    return 0;
}


[deal.II] Is a call to compress() required after scale()?

2019-11-24 Thread vachan potluri
Hello,

I am facing a weird problem. At one point in the code, I have
PETScWrappers::VectorBase::scale() called for a few distributed vectors.
Subsequently, I have assignment operators on ghosted versions of these
vectors for parallel communication. When I launch the code with 2 or 4
processes, it works fine. But with 3 processes, the code halts after the
scaling operations and before the first assignment. I am limited to 4
processes.

   1. Is a compress() required after scale()? With what operation as 
   argument?
   2. Why does this behaviour occur only when 3 processes are launched? Has 
   anyone experienced this before?

Thanks



[deal.II] Strategy for efficiently calculating face flux in time evolution problems

2019-11-21 Thread vachan potluri
Hello all,

This question is a general one, may not be specific just to deal.II. I am 
writing a code to solve the compressible Navier-Stokes equations. Every 
time step requires calculation of numerical flux on every face. There can 
be three cases in a distributed triangulation. I am not considering a 
dynamic mesh.

   1. Internal face, not shared between domains. Here, the flux is 
   calculated and RHS of both the concerned cells is updated.
   2. Internal face, shared between domains. Flux is calculated and only 
   the RHS of owned cell is updated.
   3. Boundary face. Flux is calculated (boundary conditions come to play) 
   and RHS of cell is updated.

To identify these cases, if conditions can be used. But since every face can
only be of one of the three types, it is redundant to check its type at every
time step. *Does this significantly affect the performance?* If yes, then the
following question is relevant.

Is there any better way? I once had the following idea.

   1. Assign a user id to every face:
      1. If the face is a boundary face (type 3), user id = boundary id
      2. For a type 1 face, user id = maximum boundary id + 1
      3. For a type 2 face, user id = maximum boundary id + 2
   2. Construct an array of lambdas, one for each face user id. This probably
      requires capturing everything by reference in the lambdas.
   3. For every face, call the lambda corresponding to its user id to add the
      contribution of the numerical flux to the RHS (see the sketch below).
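
To make this concrete, here is a minimal sketch of steps 1-3 (my own rough
code, not a tested implementation; the only deal.II pieces assumed are
TriaAccessor::set_user_index() / user_index(), and the "flux routines" are
placeholders that merely count faces):

#include <deal.II/base/geometry_info.h>
#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>

#include <functional>
#include <iostream>
#include <vector>

using namespace dealii;

int main(int argc, char **argv)
{
    Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);
    parallel::distributed::Triangulation<2> triang(MPI_COMM_WORLD);
    GridGenerator::hyper_cube(triang);
    triang.refine_global(3);

    const unsigned int max_bid = 0;                // hyper_cube: boundary id 0 only
    const unsigned int id_internal  = max_bid + 1; // type 1 face
    const unsigned int id_interface = max_bid + 2; // type 2 face

    // step 1: classify every face once and store the type as a user index
    for(auto &cell : triang.active_cell_iterators()){
        if(!cell->is_locally_owned()) continue;
        for(unsigned int f=0; f<GeometryInfo<2>::faces_per_cell; f++){
            if(cell->face(f)->at_boundary())
                cell->face(f)->set_user_index(cell->face(f)->boundary_id());
            else if(cell->neighbor(f)->is_ghost())
                cell->face(f)->set_user_index(id_interface);
            else
                cell->face(f)->set_user_index(id_internal);
        }
    }

    // step 2: one routine per face user id (here: just count the faces)
    std::vector<unsigned int> count(id_interface+1, 0);
    std::vector<std::function<void(unsigned int)>> flux_routines(id_interface+1);
    for(unsigned int t=0; t<=id_interface; t++)
        flux_routines[t] = [&count](unsigned int type){ count[type]++; };

    // step 3: in the time loop, one table lookup per face, no if conditions
    for(auto &cell : triang.active_cell_iterators()){
        if(!cell->is_locally_owned()) continue;
        for(unsigned int f=0; f<GeometryInfo<2>::faces_per_cell; f++)
            flux_routines[cell->face(f)->user_index()](cell->face(f)->user_index());
    }
    std::cout << "boundary/internal/interface face visits: " << count[0]
              << "/" << count[id_internal] << "/" << count[id_interface] << "\n";
}

The classification loop runs once after mesh generation (and would have to be
repeated after refinement); the time loop then does a single lookup per face.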

I would like to know opinions about the feasibility and efficiency of this
method, and also about any other approaches that somebody might have used or
be aware of.

Thank you
Vachan



[deal.II] Re: Query regarding DoFTools::dof_indices_with_subdomain_association()

2019-10-09 Thread vachan potluri
Daniel,

DoFTools::dof_indices_with_subdomain_association returns the degrees of 
> freedom of all the cells that have the given subdomain id. For 
> parallel::distributed::Triangulation objects the subdomain id is the the 
> MPI rank and this is the only valid input.
> In this case, the function simply returns all the degrees of freedom on 
> locally owned cells. If a finite element defines degrees of freedoms that 
> are associated with a face, these degrees of freedom are part of all the 
> IndexSet objects in case the face is located on the interface between two 
> subdomains. For a parallel::distributed::Triangulation object, these are 
> the degrees of freedom on the faces to ghosted cells


Thanks for the reply. Indeed the indices given by 
DoFHandler::locally_owned_dofs() and 
DoFTools::dof_indices_with_subdomain_association() are identical for FE_DGQ 
element. I have checked this in my code.

You could still ask for the unit support points and figure out which of 
> them are geometrically on a given face.


This is definitely possible, but would probably be inefficient if I coded it
myself. Isn't there a function in DoFTools which does this? Because not
marking all dofs of ghost cells as relevant would give significant savings in
communication, I was wondering whether DoFTools already has an implementation
for this.
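
For reference, this is how I understand the unit-support-point suggestion; a
small sketch (my own code, not from DoFTools) that lists, for each face of
the reference cell, the local dofs of FE_DGQ whose unit support points lie
geometrically on that face. It assumes deal.II's face ordering: face 0 at
x=0, face 1 at x=1, face 2 at y=0, face 3 at y=1.

#include <deal.II/base/geometry_info.h>
#include <deal.II/base/point.h>
#include <deal.II/fe/fe_dgq.h>

#include <cmath>
#include <iostream>
#include <vector>

using namespace dealii;

int main()
{
    const FE_DGQ<2> fe(2);
    const std::vector<Point<2>> &pts = fe.get_unit_support_points();
    for(unsigned int f=0; f<GeometryInfo<2>::faces_per_cell; f++){
        const unsigned int coord = f/2;              // coordinate fixed on face f
        const double value = (f%2 == 0) ? 0.0 : 1.0; // its value on face f
        std::cout << "face " << f << ":";
        for(unsigned int i=0; i<pts.size(); i++)
            if(std::abs(pts[i][coord] - value) < 1e-12)
                std::cout << " " << i;               // local dof i lies on face f
        std::cout << "\n";
    }
}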

However, note that as soon as gradients are involved all the degrees of 
> freedom contribute to values on faces.


I don't have much experience in parallel programming, but I think we can
circumvent this by computing gradients at all dof locations in a subdomain
and again communicating only the data of dofs lying on subdomain interfaces.
I might need some correction on this :).



[deal.II] Query regarding DoFTools::dof_indices_with_subdomain_association()

2019-10-09 Thread vachan potluri
Hello all,

I am writing an MPI-parallel DG code for the linear advection equation, just
to understand the parallel programming paradigm of deal.II.

To evaluate the numerical flux at a face connecting two subdomains, I only
require the solution (from a different MPI process) at dofs which lie on
the face; the dofs which lie inside the neighboring cell (of the neighbor
subdomain) are not required. Consequently, I think the ghost indices for my
solution vector would be those provided by the function
DoFTools::dof_indices_with_subdomain_association(), rather than
DoFTools::extract_locally_relevant_dofs(). However, the documentation of the
former function states

Note that this function is of questionable use for DoFHandler objects built 
> on parallel::distributed::Triangulation since in that case ownership of 
> individual degrees of freedom by MPI processes is controlled by the DoF 
> handler object, not based on some geometric algorithm in conjunction with 
> subdomain id. In particular, the degrees of freedom identified by the 
> functions in this namespace as associated with a subdomain are not the same 
> the DoFHandler class identifies as those it owns.


What is the meaning of the second line? I suppose this has something to do
with the FiniteElement type of the dof handler. Does this mean that,
irrespective of whether the FiniteElement has dofs on cell faces or not,
this function approves a dof if it geometrically resides on a face? This
behaviour is actually what I want: although the FE_DGQ element doesn't
attach any dofs to cell faces, I want to get the indices of the dofs
(geometrically) lying on the face.

Thanks!
Vachan



[deal.II] Re: Installation error, unable to configure with p4est

2019-10-05 Thread vachan potluri

>
> Yes, you can likely ignore the error. If you really want to run this 
> quicktest, you can change
> make_quicktest("p4est" ${_mybuild} 10)
> to
> make_quicktest("p4est" ${_mybuild} 4)
> in tests/quick_tests/CMakeLists.txt.


This works. Thanks. 



[deal.II] Re: Installation error, unable to configure with p4est

2019-10-04 Thread vachan potluri
Okay, I found the error. Some time back, I had changed
include/deal.II/base/config.h.in
to
include/deal.II/base/config.h
(removed the .in). I don't remember exactly, but the reason I did this was
that some error popped up while compiling one of the initial tutorials. This
was my mistake.

The complete error message of cmake (on the terminal) mentions that the
config.h.in file is missing. So I restored a copy, and the configuration and
installation went fine.

However,
make test
for p4est failed with the following message
There are not enough slots available in the system to satisfy the 10 slots
that were requested by the application:
  ./p4est.debug

Either request fewer slots for your application, or make more slots 
available
for use.
It is true that my PC has only 4 slots (8 with hyper threading). So can I 
ignore this error?



[deal.II] Re: Installation error, unable to configure with p4est

2019-10-04 Thread vachan potluri
Sorry for the incomplete information; cmake exits with the following message.

###
#
#  deal.II configuration:
#CMAKE_BUILD_TYPE:   DebugRelease
#BUILD_SHARED_LIBS:  ON
#CMAKE_INSTALL_PREFIX:   /home/vachan/bin/dealii
#CMAKE_SOURCE_DIR:   /home/vachan/dealii-9.1.1
#(version 9.1.1)
#CMAKE_BINARY_DIR:   /home/vachan/build/dealii
#CMAKE_CXX_COMPILER: GNU 7.4.0 on platform Linux x86_64
#/usr/local/bin/mpicxx
#
#  Configured Features (DEAL_II_ALLOW_BUNDLED = ON, 
DEAL_II_ALLOW_AUTODETECTION = ON):
#  ( DEAL_II_WITH_64BIT_INDICES = OFF )
#  ( DEAL_II_WITH_ADOLC = OFF )
#  ( DEAL_II_WITH_ARPACK = OFF )
#  ( DEAL_II_WITH_ASSIMP = OFF )
#DEAL_II_WITH_BOOST set up with bundled packages
#  ( DEAL_II_WITH_COMPLEX_VALUES = OFF )
#  ( DEAL_II_WITH_CUDA = OFF )
#DEAL_II_WITH_CXX14 = ON
#DEAL_II_WITH_CXX17 = ON
#  ( DEAL_II_WITH_GINKGO = OFF )
#  ( DEAL_II_WITH_GMSH = OFF )
#  ( DEAL_II_WITH_GSL = OFF )
#  ( DEAL_II_WITH_HDF5 = OFF )
#  ( DEAL_II_WITH_LAPACK = OFF )
#  ( DEAL_II_WITH_METIS = OFF )
#DEAL_II_WITH_MPI set up with external dependencies
#DEAL_II_WITH_MUPARSER set up with bundled packages
#  ( DEAL_II_WITH_NANOFLANN = OFF )
#  ( DEAL_II_WITH_NETCDF = OFF )
#  ( DEAL_II_WITH_OPENCASCADE = OFF )
#DEAL_II_WITH_P4EST set up with external dependencies
#DEAL_II_WITH_PETSC set up with external dependencies
#  ( DEAL_II_WITH_SCALAPACK = OFF )
#  ( DEAL_II_WITH_SLEPC = OFF )
#  ( DEAL_II_WITH_SUNDIALS = OFF )
#  ( DEAL_II_WITH_SYMENGINE = OFF )
#DEAL_II_WITH_THREADS set up with bundled packages
#  ( DEAL_II_WITH_TRILINOS = OFF )
#  ( DEAL_II_WITH_UMFPACK = OFF )
#DEAL_II_WITH_ZLIB set up with external dependencies
#
#  Component configuration:
#  ( DEAL_II_COMPONENT_DOCUMENTATION = OFF )
#DEAL_II_COMPONENT_EXAMPLES
#  ( DEAL_II_COMPONENT_PACKAGE = OFF )
#  ( DEAL_II_COMPONENT_PYTHON_BINDINGS = OFF )
#
#  Detailed information (compiler flags, feature configuration) can be 
found in detailed.log
#
#  Run  $ make info  to print a help message with a list of top level 
targets
#
###
-- Configuring incomplete, errors occurred!
See also "/home/vachan/build/dealii/CMakeFiles/CMakeOutput.log".
See also "/home/vachan/build/dealii/CMakeFiles/CMakeError.log".

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/74b5c4d1-7098-417c-bca2-260b4ca48a10%40googlegroups.com.


[deal.II] Re: Calculation of local flux matrix in DG: looping over a cell's faces

2019-09-24 Thread vachan potluri
Found the face ordering 
here https://www.dealii.org/current/doxygen/deal.II/structGeometryInfo.html



[deal.II] Calculation of local flux matrix in DG: looping over a cell's faces

2019-09-23 Thread vachan potluri
Hello all,

I want to calculate the local flux matrix of a cell (in 2D). The algorithm I
thought of is the following:

[image: CodeCogsEqn.png]

Within loop over cells:
    start loop over faces:
        re-initialize fe_face_values for current face
        calculate the 1D mass matrix
        add contribution of this face to cell's flux matrix (how?)

The 1D mass matrix on a face can be computed using fe_face_values. To get the
mapping between a face's mass matrix and the cell's flux matrix, we need to
know which DoFs of a cell lie on which face. The FE_DGQ class documents the
ordering of DoFs. What is the ordering of the faces w.r.t. a cell? That is,
which DoFs of a cell do each of the faces 0, 1, 2 and 3 hold?
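
While waiting for an answer, here is my working assumption as a small sketch
(my own deduction, please verify): with the lexicographic dof numbering of
FE_DGQ and the face ordering face 0: x=0, face 1: x=1, face 2: y=0,
face 3: y=1, the dofs on each face follow a "first dof plus constant
increment" pattern.

#include <array>
#include <iostream>

int main()
{
    const unsigned int N = 2; // polynomial degree
    // first cell-local dof on each face, and the increment between
    // consecutive dofs on that face
    const std::array<unsigned int, 4> face_first_dof{0, N, 0, (N+1)*N};
    const std::array<unsigned int, 4> face_dof_increment{N+1, N+1, 1, 1};

    for(unsigned int f=0; f<4; f++){
        std::cout << "face " << f << ":";
        for(unsigned int i=0; i<=N; i++)
            std::cout << " " << face_first_dof[f] + i*face_dof_increment[f];
        std::cout << "\n";
    }
}

For N=2 this prints dofs 0,3,6 / 2,5,8 / 0,1,2 / 6,7,8 for faces 0-3.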

Thanks
Vachan



[deal.II] Re: DG explicit time integration for linear advection equation with MeshWorker (suggestions)

2019-09-18 Thread vachan potluri


> step-33 does compute interior face integrals twice.
>
> One way I handle this is to attach a cell user index

Thanks Praveen, this is similar to the owner/neighbour concept of OpenFOAM :).


Doug,
Many thanks for the detailed explanations and your code :).



[deal.II] Re: DG explicit time integration for linear advection equation with MeshWorker (suggestions)

2019-09-17 Thread vachan potluri
Doug and Praveen,

Thanks for your answers. I had a look at step-33. As far as I understand,
although the looping through cells is not done through MeshWorker, the
assembly is still global! So, for a non-Cartesian mesh, I think you are
suggesting using such a loop over all cells to calculate local matrices. Will
these cell-local matrices be stored as an array of matrices in the
conservation law class?

I now have one more question. In the assemble_face_term function of step-33,
the normal numerical flux is calculated. This function is called for every
cell-neighbor pair from the assemble_system function. This means, if I am not
wrong, that at every interior face quadrature point the numerical flux is
calculated twice. This might be very costly. Is there a way to avoid this?
One could probably force only one cell to calculate the numerical flux at a
face, based on the face normal's orientation. But then how can we communicate
the calculated flux to the other cell? Or, even better, is it possible to
loop over faces? We could then add contributions to both the cells sharing a
face without double computation, as in the sketch below.
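
To illustrate the face-loop idea, here is a rough sketch (my own, serial, on
a uniformly refined mesh without hanging nodes): each interior face is
handled exactly once by letting only the cell with the smaller active cell
index do the computation, which could then scatter the flux into the RHS of
both cells.

#include <deal.II/base/geometry_info.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria.h>

#include <iostream>

using namespace dealii;

int main()
{
    Triangulation<2> triang;
    GridGenerator::hyper_cube(triang);
    triang.refine_global(3); // 8x8 mesh, no hanging nodes

    unsigned int n_flux_evaluations = 0;
    for(const auto &cell : triang.active_cell_iterators())
        for(unsigned int f=0; f<GeometryInfo<2>::faces_per_cell; f++){
            if(cell->face(f)->at_boundary()) continue; // boundary flux done separately
            // only the cell with the smaller index computes the flux; it
            // would add the result (with opposite signs) to both cells' RHS
            if(cell->active_cell_index() < cell->neighbor(f)->active_cell_index())
                n_flux_evaluations++; // numerical flux would be computed here
        }
    // an 8x8 mesh has 2*8*7 = 112 interior faces, each visited once
    std::cout << "interior flux evaluations: " << n_flux_evaluations << "\n";
}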

Thanks again!



[deal.II] DG explicit time integration for linear advection equation with MeshWorker (suggestions)

2019-09-17 Thread vachan potluri
Hello all,


I am a beginner in deal.II. I want to solve a linear, transient advection
equation explicitly in two dimensions using DG. The resulting discrete
equation will have a mass matrix as the system matrix and, as the RHS, a sum
of terms which depend on the previous solution (multiplied by the mass,
differentiation, flux and boundary matrices).

[image: linearAdvection2D.png]

Instead of using MeshWorker::loop for every single time step, I think the 
following approach would be better. I am using a ghost cell approach to 
specify the boundary condition: the boundary condition can be specified by 
an appropriately calculated normal numerical flux.


   1. Before any time steps, use one MeshWorker::loop for each of the four
      matrices: mass, differentiation, flux and boundary (a sketch of this
      precomputation follows the questions below).
   2. During each update:
      1. Again use MeshWorker::loop, but this time only to calculate the
         normal numerical flux.
      2. Use the normal numerical flux and the previous solution to obtain
         the RHS using appropriate matrix-vector products.
      3. Solve the system.
I have a few questions regarding this approach.

   1. Is it feasible?
   2. Can it give a significant improvement in performance over the case when
      assembly is done for every time step?
   3. (Assuming the answers to the above questions are positive) For higher
      orders, the flux and boundary matrices will be very sparse. The normal
      numerical flux (which will be a vector) will also be sparse. Can the
      matrix-vector products involving these combinations be optimised by
      using appropriate sparsity patterns? Can a sparsity pattern be specified
      for a vector too?
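
For concreteness, here is a minimal sketch of the precomputation in point 1,
restricted to the mass matrix (my own serial code; the differentiation, flux
and boundary matrices would be stored similarly): the cell-local mass
matrices are assembled and inverted once before time stepping, so each time
step only needs matrix-vector products.

#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria.h>
#include <deal.II/lac/full_matrix.h>

#include <vector>

using namespace dealii;

int main()
{
    Triangulation<2> triang;
    GridGenerator::hyper_cube(triang);
    triang.refine_global(4);
    FE_DGQ<2> fe(1);
    DoFHandler<2> dof_handler(triang);
    dof_handler.distribute_dofs(fe);

    QGauss<2> quad(fe.degree+1);
    FEValues<2> fe_values(fe, quad, update_values | update_JxW_values);
    const unsigned int n = fe.dofs_per_cell;

    // one inverse mass matrix per cell, computed once before time stepping
    std::vector<FullMatrix<double>> inv_mass(triang.n_active_cells(),
                                             FullMatrix<double>(n, n));
    FullMatrix<double> mass(n, n);
    for(const auto &cell : dof_handler.active_cell_iterators()){
        fe_values.reinit(cell);
        mass = 0;
        for(unsigned int q=0; q<quad.size(); q++)
            for(unsigned int i=0; i<n; i++)
                for(unsigned int j=0; j<n; j++)
                    mass(i,j) += fe_values.shape_value(i,q) *
                                 fe_values.shape_value(j,q) * fe_values.JxW(q);
        inv_mass[cell->active_cell_index()].invert(mass);
    }
    // in every time step, only the RHS is assembled and
    // inv_mass[cell->active_cell_index()].vmult(...) is applied per cell
}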



[deal.II] Re: Member description of MeshWorker::DoFInfo.matrix

2019-09-16 Thread vachan potluri
Sorry for the late reply. Thank you Prof. Bangerth.

On Wednesday, September 4, 2019 at 9:19:42 PM UTC+5:30, vachan potluri 
wrote:
>
> Hello,
>
> I am reading step-12 of the tutorial. The following lines are from local 
> integrator for interior face (dinfo is an alias to MeshWorker::DoFInfo).
>
> FullMatrix<double> &u1_v1_matrix = dinfo1.matrix(0, false).matrix;
> FullMatrix<double> &u2_v1_matrix = dinfo1.matrix(0, true).matrix;
> FullMatrix<double> &u1_v2_matrix = dinfo2.matrix(0, true).matrix;
> FullMatrix<double> &u2_v2_matrix = dinfo2.matrix(0, false).matrix;
>
> The matrix() function is documented here: 
> https://www.dealii.org/current/doxygen/deal.II/classMeshWorker_1_1LocalResults.html#afdae422206740b2f5a14fd562c27e6ca
> .
>
> I couldn't understand what the arguments of this function signify. Can 
> anyone please clarify?
>
> Thank you
> Vachan
>



[deal.II] Member description of MeshWorker::DoFInfo.matrix

2019-09-04 Thread vachan potluri
Hello,

I am reading step-12 of the tutorial. The following lines are from local 
integrator for interior face (dinfo is an alias to MeshWorker::DoFInfo).

FullMatrix<double> &u1_v1_matrix = dinfo1.matrix(0, false).matrix;
FullMatrix<double> &u2_v1_matrix = dinfo1.matrix(0, true).matrix;
FullMatrix<double> &u1_v2_matrix = dinfo2.matrix(0, true).matrix;
FullMatrix<double> &u2_v2_matrix = dinfo2.matrix(0, false).matrix;

The matrix() function is documented here: 
https://www.dealii.org/current/doxygen/deal.II/classMeshWorker_1_1LocalResults.html#afdae422206740b2f5a14fd562c27e6ca.

I couldn't understand what the arguments of this function signify. Can 
anyone please clarify?

Thank you
Vachan
