[deal.II] The type of particle properties

2019-09-11 Thread Yidong ZHAO
Hello all,

From the descriptions of Particle(
https://www.dealii.org/current/doxygen/deal.II/classParticles_1_1Particle.html) 
and PropertyPool(
https://www.dealii.org/current/doxygen/deal.II/classParticles_1_1PropertyPool.html), 
I understand that the properties of a particle should be of ArrayView type. 

Is there any way I can use a self-defined class as a particle's properties?

Many thanks,
Yidong

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/142ed8f7-c5d8-44d7-b8f7-31e509478707%40googlegroups.com.


Re: [deal.II] Function parser for tensor function

2019-09-11 Thread Wolfgang Bangerth
On 9/6/19 1:36 AM, Konrad wrote:
> 
> I was wondering if there is a simple way to parse input (e.g., from a 
> parameter file) and use it similarly to the FunctionParser class (which 
> works for vector valued functions) just for a TensorFunction.

Nothing I know of already does that. But the FunctionParser class 
supports vector-valued functions, and it should not be very difficult to 
write a little wrapper that is derived from TensorFunction, takes a 
Function object as constructor argument, and simply interprets the 
vector components returned by the latter as the tensor components of a 
TensorFunction.
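
A minimal sketch of what such a wrapper could look like (untested; the class 
name TensorFunctionFromFunction and the row-major ordering of the dim*dim 
vector components are merely illustrative choices, not an existing deal.II 
class):

#include <deal.II/base/function.h>
#include <deal.II/base/tensor_function.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

template <int dim>
class TensorFunctionFromFunction : public TensorFunction<2, dim>
{
public:
  // 'vector_function' is assumed to have dim*dim components, e.g. a
  // FunctionParser<dim> filled from a parameter file.
  TensorFunctionFromFunction(const Function<dim> &vector_function)
    : vector_function(vector_function)
  {
    Assert(vector_function.n_components == dim * dim,
           ExcDimensionMismatch(vector_function.n_components, dim * dim));
  }

  virtual Tensor<2, dim> value(const Point<dim> &p) const override
  {
    // Evaluate all dim*dim components of the underlying function at p ...
    Vector<double> values(dim * dim);
    vector_function.vector_value(p, values);

    // ... and interpret component i*dim+j as the (i,j) entry of the tensor.
    Tensor<2, dim> result;
    for (unsigned int i = 0; i < dim; ++i)
      for (unsigned int j = 0; j < dim; ++j)
        result[i][j] = values[i * dim + j];
    return result;
  }

private:
  const Function<dim> &vector_function;
};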

Of course, you could also copy FunctionParser to TensorFunctionParser 
and make the relevant adjustments. That also should not be very difficult.

In either case, this would actually make for a nice addition to the library!

Best
  W.

-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/b662a871-d53b-e855-b9a9-8b957264bd47%40colostate.edu.


[deal.II] Re: tutorial step-36 failed with PETSc error number 76 while calling a SLEPc function

2019-09-11 Thread Pragalv Karki
Dear Prof. Wolfgang,
Thank you very much for your reply and insight. I will 
change the PETSc/SLEPc versions to 3.7.4/3.7.3 and see if that will solve 
the issue. 

On Wednesday, September 11, 2019 at 10:39:08 AM UTC-7, Pragalv Karki wrote:
>
> Dear deal.II community,
>I am new to deal.II and I had posted a question about 
> this same issue yesterday. Perhaps I did not ask it correctly. I have been 
> trying to simulate the wave equation on a particular kind of surface (I 
> have attached a picture of that surface and the mesh). I created the mesh 
> for that surface using gmsh. I was able to simulate the wave equation from 
> step-23 on that surface and I am currently working to change the initial 
> excitation, density, and tension. I also want to get the eigenfrequencies 
> for this surface and as a start, I just wanted to learn how deal.II solves 
> the eigenspectrum using PETSc and SLEPc from step-36. But when I try to run 
> that tutorial as is, it shows the error I have posted below. I thought the 
> problem was not having the PETSc and SLEPc libraries, so I went ahead and 
> installed them but that still does not resolve the issue. Please let me 
> know if you have run into this issue before and what you did to solve it. 
> Thank you!
>
> THE ERROR message:
>
> *[0]PETSC ERROR: - Error Message 
> --*
>
> [0]PETSC ERROR: Error in external library
>
> [0]PETSC ERROR: Error in LAPACK subroutine steqr: info=0
>
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
> trouble shooting.
>
> [0]PETSC ERROR: Petsc Release Version 3.10.0, Sep, 12, 2018 
>
> [0]PETSC ERROR: ./step-36 on a  named Pragalvs-MacBook-Air.local by 
> pragalvkarki Tue Sep 10 15:29:22 2019
>
>  
>
> (Lot of unnecessary stuff (I think ) in between these error messages)
>
>  
>
> An error occurred in line <182> of file 
> 
>  
> in function
>
> void dealii::SLEPcWrappers::SolverBase::solve(const unsigned int, 
> unsigned int *)
>
> The violated condition was: 
>
> ierr == 0
>
> Additional information: 
>
> An error with error number 76 occurred while calling a SLEPc 
> function 
>
>  
>
>
>
> Thank you for your time,
>
> Pragalv
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/1dea64c2-dae0-40d5-af00-4e172054df20%40googlegroups.com.


Re: [deal.II] step-21 and 43

2019-09-11 Thread Wolfgang Bangerth


> I read step-21 and 43 tutorials which are closer to my task.
> 
> I'd like to ask some guidelines on how to implement a water/oil with 
> polymer concentration in that model. We still have two phases, but in 
> the water phase there will be two components, aqueous and polymeric.
> 
> Is there any development in this direction?

Not that I know of. I assume that as far as the multiphase flow is 
concerned, both aqueous and polymeric behave the same, but within the 
water phase these two components may have different concentrations that 
affect the properties of the aqueous phase?

The way I would probably describe this is via another field that is 
interpreted as a volume or mass fraction *within the water phase* of the 
polymeric phase.

It would probably also be useful for you to take a look at the 
literature on the "black-oil equations". That's a 3-phase flow problem 
(oil, water, gas) but where some of the gas can be dissolved in the 
other phases. You might find interesting techniques for this problem 
that are also applicable to your problem.

Best
  W.

-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/82567208-646a-3128-22b6-4bf9286a584c%40colostate.edu.


Re: [deal.II] tutorial step-36 failed with PETSc error number 76 while calling a SLEPc function

2019-09-11 Thread Wolfgang Bangerth


Pragalv,
I've tried to run step-36 on my workstation and it works just fine. Are 
you running in parallel? You might also want to try a different version 
of PETSc/SLEPc -- I am using 3.7.4/3.7.3 for the two.

I also tried to find out what error 76 actually means. I *think* that it 
is this error defined in include/petscerror.h:
 #define PETSC_ERR_LIB 76   /* error in library called by PETSc */
This error code is, unfortunately, widely used. Here's the part of the 
error message that gives more information:

> [0]PETSC ERROR: Error in external library
> 
> [0]PETSC ERROR: Error in LAPACK subroutine steqr: info=0

This is not particularly useful. I've tried to find where that error 
message is generated in PETSc or SLEPc, but am unable to find the 
location -- maybe that was only added in versions between my 3.7.x and 
your 3.10.x. You might want to look into this -- but generally, all it 
means is that the LAPACK function steqr has returned an error, and you 
still have to find out why that happened...

I know that's not what you were looking for, but that's the best I can 
suggest for the moment. Since the program works for me, I can't really 
debug it. But it provides you with an avenue: Try a different machine and/or 
a different PETSc/SLEPc version and see whether that makes a difference 
for you!

Best
  W.


-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/0e58f5e8-619b-ef31-298a-c20d4207fad7%40colostate.edu.


Re: [deal.II] Question about (parallel) linear solver

2019-09-11 Thread Wolfgang Bangerth
On 9/11/19 11:03 AM, Konrad wrote:
> 
> That indeed depends on the geometry of the domain and it is not an 
> obvious result. Example: Think of a 3D hyper-shell with an inner and 
> outer radius. There is a one-dimensional space of non-trivial radial 
> vector fields (with respect to the origin) that have neither a 
> divergence nor a curl. Such fields are referred to as harmonic forms, and 
> the dimensionality of this space depends on the Betti numbers of the 
> domain (the buzzword is Hodge decomposition).

Ah, yes -- fun example! Thanks for the education!
W.

-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/f525b10d-130b-f38a-6de8-a8a87853557f%40colostate.edu.


[deal.II] Re: On using project_boundary_values_div_conforming for (RT-DG) in parallel case

2019-09-11 Thread Konrad
Hi Charlie, 

I can compile and run it. Could there be a problem with your deal.ii setup? 
I am using v9.1.1. 

Best,
Konrad

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/47f7594a-e5e5-44dd-99c0-1c41a1757992%40googlegroups.com.


Re: [deal.II] Requirements for citing deal.II

2019-09-11 Thread 'Maxi Miller' via deal.II User Group
Ok, will do that, thanks!

On Wednesday, September 11, 2019 at 16:52:51 UTC+2, Wolfgang Bangerth wrote:
>
> On 9/11/19 3:27 AM, 'Maxi Miller' via deal.II User Group wrote: 
> > In one of my side projects I include the library deal.II, but use it 
> > only for reading in parameter files and printing out data in the HDF5 
> > format. I included it mostly because it is in use in most of my main 
> > projects, and thus it already was available. Furthermore, being able to 
> > reuse parameter files and the rather simple way of storing data were the 
> > main reasons for including the library in this project. 
> > Thus, I was now wondering if that usage is enough to support citing the 
> > library in publications, even though I never use the main features of 
> it? 
>
> My general rule of thumb when I'm faced with similar dilemmas is "if I 
> use it, I cite it". That's because someone still put many weeks of work 
> into writing the parameter handling and HDF5 capabilities (in your case; or 
> whatever else it is that I use), and it doesn't cost me very much to 
> cite some other paper. Indeed, my general approach (in this and any 
> other regard) is to be generous with praise. 
>
> Best 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/d92ed4cc-37dd-41c7-b498-28f16a9a9d4b%40googlegroups.com.


[deal.II] tutorial step-36 failed with PETSc error number 76 while calling a SLEPc function

2019-09-11 Thread Pragalv Karki
Dear deal.II community,
   I am new to deal.II and I had posted a question about 
this same issue yesterday. Perhaps I did not ask it correctly. I have been 
trying to simulate the wave equation on a particular kind of surface (I 
have attached a picture of that surface and the mesh). I created the mesh 
for that surface using gmsh. I was able to simulate the wave equation from 
step-23 on that surface and I am currently working to change the initial 
excitation, density, and tension. I also want to get the eigenfrequencies 
for this surface and as a start, I just wanted to learn how deal.II solves 
the eigenspectrum using PETSc and SLEPc from step-36. But when I try to run 
that tutorial as is, it shows the error I have posted below. I thought the 
problem was not having the PETSc and SLEPc libraries, so I went ahead and 
installed them but that still does not resolve the issue. Please let me 
know if you have run into this issue before and what you did to solve it. 
Thank you!

THE ERROR message:

*[0]PETSC ERROR: - Error Message 
--*

[0]PETSC ERROR: Error in external library

[0]PETSC ERROR: Error in LAPACK subroutine steqr: info=0

[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.

[0]PETSC ERROR: Petsc Release Version 3.10.0, Sep, 12, 2018 

[0]PETSC ERROR: ./step-36 on a  named Pragalvs-MacBook-Air.local by 
pragalvkarki Tue Sep 10 15:29:22 2019

 

(Lot of unnecessary stuff (I think ) in between these error messages)

 

An error occurred in line <182> of file 

 
in function

void dealii::SLEPcWrappers::SolverBase::solve(const unsigned int, 
unsigned int *)

The violated condition was: 

ierr == 0

Additional information: 

An error with error number 76 occurred while calling a SLEPc 
function 

 



Thank you for your time,

Pragalv

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/0bc3dcb5-f1af-4799-9099-a142bc056c74%40googlegroups.com.


Re: [deal.II] Question about (parallel) linear solver

2019-09-11 Thread Konrad
Hi Wolfgang,
 

> > Using approach 1.) I get timing results like that: 
> > (using MPI with Petsc, AMG preconditioning for *A* on one machine with 
> > four cores, no preconditioning for the Schur complement *S = 
> > -B*A^{-1}*B^T + C* from 1.) 
> > 
> > Running using PETSc. 
> > Number of active cells: 32768 
> > Total number of cells: 21449 (on 6 levels) 
> > Number of degrees of freedom: 205920 (104544+101376) 
> >Iterative Schur complement solver converged in 177 iterations. 
> >Outer solver completed. 
> > 
> > 
> > +---------------------------------------------+------------+------------+ 
> > | Total wallclock time elapsed since start    |        37s |            | 
> > |                                             |            |            | 
> > | Section                         | no. calls |  wall time | % of total | 
> > +---------------------------------+-----------+------------+------------+ 
> > | Schur complement solver (for u) |         1 |      27.2s |        74% | 
> > | assembly                        |         1 |      6.61s |        18% | 
>
> That's actually an acceptable time: My rule of thumb, maybe a bit 
> outdated, is that it takes a minute to solve for 100,000 unknowns for 
> non-trivial problems. You've got twice that many unknowns and solve it 
> in half the time. 
>
Thank you, that calms me down.
 

> > You will notice that the time for the Schur complement solver is 
> > dominating. So either I did something wrong in the implementation 
> > conceptually (see class for Schur complement below) or I need to do a 
> > better job in solving/preconditioning the system. Note that the Schur 
> > complement is usually not definite in contrast to Stokes or Darcy 
> systems. 
>
> Really? Is this an issue related to boundary conditions? Your matrix 
> corresponds to the operator 
>curl I curl + grad div 
> I thought there are no vector fields that are both divergence-free and 
> curl-free. 
>

That indeed depends on the geometry of the domain and it is not an obvious 
result. Example: Think of a 3D hyper-shell with an inner and outer radius. 
There is a one-dimensional space of non-trivial radial vector fields (with 
respect to the origin) that have neither a divergence nor a curl. Such 
fields are referred to as harmonic forms, and the dimensionality of this 
space depends on the Betti numbers of the domain (the buzzword is Hodge 
decomposition).

But this is actually not the case here, so you are right. If there are 
non-trivial harmonic forms, one needs an additional condition and a 
Lagrange multiplier in the weak form.
 

> > Does anyone have an idea of how to attack systems like *(P)*? Are there 
> > optimal preconditioners? Any hint would be appreciated. 
>
> Take a look at step-20 and step-22. The idea of preconditioning a Schur 
> complement is to use some sort of approximation of the Schur complement 
> that is easier to solve for. I suspect that in your case, a simple 
> Laplace matrix would make for a nice approximation of S, and so using 
> one SSOR step with the Laplace matrix might be a good preconditioner for 
> S. Alternatively, an algebraic multigrid initialized with the Laplace 
> matrix might work. 
>
> step-20 plays with these ideas, solving for the Schur complement 
> directly. step-22 (in particular the code in the "Possibilities for 
> extensions" section) expands on this: If you have a preconditioner for 
> the Schur complement, then you can just use that as part of the 
> preconditioner for the overall system. It's the best approach we have 
> for the Stokes equation (used successfully on systems with billions of 
> unknowns) and worth understanding! 
>

This is great to know and I will then go in this direction. Again, many 
thanks :-)

Konrad

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/e92249cc-9464-436e-97a2-36bf3710ebca%40googlegroups.com.


Re: [deal.II] Question about (parallel) linear solver

2019-09-11 Thread Wolfgang Bangerth
On 9/11/19 7:04 AM, Konrad wrote:
> 
> The system that needs to be solved formally reads
> *(P)   A sigma + B^T u = 0*
> *        B sigma + C u = f*
> 
> *A* is sort of a mass matrix so it is positive (corresponds to 
> *<sigma, tau>* in *H(curl)*) and *C* is positive semi-definite 
> (corresponds to *<div u, div v>* in *H(div)*).
> 
> First thing that crossed my mind was a Schur complement solver. Now one 
> a priori has two options:
> 
> 1.) Solve first for u and then for sigma, i.e. invert *A* and use the 
> Schur complement *S = -B*A^{-1}*B^T + C*
> 2.) The other way around *S = -B^T*C^{-1}*B + A*. My gut feeling tells 
> me that this could actually turn out to be not an option since either 
> *C* is very ill-conditioned or not invertible at all. It is symmetric 
> and contains at least one zero eigenvalue. Thinking of it: it formally 
> corresponds to *C~(grad div)^{-1}* (and I am of course having 
> convergence issues)

Correct, your approach 2 will not work, precisely because C has a kernel 
(all divergence-free vector fields) and is consequently not invertible.


> Using approach 1.) I get timing results like that:
> (using MPI with Petsc, AMG preconditioning for *A* on one machine with 
> four cores, no preconditioning for the Schur complement *S = 
> -B*A^{-1}*B^T + C* from 1.)
> 
> Running using PETSc.
> Number of active cells: 32768
> Total number of cells: 21449 (on 6 levels)
> Number of degrees of freedom: 205920 (104544+101376)
>    Iterative Schur complement solver converged in 177 iterations.
>    Outer solver completed.
> 
> 
> +---------------------------------------------+------------+------------+
> | Total wallclock time elapsed since start    |        37s |            |
> |                                             |            |            |
> | Section                         | no. calls |  wall time | % of total |
> +---------------------------------+-----------+------------+------------+
> | Schur complement solver (for u) |         1 |      27.2s |        74% |
> | assembly                        |         1 |      6.61s |        18% |

That's actually an acceptable time: My rule of thumb, maybe a bit 
outdated, is that it takes a minute to solve for 100,000 unknowns for 
non-trivial problems. You've got twice that many unknowns and solve it 
in half the time.


> You will notice that the time for the Schur complement solver is 
> dominating. So either I did something wrong in the implementation 
> conceptually (see class for Schur complement below) or I need to do a 
> better job in solving/preconditioning the system. Note that the Schur 
> complement is usually not definite in contrast to Stokes or Darcy systems.

Really? Is this an issue related to boundary conditions? Your matrix 
corresponds to the operator
   curl I curl + grad div
I thought there are no vector fields that are both divergence-free and 
curl-free.


> Does anyone have an idea of how to attack systems like *(P)*? Are there 
> optimal preconditioners? Any hint would be appreciated.

Take a look at step-20 and step-22. The idea of preconditioning a Schur 
complement is to use some sort of approximation of the Schur complement 
that is easier to solve for. I suspect that in your case, a simple 
Laplace matrix would make for a nice approximation of S, and so using 
one SSOR step with the Laplace matrix might be a good preconditioner for 
S. Alternatively, an algebraic multigrid initialized with the Laplace 
matrix might work.
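
Just to illustrate the idea, a rough and untested sketch using the PETSc 
wrappers (the function name solve_schur_system is made up; 
'schur_complement' stands for any object with a suitable vmult(), e.g. the 
SchurComplementMPI class from your post, and 'approx_schur_matrix' is a 
Laplace-like matrix you would have to assemble yourself as an approximation 
of S):

#include <deal.II/lac/petsc_matrix_base.h>
#include <deal.II/lac/petsc_precondition.h>
#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/solver_control.h>

using namespace dealii;

template <class SchurComplementType>
void solve_schur_system(const SchurComplementType        &schur_complement,
                        const PETScWrappers::MatrixBase  &approx_schur_matrix,
                        const PETScWrappers::MPI::Vector &schur_rhs,
                        PETScWrappers::MPI::Vector       &u)
{
  // Build an AMG hierarchy from the assembled Laplace-like approximation of S.
  PETScWrappers::PreconditionBoomerAMG schur_preconditioner;
  schur_preconditioner.initialize(approx_schur_matrix);

  // CG on the (implicitly applied) Schur complement, preconditioned with the
  // AMG hierarchy built from the approximation.
  SolverControl solver_control(1000, 1e-8 * schur_rhs.l2_norm());
  SolverCG<PETScWrappers::MPI::Vector> cg(solver_control);
  cg.solve(schur_complement, u, schur_rhs, schur_preconditioner);
}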

step-20 plays with these ideas, solving for the Schur complement 
directly. step-22 (in particular the code in the "Possibilities for 
extensions" section) expands on this: If you have a preconditioner for 
the Schur complement, then you can just use that as part of the 
preconditioner for the overall system. It's the best approach we have 
for the Stokes equation (used successfully on systems with billions of 
unknowns) and worth understanding!

Best
  W.

-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/326ff4ac-b905-634d-c125-92d0f60c28ab%40colostate.edu.


Re: [deal.II] Stabilization for Q1Q0 Stokes by Bochev [2006]

2019-09-11 Thread Richard Schussnig


On Tuesday, September 10, 2019 at 13:26:56 UTC+2, Bruno Blais wrote:
>
> I second Wolfgang's comment on the fact that Q1Q1 is not difficult to 
> implement. You can also scale it to arbitrary Qn-Qn elements if you are 
> interested in higher order.
> We have implemented such an approach in our code based on dealii (
> https://github.com/lethe-cfd/lethe)
> Q1Q1 is already very diffusive (which is why it is so robust, I guess), so 
> I am not sure that going with Q1-Q0 to save a few pressure degrees of 
> freedom is actually worth it.
> Best!
> Bruno
>

Thanks Bruno,
also very appreciated! I have been looking for deal.II implementations of 
NS, since especially the stabilizations (not only inf-sup, but SUPG, GLS or 
similar) are always a bit tricky ... or at least it is tricky to know what 
must be considered and what may be ignored!

I totally agree that saving a few pressure dofs does not pay off, 
especially since I have vector-valued velocities and displacements and 
only the single scalar pressure ...
The motivation came solely from the fact that I did not think of using 
FE_Nothing; otherwise I would not have started trying to use Q1Q0. 

Kind regards,
Richard 

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/d01d6cf4-70d4-45aa-93ed-c2a63e5bae66%40googlegroups.com.


Re: [deal.II] Stabilization for Q1Q0 Stokes by Bochev [2006]

2019-09-11 Thread Richard Schussnig
Dear Wolfgang,
Thank you very much for taking the time to answer my question, 
I really appreciate it - such things do not happen too often on the 
internet! ; )
Please find below my response per answer; I hope it is not too unstructured 
that way!

On Tuesday, September 10, 2019 at 06:03:27 UTC+2, Wolfgang Bangerth wrote:
>
> On 9/9/19 1:57 AM, Richard Schussnig wrote: 
> > 
> > FINALLY, MY QUESTIONS: 
> > 
> > Using the Q1Q1, I would in the end (FSI) need to come up with a space 
> made 
> >  from Q1 elements with a discontinuity at the interface - which shall be 
> > realized using different material_id(). - how may I do that other than 
> > using a FE_DGQ space for the pressure and enforce continuity 'manually' 
> > through a giant ConstraintMatrix? 
>
> That's inefficient, of course :-) I assume that your interface is in the 
> interior of the domain? In that case, take a look at step-46, where 
> solution 
> variables only live on certain cells, and are discontinuous at the 
> interface 
> between the two parts of the domain. 
>
>
The interface is indeed in the interior of the domain and resolved by the 
mesh, since I only use the material_id() to distinguish between solid and 
fluid.
And the hint is perfect, thanks a lot; I have not looked into step-46 for a 
few months and almost forgot about it.
- I will look into having two separate pressure spaces using FE_Nothing; 
that may do the trick (for the Q1Q1 stabilized case).
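
For reference, a rough sketch of the step-46-style element combination I 
have in mind (untested; the Q1 orders and the helper function 
make_fe_collection are purely illustrative):

#include <deal.II/fe/fe_nothing.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/hp/fe_collection.h>

using namespace dealii;

template <int dim>
hp::FECollection<dim> make_fe_collection()
{
  // Fluid cells: Q1 velocities and a Q1 pressure, no displacements.
  const FESystem<dim> fluid_fe(FE_Q<dim>(1), dim,        // velocities
                               FE_Q<dim>(1), 1,          // pressure
                               FE_Nothing<dim>(), dim);  // (no) displacements
  // Solid cells: no velocities, no pressure, Q1 displacements.
  const FESystem<dim> solid_fe(FE_Nothing<dim>(), dim,   // (no) velocities
                               FE_Nothing<dim>(), 1,     // (no) pressure
                               FE_Q<dim>(1), dim);       // displacements

  hp::FECollection<dim> fe_collection;
  fe_collection.push_back(fluid_fe);  // active_fe_index 0 on fluid cells
  fe_collection.push_back(solid_fe);  // active_fe_index 1 on solid cells
  return fe_collection;
}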


> > Using the Q1Q0, the main problem is data transfer and 'node searching' 
> in 
> > the parallel case - example: the stabilization matrix from cell 16 has 
> > pressure dof 45 and shares edges or maybe only a single vertex (!) with 
> > cells with pressure dofs 1 2 3 4 5. The cell matrix for the projection 
> from 
> > Q0(dc) to Q1(c) is an area-weighted sum of the pressures on the cells 
> > touching the vertex of the support of the matching bilinear function, 
> > therefore we get a 6x6 local matrix and entries into all 'touching' 
> cells. 
>
> Yes, you'd have to create a map that for each vertex gives you a list of 
> all 
> adjacent cells. I think I recall that there is a function in GridTools for 
> this, though. 


>
> > Since these cells are not only the direct neighbors of the current cell, 
> > things may get complicated quite fast, if we consider the 3d case with 
> > hanging nodes, but on the other hand side, looping in the element loop 
> over 
> > all elements again(!) to check the vertex_index() is extremely slow. 
>
> Yes, you'd reverse this approach by looping over all vertices first, and 
> then 
> in this loop over all adjacent cells. 
>
>
Thanks a ton, I had coded that up myself, but there are
GridTools::find_cells_adjacent_to_vertex
&
GridTools::vertex_to_cell_map
that come in handy at this point and are probably implemented in a more 
efficient way.
Somehow I did not find these functions before, sorry for bothering you
(added here for people coming after me).
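
For reference, a minimal usage sketch of the two functions (the enclosing 
function 'example' is just for illustration):

#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/tria.h>

#include <set>
#include <vector>

using namespace dealii;

template <int dim>
void example(const Triangulation<dim> &triangulation, const unsigned int v)
{
  // All active cells touching vertex 'v':
  const std::vector<typename Triangulation<dim>::active_cell_iterator>
    adjacent_cells =
      GridTools::find_cells_adjacent_to_vertex(triangulation, v);

  // Or, computed once for all vertices: entry [v] holds the set of active
  // cells adjacent to vertex v.
  const std::vector<std::set<typename Triangulation<dim>::active_cell_iterator>>
    vertex_to_cells = GridTools::vertex_to_cell_map(triangulation);
}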
 

>
> > Do you know of any better-fitting stabilizations for the Q1Q0 pair? Or 
> do 
> > you think there are better options around? 
>
> Q1-Q1 is a pretty good method, and not very difficult to implement. I'll 
> note 
> that Q1-Q0 *sounds* like a good idea, but has a very low convergence rate 
> and 
> so will not yield particularly good accuracy if that's what you actually 
> care 
> about. Of course, Q2-Q1 is the standard for good reasons. 
>
> Best 
>   W. 
>
>
>
I was under the misconception that I should have matching orders since I am 
interested in stresses in the fluid & on the interface, but apparently, 
looking at the parameters in the interesting case (~4*1e-6 kinematic 
viscosity for blood [Bazilevs et al. 2010, Küttler et al. 2009, Deparis et 
al. 2016]), it might be worth a lot to have a good pressure approximation!
So my conclusion for now is: 
test Q1Q0dc and perhaps also use Q1Q1 with FE_Nothing!

Thanks once again for your thoughts on the topic!

Kind regards,
Richard

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/97a9811e-6fc1-4379-a265-b407f6d5ff0e%40googlegroups.com.


[deal.II] Re: On using project_boundary_values_div_conforming for (RT-DG) in parallel case

2019-09-11 Thread Charlie Z

>
> The minimal code.
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/006ec2fb-d714-4798-a1ca-35ae0b14d0be%40googlegroups.com.
#include 
#include 
#include 

#include 

namespace LA
{
#if defined(DEAL_II_WITH_TRILINOS)
  using namespace dealii::LinearAlgebraTrilinos;
#else
#  error DEAL_II_WITH_TRILINOS required
#endif
}

#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 
#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 

#include 

#include 
#include 
#include 

namespace Step55
{
  using namespace dealii;

  template <int dim>
  class StokesProblem
  {
  public:
StokesProblem (unsigned int degree);

void run ();

  private:
void make_grid ();
void setup_system ();

unsigned int  degree;
MPI_Comm  mpi_communicator;

FESystem<dim> fe;
parallel::distributed::Triangulation<dim> triangulation;
DoFHandler<dim>   dof_handler;

std::vector<IndexSet> owned_partitioning;
std::vector<IndexSet> relevant_partitioning;

ConstraintMatrix  constraints;

  };




  template <int dim>
  StokesProblem<dim>::StokesProblem (unsigned int degree)
:
degree (degree),
mpi_communicator (MPI_COMM_WORLD),
fe (FE_RaviartThomas<dim>(degree), 1,
FE_DGQ<dim>(degree), 1),
triangulation (mpi_communicator,
   typename Triangulation<dim>::MeshSmoothing
   (Triangulation<dim>::smoothing_on_refinement |
Triangulation<dim>::smoothing_on_coarsening)),
dof_handler (triangulation)
  {}


  template <int dim>
  void StokesProblem<dim>::make_grid()
  {
GridGenerator::hyper_cube (triangulation, 0, 1.0);
triangulation.refine_global (3);
  }

  template <int dim>
  void StokesProblem<dim>::setup_system ()
  {

dof_handler.distribute_dofs (fe);

DoFRenumbering::component_wise(dof_handler);
std::vector<types::global_dof_index> dofs_per_component(dim + 1);
DoFTools::count_dofs_per_component(dof_handler, dofs_per_component);
const unsigned int n_u = dofs_per_component[0],
   n_p = dofs_per_component[dim];
std::cout << "Number of active cells: " << triangulation.n_active_cells()
  << std::endl
  << "Total number of cells: " << triangulation.n_cells()
  << std::endl
  << "Number of degrees of freedom: " << dof_handler.n_dofs()
  << " (" << n_u << '+' << n_p << ')' << std::endl;

owned_partitioning.resize(2);
owned_partitioning[0] = dof_handler.locally_owned_dofs ().get_view(0, n_u);
owned_partitioning[1] = dof_handler.locally_owned_dofs ().get_view(n_u, n_u+n_p);

IndexSet locally_relevant_dofs;
DoFTools::extract_locally_relevant_dofs (dof_handler,
 locally_relevant_dofs);

relevant_partitioning.resize(2);
relevant_partitioning[0] = locally_relevant_dofs.get_view(0, n_u);
relevant_partitioning[1] = locally_relevant_dofs.get_view(n_u, n_u+n_p);


constraints.reinit (locally_relevant_dofs);
DoFTools::make_hanging_node_constraints (dof_handler, constraints);

VectorTools::project_boundary_values_div_conforming(dof_handler,
0,
ZeroFunction<dim>(dim),
0,
constraints);
constraints.close ();
  }

  template <int dim>
  void StokesProblem<dim>::run ()
  {
make_grid ();

setup_system ();


  }
}

int main(int argc, char *argv[])
{

  using namespace dealii;
  using namespace Step55;

  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  StokesProblem<2> problem (0);
  problem.run ();

  return 0;
}
##
#  CMake script for the step-21 tutorial program:
##

# Set the name of the project and target:
SET(TARGET "test")

# Declare all source files the target consists of. Here, this is only
# the one step-X.cc file, but as you expand your project you may wish
# to add other source files as well. If your project becomes much larger,
# you may want to either replace the following statement by something like
#  FILE(GLOB_RECURSE TARGET_SRC  "source/*.cc")
#  FILE(GLOB_RECURSE TARGET_INC  "include/*.h")
#  SET(TARGET_SRC ${TARGET_SRC}  ${TARGET_INC})
# or switch altog

Re: [deal.II] Requirements for citing deal.II

2019-09-11 Thread Wolfgang Bangerth
On 9/11/19 3:27 AM, 'Maxi Miller' via deal.II User Group wrote:
> In one of my side projects I include the library deal.II, but use it 
> only for reading in parameter files and printing out data in the HDF5 
> format. I included it mostly because it is in use in most of my main 
> projects, and thus it already was available. Furthermore, being able to 
> reuse parameter files and the rather simple way of storing data were the 
> main reasons for including the library in this project.
> Thus, I was now wondering if that usage is enough to support citing the 
> library in publications, even though I never use the main features of it?

My general rule of thumb when I'm faced with similar dilemmas is "if I 
use it, I cite it". That's because someone still put many weeks of work 
into writing the parameter handling and HDF5 capabilities (in your case; or 
whatever else it is that I use), and it doesn't cost me very much to 
cite some other paper. Indeed, my general approach (in this and any 
other regard) is to be generous with praise.

Best
  W.

-- 

Wolfgang Bangerth  email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/5da8ed4a-3f2f-9208-4704-427a42032388%40colostate.edu.


[deal.II] On using project_boundary_values_div_conforming for (RT-DG) in parallel case

2019-09-11 Thread Charlie Z
Hi there,

I'm trying to make step-20 run in parallel with Trilinos (by following 
step-40/55), but haven't succeeded. In the setup_system() function, I set 
owned_partitioning, relevant_partitioning and constraints in the following 
way:

dof_handler.distribute_dofs (fe);

DoFRenumbering::component_wise(dof_handler);
std::vector<types::global_dof_index> dofs_per_component(dim + 1);
DoFTools::count_dofs_per_component(dof_handler, dofs_per_component);
const unsigned int n_u = dofs_per_component[0],
   n_p = dofs_per_component[dim];
std::cout << "Number of active cells: " << triangulation.n_active_cells()
  << std::endl
  << "Total number of cells: " << triangulation.n_cells()
  << std::endl
  << "Number of degrees of freedom: " << dof_handler.n_dofs()
  << " (" << n_u << '+' << n_p << ')' << std::endl;

owned_partitioning.resize(2);
owned_partitioning[0] = dof_handler.locally_owned_dofs ().get_view(0, n_u);
owned_partitioning[1] = dof_handler.locally_owned_dofs ().get_view(n_u, 
n_u+n_p);

IndexSet locally_relevant_dofs;
DoFTools::extract_locally_relevant_dofs (dof_handler,
 locally_relevant_dofs);

relevant_partitioning.resize(2);
relevant_partitioning[0] = locally_relevant_dofs.get_view(0, n_u);
relevant_partitioning[1] = locally_relevant_dofs.get_view(n_u, n_u+n_p);


constraints.reinit (locally_relevant_dofs);
DoFTools::make_hanging_node_constraints (dof_handler, constraints);

VectorTools::project_boundary_values_div_conforming(dof_handler,
0,
ZeroFunction<dim>(dim),
0,
constraints);
constraints.close ();

The code runs fine in serial; in parallel, however, it gives the following 
error:

An error occurred in line <1721> of file 
<../include/deal.II/lac/constraint_matrix.h> in function
dealii::types::global_dof_index 
dealii::ConstraintMatrix::calculate_line_index(dealii::ConstraintMatrix::size_type)
 
const
The violated condition was: 
local_lines.is_element(line)
Additional information: 
The index set given to this constraint matrix indicates constraints for 
degree of freedom 4294967295 should not be stored by this object, but a 
constraint is being added.

The error comes from the call to project_boundary_values_div_conforming, 
but I'm not sure why. Could you please give me a hint?

Attached is a minimal, reproducible code.

Thanks,
Charlie

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/f5231031-0471-4239-a215-9dc512c47331%40googlegroups.com.


[deal.II] Re: On the usage of Utilities::MPI

2019-09-11 Thread Konrad

>
> However, when I run the command "mpirun -np 5 ./MPI_TEST", I get the 
> following message,
>
> Hello world from process 0 of 1
> Hello world from process 0 of 1
> Hello world from process 0 of 1
> Hello world from process 0 of 1
> Hello world from process 0 of 1
>
> It seems that 5 cpu cores are used, however, they seemed to have the same 
> rank. 
> Besides, my operating system is Ubuntu 16.04. Looking forward to hearing 
> from you soon. Thank you!
>

Hi Yang Liu,

Try that code (explanations below): 

#include 
#include 
#include 

using namespace dealii;

template <int dim>
class MPI_TEST
{
public:
  MPI_TEST();
  void print_test();

private:
  MPI_Comm mpi_communicator;
  ConditionalOStream pcout;  
};

template <int dim>
void MPI_TEST<dim>::print_test()
{
  for (unsigned int i = 0;
  i < Utilities::MPI::n_mpi_processes(mpi_communicator);
  ++i)
  {
  std::cout << "Hello world from process "
  << Utilities::MPI::this_mpi_process(mpi_communicator)
  << " of "
  <<  Utilities::MPI::n_mpi_processes(mpi_communicator)
  << std::endl;
  }
}

template <int dim>
MPI_TEST<dim>::MPI_TEST()
  : mpi_communicator(MPI_COMM_WORLD)
  , pcout(std::cout, (Utilities::MPI::this_mpi_process(mpi_communicator) == 
0))
{}


int main(int argc, char ** argv)
{

  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  MPI_TEST<2> hw_mpi;
  hw_mpi.print_test();

  return 0;
}

The output of mpirun -np 3 ./MPI_TEST is:

Hello world from process 1 of 3
Hello world from process 1 of 3
Hello world from process 1 of 3
Hello world from process 0 of 3
Hello world from process 0 of 3
Hello world from process 0 of 3
Hello world from process 2 of 3
Hello world from process 2 of 3
Hello world from process 2 of 3

Looks weird if you are not used to MPI but it is as expected. 

1. You must call Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, 
argv, 1); before you invoke any action of MPI. It will create an object 
that calls MPI_Init upon creation and MPI_Finalize in the destructor, i.e., 
when the object goes out of scope. Only then do calls like 
Utilities::MPI::this_mpi_process() return meaningful values.

2. When you implement MPI code it is important to get into the right 
mindset: You do not program code for one compute node only. You are 
programming code for all nodes (at the same time). In your example the for 
loop gets executed on all nodes (= 3 here).

Hope that helps.

Best,
Konrad

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/5164476e-19e6-4e08-a0a2-f938a9adab89%40googlegroups.com.


[deal.II] Re: On the usage of Utilities::MPI

2019-09-11 Thread Bruno Turcksin
Yang,

Are you sure that you compiled deal.II with MPI support? Also, can you post 
what you see when you run make VERBOSE=1?

Best,

Bruno

On Wednesday, September 11, 2019 at 8:44:29 AM UTC-4, Yang Liu wrote:
>
> Dear Deal.II developers,
>
> Greetings!
>
> I am a new user of Deal.II. Currently I would like to test the hello world 
> program on Utilities::MPI. 
>
> A minimum working example is attached to this email. 
>
> However, when I run the command "mpirun -np 5 ./MPI_TEST", I get the 
> following message,
>
> Hello world from process 0 of 1
> Hello world from process 0 of 1
> Hello world from process 0 of 1
> Hello world from process 0 of 1
> Hello world from process 0 of 1
>
> It seems that 5 cpu cores are used, however, they seemed to have the same 
> rank. 
> Besides, my operating system is Ubuntu 16.04. Looking forward to hearing 
> from you soon. Thank you!
>
> Best Regards,
>
> Yang Liu
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/00946786-1eac-4cd6-9871-8d705e1e913d%40googlegroups.com.


[deal.II] Question about (parallel) linear solver

2019-09-11 Thread Konrad
Dear deal.ii community,

Apologies for so many questions lately... but here is another one. This one 
is a bit technical and aims at good solvers. In a simplified form you can 
formulate it this way:

I implemented a PDE solver for the 3D vector Laplacian *curl(curl u) 
- grad(div u) = f*. I use a Nedelec-Raviart-Thomas pairing to solve the 
mixed form which reads
*sigma - curl u = 0*
*curl sigma - grad(div u) = f*

You can think of this as a Helmholtz decomposition of the vector field *f*. 
I am using Nedelec elements for *sigma* and Raviart-Thomas elements for *u*. 
Fine so far. Besides the fact that this is technically not so easy when it 
comes to certain domains with holes etc (forget about non-trivial harmonic 
forms if this tells you something - I don't have that here) it seems quite 
challenging to come up with a good solver (what is "good" may depend on *f* 
as well).

The system that needs to be solved formally reads

*(P)   A sigma + B^T u = 0*
*B sigma + C u = f*

*A* is sort of a mass matrix so it is positive (corresponds to *<sigma, tau>* 
in *H(curl)*) and *C* is positive semi-definite (corresponds to 
*<div u, div v>* in *H(div)*).

First thing that crossed my mind was a Schur complement solver. Now one a 
priori has two options:

1.) Solve first for u and then for sigma, i.e. invert *A* and use the Schur 
complement *S = -B*A^{-1}*B^T + C* (spelled out below).
2.) The other way around, *S = -B^T*C^{-1}*B + A*. My gut feeling tells me 
that this could actually turn out not to be an option, since either *C* is 
very ill-conditioned or not invertible at all. It is symmetric and contains 
at least one zero eigenvalue. Thinking of it: it formally corresponds to 
*C ~ (grad div)^{-1}* (and I am of course having convergence issues).
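
Spelled out (in LaTeX notation), the elimination behind option 1.) is:

\begin{aligned}
  A\,\sigma + B^T u = 0 \quad &\Rightarrow\quad \sigma = -A^{-1} B^T u,\\
  B\,\sigma + C\,u = f \quad &\Rightarrow\quad \bigl(C - B A^{-1} B^T\bigr)\,u = f,
\end{aligned}

i.e. *S u = f* with *S = -B*A^{-1}*B^T + C*, and *sigma* is recovered from 
the first block row afterwards.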

Using approach 1.) I get timing results like this:
(using MPI with PETSc, AMG preconditioning for *A* on one machine with four 
cores, no preconditioning for the Schur complement *S = -B*A^{-1}*B^T + C* 
from 1.)

Running using PETSc. 
Number of active cells: 32768 
Total number of cells: 21449 (on 6 levels) 
Number of degrees of freedom: 205920 (104544+101376) 
  Iterative Schur complement solver converged in 177 iterations. 
  Outer solver completed. 


+---------------------------------------------+------------+------------+
| Total wallclock time elapsed since start    |        37s |            |
|                                             |            |            |
| Section                         | no. calls |  wall time | % of total |
+---------------------------------+-----------+------------+------------+
| Schur complement solver (for u) |         1 |      27.2s |        74% |
| assembly                        |         1 |      6.61s |        18% |
| mesh generation                 |         1 |     0.163s |      0.44% |
| outer CG solver (for sigma)     |         1 |     0.136s |      0.37% |
| system and constraint setup     |         1 |      1.15s |       3.1% |
| vtu output                      |         1 |      1.68s |       4.5% |
+---------------------------------+-----------+------------+------------+


You will notice that the time for the Schur complement solver is 
dominating. So either I did something wrong in the implementation 
conceptually (see class for Schur complement below) or I need to do a 
better job in solving/preconditioning the system. Note that the Schur 
complement is usually not definite in contrast to Stokes or Darcy systems.

Does anyone have an idea of how to attack systems like *(P)*? Are there 
optimal preconditioners? Any hint would be appreciated. Are there relevant 
tutorials? Parallelization is an issue since I would like to solve very 
large problems in 3D.

Thanks in advance and best regards,
Konrad :-)

Here is the Schur complement code:

template 
class SchurComplementMPI : public Subscriptor
{
private:
using BlockType = typename BlockMatrixType::BlockType;

public:
SchurComplementMPI (const BlockMatrixType &system_matrix,
   const InverseMatrix 
&relevant_inverse_matrix,
   const std::vector &owned_partitioning,
   MPI_Comm mpi_communicator);

void vmult (VectorType   &dst,
  const VectorType &src) const;

private:
const SmartPointer system_matrix;
const SmartPointer> 
relevant_inverse_matrix;

const std::vector& owned_partitioning;

MPI_Comm mpi_communicator;

mutable VectorType tmp1, tmp2, tmp3;
};


template 
SchurComplementMPI::SchurComplementMPI(
const BlockMatrixType &system_matrix,
const InverseMatrix &relevant_inverse_matrix,
const std::vector &owned_partitioning,
MPI_Comm mpi_communicator)
:
system_matrix (&system_matrix),
relevant_inverse_matrix (&relevant_inverse_matrix),
owned_partitioning(owned_partitioning),
mpi_communicator(mpi_communicator),
tmp1 (owned_partitioning[0],
mpi_communicator),
tmp2 (owned_partitioning[0],
mpi_communicator),
tmp3 (owned_partitioning[1],
mpi_communicator)
{}


template 
void
SchurComplementMPI::vmult(
VectorType   &dst,
const VectorType &src) const
{
system_matrix->block(0,1).vmult (tmp

[deal.II] On the usage of Utilities::MPI

2019-09-11 Thread Yang Liu
Dear Deal.II developers,

Greetings!

I am a new user of Deal.II. Currently I would like to test the hello world
program on Utilities::MPI.

A minimum working example is attached to this email.

However, when I run the command "mpirun -np 5 ./MPI_TEST", I get the
following message,

Hello world from process 0 of 1
Hello world from process 0 of 1
Hello world from process 0 of 1
Hello world from process 0 of 1
Hello world from process 0 of 1

It seems that 5 cpu cores are used, however, they seemed to have the same
rank.
Besides, my operating system is Ubuntu 16.04. Looking forward to hearing
from you soon. Thank you!

Best Regards,

Yang Liu

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/CAEmspPAS%3DSgYOdUOU8twH7OAr%3DTFJiJrEDJFRB3M9fJk14hgLQ%40mail.gmail.com.
##
#  CMake script for the step-6 tutorial program:
##

# Set the name of the project and target:
SET(TARGET "MPI_TEST")

# Declare all source files the target consists of. Here, this is only
# the one step-X.cc file, but as you expand your project you may wish
# to add other source files as well. If your project becomes much larger,
# you may want to either replace the following statement by something like
#FILE(GLOB_RECURSE TARGET_SRC  "source/*.cc")
#FILE(GLOB_RECURSE TARGET_INC  "include/*.h")
#SET(TARGET_SRC ${TARGET_SRC}  ${TARGET_INC}) 
# or switch altogether to the large project CMakeLists.txt file discussed
# in the "CMake in user projects" page accessible from the "User info"
# page of the documentation.
SET(TARGET_SRC
  ${TARGET}.cc
  )

# Usually, you will not need to modify anything beyond this point...

CMAKE_MINIMUM_REQUIRED(VERSION 2.8.12)



FIND_PACKAGE(deal.II 9.1.0 QUIET
  HINTS ${deal.II_DIR} ${DEAL_II_DIR} ../ ../../ $ENV{DEAL_II_DIR}
  )

  

IF(NOT ${MPI_FOUND})
  MESSAGE(FATAL_ERROR "\n"
"*** Could not locate a (sufficiently recent) version of MPI. ***\n\n"
)
ENDIF()

IF(NOT ${deal.II_FOUND})
  MESSAGE(FATAL_ERROR "\n"
"*** Could not locate a (sufficiently recent) version of deal.II. ***\n\n"
"You may want to either pass a flag -DDEAL_II_DIR=/path/to/deal.II to 
cmake\n"
"or set an environment variable \"DEAL_II_DIR\" that contains this path."
)
ENDIF()

DEAL_II_INITIALIZE_CACHED_VARIABLES()
PROJECT(${TARGET})
DEAL_II_INVOKE_AUTOPILOT()
#include 
#include 
#include 

using namespace dealii;

template <int dim>
class MPI_TEST
{
public:
  MPI_TEST();
  void print_test();

private:
  MPI_Comm mpi_communicator;
  const unsigned int n_mpi_processes;
  const unsigned int this_mpi_process;
  ConditionalOStream pcout;  
};

template <int dim>
void MPI_TEST<dim>::print_test()
{
  for (unsigned int i = this_mpi_process; i < n_mpi_processes; ++i)
  {
  std::cout << "Hello world from process " <<  this_mpi_process << " of " << n_mpi_processes << std::endl;
  }
}

template <int dim>
MPI_TEST<dim>::MPI_TEST()
  : mpi_communicator(MPI_COMM_WORLD)
  , n_mpi_processes(Utilities::MPI::n_mpi_processes(mpi_communicator))
  , this_mpi_process(Utilities::MPI::this_mpi_process(mpi_communicator))
  , pcout(std::cout, (this_mpi_process == 0))
{}


int main(int argc, char ** argv)
{
  MPI_TEST<2> hw_mpi;
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
  hw_mpi.print_test();
  return 0;
}


[deal.II] Requirements for citing deal.II

2019-09-11 Thread 'Maxi Miller' via deal.II User Group
In one of my side projects I include the library deal.II, but use it only 
for reading in parameter files and printing out data in the HDF5 format. I 
included it mostly because it is in use in most of my main projects, and 
thus it was already available. Furthermore, being able to reuse parameter 
files and the rather simple way of storing data were the main reasons for 
including the library in this project.
Thus, I was now wondering: is that usage enough to warrant citing the 
library in publications, even though I never use its main features?
Thanks!

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/051ece36-9853-403d-9c34-01e974413347%40googlegroups.com.