Re: [deal.II] Looking for more info on FE_RaviartThomas

2017-12-12 Thread Praveen C
Dear all,

I have some doubts on the construction of the FE_RaviartThomas space in deal.II. I have written my question in the attached pdf.

Also, is it possible to get the notes that Guido has posted at http://www.mathsim.eu/~gkanscha/notes/mixed.pdf, which seem to be inaccessible now? I seem to have lost the copy that I saved :-(

Thanks a lot
praveen
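
For context, FE_RaviartThomas is typically used as one block of an FESystem together with a discontinuous pressure space, as in step-20. A minimal sketch (the degree and the dimension here are illustrative choices, not taken from this thread):

#include <deal.II/fe/fe_dgq.h>
#include <deal.II/fe/fe_raviart_thomas.h>
#include <deal.II/fe/fe_system.h>

using namespace dealii;

// Mixed element as in step-20: one H(div)-conforming Raviart-Thomas block
// for the flux/velocity plus one discontinuous pressure component.
const unsigned int degree = 0;   // illustrative polynomial degree
FESystem<2> fe(FE_RaviartThomas<2>(degree), 1,
               FE_DGQ<2>(degree), 1);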





main.pdf
Description: Adobe PDF document






Re: [deal.II] p4est critical number of elements in each Proc

2017-12-12 Thread Phani Motamarri
Thanks for your reply, Dr. Wolfgang. It was my bad: I had an MPI bug in my
code which was causing this problem. Now everything works fine!

On Mon, Dec 11, 2017 at 12:32 AM, Wolfgang Bangerth 
wrote:

> On 12/10/2017 09:39 PM, Phani Motamarri wrote:
>
>> Thank you, Dr. Wolfgang, for your reply. I do not know if I was clear in my
>> email. I was only intending to check with you about this critical number of
>> elements on the base mesh when I am trying to generate a hanging-node
>> mesh from the base mesh using p4est.
>>
>> When I debug with ddt, the stack trace shows that
>> execute_coarsening_and_refinement calls p8est_refine, which calls
>> p8est_refine_ext, which calls p8est_is_valid, and finally it calls
>> p8est_comm_sync_flag, after which it calls PMPI_Abort() and my code gets
>> aborted.
>>
>> This happens whenever I run on more than 8 procs. In this case, I have a
>> base mesh with 64 elements and I try to adaptively refine it to generate a
>> hanging-node mesh using triangulation.execute_coarsening_and_refinement()
>> after marking some cells for refinement.
>>
>
> I understood what you said. But my answer is still correct: there is no
> limitation, and to find out what causes the error/abort, you need to find
> out whether all processors abort in the same place.
>
>
> Best
>  W.
>
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
>www: http://www.math.colostate.edu/~bangerth/
>
>
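
For reference, a minimal sketch of the workflow described in the quoted message: a 64-cell base mesh, flagging some cells, and then calling execute_coarsening_and_refinement() on a parallel::distributed::Triangulation. The grid generator call and the refinement criterion are illustrative assumptions, not code from this thread:

#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>

using namespace dealii;

int main(int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  // p4est/p8est-backed distributed mesh.
  parallel::distributed::Triangulation<3> triangulation(MPI_COMM_WORLD);

  // Base mesh with 4x4x4 = 64 cells, as in the discussion above.
  GridGenerator::subdivided_hyper_cube(triangulation, 4);

  // Mark some locally owned cells for refinement (criterion is illustrative).
  for (const auto &cell : triangulation.active_cell_iterators())
    if (cell->is_locally_owned() && cell->center()[0] < 0.5)
      cell->set_refine_flag();

  // p8est_refine & friends are called internally here; the result is a
  // hanging-node mesh repartitioned across all ranks.
  triangulation.execute_coarsening_and_refinement();

  return 0;
}

Running a sketch like this on more than 8 ranks (e.g. mpirun -np 12) reproduces the setting discussed above.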



Re: [deal.II] step-22 partial boundary conditions

2017-12-12 Thread Jane Lee
Hi Wolfgang,

Right, so in the Stokes subsystem of my equations, in simplified form I am 
trying to solve:

div tau - grad p = rhs1
div v = rhs2

where my tau is 2*epsilon(v), as in step-22.
The boundary conditions I now want to implement to solve the 'real' problem 
are:
on the sides (boundary 0): zero tangential stress and no normal flux, so we 
have v.n = 0 and n.(-pI + tau)t = 0;
at the top (boundary 1): zero tangential stress and a prescribed normal 
component of the normal stress, so that n.(-pI + tau)n = top stress value;
and similarly, on the bottom (boundary 2): zero tangential stress and a 
prescribed normal component of the normal stress, so that n.(-pI + tau)n = 
bottom stress value.

n and t here are the unit normal and tangential vectors to the surface at 
the respective boundaries.
The test code works with Dirichlet conditions around the whole boundary, it 
seems, but I need to implement the problem as above.

Since the last message, I realised that I can expand n.(-pI + tau) into 
normal and tangential components, which means that for the above 'real' 
problem I can implement Neumann conditions with (top stress value) * (normal 
vector to the surface): since the tangential stresses are zero, those terms 
in the expansion simply vanish, i.e. I have n.(-pI + tau) = stress_value * 
normal vector. This is for the top and bottom boundaries.
For the sides, I use VectorTools::compute_no_normal_flux_constraints.
This means that I can indeed implement the boundary condition in the weak 
form of my equation.
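
For the side walls, a minimal sketch of that constraint, assuming a step-22-style dof_handler with the velocities in components 0..dim-1 and a constraints object (these names are assumptions, not code from this thread):

#include <deal.II/numerics/vector_tools.h>

// v . n = 0 on the side boundary (id 0), imposed as algebraic constraints
// on the velocity block of the mixed system.
std::set<types::boundary_id> no_flux_boundaries;
no_flux_boundaries.insert(0);

VectorTools::compute_no_normal_flux_constraints(dof_handler,
                                                0, // first velocity component
                                                no_flux_boundaries,
                                                constraints);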

I am now having a small issue with this, and tried to look up more 
information about FEFaceValues, but I am a little stuck.
I am doing (e.g. for boundary id 1; I have the same for 2):

for (unsigned int face_number=0; face_number<GeometryInfo<dim>::faces_per_cell; ++face_number)
  if (cell->face(face_number)->at_boundary()
      &&
      (cell->face(face_number)->boundary_id() == 1))
    {
      fe_face_values.reinit (cell, face_number);

      for (unsigned int q_point=0; q_point<... ; ++q_point)
        ...
I understand that the condition is on all velocities and the pressure, not 
just the velocities, so I didn't use a component mask or similar.
In the case of a mixed formulation like this one, how am I supposed to 
extract the normal vector correctly in a way that applies it to the whole 
system?
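
One common way to assemble such a face term (a sketch under step-22-style assumptions: fe_face_values, local_rhs, dofs_per_cell, n_face_q_points, and stress_value are assumed names, and the sign depends on how the weak form is written) is to test the prescribed traction only against the velocity part of each shape function via an extractor:

const FEValuesExtractors::Vector velocities(0);

for (unsigned int face_number = 0;
     face_number < GeometryInfo<dim>::faces_per_cell;
     ++face_number)
  if (cell->face(face_number)->at_boundary() &&
      cell->face(face_number)->boundary_id() == 1)
    {
      fe_face_values.reinit(cell, face_number);

      for (unsigned int q_point = 0; q_point < n_face_q_points; ++q_point)
        {
          // Outward unit normal at this face quadrature point.
          const Tensor<1, dim> n = fe_face_values.normal_vector(q_point);

          // Face contribution (phi_v_i, stress_value * n): only the velocity
          // components of the mixed shape functions contribute here.
          for (unsigned int i = 0; i < dofs_per_cell; ++i)
            local_rhs(i) += (fe_face_values[velocities].value(i, q_point) *
                             (stress_value * n)) *
                            fe_face_values.JxW(q_point);
        }
    }

The pressure test functions drop out automatically because their velocity part is zero, so no component mask is needed.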

I hope that makes it clearer!
Thanks for your help again. 



On Monday, December 4, 2017 at 9:17:20 PM UTC+1, Wolfgang Bangerth wrote:
>
>
> Jane, 
>
> > I get that the stress boundary can be put into the weak form, but only 
> > really when you have Neumann conditions. 
>
> Correct, but prescribing the stress in an elasticity problem is exactly 
> a Neumann boundary condition. 
>
>
> > And why would I have to use a 
> > compute_nonzero_normal_flux for a zero one? 
>
> I thought you had a nonzero displacement/velocity on parts of the 
> boundary? If it is zero, then of course you would want to use a 
> different function. 
>
>
> > I believe it may have been my fault for not being clearer. 
> > 
> > On the 'side' boundaries, I have no normal flux. 
> > On the other boundaries, I have a prescribed nonzero inhomogeneous 
> > normal component of the normal stress (so this is only one of the 
> > components). 
> > On all boundaries, I have zero tangential stresses (which sorts out the 
> > other 2 components in 2D in addition to the normal component conditions 
> > from the above). 
>
> Can you just put that into formulas, to make things really clear? State 
> the equations you want to solve and for each type of boundary what 
> conditions hold there. 
>
> Maybe that would make it easier for everyone to understand what the 
> other side is saying :-) 
>
>
> > Pardon my potential ignorance, but I'm unsure how I can put these in the 
> > weak form when I have component-wise conditions (partial conditions). 
> > I had thought to use compute_no_normal_flux_constraints for the side 
> > boundaries and compute_nonzero_normal_flux_constraints for the other two 
> > as constraints, but this doesn't seem to work. 
>
> Be specific here as well: what doesn't work? Do you get an assertion, or 
> a wrong solution, or a compiler error, ...? 
>
> Best 
>   W. 
>
> -- 
> 

Re: [deal.II] Re: independent triangulations on different mpi processes

2017-12-12 Thread Jose Javier Munoz Criollo
I see. Then I'm not sure whether any optimization I'd get from it would 
offset the risk of errors.


Thanks for the insight.


Best regards
Javier


On 11/12/17 17:56, Wolfgang Bangerth wrote:

On 12/11/2017 06:57 AM, Bruno Turcksin wrote:
    My current problem involves the use of a relatively big main domain that
    is distributed across a number of MPI processes. The solution on this
    domain is coupled with the solutions on smaller domains. At the moment the
    smaller domains are distributed on the same number of MPI processes as the
    main one. However, I wonder if it would be more efficient to assign each
    of these smaller domains to a specific MPI process instead of distributing
    them, and if so, what would be the best way to do this.

p::d::Triangulation takes a communicator in the constructor. So if 
you split your communicator, the smaller domains won't use the same 
processors as the largest one. However, I am not sure how the linear 
algebra is going to work. The distributed vectors also take 
communicators, so depending on whether you work on a sub-problem or on the 
whole problem you will need to use different communicators. I have 
never done anything like that so I can't say if it is easy or hard to 
do.


It's conceptually not difficult to do -- every parallel object we have 
takes a communicator argument, and these communicators need not be 
equal to MPI_COMM_WORLD: they can contain a subset of processors, and 
in that case only a subset of processors will own the object and 
participate in communication.
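
For concreteness, a minimal sketch of that splitting idea; the even/odd grouping, the dimension, and all names here are illustrative assumptions, not code from this thread:

#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>

#include <mpi.h>

using namespace dealii;

int main(int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);

  const unsigned int rank = Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);

  // Illustrative split: even ranks form one group, odd ranks the other.
  MPI_Comm sub_comm;
  MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &sub_comm);

  {
    // The big domain lives on all processors ...
    parallel::distributed::Triangulation<2> big_tria(MPI_COMM_WORLD);

    // ... while a small domain is owned only by the ranks in its group.
    // Every DoFHandler, vector, and matrix built for it must then use the
    // same sub-communicator.
    parallel::distributed::Triangulation<2> small_tria(sub_comm);

    // ... build meshes and solve, branching on which group this rank is in ...
  }

  MPI_Comm_free(&sub_comm);
  return 0;
}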


The much more difficult aspect of this is that you now need to make 
sure that every processor knows which objects it owns and which 
communication it is supposed to participate in. In other words, you 
lose the fact that in all programs you've likely seen before, every 
processor does the same thing at all times, because every processor 
participates in every communication step. By splitting objects to a 
subset of processors, you will end up with code that has a lot of 
statements such as

  if (my processor participates in object A)
    {
      do something with object A;
      build some linear system;
      solve linear system;
    }
  if (my processor participates in object B)
    ...

It's not *difficult* to write code like this, it's just error prone 
and unwieldy.


Best
 W.





[deal.II] Re: candi deal.ii

2017-12-12 Thread Daniel Arndt
Peimeng,

adding to what Uwe said, you don't need a Fortran compiler to build 
PETSc. In particular, you can also try setting `--with-fc=0` in the 
CONFOPTS in deal.II-toolchain/packages/petsc.package.

Best,
Daniel



[deal.II] Re: candi deal.ii

2017-12-12 Thread 'Uwe Köcher' via deal.II User Group
Dear Peimeng,

The error is:
Fortran error! mpif.h could not be located at:

It looks like you either have not installed the development packages of
your MPI compiler, or something else is wrong with your MPI installation
(e.g. multiple installations, etc.).

It is hard to say what is going wrong.

Kind regards
Uwe

On Tuesday, 12 December 2017 07:35:14 UTC+1, Peimeng Yin wrote:
>
> Installing deal.II via candi
>
>
> ===
>  
> TESTING: FortranMPICheck from 
> config.packages.MPI(config/BuildSystem/config/pack***
>  UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for 
> details):
>
> ---
> Fortran error! mpif.h could not be located at: []
>
> ***
>
> Failure with exit status: 
> 1 
> Exit message: petsc ./configure failed
>
>
> How to fix it?
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.