Re: [deal.II] Re: Renumbering dofs with petsc + block + MPI + Direct solver work around

2017-02-09 Thread Daniel Jodlbauer
Actually, MUMPS is included in the Amesos solver used by 
TrilinosWrappers::SolverDirect("Amesos_Mumps"). You may have to recompile 
Trilinos with the corresponding flags to enable it (and probably deal.II as 
well).
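
For reference, a minimal sketch of how the Amesos backend can be selected
through the deal.II Trilinos wrappers (matrix and vector names are
placeholders, and the chosen backend must of course be enabled in the
Trilinos build):

  #include <deal.II/lac/solver_control.h>
  #include <deal.II/lac/trilinos_solver.h>

  // Sketch only: pick the Amesos solver type via AdditionalData.
  // "Amesos_Mumps" (or "Amesos_Superludist") requires a Trilinos build
  // with the corresponding Amesos adapter enabled.
  SolverControl                                  solver_control;
  TrilinosWrappers::SolverDirect::AdditionalData data(
    /*output_solver_details=*/false,
    /*solver_type=*/"Amesos_Mumps");
  TrilinosWrappers::SolverDirect direct_solver(solver_control, data);
  direct_solver.solve(system_matrix, solution, system_rhs);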

Am Donnerstag, 9. Februar 2017 20:52:16 UTC+1 schrieb Bruno Turcksin:
>
> 2017-02-09 14:33 GMT-05:00 Spencer Patty >: 
>
> > Interesting,  I wondered if SuperLU_dist might be parallel but I hadn't 
> > looked into it yet.  If it does work, then that certainly makes things 
> much 
> > simpler since I have trilinos integrated well.  I will look into 
> installing 
> > it and see if it will work. I see what you mean by it not being the 
> easiest 
> > code to install. 
> > 
> > Once it is installed, I then have to link it into trilinos?  Then it is 
> > available as an option in AdditionalData. 
> Yes, deal.II just passes the options to Amesos. You need to install 
> parmetis, SuperLU_dist, and then finally Trilinos. I would encourage 
> you to use candi or spack to install deal.II with SuperLU_dist support. 
> If you want to install everything yourself, you can take a look at 
> candi to see how to install SuperLU_dist and enable it in Trilinos. 
>
> Best, 
>
> Bruno 
>



Re: [deal.II] Fully distributed triangulation (level 0)

2017-02-09 Thread Timo Heister
see https://github.com/dealii/dealii/pull/3956 for the work in
progress pull request.

On Wed, Feb 8, 2017 at 1:53 PM, Timo Heister  wrote:
>> Are you willing to share that code, Timo?
>
> Yes, we will be creating a PR for that soon.
>
>> I suspect that if implemented right, it should not be terribly difficult to
>> do refinement of the mesh, but because you can't repartition the coarse
>> mesh, it will quickly become unbalanced if processors refine differently
>> (i.e., in practice, if processors do not all refine globally).
>
> The issue with that is that you would need to update the ghost layer
> correctly. Doable, but definitely requires some extra thought.
>
>> Do you implement this by building another class on top of
>> dealii::Triangulation so that the base class only stored the coarse mesh
>> plus one layer of ghosts, and the derived class is responsible for the
>> communication? And then derive another class from DoFHandlerPolicy to deal
>> with this triangulation?
>
> Yes. It is derived from parallel::Triangulation (and an alternative to
> shared::Tria and distributed::Tria).
>
> --
> Timo Heister
> http://www.math.clemson.edu/~heister/



-- 
Timo Heister
http://www.math.clemson.edu/~heister/



Re: [deal.II] Re: Renumbering dofs with petsc + block + MPI + Direct solver work around

2017-02-09 Thread Bruno Turcksin
2017-02-09 14:33 GMT-05:00 Spencer Patty :
> Interesting,  I wondered if SuperLU_dist might be parallel but I hadn't
> looked into it yet.  If it does work, then that certainly makes things much
> simpler since I have trilinos integrated well.  I will look into installing
> it and see if it will work. I see what you mean by it not being the easiest
> code to install.
>
> Once it is installed, I then have to link it into trilinos?  Then it is
> available as an option in AdditionalData.
Yes, deal.II just passes the options to Amesos. You need to install
parmetis, SuperLU_dist, and then finally Trilinos. I would encourage
you to use candi or spack to install deal.II with SuperLU_dist support.
If you want to install everything yourself, you can take a look at
candi to see how to install SuperLU_dist and enable it in Trilinos.

Best,

Bruno



[deal.II] Re: Renumbering dofs with petsc + block + MPI + Direct solver work around

2017-02-09 Thread Spencer Patty
Interesting,  I wondered if SuperLU_dist might be parallel but I hadn't 
looked into it yet.  If it does work, then that certainly makes things much 
simpler since I have trilinos integrated well.  I will look into installing 
it and see if it will work. I see what you mean by it not being the easiest 
code to install.  

Once it is installed, I then have to link it into trilinos?  Then it is 
available as an option in AdditionalData.

I suppose for my own benefit and better understanding of the systems, I am 
still interested in understanding what it would take to accomplish the 
original question above so that I can use the general linear algebra system 
in every case I have in front of me.  If it is too much work and the above 
works then I suppose I will be content with just having the trilinos system 
for direct solvers...  :)


On Thursday, February 9, 2017 at 12:46:18 PM UTC-6, Bruno Turcksin wrote:
>
> Hi,
>
> this is not the answer to your question but if I understand correctly, 
> everything works fine with Trilinos and the only reason why you need PETSc 
> is to use MUMPS. If that's the case, instead of using Amesos_KLU with 
> Trilinos, you can use SuperLU_dist (
> http://crd-legacy.lbl.gov/~xiaoye/SuperLU/#superlu_dist). It's not the 
> simplest code to install but SuperLU_dist is parallel and the only thing 
> that you need to change in your code is the option in AdditionalData. 
>
> Best,
>
> Bruno
>
> On Thursday, February 9, 2017 at 1:19:03 PM UTC-5, Spencer Patty wrote:
>>
>>
>> A problem I am working on results in a non symmetric 4x4 block matrix 
>> system with the first block representing a vector valued velocity and the 
>> remaining 3 blocks scalar quantities that are all coupled.  
>>
>> The fe system is represented as 
>>
>> FESystem (FESystem(FE_Q 
>> (parameters.degree_of_fe_velocity),dim), 1,  // velocity
>>
>>FE_Q(parameters.degree_of_fe_velocity), 1, // normal 
>> velocity
>>
>>FE_Q(parameters.degree_of_fe_velocity), 1, // 
>> curvature
>>
>>FE_Q(parameters.degree_of_fe_velocity), 1  // 
>> willmore force
>>
>>   )
>>
>>
>> The other terms are needed for the physics of the problem I am working on 
>> but in the end, all I really need is the velocity.  After solving for these 
>> components, we extract the velocity component and create a new DofHandler 
>> consisting only of the velocity dofs and pass those to a transport module 
>> where they are the velocity field to be used.  In order to accomplish this 
>> extraction it seems necessary to have the dofs separated blockwise so that 
>> we can extract the first block and use it as is.  We thus apply the 
>> renumberings
>>
>>
>> DoFRenumbering::hierarchical (*(dof_handler_ptr));
>>
>> DoFRenumbering::component_wise (*(dof_handler_ptr)),
>>
>> system_sub_blocks);
>>
>>
>> where system_sub_blocks has 0 for the first dim components and then 
>> increasing by one for the rest of the components.  (essentially this is the 
>> same as block wise renumbering)
>>
>>
>> Now, we have not been able to come up with a good preconditioner for this 
>> system yet so that iterative methods all currently fail.  Thus, I must 
>> resort to direct methods.  I have succeeded in coming up with a way to 
>> construct the system not as a block but as a TrilinosWrappers::SparseMatrix 
>> for putting into the Amesos_klu or available direct solvers through 
>> Trilinos and it works great.  Afterwards, we copy the non block solution 
>> vector back to a block vector for all the other parts since it is only the 
>> solver that needs a non block system.
>>
>>
>>   if (parameters.bUseBlockSystemMatrix == false)
>>
>>   {
>>
>> system_hanging_node_constraints_and_bv_velocity
>> .distribute(solution_notblock_lo);
>>
>> solution_notblock_lr = solution_notblock_lo;
>>
>> // copy solution_notblock_lo to solution_lo
>>
>> IndexSet::ElementIterator iter = locally_owned_dofs_ptr->begin(),
>>
>>end = locally_owned_dofs_ptr->end();
>>
>> for (; iter != end; ++iter)
>>
>>   solution_lo(*iter) = solution_notblock_lo(*iter);
>>
>>
>> // since we have inserted (set) the values of the solution_lo 
>> vector,
>>
>> // we must now compress with the insert operation to be complete.
>>
>> solution_lo.compress(VectorOperation::insert);
>>
>>   }
>>
>>   system_hanging_node_constraints_and_bv_velocity
>> .distribute(solution_lo);
>>
>>   solution_lr = solution_lo;
>>
>>
>>   This has worked well for us but these trilinos direct solvers are only 
>> serial solvers and we want to solve 2D and 3D systems so we need something 
>> that can handle larger problems.  Our next idea was to try out MUMPS in 
>> parallel through petsc to see if it would expand our available size of 
>> problem.  I have been able to rewrite the code base to use generic

[deal.II] Re: Renumbering dofs with petsc + block + MPI + Direct solver work around

2017-02-09 Thread Bruno Turcksin
Hi,

this is not the answer to your question but if I understand correctly, 
everything works fine with Trilinos and the only reason why you need PETSc 
is to use MUMPS. If that's the case, instead of using Amesos_KLU with 
Trilinos, you can use SuperLU_dist 
(http://crd-legacy.lbl.gov/~xiaoye/SuperLU/#superlu_dist). It's not the 
simplest code to install but SuperLU_dist is parallel and the only thing 
that you need to change in your code is the option in AdditionalData.

Best,

Bruno

On Thursday, February 9, 2017 at 1:19:03 PM UTC-5, Spencer Patty wrote:
>
>
> A problem I am working on results in a non symmetric 4x4 block matrix 
> system with the first block representing a vector valued velocity and the 
> remaining 3 blocks scalar quantities that are all coupled.  
>
> The fe system is represented as 
>
> FESystem (FESystem(FE_Q 
> (parameters.degree_of_fe_velocity),dim), 1,  // velocity
>
>FE_Q(parameters.degree_of_fe_velocity), 1, // normal 
> velocity
>
>FE_Q(parameters.degree_of_fe_velocity), 1, // 
> curvature
>
>FE_Q(parameters.degree_of_fe_velocity), 1  // 
> willmore force
>
>   )
>
>
> The other terms are needed for the physics of the problem I am working on 
> but in the end, all I really need is the velocity.  After solving for these 
> components, we extract the velocity component and create a new DofHandler 
> consisting only of the velocity dofs and pass those to a transport module 
> where they are the velocity field to be used.  In order to accomplish this 
> extraction it seems necessary to have the dofs separated blockwise so that 
> we can extract the first block and use it as is.  We thus apply the 
> renumberings
>
>
> DoFRenumbering::hierarchical (*(dof_handler_ptr));
>
> DoFRenumbering::component_wise (*(dof_handler_ptr)),
>
> system_sub_blocks);
>
>
> where system_sub_blocks has 0 for the first dim components and then 
> increasing by one for the rest of the components.  (essentially this is the 
> same as block wise renumbering)
>
>
> Now, we have not been able to come up with a good preconditioner for this 
> system yet so that iterative methods all currently fail.  Thus, I must 
> resort to direct methods.  I have succeeded in coming up with a way to 
> construct the system not as a block but as a TrilinosWrappers::SparseMatrix 
> for putting into the Amesos_klu or available direct solvers through 
> Trilinos and it works great.  Afterwards, we copy the non block solution 
> vector back to a block vector for all the other parts since it is only the 
> solver that needs a non block system.
>
>
>   if (parameters.bUseBlockSystemMatrix == false)
>
>   {
>
> system_hanging_node_constraints_and_bv_velocity
> .distribute(solution_notblock_lo);
>
> solution_notblock_lr = solution_notblock_lo;
>
> // copy solution_notblock_lo to solution_lo
>
> IndexSet::ElementIterator iter = locally_owned_dofs_ptr->begin(),
>
>end = locally_owned_dofs_ptr->end();
>
> for (; iter != end; ++iter)
>
>   solution_lo(*iter) = solution_notblock_lo(*iter);
>
>
> // since we have inserted (set) the values of the solution_lo 
> vector,
>
> // we must now compress with the insert operation to be complete.
>
> solution_lo.compress(VectorOperation::insert);
>
>   }
>
>   system_hanging_node_constraints_and_bv_velocity
> .distribute(solution_lo);
>
>   solution_lr = solution_lo;
>
>
>   This has worked well for us but these trilinos direct solvers are only 
> serial solvers and we want to solve 2D and 3D systems so we need something 
> that can handle larger problems.  Our next idea was to try out MUMPS in 
> parallel through petsc to see if it would expand our available size of 
> problem.  I have been able to rewrite the code base to use generic linear 
> algebra to switch between petsc and trilinos type vectors matrices and 
> solvers/preconditioners. (That was a surprisingly large amount of work to 
> get all the interfaces used consistently)  It works as expected for 
> problems which use the iterative solvers (still as block systems) but we 
> run into problems with the direct solvers. 
>
>
> It appears that for petsc, the assumption that the locally owned dofs 
> Index Sets are contiguous is really throwing a wrench in our plans for the 
> non block system approach.  I have seen the other discussions where 
> everyone says essentially it is not currently possible to get petsc and 
> petsc wrappers to a point where it could use non contiguous locally owned 
> index sets.  I guess I am ok with this since I believe I may have found a 
> way around it but I am having trouble figuring out exactly how to execute 
> this plan.
>
>
> Thus, in the case that we are using the direct solvers, we do not renumber 
> cellwise which gives us the contiguous index sets and we co

[deal.II] Renumbering dofs with petsc + block + MPI + Direct solver work around

2017-02-09 Thread Spencer Patty

A problem I am working on results in a non symmetric 4x4 block matrix 
system with the first block representing a vector valued velocity and the 
remaining 3 blocks scalar quantities that are all coupled.  

The fe system is represented as 

FESystem<dim> (FESystem<dim>(FE_Q<dim>(parameters.degree_of_fe_velocity), dim), 1, // velocity
               FE_Q<dim>(parameters.degree_of_fe_velocity), 1,  // normal velocity
               FE_Q<dim>(parameters.degree_of_fe_velocity), 1,  // curvature
               FE_Q<dim>(parameters.degree_of_fe_velocity), 1   // willmore force
               )


The other terms are needed for the physics of the problem I am working on 
but in the end, all I really need is the velocity.  After solving for these 
components, we extract the velocity component and create a new DofHandler 
consisting only of the velocity dofs and pass those to a transport module 
where they are the velocity field to be used.  In order to accomplish this 
extraction it seems necessary to have the dofs separated blockwise so that 
we can extract the first block and use it as is.  We thus apply the 
renumberings


DoFRenumbering::hierarchical (*(dof_handler_ptr));
DoFRenumbering::component_wise (*(dof_handler_ptr), system_sub_blocks);


where system_sub_blocks has 0 for the first dim components and then increases
by one for each of the remaining components (essentially this is the same as
a block-wise renumbering).
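
For concreteness, a short sketch of how such a component-to-block map could be
set up for the dim + 3 components described above (illustration only, not the
author's exact code):

  // dim velocity components go into block 0; each remaining scalar
  // field (normal velocity, curvature, willmore force) gets its own block.
  std::vector<unsigned int> system_sub_blocks(dim + 3, 0);
  system_sub_blocks[dim]     = 1;  // normal velocity
  system_sub_blocks[dim + 1] = 2;  // curvature
  system_sub_blocks[dim + 2] = 3;  // willmore force

This vector is then passed to DoFRenumbering::component_wise as shown above.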


Now, we have not been able to come up with a good preconditioner for this
system yet, so iterative methods all currently fail.  Thus, I must resort to
direct methods.  I have succeeded in coming up with a way to construct the
system not as a block matrix but as a TrilinosWrappers::SparseMatrix that can
be handed to Amesos_Klu or the other direct solvers available through
Trilinos, and it works great.  Afterwards, we copy the non-block solution
vector back to a block vector for all the other parts, since it is only the
solver that needs a non-block system.


  if (parameters.bUseBlockSystemMatrix == false)
  {
    system_hanging_node_constraints_and_bv_velocity.distribute(solution_notblock_lo);
    solution_notblock_lr = solution_notblock_lo;

    // copy solution_notblock_lo to solution_lo
    IndexSet::ElementIterator iter = locally_owned_dofs_ptr->begin(),
                              end  = locally_owned_dofs_ptr->end();
    for (; iter != end; ++iter)
      solution_lo(*iter) = solution_notblock_lo(*iter);

    // since we have inserted (set) the values of the solution_lo vector,
    // we must now compress with the insert operation to be complete.
    solution_lo.compress(VectorOperation::insert);
  }

  system_hanging_node_constraints_and_bv_velocity.distribute(solution_lo);
  solution_lr = solution_lo;


  This has worked well for us, but these Trilinos direct solvers are serial
only, and we want to solve 2D and 3D systems, so we need something that can
handle larger problems.  Our next idea was to try out MUMPS in parallel
through PETSc to see if it would expand the size of problem available to us.
I have been able to rewrite the code base to use generic linear algebra to
switch between PETSc- and Trilinos-type vectors, matrices, and
solvers/preconditioners.  (That was a surprisingly large amount of work to
get all the interfaces used consistently.)  It works as expected for problems
which use the iterative solvers (still as block systems), but we run into
problems with the direct solvers.


It appears that, for PETSc, the assumption that the locally owned dof
IndexSets are contiguous is really throwing a wrench in our plans for the
non-block system approach.  I have seen the other discussions where everyone
says that it is essentially not currently possible to bring PETSc and the
PETSc wrappers to a point where they could use non-contiguous locally owned
index sets.  I am okay with this, since I believe I may have found a way
around it, but I am having trouble figuring out exactly how to execute this
plan.


Thus, in the case that we are using the direct solvers, we do not renumber
cellwise, which gives us the contiguous index sets; we construct the
non-block system and give it to MUMPS, and it solves the system just fine.  I
have tried with 1 or 2 processors so far, and it returns a solution ready to
be passed on to the transport module.  Now, the extraction of the velocity
component is the tricky part and the subject of this question.


With the trilinos system, the following code snippet allowed me to extract 
the desired solution vector and a corresponding dof handler. (where 
fe_system_ptr was the full 4 block fe system object)


  // Sadly, we have no recourse except to construct a new dof_handler
  // representing the velocity block and make sure it has the same ordering
  // as the system dof_handler does.  So we give it the desired fe_system
  //

Re: [deal.II] Nonhomogeneous Dirichlet Boundary conditions using a Dirichlet lift

2017-02-09 Thread Giulia Deolmi
Thanks a lot!
I will have a look at it,
kind regards,
Giulia

Il giorno giovedì 9 febbraio 2017 15:19:07 UTC+1, Wolfgang Bangerth ha 
scritto:
>
>
> > as far as I have understood (but I might be wrong), the functions 
> > VectorTools::interpolate_boundary_values 
> > <
> https://www.dealii.org/8.4.0/doxygen/deal.II/namespaceVectorTools.html#af6f700f193e9d5b52e9efe55e9b872d5>
>  
>
> > MatrixTools::apply_boundary_values 
> > <
> https://www.dealii.org/8.4.0/doxygen/deal.II/namespaceMatrixTools.html#a41a069894610445f84840d712d4f891e>
>  
>
> > find the nodes where Dirichlet BC's are applied and then there impose 
> the 
> > corrensponding boundary value, after having built the system matrix and 
> > right-hand side. 
> > 
> > Another possibility would be to use a Dirichlet lift, change the weak 
> > formulation and solve for homogeneous Dirichlet boundary conditions. I 
> am 
> > wondering if someone already did this or if it somewhere implemented in 
> deal.ii 
>
> It may not look like it, but that's really what the functions do that you 
> cite 
> above. 
>
> The algorithm is a bit complicated, but take a look at lectures 21.6 and 
> 21.65 
> here: 
>http://www.math.colostate.edu/~bangerth/videos.html 
>
> Best 
>   Wolfgang 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
>



Re: [deal.II] Nonhomogeneous Dirichlet Boundary conditions using a Dirichlet lift

2017-02-09 Thread Wolfgang Bangerth



as far as I have understood (but I might be wrong), the functions
VectorTools::interpolate_boundary_values

MatrixTools::apply_boundary_values

find the nodes where Dirichlet BCs are applied and then impose the
corresponding boundary value there, after having built the system matrix and
right-hand side.

Another possibility would be to use a Dirichlet lift, change the weak
formulation, and solve for homogeneous Dirichlet boundary conditions. I am
wondering if someone has already done this or if it is implemented somewhere in deal.II.


It may not look like it, but that's really what the functions do that you cite 
above.


The algorithm is a bit complicated, but take a look at lectures 21.6 and 21.65 
here:

  http://www.math.colostate.edu/~bangerth/videos.html

Best
 Wolfgang

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Assemble Righthand Side for vector-valued problem

2017-02-09 Thread Wolfgang Bangerth

On 02/09/2017 04:45 AM, Jaekwang Kim wrote:


The manufactured solution for the velocities does not satisfy the continuity
equation
(i.e., the manufactured solution does not satisfy div u = 0)...


Yes, that would do it :-)
Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Assemble Righthand Side for vector-valued problem

2017-02-09 Thread Jaekwang Kim

Dr. Bangerth,


Thank you,
I just fixed what was wrong...
As is often the case, it was a simple problem.

The manufactured solution for the velocities did not satisfy the continuity
equation
(i.e., the manufactured solution does not satisfy div u = 0)...
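
For illustration, one standard divergence-free choice in 2D is
u = (sin(pi x) cos(pi y), -cos(pi x) sin(pi y)), since
d(u_1)/dx + d(u_2)/dy = pi cos(pi x) cos(pi y) - pi cos(pi x) cos(pi y) = 0.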

Jaekwang Kim  



Re: [deal.II] Assemble Righthand Side for vector-valued problem

2017-02-09 Thread Jaekwang Kim
Thank you for your advice. 

I would make the problem simpler. Do you get the right solution if you 
> have a constant viscosity? If you iterate, do you get the solution after 
> one iteration? 
>

Yes, I checked this.
I get the correct solution after one iteration in the constant-viscosity case.
 

>
> I will also note that your manufactured solution is symmetric, but your 
> computational solution is not. This provides you with a powerful way 
> because you don't actually need to look at the convergence, it's enough 
> to check that on a *coarse* mesh the solution is *symmetric*. It may be 
> that solution is already non-symmetric on 1 cell, or on a 2x2 mesh -- in 
> which case you already know that something is wrong. The question then 
> is why that is so -- is your rhs correct, for example? 
>

After fixing some of the mistakes, I now have the following result for the
pressure field, with the same manufactured pressure solution:

[attached images: computed pressure field on a <2x2 mesh> and on a <4x4 mesh>]

For the velocity field, I still get an at least qualitatively good result, as
I posted in my previous comment, though it does not converge under mesh
refinement when compared to the manufactured solution in the L2 norm.
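
For reference, a minimal sketch of the kind of L2-norm comparison described
above (ExactSolution<dim> stands in for the manufactured solution; all names
are placeholders):

  #include <deal.II/base/quadrature_lib.h>
  #include <deal.II/numerics/vector_tools.h>

  // Sketch only: compute the cellwise L2 difference between the computed
  // solution and the manufactured one, then accumulate the global error.
  Vector<float> difference_per_cell(triangulation.n_active_cells());
  VectorTools::integrate_difference(dof_handler,
                                    solution,
                                    ExactSolution<dim>(),
                                    difference_per_cell,
                                    QGauss<dim>(fe.degree + 2),
                                    VectorTools::L2_norm);
  const double L2_error = difference_per_cell.l2_norm();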


 
 

> Best 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>



Re: [deal.II] Nonhomogeneous Dirichlet Boundary conditions using a Dirichlet lift

2017-02-09 Thread Giulia Deolmi
Hi Praveen,

as far as I have understood (but I might be wrong), the functions
VectorTools::interpolate_boundary_values 

MatrixTools::apply_boundary_values 

find the nodes where Dirichlet BCs are applied and then impose the 
corresponding boundary value there, after having built the system matrix and 
right-hand side.
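
For reference, a minimal sketch of how these two functions are typically used
(the boundary id 0 and the BoundaryValues<dim> function are placeholders from
a generic deal.II program):

  #include <deal.II/numerics/vector_tools.h>
  #include <deal.II/numerics/matrix_tools.h>

  // Sketch only: interpolate the Dirichlet data on boundary id 0, then
  // eliminate the corresponding rows/columns of the assembled system.
  std::map<types::global_dof_index, double> boundary_values;
  VectorTools::interpolate_boundary_values(dof_handler,
                                           0,
                                           BoundaryValues<dim>(),
                                           boundary_values);
  MatrixTools::apply_boundary_values(boundary_values,
                                     system_matrix,
                                     solution,
                                     system_rhs);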

Another possibility would be to use a Dirichlet lift, change the weak 
formulation, and solve for homogeneous Dirichlet boundary conditions. I am 
wondering if someone has already done this or if it is implemented somewhere 
in deal.II.

I am currently dealing with parameter-dependent Dirichlet boundary 
conditions, and I would like to be able to see explicitly how these 
parameters enter the system matrix and the right-hand side, writing the 
dependency in an affine way, i.e. (p1 A1 + p2 A2 + ...) u = p1 f1 + p2 f2 + ...
I am currently not able to do this using the function 
MatrixTools::apply_boundary_values.

Thanks for your reply!
Kind regards,
Giulia



Il giorno mercoledì 8 febbraio 2017 17:21:46 UTC+1, Praveen C ha scritto:
>
> Hello Giulia
>
> The usual way of applying Dirichlet bc in deal.II essentially does a 
> lifting approach. If 
>
> u = g on boundary
>
> then the lifting is
>
> u_{g,h}(x) = sum_(i on boundary) g(x_i) \phi_i(x)
>
> Did you want to use a different lifting ?
>
> Best
> praveen
>
> On Wed, Feb 8, 2017 at 8:38 PM, Giulia Deolmi  > wrote:
>
>> Dear deal.ii users,
>>
>> is there someone who implemented Nonhomogeneous Dirichlet Boundary 
>> conditions using a Dirichlet lift? 
>>
>> Thanks a lot in advance,
>> Kind regards,
>> Giulia
>>
>> -- 
>> The deal.II project is located at http://www.dealii.org/
>> For mailing list/forum options, see 
>> https://groups.google.com/d/forum/dealii?hl=en
>> --- 
>> You received this message because you are subscribed to the Google Groups 
>> "deal.II User Group" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to dealii+un...@googlegroups.com .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
