Dear Vikram,
Yes, I agree with your remarks on the geometric singularity. Of course, refinement can remedy this to some extent. But from the point of view of a practical, large problem, which is better: this weak implementation of the Dirichlet BC, or direct nodewise assignment of the values? Might the latter introduce other issues, such as convergence problems? That is my first question.
  
At the same time, I compared the result discussed above with results from a Feel++ code on triangular meshes of roughly the same resolution (gmsh.hsize = 0.05). There, further weak terms are added for symmetry of the Jacobian matrix, such as

-\int_{\Gamma_D} (\mu\nabla\vec{v} - qI)\cdot\vec{n}\cdot(\vec{u} - \vec{u}_{bc})
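If I understand it correctly, together with the penalty and consistency terms this amounts to the standard symmetric Nitsche form for the velocity Dirichlet condition, roughly

\int_{\Gamma_D} \frac{\gamma}{h} (\vec{u}-\vec{u}_{bc})\cdot\vec{v}
  - \int_{\Gamma_D} \left[(\mu\nabla\vec{u} - pI)\cdot\vec{n}\right]\cdot\vec{v}
  - \int_{\Gamma_D} \left[(\mu\nabla\vec{v} - qI)\cdot\vec{n}\right]\cdot(\vec{u}-\vec{u}_{bc}),

with \gamma the penalty parameter and h the local mesh size, though I cannot confirm that this is exactly what Feel++ assembles internally.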

Since Feel++ encapsulates the details of the BC implementation quite well, I cannot yet tell you how it works at the source-code level. I inserted these terms into my libMesh code as well, but the comparison showed that the results still do not match the accuracy of the Feel++ code. If you like, I can send it to you later.
 Zhenyu

  

 

------------------ Original Message ------------------
From: "Vikram Garg" <[email protected]>
Date: 2015-03-19 00:59
To: "grandrabbit" <[email protected]>
Cc: "libmesh-users" <[email protected]>
Subject: Re: [Libmesh-users] Proper way of imposing Weak Dirichlet boundary condition

 

What happens if you refine the mesh? The spurious oscillation you are seeing could be because of the singularity at the corners, where the boundary condition is not well defined.
 
 On Tue, Mar 17, 2015 at 7:59 PM, grandrabbit <[email protected]> wrote:
 Thank you for your timely response.

I reset h_elem to 1.0; no big difference. In fact, in this case the mesh is uniform, with 20 cells along one dimension, so h_elem is a constant, say 1.25e-2. Since I set the penalty to 1.e+8, the effective penalty is 1.e8 / 1.25e-2 = 8.e9, similar to intro_ex3. Therefore I think the real cause lies elsewhere.
BTW, is there any example for TransientNonlinearImplicitSystem? When I started coding I referred to systems_of_equations_ex2, which uses TransientLinearImplicitSystem and runs well; I then switched to TransientNonlinearImplicitSystem and rewrote the residual function based on that.
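For reference, this is roughly how I set it up (a simplified sketch with illustrative names, not taken verbatim from my laminar2dnl.cpp):

// Sketch: declare a TransientNonlinearImplicitSystem and attach the
// separately coded residual/Jacobian callbacks.
#include "libmesh/equation_systems.h"
#include "libmesh/transient_system.h"
#include "libmesh/nonlinear_implicit_system.h"
#include "libmesh/nonlinear_solver.h"
#include "libmesh/numeric_vector.h"
#include "libmesh/sparse_matrix.h"
#include "libmesh/enum_order.h"
#include "libmesh/enum_fe_family.h"

using namespace libMesh;

// callbacks with the NonlinearImplicitSystem signatures (defined elsewhere)
void compute_residual (const NumericVector<Number> & X,
                       NumericVector<Number> & R,
                       NonlinearImplicitSystem & sys);
void compute_jacobian (const NumericVector<Number> & X,
                       SparseMatrix<Number> & J,
                       NonlinearImplicitSystem & sys);

void setup (EquationSystems & es)
{
  TransientNonlinearImplicitSystem & system =
    es.add_system<TransientNonlinearImplicitSystem> ("NavierStokes");

  system.add_variable ("u", SECOND, LAGRANGE);
  system.add_variable ("v", SECOND, LAGRANGE);
  system.add_variable ("p", FIRST,  LAGRANGE);

  // hook up the residual and Jacobian functions
  system.nonlinear_solver->residual = compute_residual;
  system.nonlinear_solver->jacobian = compute_jacobian;

  es.init ();
}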

Anyway, I list the results below for reference; some screenshots are attached as well. Further suggestions are welcome.

case 1 (good one): DirichletBoundary class used, no spurious velocity near the wall
penalty 1.e8, residual 1.e-16~1.e-17, nonlinear convergence 7.28e-4
u in [-0.196, 1.0]; p in [-110, +111]
DirichletBoundary_class.png

case 2 (the problem case): self-coded penalty integral (see laminar2dnl.cpp), nonzero velocity at the corner wall
penalty 1.e8, residual 1.e-9~1.e-10, nonlinear convergence 7.286e-4
u in [-0.203, 1.07]; p in [-195, +197]
h_elem: 1.25e-2
h_elem_0.0125.png
h_elem_0.0125_corner.png

case 3 (also a problem): self-coded penalty integral (see laminar2dnl.cpp), nonzero velocity at the corner wall
penalty 1.e8, residual 1.e-11~1.e-12, nonlinear convergence 7.28e-4
u in [-0.203, 1.07]; p in [-195, +197]
h_elem: 1.0
h_elem_1.0.png
h_elem_1.0_corner.png

Cheers,

Zhenyu
-----Original Message-----
From: "Vikram Garg" <[email protected]>
Sent: 2015-03-18 01:45:12
To: Zhang <[email protected]>
Cc: "[email protected]" <[email protected]>
Subject: Re: [Libmesh-users] Proper way of imposing Weak Dirichlet boundary condition

Zhenyu, can you tell us what happens if you remove the h_elem term in your penalty formulation? Usually, we just set the penalty to a very large number and do not involve the mesh size in setting it. See example 3:
http://libmesh.github.io/examples/introduction_ex3.html
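The relevant part of that example looks roughly like this (paraphrased from memory, so variable names may differ; see the link for the exact code):

// Inside the loop over boundary sides, example 3 adds a pure penalty
// term with a large constant and no mesh-size scaling.
const Real penalty = 1.e10;

for (unsigned int qp=0; qp<qface.n_points(); qp++)
  {
    // desired boundary value; the example evaluates its exact solution here
    const Number value = 0.;

    // penalty contribution to the element matrix
    for (unsigned int i=0; i<phi_face.size(); i++)
      for (unsigned int j=0; j<phi_face.size(); j++)
        Ke(i,j) += JxW_face[qp]*penalty*phi_face[i][qp]*phi_face[j][qp];

    // penalty contribution to the element right-hand side
    for (unsigned int i=0; i<phi_face.size(); i++)
      Fe(i) += JxW_face[qp]*penalty*value*phi_face[i][qp];
  }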
  


On Tue, Mar 17, 2015 at 8:09 AM, Zhang <[email protected]> wrote:
Dear Libmeshers,

I have tried to write an incompressible Navier-Stokes solver based on libMesh, with the following features:
 coupled pressure-velocity method
 TransientNonlinearImplicitSystem, with compute_jacobian(..) and compute_residual(..) run separately
 lid-driven cavity case for the initial tests
 BDF1 time marching
 QUAD4 elements
 vel_order=2, pres_order=1, both LAGRANGE
 pressure pinned at node_id=0

I tried the DirichletBoundary class to impose the velocity BCs, and the code runs OK.
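For the record, that version sets up the velocity BCs roughly as follows (a sketch; the boundary ids 0, 1, 3 for the fixed walls and 2 for the lid are just my assumption here, and system, u_var, v_var are defined in the usual setup code):

#include "libmesh/dirichlet_boundaries.h"
#include "libmesh/zero_function.h"
#include "libmesh/const_function.h"
#include "libmesh/dof_map.h"

// no-slip walls: u = v = 0
{
  std::set<boundary_id_type> wall_ids;
  wall_ids.insert(0); wall_ids.insert(1); wall_ids.insert(3);

  std::vector<unsigned int> vars;
  vars.push_back(u_var);
  vars.push_back(v_var);

  ZeroFunction<Number> zero;
  system.get_dof_map().add_dirichlet_boundary
    (DirichletBoundary(wall_ids, vars, &zero));
}

// moving lid: u = 1, v = 0
{
  std::set<boundary_id_type> lid_ids;
  lid_ids.insert(2);

  std::vector<unsigned int> u_only(1, u_var);
  ConstFunction<Number> one(1.);
  system.get_dof_map().add_dirichlet_boundary
    (DirichletBoundary(lid_ids, u_only, &one));

  std::vector<unsigned int> v_only(1, v_var);
  ZeroFunction<Number> zero;
  system.get_dof_map().add_dirichlet_boundary
    (DirichletBoundary(lid_ids, v_only, &zero));
}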

Now the problem: I coded the common penalty method myself, i.e. the term

\int_{\Gamma_D} dt \, (\gamma/h) \, u \, v ,

applied in the boundary-integral part of the Jacobian function as

Jxw_face[qp_face]*dt*penalty/h_elem*phi_face_[j][qp_face]*phi_face_[i][qp_face]

and, in the corresponding part of the residual function, as

Jxw_face[qp_face]*dt*penalty/h_elem*phi_face_[i][qp_face]*(u - u_bc)

The results show that the boundary velocity is not fully enforced, especially on the side walls near the two top corners: there is actually a nonzero velocity component normal to the local side wall. I would be grateful for any hints.
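In code, the side integral is assembled roughly as below (a simplified sketch; names such as fe_vel_face, qface, soln, dof_indices_u, Ke_uu and Fe_u are illustrative, and in the real code the matrix part lives in compute_jacobian while the vector part lives in compute_residual; the full version is in the attached laminar2dnl.cpp):

// Penalty side integral for the u-velocity (v is handled analogously).
// soln is the current solution vector handed to the callback.
for (unsigned int s=0; s<elem->n_sides(); s++)
  if (elem->neighbor(s) == NULL)            // this side lies on the boundary
    {
      fe_vel_face->reinit(elem, s);         // face shape functions and JxW

      const std::vector<Real> & JxW_face = fe_vel_face->get_JxW();
      const std::vector<std::vector<Real> > & phi_face = fe_vel_face->get_phi();

      for (unsigned int qp=0; qp<qface.n_points(); qp++)
        {
          // current u at this face quadrature point
          Number u_qp = 0.;
          for (unsigned int j=0; j<phi_face.size(); j++)
            u_qp += phi_face[j][qp] * soln(dof_indices_u[j]);

          const Number u_bc = 0.;            // prescribed value (1 on the lid)

          for (unsigned int i=0; i<phi_face.size(); i++)
            {
              // residual: dt*penalty/h_elem * (u - u_bc) * phi_i
              Fe_u(i) += JxW_face[qp]*dt*penalty/h_elem*
                         phi_face[i][qp]*(u_qp - u_bc);

              // Jacobian: dt*penalty/h_elem * phi_j * phi_i
              for (unsigned int j=0; j<phi_face.size(); j++)
                Ke_uu(i,j) += JxW_face[qp]*dt*penalty/h_elem*
                              phi_face[i][qp]*phi_face[j][qp];
            }
        }
    }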

Furthermore, I tried imposing the weak Dirichlet conditions by adding further boundary integrals:

dt \int_{\partial\Omega} (-\mu\nabla\vec{u} + pI)\cdot\vec{n}\cdot\vec{v} +
dt \int_{\partial\Omega} (-\mu\nabla\vec{v} + qI)\cdot\vec{n}\cdot\vec{u}

on the left-hand side, and

dt \int_{\partial\Omega} (-\mu\nabla\vec{v} + qI)\cdot\vec{n}\cdot\vec{u}_{bc}

on the right-hand side. The result is even worse.

 I also attached the code, and I am looking forward to any suggestions. Many 
thanks.

 Zhenyu






--
Vikram Garg
Postdoctoral Associate
Predictive Engineering and Computational Science (PECOS)
The University of Texas at Austin
http://web.mit.edu/vikramvg/www/
http://vikramvgarg.wordpress.com/
http://www.runforindia.org/runners/vikramg




 

--
Vikram Garg
Postdoctoral Associate
Predictive Engineering and Computational Science (PECOS)
The University of Texas at Austin
http://web.mit.edu/vikramvg/www/
http://vikramvgarg.wordpress.com/
http://www.runforindia.org/runners/vikramg
