Re: [deal.II] Periodic boundary conditions in Newton's method

2020-09-08 Thread Jimmy Ho
Hi Dr. Bangerth, Thanks so much for the suggestion! My code works after applying the constraint to the Newton update! Best, Jimmy On Monday, September 7, 2020 at 11:01:53 PM UTC-5, Wolfgang Bangerth wrote: > On 9/7/20 6:40 PM, Jimmy Ho wrote: > > I have a general question …
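A minimal sketch of the pattern described above, with periodicity constraints built via DoFTools::make_periodicity_constraints() and applied to the Newton update rather than to the total solution. The boundary ids (0 and 1), the direction, and all variable names are illustrative assumptions, not the poster's actual code:

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/vector.h>

template <int dim>
void apply_periodicity_to_update(const dealii::DoFHandler<dim> &dof_handler,
                                 dealii::Vector<double>        &newton_update,
                                 dealii::Vector<double>        &solution)
{
  dealii::AffineConstraints<double> constraints;

  // Tie together the DoFs on the boundary pair (id 0, id 1) along direction 0.
  dealii::DoFTools::make_periodicity_constraints(dof_handler,
                                                 /*b_id1=*/0,
                                                 /*b_id2=*/1,
                                                 /*direction=*/0,
                                                 constraints);
  constraints.close();

  // Solve the linearized system for the update with these constraints,
  // then let distribute() set the constrained entries of the update
  // before it is added to the total solution.
  constraints.distribute(newton_update);
  solution += newton_update;
}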

[deal.II] Periodic boundary conditions in Newton's method

2020-09-07 Thread Jimmy Ho
Hi All, I have a general question regarding the application of periodic boundary conditions in Newton's method when solving nonlinear equations. Should the periodic constraint be applied to the incremental Newton update, or directly to the total solution vector? Thanks in advance for any suggestions …

Re: [deal.II] Connecting UMFPACK in deal.II to Trilinos's Amesos solver

2020-08-04 Thread Jimmy Ho
Hi Dr. Bangerth, Thanks a lot for your guidance! I will install an external copy of UMFPACK and see what happens! Best, Jimmy On Tuesday, August 4, 2020 at 5:23:09 PM UTC-5, Wolfgang Bangerth wrote: > On 8/4/20 3:56 PM, Jimmy Ho wrote: > > So, what should I do to …

[deal.II] Connecting UMFPACK in deal.II to Trilinos's Amesos solver

2020-08-04 Thread Jimmy Ho
Hi All, I am trying to use UMFPACK through the Amesos solver in Trilinos (v12.18.1). Since UMFPACK is bundled with deal.II and was configured to work in deal.II before, I was trying to point Trilinos to that installation of UMFPACK. Following the README file on interfacing deal.II with Trilinos, I used …

Re: [deal.II] Question about the numbering of DoFs

2020-08-02 Thread Jimmy Ho
Hi Yuesu, To be more precise: yes, you do have two sets of basis functions in each element: a quadratic one for interpolating the vector components, and a linear one for interpolating the scalar. But when counting the DoFs associated with the vector components, you should only count the basis functions …

Re: [deal.II] Question about the numbering of DoFs

2020-08-02 Thread Jimmy Ho
Hi Yuesu, The 2 in the initialization means that the basis functions (and hence the finite element for the vector part) are quadratic, which means that each element has 9 nodes. But you should still have only one basis function associated with each node. That's why you have 9*2 = 18 DoFs associated with the vector components …
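A minimal sketch of the element being discussed, assuming a 2D FESystem with a quadratic two-component vector part and a linear scalar part; it prints the per-cell DoF count that the 9*2 = 18 vector DoFs above are part of:

#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <iostream>

int main()
{
  // Two quadratic components for the vector field, one linear scalar field.
  dealii::FESystem<2> fe(dealii::FE_Q<2>(2), 2, dealii::FE_Q<2>(1), 1);

  std::cout << "components per cell: " << fe.n_components() << '\n'      // 3
            << "DoFs per cell:       " << fe.n_dofs_per_cell() << '\n';  // 9*2 + 4 = 22
}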

[deal.II] Question about the numbering of DoFs

2020-08-02 Thread Jimmy Ho
Hi Yuesu, When you have a vector-valued finite element, different components of the vector are still interpolated using the same basis functions. So you can have two DoFs on each node, but there is only one basis function associated with that node. Hope that helps! Best, Jimmy

Re: [deal.II] Unexpected behavior when using GridGenerator::subdivided_hyper_rectangle in parallel

2020-07-30 Thread Jimmy Ho
Hi Dr. Bangerth, Thanks a lot for the clarifications! They are really helpful! Best, Jimmy On Thursday, July 30, 2020 at 11:47:21 AM UTC-5, Wolfgang Bangerth wrote: > On 7/30/20 10:11 AM, Jimmy Ho wrote: > > As a follow-up question, upon calling compress(), will the local copy …

Re: [deal.II] Unexpected behavior when using GridGenerator::subdivided_hyper_rectangle in parallel

2020-07-30 Thread Jimmy Ho
Hi Dr. Bangerth, As a follow-up question, upon calling compress(), will the local copy of the system matrix on a specific processor get updated to contain information from all other processors? In other words, if I print out the system matrix from a particular processor after calling compress() …
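A minimal sketch of the assemble-then-compress pattern the question refers to, assuming a Trilinos sparse matrix as in step-40; the names system_matrix, constraints, cell_matrix and local_dof_indices are illustrative placeholders rather than the poster's code:

#include <deal.II/base/types.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II/lac/vector_operation.h>
#include <vector>

void add_cell_contribution(
  const dealii::AffineConstraints<double>            &constraints,
  const dealii::FullMatrix<double>                   &cell_matrix,
  const std::vector<dealii::types::global_dof_index> &local_dof_indices,
  dealii::TrilinosWrappers::SparseMatrix             &system_matrix)
{
  // Called inside the loop over locally owned cells: each process adds
  // entries only for its own cells, some of which end up in matrix rows
  // owned by a neighboring process.
  constraints.distribute_local_to_global(cell_matrix,
                                         local_dof_indices,
                                         system_matrix);
}

void finish_assembly(dealii::TrilinosWrappers::SparseMatrix &system_matrix)
{
  // Called once after the cell loop: compress() ships entries written into
  // non-locally-owned rows to the processes that own them. Afterwards each
  // process still stores only its locally owned rows, so printing the
  // matrix from a single process shows only that part.
  system_matrix.compress(dealii::VectorOperation::add);
}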

Re: [deal.II] Unexpected behavior when using GridGenerator::subdivided_hyper_rectangle in parallel

2020-07-30 Thread Jimmy Ho
Hi Dr. Bangerth, Thanks a lot for your guidance! I compared the solutions in the vtu files using the minimal example above; they are nearly identical. Looking back at the code, I was outputting the system matrix from processor 0, which probably only printed the part that it locally owns, hence …

[deal.II] Unexpected behavior when using GridGenerator::subdivided_hyper_rectangle in parallel

2020-07-28 Thread Jimmy Ho
Hi All, I am using the step-40 tutorial to build a parallel program using MPI. The code runs but generates different results on one processor than on multiple processors. After stripping it down to the bare minimum, it appears that when the mesh is built using GridGenerator::subdivided_hyper_rectangle …
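A minimal sketch of building the mesh in question on a distributed triangulation, in the spirit of step-40; the extents and subdivision counts are made-up values for illustration:

#include <deal.II/base/point.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <vector>

void make_grid(dealii::parallel::distributed::Triangulation<2> &triangulation)
{
  const std::vector<unsigned int> repetitions = {10, 4}; // cells in x and y
  dealii::GridGenerator::subdivided_hyper_rectangle(triangulation,
                                                    repetitions,
                                                    dealii::Point<2>(0.0, 0.0),
                                                    dealii::Point<2>(5.0, 2.0));
  // Every process builds the same coarse mesh; the distributed
  // triangulation then partitions the cells so that each process owns
  // only a subset of them.
}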

Re: [deal.II] Output cell variables as cell arrays in vtu files

2020-07-28 Thread Jimmy Ho
Hi Dr. Bangerth, Thanks a lot for pointing me in the right direction! That's exactly what I was looking for! Best, Jimmy On Monday, July 27, 2020 at 3:55:41 PM UTC-5, Wolfgang Bangerth wrote: > On 7/27/20 2:47 PM, Jimmy Ho wrote: > > data_out.add_data_vector( …

Re: [deal.II] Output cell variables as cell arrays in vtu files

2020-07-27 Thread Jimmy Ho
… point variables. Thanks again for the help! Best, Jimmy On Monday, July 27, 2020 at 3:55:41 PM UTC-5, Wolfgang Bangerth wrote: > On 7/27/20 2:47 PM, Jimmy Ho wrote: > > data_out.add_data_vector( cellData , "Name" , data_out.type_cell_data ); …

[deal.II] Output cell variables as cell arrays in vtu files

2020-07-27 Thread Jimmy Ho
Hi All, I am trying to compute the average of integration point variables and store them as cell variables, and subsequently write them as cell arrays in the visualization output. I used: data_out.add_data_vector( cellData , "Name" , data_out.type_cell_data ); to force deal.II to recognize that "cell …
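A minimal sketch of the approach described above, averaging quadrature-point values into one entry per active cell and attaching the result as cell data; qp_values and the field name are placeholders for whatever the integration point variable actually is:

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/data_out.h>
#include <vector>

template <int dim>
void output_cell_averages(const dealii::DoFHandler<dim>          &dof_handler,
                          const std::vector<std::vector<double>> &qp_values)
{
  dealii::Vector<double> cell_data(
    dof_handler.get_triangulation().n_active_cells());

  // Average the quadrature-point values of each cell into a single number.
  for (const auto &cell : dof_handler.active_cell_iterators())
    {
      const auto &values = qp_values[cell->active_cell_index()];
      double      sum    = 0.0;
      for (const double v : values)
        sum += v;
      cell_data[cell->active_cell_index()] = sum / values.size();
    }

  dealii::DataOut<dim> data_out;
  data_out.attach_dof_handler(dof_handler);
  // type_cell_data tells DataOut to interpret the vector as one value per
  // active cell, which becomes a cell array in the .vtu output.
  data_out.add_data_vector(cell_data,
                           "Name",
                           dealii::DataOut<dim>::type_cell_data);
  data_out.build_patches();
}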

[deal.II] Re: An iterator for all nodes (including midside) in the mesh?

2020-06-22 Thread Jimmy Ho
> …ped-position-of-support-points-of-my-element what you are looking for? > Best, > Bruno > On Monday, June 22, 2020 at 1:32:43 AM UTC-4, Jimmy Ho wrote: >> Dear All, I am trying to write a code to set the initial condition for a vector of …

[deal.II] An iterator for all nodes (including midside) in the mesh?

2020-06-21 Thread Jimmy Ho
Dear All, I am trying to write a code to set the initial condition for a vector of FullMatrix objects. The initial value of each entry of each matrix is uniquely defined by the corresponding nodal location. Hence, I am looking for an easy way to iterate over all nodes (including mid-side nodes) in the …
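A minimal sketch of one way to visit every support point (vertices and mid-side nodes alike), along the lines of the mapped-support-point answer referenced in the reply above; what is done with each location is left as a placeholder:

#include <deal.II/base/point.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/mapping_q.h>
#include <map>

template <int dim>
void visit_support_points(const dealii::DoFHandler<dim> &dof_handler)
{
  const dealii::MappingQ<dim> mapping(1);

  // One location per DoF; for an FE_Q(2) element this includes the
  // mid-side (and mid-cell) nodes as well as the vertices.
  std::map<dealii::types::global_dof_index, dealii::Point<dim>> support_points;
  dealii::DoFTools::map_dofs_to_support_points(mapping, dof_handler, support_points);

  for (const auto &[dof_index, location] : support_points)
    {
      // ... use 'location' to compute the initial value associated with
      //     this node and store it where needed ...
      (void)dof_index;
      (void)location;
    }
}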