Re: [deal.II] ERROR WHEN RUNNING THE CODE ON MULTIPLE NODES OF HPC

2020-08-02 Thread 孙翔
Hi Wolfgang, yes, it cannot run on a cluster. Both cases run in release mode. I'm also curious about the error; I debugged the code by printing out some specific values. Best, Xiang On Sunday, 2 August 2020 20:45:18 UTC-7, Wolfgang Bangerth wrote: > On 8/2/20 1:50 AM, 孙翔 wrote: > Hi, I

Re: [deal.II] Question about the numbering of DoFs

2020-08-02 Thread Jimmy Ho
Hi Yuesu, To be more precise: yes, you do have two sets of basis functions in each element, a quadratic set for interpolating the vector components and a linear set for interpolating the scalar. But when counting the DOFs associated with the vector components, you should only count the basis

Re: [deal.II] Question about the numbering of DoFs

2020-08-02 Thread Jimmy Ho
Hi Yuesu, The 2 in the initialization means that the basis functions (and hence the finite element for the vector part) are quadratic, which means that each element has 9 nodes. But you still have only one basis function associated with each node. That's why you have 9*2=18 DOFs associated
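[A minimal sketch of the counting Jimmy describes, assuming deal.II in 2D; the object name fe is illustrative:]

    #include <deal.II/fe/fe_q.h>
    #include <deal.II/fe/fe_system.h>

    #include <iostream>

    int main()
    {
      using namespace dealii;

      // The element under discussion: a quadratic vector part with
      // dim = 2 components plus a linear scalar part.
      const unsigned int dim = 2;
      FESystem<2> fe(FE_Q<2>(2), dim, FE_Q<2>(1), 1);

      // FE_Q<2>(2) has 9 support points ("nodes") per quadrilateral, so
      // the vector part contributes 9*2 = 18 DoFs; FE_Q<2>(1) adds 4 more.
      std::cout << "vector-part DoFs per cell: "
                << fe.base_element(0).dofs_per_cell * dim << '\n'  // 18
                << "total DoFs per cell: " << fe.dofs_per_cell     // 22
                << '\n';
    }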

Re: [deal.II] Question about the numbering of DoFs

2020-08-02 Thread yuesu jin
Dear Jimmy, Thank you for your reply. Yes, I can set up only one basis function for each node. But the problem is that the example sets up two basis functions for a two-component vector: FESystem<dim> fe_basis(FE_Q<dim>(2), dim, FE_Q<dim>(1), 1), where the (2) means the vector has order-2 basis functions. Best

Re: [deal.II] ERROR WHEN RUNNING THE CODE ON MULTIPLE NODES OF HPC

2020-08-02 Thread Wolfgang Bangerth
On 8/2/20 1:50 AM, 孙翔 wrote: Hi, I run my parallelized code, which is similar to step-18, on HPC. When I run it with multiple MPI processes on a single node, it gives me a good solution. However, when I run it on multiple nodes of the HPC with one MPI process per node, it reports an error as

[deal.II] Question about the numbering of DoFs

2020-08-02 Thread Jimmy Ho
Hi Yuesu, When you have a vector-valued finite element, different components of the vector are still interpolated using the same basis functions. So you can have two DOFs on each node, but there's only one basis function associated with that node. Hope that helps! Best, Jimmy
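[A small sketch of that numbering, assuming the FESystem discussed in this thread; it prints the one component each system shape function belongs to:]

    #include <deal.II/fe/fe_q.h>
    #include <deal.II/fe/fe_system.h>

    #include <iostream>

    int main()
    {
      using namespace dealii;

      FESystem<2> fe(FE_Q<2>(2), 2, FE_Q<2>(1), 1);

      // Every shape function of this (primitive) element is non-zero in
      // exactly one vector component; system_to_component_index() returns
      // the pair (component, index of the shape function within it).
      for (unsigned int i = 0; i < fe.dofs_per_cell; ++i)
        std::cout << "shape function " << i << " -> component "
                  << fe.system_to_component_index(i).first << '\n';
    }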

[deal.II] Question about the numbering of DoFs

2020-08-02 Thread yuesu jin
Dear all, I am reading the page https://www.dealii.org/current/doxygen/deal.II/classFiniteElement.html and I am confused by the numbering of the degrees of freedom. For example: FESystem fe_basis(FE_Q

[deal.II] ERROR WHEN RUNNING THE CODE ON MULTIPLE NODES OF HPC

2020-08-02 Thread 孙翔
Hi, I run my parallelized code, which is similar to step-18, on HPC. When I run it with multiple MPI processes on a single node, it gives me a good solution. However, when I run it on multiple nodes of the HPC with one MPI process per node, it reports an error as follows. I checked the l2 norm

Re: [deal.II] ABOUT OUTPUT IN PARALLEL

2020-08-02 Thread 孙翔
Thank you very much. It works now. On Wednesday, 29 July 2020 13:49:15 UTC-7, Jean-Paul Pelteret wrote: > Hi, > The problem is most likely that you’ve got one (static) variable “times_and_names” that is referencing two completely different sets of output files. This means that for every
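[For context, a hedged sketch of the pattern Jean-Paul describes, assuming a step-18-style output loop; the function and file names are illustrative. The point is that the single static times_and_names list must accumulate entries for one consistent series of output files:]

    #include <deal.II/base/data_out_base.h>

    #include <fstream>
    #include <string>
    #include <utility>
    #include <vector>

    void record_output(const double time, const std::string &pvtu_filename)
    {
      // One (time, filename) list for the whole run; every entry must
      // refer to the same series of output files.
      static std::vector<std::pair<double, std::string>> times_and_names;
      times_and_names.emplace_back(time, pvtu_filename);

      // Rewrite the .pvd master record from the accumulated list.
      std::ofstream pvd_output("solution.pvd");
      dealii::DataOutBase::write_pvd_record(pvd_output, times_and_names);
    }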

Re: [deal.II] HOW TO OUTPUT SYSTEM MATRIX IN A FILE

2020-08-02 Thread 孙翔
Thank you very much. On Wednesday, 29 July 2020 12:31:30 UTC-7, Daniel Arndt wrote: > Of course, there is also PETScWrappers::MatrixBase::print (https://www.dealii.org/current/doxygen/deal.II/classPETScWrappers_1_1MatrixBase.html#a7515e640202d1ad50bd9baa13c404cb1), which should work.
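[A minimal sketch of Daniel's suggestion, assuming system_matrix is an assembled and compress()ed matrix as in step-17:]

    #include <deal.II/lac/petsc_sparse_matrix.h>

    #include <fstream>

    void dump_matrix(const dealii::PETScWrappers::MPI::SparseMatrix &system_matrix)
    {
      // print() is inherited from PETScWrappers::MatrixBase; each MPI
      // process prints the rows it owns, so in a parallel run you may
      // want a separate file per rank.
      std::ofstream out("system_matrix.txt");
      system_matrix.print(out);
    }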

Re: [deal.II] HOW TO OUTPUT SYSTEM MATRIX IN A FILE

2020-08-02 Thread 孙翔
Thank you very much. On Wednesday, 29 July 2020 09:59:43 UTC-7, Wolfgang Bangerth wrote: > On 7/28/20 12:11 PM, 孙翔 wrote: > Hi, I followed step-17 and built a system matrix whose type is PETScWrappers::MPI::SparseMatrix. I want to output it after assembling. How should I do