Re: [deal.II] Hanging nodes of a read-in mesh

2020-08-10 Thread Feimi Yu
I see. I will see if I can find something. For now, the slowdown happens in the FGMRES solver, by the way. Thanks! Feimi On Mon, Aug 10, 2020 at 10:41 AM Wolfgang Bangerth wrote: > On 8/9/20 10:10 PM, Feimi Yu wrote: >> I'm solving a SUPG-stabilized, slightly compressible Navier-Stokes ...

Re: [deal.II] Hanging nodes of a read-in mesh

2020-08-09 Thread Feimi Yu
Forgot to say: it is an FGMRES iterative solver. On Monday, August 10, 2020 at 12:10:59 AM UTC-4, Feimi Yu wrote: > I'm solving a SUPG-stabilized, slightly compressible Navier-Stokes equation, with a Schur-complement-type preconditioner. > Feimi > On Sun, Aug 9, 2020 ...

Re: [deal.II] Hanging nodes of a read-in mesh

2020-08-09 Thread Feimi Yu
I'm solving a SUPG-stabilized, slightly compressible Navier-Stokes equation, with a Schur-complement-type preconditioner. Feimi On Sun, Aug 9, 2020 at 11:16 PM Wolfgang Bangerth wrote: > On 8/9/20 7:22 PM, Feimi Yu wrote: >> I realized something after I posted my question ...

Re: [deal.II] Hanging nodes of a read-in mesh

2020-08-09 Thread Feimi Yu
... > On 8/9/20 1:58 PM, Feimi Yu wrote: >> I am doing a grid study of a 2D mesh. At first I simply applied local refinement in the code for a specific region, but it turned out this caused the load to be unbalanced among the ranks (the rank carrying ...

[deal.II] Hanging nodes of a read-in mesh

2020-08-09 Thread Feimi Yu
Hi All, I am doing a grid study of a 2D mesh. At first I simply applied local refinement in the code for a specific region, but it turned out this caused the load to be unbalanced among the ranks (the rank carrying the refined mesh is much more heavily loaded than the others) and the computation became ...
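For context, a minimal serial sketch of the workflow discussed in this thread (the mesh file name and the refinement criterion are made up for illustration; this is not the poster's code): locally refining a read-in mesh introduces hanging nodes, which then have to be constrained.

  #include <deal.II/grid/tria.h>
  #include <deal.II/grid/grid_in.h>
  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/dofs/dof_tools.h>
  #include <deal.II/fe/fe_q.h>
  #include <deal.II/lac/affine_constraints.h>
  #include <fstream>

  using namespace dealii;

  int main()
  {
    // Read the coarse 2D mesh from a file (hypothetical name and format).
    Triangulation<2> triangulation;
    GridIn<2>        grid_in;
    grid_in.attach_triangulation(triangulation);
    std::ifstream input("mesh.msh");
    grid_in.read_msh(input);

    // Locally refine a specific region (illustrative criterion).
    for (const auto &cell : triangulation.active_cell_iterators())
      if (cell->center()[0] < 0.5)
        cell->set_refine_flag();
    triangulation.execute_coarsening_and_refinement();

    // The local refinement introduces hanging nodes; build the constraints.
    FE_Q<2>       fe(1);
    DoFHandler<2> dof_handler(triangulation);
    dof_handler.distribute_dofs(fe);

    AffineConstraints<double> constraints;
    DoFTools::make_hanging_node_constraints(dof_handler, constraints);
    constraints.close();
  }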

Re: [deal.II] Re: Instantiation problem for Utilities::MPI::sum(const ArrayView<const T> &values, const MPI_Comm &mpi_communicator, const ArrayView<T> &sums)

2020-01-28 Thread Feimi Yu
... works as well as > Utilities::MPI::sum(my_view, mpi_communicator, my_view); > Best, > Daniel > On Tue, Jan 28, 2020 at 12:57, Feimi Yu wrote: >> Hi Bruno, >> Thanks for your reply! That's not true. The input and output ...

[deal.II] Re: Instantiation problem for Utilities::MPI::sum(const ArrayView<const T> &values, const MPI_Comm &mpi_communicator, const ArrayView<T> &sums)

2020-01-28 Thread Feimi Yu
... On Tuesday, January 28, 2020 at 11:40:03 AM UTC-5, Feimi Yu wrote: >> I uploaded a minimal code example to reproduce the problem I encountered. >> Thanks! >> Feimi >> On Monday, January 27, 2020 at 1:42:18 PM UTC-5, Feimi Yu wrote: ...

[deal.II] Re: Instantiation problem for Utilities::MPI::sum(const ArrayView<const T> &values, const MPI_Comm &mpi_communicator, const ArrayView<T> &sums)

2020-01-28 Thread Feimi Yu
I uploaded a minimal code example to reproduce the problem I encountered. Thanks! Feimi On Monday, January 27, 2020 at 1:42:18 PM UTC-5, Feimi Yu wrote: > Hi, > I was trying to use the function Utilities::MPI::sum <https://www.dealii.org/current/doxygen/deal.II/namespaceUti...

Re: [deal.II] Instantiation problem for Utilities::MPI::sum(const ArrayView<const T> &values, const MPI_Comm &mpi_communicator, const ArrayView<T> &sums)

2020-01-28 Thread Feimi Yu
... hope that this helps. > Best, > Jean-Paul > On 27 Jan 2020, at 19:42, Feimi Yu wrote: >> Hi, >> I was trying to use the function Utilities::MPI::sum <https://www.dealii.org/current/doxygen/deal.II/namespaceUtilities_1_1MPI.html#afaf69c2cc054b615707da05e...

[deal.II] Instantiation problem for Utilities::MPI::sum(const ArrayView<const T> &values, const MPI_Comm &mpi_communicator, const ArrayView<T> &sums)

2020-01-27 Thread Feimi Yu
Hi, I was trying to use the function Utilities::MPI::sum(const ArrayView<const T> &values, const MPI_Comm ...
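For reference, a minimal sketch of how this ArrayView overload can be called (the element type double and all variable names are illustrative assumptions, not taken from the attached minimal code): the first argument is a read-only view of the local values, the last a writable view of the same length that receives the element-wise sums.

  #include <deal.II/base/array_view.h>
  #include <deal.II/base/mpi.h>
  #include <vector>

  using namespace dealii;

  // Element-wise sum of locally computed values over all MPI ranks.
  void sum_over_ranks(std::vector<double> &local_values,
                      const MPI_Comm       mpi_communicator)
  {
    std::vector<double> global_sums(local_values.size());

    // A const reference makes make_array_view() produce the
    // ArrayView<const double> expected for the input argument.
    const std::vector<double> &input = local_values;
    Utilities::MPI::sum(make_array_view(input),
                        mpi_communicator,
                        make_array_view(global_sums));

    local_values = global_sums;
  }

As noted in Daniel's reply further up this page, passing the same view as both input and output also works, which avoids the temporary vector.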

Re: [deal.II] Deprecated function PETScWrappers::VectorBase::ratio()

2018-05-15 Thread Feimi Yu
... UTC-4, Wolfgang Bangerth wrote: > On 05/14/2018 11:52 PM, Feimi Yu wrote: >> Thank you for the detailed explanation! Actually, what I want to do is to evaluate the inverse of a lumped diagonal matrix to mimic the matrix inverse. > I see. Since floating ...

Re: [deal.II] Deprecated function PETScWrappers::VectorBase::ratio()

2018-05-14 Thread Feimi Yu
... Wolfgang Bangerth wrote: > On 05/12/2018 12:02 PM, Feimi Yu wrote: >> I'm currently using the ratio() function for PETSc vectors to compute the element-wise inverse of a vector in my parallel code. However, it looks like ratio() is deprecated. I know there is a potential ...

[deal.II] Re: Deprecated function PETScWrappers::VectorBase::ratio()

2018-05-14 Thread Feimi Yu
... expect this particular function to be removed any time soon. > Best, > Bruno > On Saturday, May 12, 2018 at 12:02:14 AM UTC-4, Feimi Yu wrote: >> Hi, >> I'm currently using the ratio() function for PETSc vectors to compute the element-wise inverse of a vector in ...

[deal.II] Deprecated function PETScWrappers::VectorBase::ratio()

2018-05-11 Thread Feimi Yu
Hi, I'm currently using the ratio() function for PETSc vectors to compute the element-wise inverse of a vector in my parallel code. However, it looks like ratio() is deprecated. I know there is a potential floating-point exception due to the undefined behavior when the denominator vector contains a zero (because ...
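One possible replacement for ratio() in this particular use case is sketched below (an assumption, not the poster's actual code): compute the element-wise reciprocal of a lumped diagonal stored in a PETScWrappers::MPI::Vector by looping over its locally owned entries, guarding explicitly against zero entries.

  #include <deal.II/base/index_set.h>
  #include <deal.II/lac/petsc_vector.h>
  #include <vector>

  using namespace dealii;

  void invert_lumped_diagonal(PETScWrappers::MPI::Vector &diagonal)
  {
    const IndexSet owned = diagonal.locally_owned_elements();

    // Read all locally owned values first, so that PETSc's read and
    // insert phases are not interleaved.
    std::vector<PetscScalar> values;
    values.reserve(owned.n_elements());
    for (const auto i : owned)
      values.push_back(diagonal[i]);

    // Write the reciprocals back and finish with a compress().
    std::size_t k = 0;
    for (const auto i : owned)
      {
        const PetscScalar d = values[k++];
        diagonal[i] = (d != 0.0 ? 1.0 / d : 0.0); // guard against zeros
      }
    diagonal.compress(VectorOperation::insert);
  }

The loop touches only locally owned entries, so apart from the final compress() call the operation is purely local.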

Re: [deal.II] Reason for SolverGMRES being slower in parallel?

2018-04-08 Thread Feimi Yu
... it was super easy, for certain problems, for serial ILU to outperform parallel ILU. I experienced that in problems where I have elliptic equations in subdomains connected with hyperbolic-type (upwinding) interface conditions. > On Thursday, April 5, 2018 at 4:42:17 PM UTC-7, Wolfgang Bangerth ...

Re: [deal.II] Reason for SolverGMRES being slower in parallel?

2018-04-04 Thread Feimi Yu
I wish I only had to deal with small problems, but that is only a test problem and I guess we will need to use this code to compute much larger 3-D cases. Right now it is too slow, so I cannot test any cases larger than 400k DoFs. At 400k DoFs, still, 2 cores are slower than 1 core. The weirdest thing is ...

[deal.II] Reason for SolverGMRES being slower in parallel?

2018-03-22 Thread Feimi Yu
Hi, I'm going to solve a Schur complement in my preconditioner for a SUPG-stabilized fluid solver, using an ILU(0) decomposition. Since BlockJacobi did not turn out to work well, I wrote a wrapper for Pilut, a package for ILUT decomposition in Hypre that is wrapped by PETSc. I ...
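For reference, a sketch of the plain PETSc calls that such a wrapper ultimately has to issue (this is not the poster's wrapper class, and it assumes PETSc was configured with Hypre):

  #include <petscksp.h>

  // Select Hypre's Pilut (ILUT) preconditioner on an existing KSP object.
  PetscErrorCode use_pilut(KSP ksp)
  {
    PC             pc;
    PetscErrorCode ierr;

    ierr = KSPGetPC(ksp, &pc);
    if (ierr) return ierr;
    ierr = PCSetType(pc, PCHYPRE);      // delegate preconditioning to Hypre
    if (ierr) return ierr;
    ierr = PCHYPRESetType(pc, "pilut"); // choose the Pilut ILUT variant
    if (ierr) return ierr;

    return 0;
  }

The same choice can also be made at run time through the PETSc options -pc_type hypre -pc_hypre_type pilut.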

Re: [deal.II] Re: Iterating over all the entries in a PETScWrappers::MPI::SparseMatrix in parallel

2018-03-21 Thread Feimi Yu
Got it. Thank you so much! Thanks, Feimi On Wednesday, March 21, 2018 at 10:51:24 AM UTC-4, Wolfgang Bangerth wrote: > On 03/18/2018 04:41 PM, Feimi Yu wrote: >> Please ignore my last post. I made a mistake there. >> Attached is the revised version to better illustrate ...

Re: [deal.II] Re: Iterating over all the entries in a PETScWrappers::MPI::SparseMatrix in parallel

2018-03-18 Thread Feimi Yu
Please ignore my last post; I made a mistake there. Attached is the revised version to better illustrate the problem. Thanks, Feimi On Saturday, March 10, 2018 at 4:48:23 AM UTC-5, Wolfgang Bangerth wrote: > On 03/08/2018 02:55 PM, Feimi Yu wrote: >> The problem ...

Re: [deal.II] Re: Iterating over all the entries in a PETScWrappers::MPI::SparseMatrix in parallel

2018-03-18 Thread Feimi Yu
... before. I added the file for your reference. Thanks! Feimi On Saturday, March 10, 2018 at 4:48:23 AM UTC-5, Wolfgang Bangerth wrote: > On 03/08/2018 02:55 PM, Feimi Yu wrote: >> The problem is that I still encounter the "out of range" problem even when I ...

Re: [deal.II] Re: Iterating over all the entries in a PETScWrappers::MPI::SparseMatrix in parallel

2018-03-08 Thread Feimi Yu
Hi Wolfgang, Fortunately, I managed to solve that problem. I found that every single operation on the iterator needs to call row_length(), which requires the matrix to be in its assembled state, and apparently the set() operation breaks this state. My solution is to iterate and cache the rows, ...
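A minimal sketch of the cache-then-set idea described here (the function name is illustrative, and the matrix type is assumed to be PETScWrappers::MPI::SparseMatrix as in the thread title): read the locally owned entries through the const iterators first, then write the absolute values back, and only compress afterwards.

  #include <deal.II/lac/petsc_sparse_matrix.h>
  #include <cmath>
  #include <tuple>
  #include <vector>

  using namespace dealii;

  void make_entries_absolute(PETScWrappers::MPI::SparseMatrix &matrix)
  {
    using size_type = PETScWrappers::MPI::SparseMatrix::size_type;

    // Phase 1: cache the locally owned entries while the matrix is still
    // in its assembled state (iterating requires an assembled matrix).
    std::vector<std::tuple<size_type, size_type, double>> cache;
    const std::pair<size_type, size_type> range = matrix.local_range();
    for (size_type row = range.first; row < range.second; ++row)
      for (auto entry = matrix.begin(row); entry != matrix.end(row); ++entry)
        cache.emplace_back(entry->row(), entry->column(),
                           std::abs(entry->value()));

    // Phase 2: write the cached absolute values back; set() breaks the
    // assembled state, so all reads have to be finished before this point.
    for (const auto &[row, col, value] : cache)
      matrix.set(row, col, value);

    matrix.compress(VectorOperation::insert);
  }

Keeping the read and write phases separate is what preserves the assembled state while the iterators are in use; interleaving set() calls with the iteration is what triggered the errors reported earlier in this thread.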

Re: [deal.II] Re: Iterating over all the entries in a PETScWrappers::MPI::SparseMatrix in parallel

2018-03-06 Thread Feimi Yu
... matrix." Putting the compress() inside the loop does not throw exceptions; instead, it iterates forever. So, just as you said, it is very inefficient and not an option. On Tuesday, March 6, 2018 at 4:19:54 AM UTC-5, Wolfgang Bangerth wrote: > On 03/05/2018 02:54 PM, Feimi Yu wrote: >> ...

[deal.II] Re: Iterating over all the entries in a PETScWrappers::MPI::SparseMatrix in parallel

2018-03-05 Thread Feimi Yu
This time it says: *** Error in ' ': free(): invalid size: 0x556443d63e50 *** This looks like an out-of-bounds access. It happens even when I run with 1 process. Thanks, Feimi On Monday, March 5, 2018 at 12:21:46 PM UTC-5, Feimi Yu wrote: > Hi, > I'm using the PETScWrappers to ...

[deal.II] Re: Iterating over all the entries in a PETScWrappers::MPI::SparseMatrix in parallel

2018-03-05 Thread Feimi Yu
Thanks for any help! Feimi On Monday, March 5, 2018 at 12:21:46 PM UTC-5, Feimi Yu wrote: > Hi, > I'm using the PETScWrappers to parallelize my code. In my preconditioner for the GMRES solver, there is one step that requires a matrix copied from the system ...

[deal.II] Iterating over all the entries in a PETScWrappers::MPI::SparseMatrix in parallel

2018-03-05 Thread Feimi Yu
Hi, I'm using the PETScWrappers to parallelize my code. In my preconditioner for the GMRES solver, there is one step that requires a matrix copied from the system matrix with all of its elements set to their absolute values. This was fine in serial because I could simply iterate over all the entries ...

Re: [deal.II] Re: the viscous term in SUPG stabilization terms

2018-01-17 Thread Feimi Yu
Thanks a lot, Daniel and Wolfgang! What I understood is that, since the SUPG and PSPG terms are integrated over the cell interiors, the Laplacians can be ignored when using Q1/Q1 elements, without any singularity problems on the cell edges. Thanks! Feimi On Wednesday, January 17, 2018 at ...
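For reference, the terms under discussion have the generic SUPG/PSPG structure below (generic notation, not necessarily the exact formulation used in the poster's code):

  \begin{align*}
    \mathcal{R}(\mathbf{u},p) &= \rho\bigl(\partial_t\mathbf{u}
        + \mathbf{u}\cdot\nabla\mathbf{u}\bigr)
        - \mu\,\Delta\mathbf{u} + \nabla p - \mathbf{f},\\
    s_{\mathrm{SUPG}}(\mathbf{v}) &= \sum_{K}\int_{K}
        \tau_{K}\,(\mathbf{u}\cdot\nabla\mathbf{v})\cdot
        \mathcal{R}(\mathbf{u},p)\,\mathrm{d}x,\\
    s_{\mathrm{PSPG}}(q) &= \sum_{K}\int_{K}
        \tau_{K}\,\nabla q\cdot\mathcal{R}(\mathbf{u},p)\,\mathrm{d}x.
  \end{align*}

Because the sums run over cell interiors K only, the viscous (Laplacian) contribution to the residual vanishes identically on each cell for (bi)linear Q1/Q1 elements and can therefore be dropped, which is the simplification referred to above.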

[deal.II] the viscous term in SUPG stabilization terms

2018-01-17 Thread Feimi Yu
Hi, I'm working on a solver for the Navier-Stokes equations using the Streamline-Upwind Petrov-Galerkin (SUPG) and Pressure-Stabilizing Petrov-Galerkin (PSPG) methods, combined with Newton's iteration and the FGMRES iterative method. However, I have had some difficulties implementing the ...