Re: [deal.II] Re: Anisotropic refinement DG - saddle point problem
Thank you, Wolfgang. I just noticed the new version of step-12; I was working with the older one, which made things complicated, especially on the interior faces.

Regards,
Juan

On Tue, Aug 18, 2020 at 7:28 AM Wolfgang Bangerth wrote:
> On 8/17/20 6:02 AM, jfgir...@gmail.com wrote:
> > I would like to open the topic again with another question. Is there any
> > way to use MeshWorker to solve the DG formulation but using block
> > matrices and vectors? I couldn't find a proper way to do it with
> > MeshWorker because I have to choose the component of the shape function
> > when I am using the block matrix.
>
> Ignore MeshWorker and instead build on the underlying
> MeshWorker::mesh_loop() function. This is what the current versions of
> step-12 and step-47 do, for example. They don't care what matrix and
> vector type you use.
>
> Best
> W.
>
> --
> Wolfgang Bangerth          email: bange...@colostate.edu
>                            www: http://www.math.colostate.edu/~bangerth/

--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see https://groups.google.com/d/forum/dealii?hl=en
[deal.II] METIS Issue during Installation Process
Hello,

I am trying to install deal.II on my new cluster. From the build screen output, it looks like there is a problem finding METIS (which is installed), but I am not exactly sure what is going on. Is there a way to hard-code the METIS path during the make process? I have attached the screen output from CMake. Any thoughts or suggestions would be greatly appreciated. Thanks!

[ 76%] Built target obj_simplex_release
In file included from /home/tad/dealii/source/lac/sparsity_tools.cc:39:0:
/usr/local/include/metis.h:175:1: error: ‘METIS_EXPORT’ does not name a type; did you mean ‘ETIMEDOUT’?
 METIS_EXPORT int METIS_PartGraphRecursive(idx_t *nvtxs, idx_t *ncon, idx_t *xadj,
 ^~~~
 ETIMEDOUT
[the same "‘METIS_EXPORT’ does not name a type" error repeats for METIS_PartGraphKway, METIS_MeshToDual, METIS_MeshToNodal, METIS_PartMeshNodal, METIS_PartMeshDual, METIS_NodeND, METIS_Free, METIS_SetDefaultOptions, METIS_NodeNDP, METIS_ComputeVertexSeparator, and METIS_NodeRefine]
/home/tad/dealii/source/lac/sparsity_tools.cc: In function ‘void dealii::SparsityTools::{anonymous}::partition_metis(const dealii::SparsityPattern&, const std::vector&, unsigned int, std::vector&)’:
/home/tad/dealii/source/lac/sparsity_tools.cc:93:7: error: ‘METIS_SetDefaultOptions’ was not declared in this scope
   METIS_SetDefaultOptions(options);
/home/tad/dealii/source/lac/sparsity_tools.cc:138:16: error: ‘METIS_PartGraphRecursive’ was not declared in this scope
   ierr = METIS_PartGraphRecursive(,
Re: [deal.II] PETSc iteration does not converge
OK, I see. I will do that, thank you! I will let you know if I get any results.

On Mon, Aug 17, 2020 at 7:06 PM Wolfgang Bangerth wrote:
> On 8/17/20 6:01 PM, yuesu jin wrote:
> > I did nothing to verify those properties, because the single-thread CG
> > solver converged well. I used different preconditioners in the parallel
> > version and the single-thread version: block Jacobi in parallel, Jacobi
> > in the single-thread version. How can I check whether the parallel
> > blocked sparse matrix is or isn't symmetric and positive definite?
>
> Think of tests such as this:
> * run on 1 processor, multiply a vector w of all 1s from the right, and
>   output the resulting vector v=Aw on all processors
> * run on >1 processors and repeat
> Are the vectors the same for both cases? The matrix should be the same
> regardless of partitioning, but is it?
>
> * repeat the same with Tvmult (multiplication from the left)
>
> You can probably come up with many similar tests that check properties of
> the matrix, comparing between the single-processor and multiple-processor
> cases. The point is that you may not know the exact answer, but you know
> that the two cases should result in the same output.
>
> Best
> W.
>
> --
> Wolfgang Bangerth          email: bange...@colostate.edu
>                            www: http://www.math.colostate.edu/~bangerth/

--
Yuesu Jin,
Ph.D student, University of Houston,
College of Natural Sciences and Mathematics,
Department of Earth and Atmospheric Sciences,
Houston, Texas 77204-5008
346-404-2062
Re: [deal.II] PETSc iteration does not converge
On 8/17/20 6:01 PM, yuesu jin wrote:
> I did nothing to verify those properties, because the single-thread CG
> solver converged well. I used different preconditioners in the parallel
> version and the single-thread version: block Jacobi in parallel, Jacobi in
> the single-thread version. How can I check whether the parallel blocked
> sparse matrix is or isn't symmetric and positive definite?

Think of tests such as this:
* run on 1 processor, multiply a vector w of all 1s from the right, and
  output the resulting vector v=Aw on all processors
* run on >1 processors and repeat
Are the vectors the same for both cases? The matrix should be the same regardless of partitioning, but is it?
* repeat the same with Tvmult (multiplication from the left)

You can probably come up with many similar tests that check properties of the matrix, comparing between the single-processor and multiple-processor cases. The point is that you may not know the exact answer, but you know that the two cases should result in the same output.

Best
W.

--
Wolfgang Bangerth          email: bange...@colostate.edu
                           www: http://www.math.colostate.edu/~bangerth/
Re: [deal.II] PETSc iteration does not converge
Dear Dr. Bangerth,

I did nothing to verify those properties, because the single-thread CG solver converged well. I used different preconditioners in the parallel version and the single-thread version: block Jacobi in parallel, Jacobi in the single-thread version. How can I check whether the parallel blocked sparse matrix is or isn't symmetric and positive definite?

Best regards,
Yuesu
Re: [deal.II] PETSc iteration does not converge
> I tried both; first I tried 1e-4*system_rhs.l2_norm(), and it failed.

Then either the matrix is not symmetric/positive definite/whatever other property your iterative solver requires, or your preconditioner is unsuitable. What have you done to verify that your matrix has the necessary properties?

Best
W.

--
Wolfgang Bangerth          email: bange...@colostate.edu
                           www: http://www.math.colostate.edu/~bangerth/
Re: [deal.II] mmult memory leak with petsc
Richard,

> I am working on incompressible flow problems and stumbled upon an issue
> when calling PETScWrappers::SparseMatrix::mmult(). Before I describe the
> problem in more detail, let me comment on the basic building blocks of
> the MWE:
> (i) parallel::distributed::Triangulation & either the PETSc or Trilinos
> linear algebra packages
> (ii) a dim-dimensional vector-valued FE space for the velocity components
> & a scalar-valued FE space for the pressure, simply constructed via:
>   FESystem<dim> fe (FE_Q<dim>(vel_degree), dim, FE_Q<dim>(press_degree), 1);
> So, after integrating the weak form -- or just filling the matrices with
> some entries -- we end up with a block system
>   A u + B p = f
>   C u + D p = g.
> To construct some preconditioners, we have to perform some matrix-matrix
> products: either for the Schur complement
>   (a) S = D - C inv(diag(A)) B
> or some A_gamma
>   (b) A_gamma = A + gamma * B inv(diag(Mp)) C.
> Completely ignoring for now why that might or might not be necessary (I
> know that one can assemble a grad-div term and use a Laplacian on the
> pressure space to account for the reaction term, but that is not really
> an option in the stabilized case), we need those matrix products, and
> here comes the problem: using either PETSc or Trilinos I get identical
> matrix products when calling mmult(), BUT when using PETSc, the RAM is
> slowly but steadily filled (up to 500GB on our local cluster).
> I came up with the attached MWE, which does nothing other than initialize
> the system and then construct the matrix product 1000 times in a row.

Nice! Can I ask you to play with this some more? I think you can make that code even more minimal:

* Remove all of the commented-out stuff -- for the purposes of reproducing the problem it shouldn't matter.

* Move the matrix initialization code out of the main loop. You want to show that it's the mmult that is the problem, but you have a lot of other code in there as well that could in principle be the issue. If you move the initialization of the individual factors out of the loop and only leave whatever is absolutely necessary for the mmult in the loop, then you've reduced the number of places where one needs to look.

* I bet you could trim down the list of #includes by a good bit :-)

You seem to be using a pretty old version of deal.II. There are a number of header files that no longer exist, and some code that doesn't compile for me. For your reference, attached is a version that compiles on the current master branch (though with a number of warnings). That said, it seems like the memory doesn't explode for me -- which raises the question of which versions of deal.II and PETSc you use. For me, this is deal.II dev and PETSc 3.7.5.

> Am I doing anything wrong or is this supposed to be used differently? I
> am using deal.II v9.0.1 installed via candi, so maybe the old version is
> the reason.

Possible -- no need to chase an old bug that has already been fixed if you can simply upgrade.

> Bonus question: Is there a similar way to hand the sparsity patterns over
> to the mmult function? (For the dealii::SparseMatrix there is, which is
> why I am asking.)

DynamicSparsityPattern has compute_mmult_pattern, which should give you a sparsity pattern you can then use to initialize the resulting PETSc matrix.

Best
W.

--
Wolfgang Bangerth          email: bange...@colostate.edu
                           www: http://www.math.colostate.edu/~bangerth/

[attachment: the revised .cc file -- "This code is licensed under the 'GNU GPL version 2 or later'. See license.txt or https://www.gnu.org/licenses/gpl-2.0.html. Copyright 2019: Richard Schussnig" -- its #include list and remaining source were garbled in the archive]
Re: [deal.II] Re: Anisotropic refinement DG - saddle point problem
On 8/17/20 6:02 AM, jfgir...@gmail.com wrote:
> I would like to open the topic again with another question. Is there any
> way to use MeshWorker to solve the DG formulation but using block matrices
> and vectors? I couldn't find a proper way to do it with MeshWorker because
> I have to choose the component of the shape function when I am using the
> block matrix.

Ignore MeshWorker and instead build on the underlying MeshWorker::mesh_loop() function. This is what the current versions of step-12 and step-47 do, for example. They don't care what matrix and vector type you use.

Best
W.

--
Wolfgang Bangerth          email: bange...@colostate.edu
                           www: http://www.math.colostate.edu/~bangerth/
[deal.II] deal.II Newsletter #130
Hello everyone!

This is deal.II newsletter #130. It automatically reports recently merged features and discussions about the deal.II finite element library.

## Below you find a list of recently proposed or merged features:

#10833: Create 1D Advection-Diffusion Equation (proposed by syedtahirbukhari) https://github.com/dealii/dealii/pull/10833
#10831: Fix bug in FEFaceEvaluationSelector::process_and_io (proposed by peterrum; merged) https://github.com/dealii/dealii/pull/10831
#10830: Step-67: add ECL (proposed by peterrum) https://github.com/dealii/dealii/pull/10830
#10829: Add changelog on EvaluationFlags (proposed by kronbichler; merged) https://github.com/dealii/dealii/pull/10829
#10828: Improve performance of ReferenceCell::Info::get_cell (proposed by kronbichler; merged) https://github.com/dealii/dealii/pull/10828
#10827: MF: Fix PBC in the case of non-standard orientation (proposed by peterrum; merged) https://github.com/dealii/dealii/pull/10827
#10825: Avoid trailing whitespace check to modify timestamps unnecessarily (proposed by masterleinad; merged) https://github.com/dealii/dealii/pull/10825
#10824: Fix typo (proposed by peterrum; merged) https://github.com/dealii/dealii/pull/10824
#10821: ECL: enable contiguous data access (proposed by peterrum; merged) https://github.com/dealii/dealii/pull/10821
#10820: Remove n_components template argument from FEEvaluationImpl (proposed by kronbichler) https://github.com/dealii/dealii/pull/10820
#10818: Inform memory leak of sacado_rad_fad in doc. (proposed by dougshidong; merged) https://github.com/dealii/dealii/pull/10818
#10817: Add back asserts in create_triangulation (proposed by peterrum; merged) https://github.com/dealii/dealii/pull/10817
#10816: ECL: Generalize process_and_io (proposed by peterrum; merged) https://github.com/dealii/dealii/pull/10816
#10815: Fix a few typos (proposed by masterleinad; merged) https://github.com/dealii/dealii/pull/10815
#10814: Allow AffineConstraints argument to MGTools::make_sparsity_pattern (proposed by kronbichler; merged) https://github.com/dealii/dealii/pull/10814
#10813: Fix installation of files of step-49 (proposed by peterrum; merged) https://github.com/dealii/dealii/pull/10813
#10809: ECL: Merge gather and adjust_for_face_orientation (proposed by peterrum; merged) https://github.com/dealii/dealii/pull/10809
#10801: Add CUDA 10.2 CI build check (proposed by masterleinad; merged) https://github.com/dealii/dealii/pull/10801
#10794: Save face type in quad_dof_identities (proposed by peterrum; merged) https://github.com/dealii/dealii/pull/10794
#10784: Use face_no in FE (proposed by peterrum; merged) https://github.com/dealii/dealii/pull/10784
#10762: MF mapping info: Avoid invalid access of some face-data-by-cells (proposed by kronbichler; merged) https://github.com/dealii/dealii/pull/10762
#10402: Remove pointers of pointers in FEEvaluation (proposed by kronbichler; merged) https://github.com/dealii/dealii/pull/10402
#10093: Fix if-statements with check_for_distorted_cells. (proposed by dougshidong; merged) https://github.com/dealii/dealii/pull/10093

## And this is a list of recently opened or closed discussions:

#10832: build issue (opened) https://github.com/dealii/dealii/issues/10832
#10826: Compilation failure on Intel (opened) https://github.com/dealii/dealii/issues/10826
#10823: error: step-67 and gcc 7.2 and omp simd (opened) https://github.com/dealii/dealii/issues/10823
#10822: Fill arrays with signaling NaNs upon object destruction (opened) https://github.com/dealii/dealii/issues/10822
#10819: Indentation check touches too many files (opened and closed) https://github.com/dealii/dealii/issues/10819
#10798: step-49 currently fails (closed) https://github.com/dealii/dealii/issues/10798

A list of all major changes since the last release can be found at https://www.dealii.org/developer/doxygen/deal.II/recent_changes.html.

Thanks for being part of the community! Let us know about questions, problems, bugs or just share your experience by writing to dealii@googlegroups.com, or by opening issues or pull requests at https://www.github.com/dealii/dealii. Additional information can be found at https://www.dealii.org/.
[deal.II] mmult memory leak with petsc
Hi everyone,

I am working on incompressible flow problems and stumbled upon an issue when calling PETScWrappers::SparseMatrix::mmult(). Before I describe the problem in more detail, let me comment on the basic building blocks of the MWE:

(i) parallel::distributed::Triangulation & either the PETSc or Trilinos linear algebra packages
(ii) a dim-dimensional vector-valued FE space for the velocity components & a scalar-valued FE space for the pressure, simply constructed via:

  FESystem<dim> fe (FE_Q<dim>(vel_degree), dim, FE_Q<dim>(press_degree), 1);

So, after integrating the weak form -- or just filling the matrices with some entries -- we end up with a block system

  A u + B p = f
  C u + D p = g.

To construct some preconditioners, we have to perform some matrix-matrix products: either for the Schur complement

  (a) S = D - C inv(diag(A)) B

or some A_gamma

  (b) A_gamma = A + gamma * B inv(diag(Mp)) C.

Completely ignoring for now why that might or might not be necessary (I know that one can assemble a grad-div term and use a Laplacian on the pressure space to account for the reaction term, but that is not really an option in the stabilized case), we need those matrix products, and here comes the problem: using either PETSc or Trilinos I get identical matrix products when calling mmult(), BUT when using PETSc, the RAM is slowly but steadily filled (up to 500GB on our local cluster).

I came up with the attached MWE, which does nothing other than initialize the system and then construct the matrix product 1000 times in a row. Am I doing anything wrong or is this supposed to be used differently? I am using deal.II v9.0.1 installed via candi, so maybe the old version is the reason. Any suggestions? Any help would be greatly appreciated!

Bonus question: Is there a similar way to hand the sparsity patterns over to the mmult function? (For the dealii::SparseMatrix there is, which is why I am asking.)

Kind regards,
Richard

[attachment 1: CMakeLists.txt]

##
# CMake script for blockLDU-fsi_(...) program:
##
# Set the name of the project and target:
SET(TARGET "mmult_mwe")
# Declare all source files the target consists of:
SET(TARGET_SRC
  ${TARGET}.cc
  # You can specify additional files here!
  )
# Usually, you will not need to modify anything beyond this point...
CMAKE_MINIMUM_REQUIRED(VERSION 2.8.8)
FIND_PACKAGE(deal.II 9.0.1 QUIET
  HINTS ${deal.II_DIR} ${DEAL_II_DIR} ../ ../../ $ENV{DEAL_II_DIR}
  )
IF(NOT ${deal.II_FOUND})
  MESSAGE(FATAL_ERROR "\n"
    "*** Could not locate deal.II. ***\n\n"
    "You may want to either pass a flag -DDEAL_II_DIR=/path/to/deal.II to cmake\n"
    "or set an environment variable \"DEAL_II_DIR\" that contains this path."
    )
ENDIF()
DEAL_II_INITIALIZE_CACHED_VARIABLES()
PROJECT(${TARGET})
DEAL_II_INVOKE_AUTOPILOT()

[attachment 2: mmult_mwe.cc -- the #include list was garbled in the archive; the surviving source, with line breaks restored, follows]

/* This code is licensed under the "GNU GPL version 2 or later".
   See license.txt or https://www.gnu.org/licenses/gpl-2.0.html
   Copyright 2019: Richard Schussnig */

// Linear algebra packages - switch between PETSc & Trilinos
//#define FORCE_USE_OF_TRILINOS // ###
namespace LA
{
#if defined(DEAL_II_WITH_PETSC) && \
  !(defined(DEAL_II_WITH_TRILINOS) && defined(FORCE_USE_OF_TRILINOS))
  using namespace dealii::LinearAlgebraPETSc;
#  define USE_PETSC_LA
#elif defined(DEAL_II_WITH_TRILINOS)
  using namespace dealii::LinearAlgebraTrilinos;
#else
#  error DEAL_II_WITH_PETSC or DEAL_II_WITH_TRILINOS required
#endif
}

using namespace dealii;

// Define flow problem.
template <int dim>
class flow_problem
{
public:
  flow_problem (const FESystem<dim> &fe);
  ~flow_problem ();
  void run ();

private:
  // Create system matrix, rhs and distribute degrees of freedom.
  void setup_system ();
  void [the remainder of the attachment was truncated in the archive]
[deal.II] Re: Anisotropic refinement DG - saddle point problem
Dear Community,

I would like to open the topic again with another question. Is there any way to use MeshWorker to solve the DG formulation but using block matrices and vectors? I couldn't find a proper way to do it with MeshWorker because I have to choose the component of the shape function when I am using the block matrix.

Thank you!
Juan

On Wednesday, August 12, 2020 at 13:41:06 UTC+8, jfgir...@gmail.com wrote:
> Dear Bruno,
>
> Thank you so much, that fixed the problem without any trouble.
>
> Regards,
> Juan.
>
> On Wednesday, August 12, 2020 at 11:12:53 AM UTC+8, Bruno Turcksin wrote:
>> Juan,
>>
>> Basically the problem is that MeshWorker was not designed to handle
>> anisotropic refinement. That assert checks that if the faces of two cells
>> "match" then they have been refined the same number of times. This is
>> obviously not true in the case of anisotropic refinement. I think that
>> this is just a sanity check and you should be able to remove that assert
>> without any bad consequences. So commenting out the assert in
>> include/deal.II/meshworker/loop.h line 357 and recompiling deal.II
>> should be safe and fix your problem.
>>
>> Best,
>> Bruno
>>
>> On Tuesday, August 11, 2020 at 6:57:24 AM UTC-4, Juan Felipe Giraldo wrote:
>>> Dear community,
>>>
>>> I am working on an adaptive stabilized finite element method, which
>>> consists of a saddle point problem, to obtain:
>>> - a continuous solution
>>> - a discontinuous error estimator (which I use to drive adaptive
>>>   refinement)
>>>
>>> I successfully implemented the method using step-30 as a sample (for
>>> the DG formulation), combined with step-20 for the saddle point and
>>> step-39 for the error marking.
>>>
>>> Now I would like to implement anisotropic refinement, so I am taking
>>> the same step-30 as a reference, but now with the anisotropic flag
>>> activated. As I mentioned, if I use only isotropic refinement it works
>>> very well; but if I activate the anisotropic flag for the adaptive
>>> refinement, it can only refine the first iteration, and then I get the
>>> following error:
>>>
>>> An error occurred in line <357> of file
>>> in function
>>>   void dealii::MeshWorker::cell_action(ITERATOR,
>>>   dealii::MeshWorker::DoFInfoBox&, INFOBOX&, ...) [with INFOBOX =
>>>   dealii::MeshWorker::IntegrationInfoBox<2, 2>; DOFINFO =
>>>   dealii::MeshWorker::DoFInfo<2, 2, double>; int dim = 2;
>>>   int spacedim = 2; ...]
>>> The violated condition was:
>>>   cell->level() == neighbor->level()
>>> Additional information:
>>>   This exception -- which is used in many places in the library --
>>>   usually indicates that some condition which the author of the code
>>>   thought must be satisfied at a certain point in an algorithm, is not
>>>   fulfilled. An example would be that the first part of an algorithm
>>>   sorts elements of an array in ascending order, and a second part of
>>>   the algorithm later encounters an element that is not larger than the
>>>   previous one.
>>>   There is usually not very much you can do if you encounter such an
>>>   exception since it indicates an error in deal.II, not in your own
>>>   program. Try to come up with the smallest possible program that still
>>>   demonstrates the error and contact the deal.II mailing lists with it
>>>   to obtain help.
>>>
>>> I am wondering whether anyone has any idea what is happening and how I
>>> can solve this problem. I would be very grateful for your help.
>>>
>>> Thank you so much,
>>>
>>> Juan Giraldo