Re: [deal.II] Exporting hdf5 file in a loop

2022-02-15 Thread Timo Heister
What we typically do in this situation is write one visualization output file per iteration, with a different filename each time. This is done in many of the examples, especially for time-dependent problems. I don't think there is an easy way to append data to an existing file as you suggested.
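
A minimal sketch of that per-iteration naming scheme, assuming a DataOut object named data_out that already has the data attached, a step counter, and MPI_COMM_WORLD (these names are assumptions, not taken from the thread):

    #include <deal.II/base/utilities.h>
    #include <deal.II/numerics/data_out.h>

    using namespace dealii;

    // Write one HDF5 file per iteration: solution-0000.h5, solution-0001.h5, ...
    template <int dim>
    void write_step(DataOut<dim> &data_out, const unsigned int step)
    {
      const std::string filename =
        "solution-" + Utilities::int_to_string(step, 4) + ".h5";

      // Filter duplicate vertices and request XDMF/HDF5-compatible output.
      DataOutBase::DataOutFilter data_filter(
        DataOutBase::DataOutFilterFlags(true, true));
      data_out.write_filtered_data(data_filter);
      data_out.write_hdf5_parallel(data_filter, filename, MPI_COMM_WORLD);
    }

Because the filename changes with the step counter, each iteration produces its own file instead of overwriting the previous one.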

Re: [deal.II] Re: Cluster installation error with symengine library

2022-02-15 Thread Timo Heister
Sure, the easiest approach is to start over and delete the whole installation directory before you run candi. You can try whether deleting tmp/build/ is enough, but I am not sure. On Tue, Feb 15, 2022, 13:49 Stephanie Sparks wrote: > Hi Timo, > > I did use candi to re-install without symengine and then ...

Re: [deal.II] Exporting hdf5 file in a loop

2022-02-15 Thread Uclus Heis
Thank you very much for the answer. I need to evaluate a large number of frequencies, so in that case I would need a large number of vectors to track the results, which is not optimal in my case. Is there any other way to do that? Would it be possible to call data_out.add_data_vector() and ...

Re: [deal.II] Re: Cluster installation error with symengine library

2022-02-15 Thread Stephanie Sparks
Hi Timo, I did use candi to re-install without symengine, and then when attempting to install deal.II I got an error that the symengine library could not be found. Is there a way to configure it such that symengine is not necessary for deal.II? Thanks, Stephanie On Tuesday, February 8, 2022 at ...

Re: [deal.II] Exporting hdf5 file in a loop

2022-02-15 Thread Timo Heister
The call to data_out.add_data_vector() does not copy the contents of the vector; it just keeps a reference to it until the data is actually written. You will need to store your solutions in different vectors without touching the old ones. On Tue, Feb 15, 2022, 11:05 Uclus Heis wrote: > Good ...
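
A minimal sketch of what "store your solutions in different vectors" could look like; data_out, n_frequencies, and the solve_for_frequency() helper are assumptions for illustration, not from the thread:

    #include <deal.II/lac/vector.h>
    #include <deal.II/numerics/data_out.h>

    #include <string>
    #include <vector>

    using namespace dealii;

    // Keep every frequency's solution alive in its own vector, because
    // add_data_vector() only stores a reference; the data is read later,
    // when build_patches() and the write functions run.
    std::vector<Vector<double>> solutions(n_frequencies);
    for (unsigned int f = 0; f < n_frequencies; ++f)
      {
        solutions[f] = solve_for_frequency(f); // hypothetical solver call
        data_out.add_data_vector(solutions[f],
                                 "solution_f" + std::to_string(f));
      }
    data_out.build_patches(); // only now is the vector data actually used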

[deal.II] deal.II Newsletter #201

2022-02-15 Thread 'Rene Gassmoeller' via deal.II User Group
Hello everyone! This is deal.II newsletter #201. It automatically reports recently merged features and discussions about the deal.II finite element library. ## Below you find a list of recently proposed or merged features: #13392: CMake: Simplify some logic in DEAL_II_ADD_TEST and add verbose

[deal.II] Exporting hdf5 file in a loop

2022-02-15 Thread Uclus Heis
Good afternoon, I want to store my results in an hdf5 file using a distributed implementation. I am computing different frequencies, so I have a loop in my run() function where I solve for each frequency. When, for example, computing 5 frequencies, I get 5 results with the same value in my hdf5 ...

Re: [deal.II] Solution mismatch (FullMatrix vs. PETScWrappers::MPI::SparseMatrix)

2022-02-15 Thread Hermes Sampedro
Thank you very much. Regards. On Tuesday, February 8, 2022 at 23:40:58 UTC+1, Timo Heister wrote: > L_operator(local_dof_indices[i], local_dof_indices[j]) = cell_matrix(i,j); > This looks incorrect, because global pairs of indices will happen more than once in a normal ...
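
A sketch of the accumulation idiom Timo is alluding to: the same global index pair (i,j) shows up on several cells, so local contributions must be added, not assigned. The constraints object (an AffineConstraints<double>) and dofs_per_cell are assumptions:

    // Preferred deal.II idiom: let the constraints object accumulate the
    // local matrix into the global one (this also handles hanging-node
    // and boundary constraints).
    constraints.distribute_local_to_global(cell_matrix,
                                           local_dof_indices,
                                           L_operator);

    // Equivalent by hand, without constraints: add, never overwrite.
    for (unsigned int i = 0; i < dofs_per_cell; ++i)
      for (unsigned int j = 0; j < dofs_per_cell; ++j)
        L_operator.add(local_dof_indices[i],
                       local_dof_indices[j],
                       cell_matrix(i, j));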

Re: [deal.II] MPI & component_wise

2022-02-15 Thread Hermes Sampedro
Thank you very much. Regards. On Tuesday, February 15, 2022 at 16:11:08 UTC+1, Wolfgang Bangerth wrote: > On 2/15/22 01:52, Joss G. wrote: > > Thank you again. It seems I am having a bit of trouble. > > I used: parallel::distributed::Vector ...

Re: [deal.II] MPI & component_wise

2022-02-15 Thread Wolfgang Bangerth
On 2/15/22 01:52, Joss G. wrote: Thank you again. It seems I am having a bit of trouble. I used: parallel::distributed::Vector locally_relevant_solution; The documentation says to use #include <...> but the file is not there. Using #include <...> I get the ...

Re: [deal.II] Impose values inside a material/ not on the boundary

2022-02-15 Thread 'Markus Mehnert' via deal.II User Group
Dear Wolfgang, Thank you for your response. I identified the correct dofs by trial and error, which worked (9 dofs for a scalar quantity with quadratic polynomial order). However, when I use "fe_face.system_to_base_index().first.first" to identify which base element these dofs belong to, it does ...
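
For reference, a sketch of how system_to_base_index() is commonly used on an FESystem; fe and the loop bounds are assumptions, not from the thread:

    #include <deal.II/fe/fe_system.h>

    #include <iostream>

    using namespace dealii;

    // For each cell-local dof, decompose its index into (base element,
    // copy of that base element, index within that base element).
    for (unsigned int i = 0; i < fe.dofs_per_cell; ++i)
      {
        const auto base_index = fe.system_to_base_index(i);
        std::cout << "dof " << i
                  << ": base element " << base_index.first.first
                  << ", copy " << base_index.first.second
                  << ", index in base " << base_index.second << '\n';
      }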

Re: [deal.II] MPI & component_wise

2022-02-15 Thread Joss G.
Thank you again. It seems I am having a bit of trouble. I used: parallel::distributed::Vector locally_relevant_solution; The documentation says to use #include <...> but the file is not there. Using #include <...> I get the following error: error: 'Vector' in namespace ...
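
For what it's worth, in recent deal.II releases this class was renamed, which may explain the missing header; a minimal sketch under that assumption (the thread does not say which deal.II version is in use):

    #include <deal.II/lac/la_parallel_vector.h>

    using namespace dealii;

    // What used to be parallel::distributed::Vector is nowadays spelled:
    LinearAlgebra::distributed::Vector<double> locally_relevant_solution;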