Re: [deal.II] Parallel implementation

2017-10-05 Thread Wolfgang Bangerth
On 10/05/2017 07:45 AM, Anna Avdeeva wrote: I do compute the solution at many points along the profile, and while Ex and Ey look reasonable, Ez does not. Is it possible that there is a problem in the point_value function for the z component of Nedelec elements? It's possible (any software has

Re: [deal.II] Parallel implementation

2017-10-05 Thread Anna Avdeeva
Dear Wolfgang, I do compute the solution at many points along the profile, and while Ex and Ey look reasonable, Ez does not. Is it possible that there is a problem in the point_value function for the z component of Nedelec elements? Anna

Re: [deal.II] Parallel implementation

2017-10-05 Thread Wolfgang Bangerth
On 10/05/2017 03:05 AM, Anna Avdeeva wrote: 2) For the file with the solution values at the receiver locations, I think I followed the approach implemented in ASPECT. So I go through all the receivers, look for the point value, and then check how many processors found this point, and if more

Re: [deal.II] Parallel implementation

2017-10-05 Thread Anna Avdeeva
Dear Wolfgang, Thank you for your replies. I am still struggling with output of the solution to file. I have two types of output: 1) writing vtu files for each processor on the whole mesh, and 2) creating a simple txt file with the values of the solution at the receiver locations. 1) For the first

Re: [deal.II] Parallel implementation

2017-09-25 Thread Wolfgang Bangerth
Anna, to compute values of the electric field at the receivers I follow the strategy of the ASPECT code, as you suggested. To do this I sum the current_point_values across processors and divide by the number of processors that contain point p, as follows: // Reduce all collected values into local

Re: [deal.II] Parallel implementation

2017-09-22 Thread Anna Avdeeva
Dear Wolfgang, to compute values of the electric field at the receivers I follow the strategy of the ASPECT code, as you suggested. To do this I sum the current_point_values across processors and divide by the number of processors that contain point p, as follows: // Reduce all collected values into

Re: [deal.II] Parallel implementation

2017-09-13 Thread Anna Avdeeva
Dear Wolfgang, thank you very much for the link. I have followed their approach, but before I can check the result, I have to replace the solver I used with a parallel solver. Anna -- The deal.II project is located at http://www.dealii.org/ For mailing list/forum options, see

Re: [deal.II] Parallel implementation

2017-09-11 Thread Wolfgang Bangerth
Anna, Yes, I would like to create an LA::MPI::BlockVector solution_at_receiver, then copy it to a BlockVector via localized_solution_at_receiver(solution_at_receiver), and output localized_solution_at_receiver only on processor 0. I am having trouble initializing and filling

Re: [deal.II] Parallel implementation

2017-09-11 Thread Anna Avdeeva
Dear Wolfgang, Yes, I would like to create an LA::MPI::BlockVector solution_at_receiver, then copy it to a BlockVector via localized_solution_at_receiver(solution_at_receiver), and output localized_solution_at_receiver only on processor 0. I am having trouble initializing and filling

Re: [deal.II] Parallel implementation

2017-09-11 Thread Wolfgang Bangerth
On 09/11/2017 12:59 AM, Anna Avdeeva wrote: Now I would like to create a file containing the solution vector, and some function of the solution vector, at the set of receiver locations. If I had the solution vector on one processor I could easily create such a file (using

[deal.II] Parallel implementation

2017-09-11 Thread Anna Avdeeva
Dear All, I am not fluent in C++ and MPI and would very much appreciate your help. I am now following steps 40 and 55 to write a parallel implementation of the solution of the system of Maxwell's equations, and I have many questions, but let us start with the following: After solving the