Hi Bruno,

Thanks, now it works and I have the full deal.II suite built together with
the Python part. Yay!

Before starting to write the wrappers, I think we should have some tests
for the export strategy.
I would say the Python preprocessor (leaving the GUI aside for now) for
mesh generation should work with a normal triangulation (no MPI).
The user's C++ code should then be able to read the output, including the
hanging nodes, into either a dealii::Triangulation or a
dealii::p::d::Triangulation in the case of an MPI run.

Doing something like GridOut::write_msh() followed by GridIn::read_msh()
will not work, as the information about hanging nodes will not be read
back in correctly: the .msh file contains only the active cells, not the
refinement hierarchy that produced them.
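
For concreteness, here is a minimal sketch of the round trip I mean
(untested, assuming a reasonably recent deal.II):

#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_in.h>
#include <deal.II/grid/grid_out.h>
#include <fstream>

using namespace dealii;

int main()
{
  // Build a mesh with a hanging node: refine one cell of a 2x2 grid.
  Triangulation<2> tria;
  GridGenerator::hyper_cube(tria);
  tria.refine_global(1);
  tria.begin_active()->set_refine_flag();
  tria.execute_coarsening_and_refinement();

  // write_msh() writes only the active cells; the parent/child
  // relations that define the hanging nodes are not part of the format.
  {
    std::ofstream out("mesh.msh");
    GridOut().write_msh(tria, out);
  }

  // Reading back treats the active cells as a coarse mesh -- but a
  // deal.II coarse mesh may not contain hanging nodes, so this read
  // fails (and even if it did not, the hierarchy would be lost).
  Triangulation<2> tria2;
  GridIn<2>        grid_in;
  grid_in.attach_triangulation(tria2);
  std::ifstream in("mesh.msh");
  grid_in.read_msh(in);
}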

As far as I understand, it is also not possible to do
Triangulation<dim, spacedim>::save() and then
parallel::distributed::Triangulation<dim, spacedim>::load(): the two use
different on-disk formats (a boost::serialization archive versus the
p4est-based files of p::d::Triangulation), as far as I can tell.
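
Spelled out, the combination I mean would look roughly like this (a
sketch of what does *not* work, if my understanding is right):

#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria.h>
#include <boost/archive/text_oarchive.hpp>
#include <fstream>

using namespace dealii;

int main(int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi(argc, argv, 1);

  // "Preprocessor" side: a serial triangulation saved through the
  // boost::serialization interface.
  Triangulation<2> serial_tria;
  GridGenerator::hyper_cube(serial_tria);
  serial_tria.refine_global(2);
  {
    std::ofstream                 f("mesh.tria");
    boost::archive::text_oarchive oa(f);
    serial_tria.save(oa, 0);
  }

  // "User code" side: p::d::Triangulation::load() requires the same
  // coarse mesh to be re-created first, and it reads the p4est-based
  // format written by p::d::Triangulation::save() -- not the boost
  // archive above, so this mix fails.
  parallel::distributed::Triangulation<2> p_tria(MPI_COMM_WORLD);
  GridGenerator::hyper_cube(p_tria);
  p_tria.load("mesh.tria");
}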

We could of course have different Python wrappers for the two classes,
but then the question is whether one can save() a p::d::Triangulation
from a single-core MPI run (through the Python wrappers) and then read it
in from C++ code with an arbitrary number of MPI cores.
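
If that is possible, the pattern would be roughly the following (an
untested sketch; I am assuming here that a p::d::Triangulation on
MPI_COMM_SELF behaves like a one-rank parallel mesh and that load() with
the default autopartition redistributes the cells among all ranks):

#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>

using namespace dealii;

int main(int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi(argc, argv, 1);
  const unsigned int rank =
    Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);

  // Stand-in for the Python preprocessor: rank 0 alone builds a mesh
  // with hanging nodes on MPI_COMM_SELF and saves it.
  if (rank == 0)
    {
      parallel::distributed::Triangulation<2> write_tria(MPI_COMM_SELF);
      GridGenerator::hyper_cube(write_tria);
      write_tria.refine_global(1);
      write_tria.begin_active()->set_refine_flag();
      write_tria.execute_coarsening_and_refinement();
      write_tria.save("mesh.pdtria");
    }
  MPI_Barrier(MPI_COMM_WORLD);

  // User C++ code, run with any number of MPI ranks: re-create the
  // same coarse mesh, then load.
  parallel::distributed::Triangulation<2> read_tria(MPI_COMM_WORLD);
  GridGenerator::hyper_cube(read_tria);
  read_tria.load("mesh.pdtria");
  return 0;
}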

What do you guys think would be the best way to work around this issue?

Regards,
Denis.


On Wednesday, May 25, 2016 at 9:35:35 PM UTC+2, Bruno Turcksin wrote:
>
> Denis, 
>
> 2016-05-25 10:40 GMT-04:00 Bruno Turcksin <bruno.t...@gmail.com>: 
> > I will let you know once I have fixed the problem. 
> It should work now. 
>
> Best, 
>
> Bruno 
>
