Hi Alex,

I've gotten the communication between the two codes working via file I/O transfer, and I'd now like to explore the Cython idea. How were you able to build the Cython wrapper?
- In order to wrap a class making use of, say, #include <deal.II/grid/tria.h>, I can generate a shared object file using CMake; however, the setup.py file for the Cython compilation knows nothing about the complicated deal.II build process and can't resolve the dependencies.
- Specifying the dependencies directly, without CMake, would be like compiling a deal.II tutorial program by hand with gcc.

Were you able to build the Cython wrapper in CMake directly? That seems like it would be much better.

Thank you,
Corbin

On Thursday, December 3, 2020 at 3:22:12 AM UTC-5 Alex Cobb wrote:
> Hi Corbin,
>
> On Wed, 2 Dec 2020 at 14:55, Corbin Foucart <[email protected]> wrote:
>
>> The code will have to be driven in Python, but hopefully I can find a
>> good way to access the deal.II data structures without writing to
>> file. I'm using a time-dependent HDG scheme on an adaptive grid. From
>> a Python session, is it possible to access:
>>
>> 1. The state of the triangulation (this seems doable from the Python
>>    bindings, as in tutorial 1 in the notebooks Bruno linked)?
>> 2. For every cell in the mesh:
>>    1. the nodal DG coefficients u_h
>>    2. the solution jumps [[u_h]] on all faces of the cell (between
>>       the solution on this cell and its neighbors)
>>    3. the right-hand side (source term and time-dependent forcing),
>>       ideally at the nodal points of the cell
>> 3. The coarsen or refine flags for the triangulation (seems possible,
>>    as in 1)?
>>
>> I think the best way to do this would be with the Cython approach
>> suggested by Alex; however, the best way to pass the cell-based data
>> to Python is unclear to me. Is it possible to define a struct in
>> deal.II containing the data in (2) and pass an array of these structs
>> (one per cell) to the Python session?
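For concreteness, the "array of structs" idea maps naturally onto a NumPy record array. Here is a minimal sketch; the field names and sizes below are purely illustrative (e.g., 8 coefficients and 4 faces per cell), not anything deal.II itself provides:

```python
import numpy as np

# Hypothetical per-cell record; the sizes are invented for illustration.
cell_dtype = np.dtype([
    ("u_h",   np.float64, (8,)),  # nodal DG coefficients on the cell
    ("jumps", np.float64, (4,)),  # [[u_h]] on each face of the cell
    ("rhs",   np.float64, (8,)),  # source term at the nodal points
])

cells = np.zeros(100, dtype=cell_dtype)   # one record per cell
cells["u_h"][0] = np.arange(8.0)          # fields are addressable by name

# The whole array is one contiguous block of 100 * 20 doubles, so Cython
# can hand its base pointer to C++ code to be filled in place.
print(cells.itemsize)  # (8 + 4 + 8) * 8 = 160 bytes per record
```

Whether this beats a plain 2-D array per quantity depends on how the data are consumed on the Python side, as Alex notes below in the thread.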
>>
>
> You always have the option of retrieving primitives as their Python
> equivalents (e.g., double -> Python float) via some kind of getter
> function; this means that you will have the overhead of a Python
> function call and the creation of an object on the heap for each such
> retrieval (from Python). If you have large amounts of data to pass
> back and forth via memory, NumPy arrays are a good option (you could
> also use the Python standard library's struct module, but I can't see
> any advantages to this). Cython can give you the pointer to the block
> of memory where a NumPy array's data reside, and you can use that to
> read and write data in contiguous arrays. If you want to store data in
> structs (or the data you want are already packed in structs), you
> could store the data in a NumPy record array (an array of structs).
> Your preferred strategy will probably depend on how exactly you will
> use the data on the Python side.
>
> Once you start passing pointers to memory around, an important
> consideration is always who owns the memory and is responsible for
> deallocating it. The strategy I have mostly used (and been happy with)
> is to allocate all the memory I need from the Python side, either by
> creating NumPy arrays or by calling PyMem_Malloc in the constructor
> and PyMem_Free in the destructor of a Cython extension class (so that
> the memory is tied to the life cycle of the object). Another option
> would be to allocate the memory in a C++ class and wrap that with
> Cython.
>
> Good luck!
>
> Alex
>
> On Tuesday, November 24, 2020 at 11:02:21 PM UTC-5 Alex Cobb wrote:
>>
>>> On Wed, 25 Nov 2020 at 00:05, Bruno Turcksin <[email protected]>
>>> wrote:
>>>
>>>> deal.II has some limited support for Python, mainly for mesh
>>>> manipulation. We have some Python notebooks here
>>>> <https://github.com/dealii/dealii/blob/master/contrib/python-bindings/notebooks/index.ipynb>.
>>>>
>>>> I think what you want to do is similar to the step-62 notebook.
>>>> Right now, the only way to interact with numpy is to print the data
>>>> to a file and then load it (see here
>>>> <https://dealii.org/current/doxygen/deal.II/classLinearAlgebra_1_1Vector.html#a2fadcc595d3e7e9ba44de8c85fd7595c>
>>>> and here
>>>> <https://dealii.org/current/doxygen/deal.II/classSparseMatrix.html#a3141075e3ad6362fce005d2f1c8da699>).
>>>>
>>>> If you want to manipulate the mesh directly in Python, you need
>>>> Boost.Python, and you need to configure deal.II with
>>>> -DDEAL_II_COMPONENT_PYTHON_BINDING=ON. It's sometimes a little bit
>>>> tricky to enable the Python bindings, so don't hesitate to ask any
>>>> questions on the mailing list if you need help.
>>>>
>>>> On Tuesday, November 24, 2020 at 12:16:18 AM UTC-5 [email protected]
>>>> wrote:
>>>>>
>>>>> - What is the best practice for exporting deal.II solution data in
>>>>>   a way that Python / numpy can interact with it?
>>>>>
>>> Depending on what you want from the data, another option might be to
>>> export the data as .vtu (*not* .vtk) and read it with VTKPython. For
>>> example, you can use VTKPython to grid the data, and then work with
>>> the gridded data in numpy.
>>>
>>> If I/O turns out to be a bottleneck, you could also consider writing
>>> the data as HDF5
>>> https://www.dealii.org/current/doxygen/deal.II/namespaceHDF5.html
>>> and then accessing the HDF5 file from Python (using h5py, for
>>> example). I haven't used deal.II's HDF5 export, but HDF5 can
>>> sometimes vastly improve I/O performance compared to text files or
>>> even other binary formats (e.g., with blosc compression).
>>>
>>>>
>>>>> - Is there a good way for external software to 'hook' into the
>>>>>   deal.II pipeline? Something like:
>>>>>   - initialize a triangulation / grid
>>>>>   - run the solver
>>>>>   - make a call like: new_data = external_software(deal_ii_output, grid)
>>>>>   - reinitialize the grid based on new_data
>>>>>   - loop
>>>>>
>>> Have you used Cython at all?
>>> https://cython.org/
>>> It is my favorite way to use C and C++ libraries from Python
>>> (compared to binding generators, writing extensions by hand, or
>>> cffi). It provides a very natural way to transfer data to and from C
>>> or C++ library code using typed NumPy arrays. It can take some
>>> patience to get what you want from the documentation, but if you are
>>> already used to Python and have some familiarity with C and C++, it
>>> makes it very easy to migrate code between C / C++ and Python. So,
>>> for example, you can prototype in Python and gradually shift
>>> functionality over to C or C++.
>>>
>>> What I have found easiest is to put the heavy-lifting code in a
>>> static C++ library, building from the (awesome) deal.II tutorials
>>> and documentation, and then wrap that C++ library into a loadable
>>> Python extension module using Cython. Then you can pass arguments
>>> from Python to your solver using the extension module.
>>>
>>> In my experience, the trickiest part of this is not Cython per se,
>>> but getting testing and continuous integration working with the mix
>>> of languages: C++, Cython, Python, and your build system (CMake?)
>>> mini-language.
>>>
>>> If you choose this route, another option for controlling your solver
>>> from Python (possibly with less pain / fewer dependencies than
>>> Boost.Python?) might be pybind11 (disclaimer: I haven't tried it
>>> myself).
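On the question of building the Cython wrapper from CMake directly: one possible, untested shape for this, using the DEAL_II_SETUP_TARGET macro that deal.II documents for user targets. The custom command that invokes Cython, the file names (wrapper.pyx), and the version numbers are all assumptions, not a recipe anyone in this thread has verified:

```cmake
# Untested sketch: have CMake run Cython, then build the generated C++
# as a Python extension module linked against deal.II.
cmake_minimum_required(VERSION 3.18)
find_package(deal.II 9.2 REQUIRED HINTS ${DEAL_II_DIR} $ENV{DEAL_II_DIR})
deal_ii_initialize_cached_variables()
project(pywrapper CXX)

find_package(Python3 REQUIRED COMPONENTS Interpreter Development)
find_program(CYTHON_EXECUTABLE NAMES cython REQUIRED)

# Translate wrapper.pyx -> wrapper.cpp at build time.
add_custom_command(
  OUTPUT  ${CMAKE_CURRENT_BINARY_DIR}/wrapper.cpp
  COMMAND ${CYTHON_EXECUTABLE} --cplus -3
          ${CMAKE_CURRENT_SOURCE_DIR}/wrapper.pyx
          -o ${CMAKE_CURRENT_BINARY_DIR}/wrapper.cpp
  DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/wrapper.pyx)

# Build the extension; deal.II's macro attaches its include paths,
# compile flags, and libraries to the target.
add_library(wrapper MODULE ${CMAKE_CURRENT_BINARY_DIR}/wrapper.cpp)
deal_ii_setup_target(wrapper)
target_include_directories(wrapper PRIVATE ${Python3_INCLUDE_DIRS})
target_link_libraries(wrapper ${Python3_LIBRARIES})
set_target_properties(wrapper PROPERTIES PREFIX "")  # -> wrapper.so
```

The idea is simply that CMake, which already knows how to resolve the deal.II dependencies, also drives the Cython step, so no setup.py has to reproduce the deal.II build configuration.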
