Hi Hans,

On Thu, Aug 16, 2018 at 10:51 AM, Hans Dembinski <hans.dembin...@gmail.com>
wrote:

> Hi Sylvain,
>
> On 15. Aug 2018, at 19:38, Sylvain Corlay <sylvain.cor...@gmail.com>
> wrote:
>
> If `pybind11` is included, it could be interesting to also include
> `xtensor` and `xtensor-python`.
>
>  - Xtensor is a C++ dynamic N-d array library that offers numpy-like
> features including broadcasting and universal functions. It is also
> lazily evaluated and continuously benchmarked against numpy, eigen,
> pythran and numba. You can check out the numpy to xtensor cheat sheet:
> https://xtensor.readthedocs.io/en/latest/numpy.html.
>
>  - Xtensor-python makes it possible to operate on numpy arrays in place
> using the xtensor API, so that e.g. an xtensor reshape results in a
> reshape on the Python side (using the numpy C API under the hood).
>
> Xtensor-python is built upon pybind11, but brings it much closer to
> feature parity with NumPy. There is a vibrant community of users and
> developers, actively working to make xtensor faster and cover more of
> the numpy API.
>
> I would argue that xtensor-python is one of the easiest ways to make use
> of numpy arrays from a C++ program, given its similarly high-level API
> and its tools for creating ufuncs and bindings as one-liners.
>
> Resources:
>
> - xtensor: https://github.com/QuantStack/xtensor (documentation:
> https://xtensor.readthedocs.io/)
> - xtensor-python: https://github.com/QuantStack/xtensor-python
> (documentation: https://xtensor-python.readthedocs.io/)
> - xtensor-blas:  https://github.com/QuantStack/xtensor-blas (documentation:
> https://xtensor-blas.readthedocs.io)
> - xtensor-io: https://github.com/QuantStack/xtensor-io (documentation:
> https://xtensor-io.readthedocs.io) for reading and writing various file
> formats
>
> Other language bindings:
>
> - xtensor-julia: https://github.com/QuantStack/xtensor-julia
> (documentation: https://xtensor-julia.readthedocs.io/en/latest/)
> - xtensor-r: https://github.com/QuantStack/xtensor-r (documentation:
> https://xtensor-r.readthedocs.io/en/latest/)
>
>
> Sounds good, I think it should be mentioned in the pybind11 part. I just
> stumbled over xtensor yesterday. Based on your post I read a bit more
> about it. I like the expression engine and lazy evaluation; the concept is
> similar to Eigen's. xtensor itself has nothing to do with binding, but it
> makes working with numpy arrays on the C++ side easier, especially when
> you are familiar with the numpy API.
>


Actually, xtensor-python does a lot more in terms of numpy bindings, as it
uses the numpy C API directly for a number of things.
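
To give a rough idea, here is a minimal sketch of operating on a numpy
array in place from C++ (the module and function names are made up, and it
assumes a working pybind11 + xtensor-python build setup):

    // example.cpp -- hypothetical module name
    #define FORCE_IMPORT_ARRAY              // needed once before including pyarray.hpp
    #include "pybind11/pybind11.h"
    #include "xtensor-python/pyarray.hpp"   // xt::pyarray wraps the numpy buffer, no copy

    // Scale a numpy array in place: since xt::pyarray is a view over the
    // existing numpy memory, the change is visible on the Python side.
    void scale_inplace(xt::pyarray<double>& a, double factor)
    {
        a *= factor;
    }

    PYBIND11_MODULE(example, m)
    {
        xt::import_numpy();                 // initialise the numpy C API
        m.def("scale_inplace", &scale_inplace, "Multiply a numpy array in place");
    }

From Python you would then call example.scale_inplace(arr, 2.0) and see arr
modified without any copy being made.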

Plus, the integration into the xtensor expression system lets you use views,
broadcasting, newaxis and ufuncs directly from the C++ side (all of which is
covered in the cheat sheet).
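
On the C++ side those expressions read a lot like numpy. A small sketch,
assuming only the standard xtensor headers (the array contents are
arbitrary):

    #include <iostream>
    #include "xtensor/xarray.hpp"
    #include "xtensor/xbuilder.hpp"   // xt::arange
    #include "xtensor/xview.hpp"      // xt::view, xt::newaxis, xt::range, xt::all
    #include "xtensor/xmath.hpp"      // xt::sin and other numpy-style math functions
    #include "xtensor/xio.hpp"        // stream output of expressions

    int main()
    {
        xt::xarray<double> row = xt::arange<double>(0., 3.);   // shape (3,)
        xt::xarray<double> col = xt::arange<double>(0., 4.);   // shape (4,)

        // newaxis + broadcasting, numpy style: (4, 1) op (3,) -> (4, 3).
        // The expression is lazy; nothing is computed until the assignment below.
        auto outer = xt::view(col, xt::all(), xt::newaxis()) * row;
        xt::xarray<double> table = xt::sin(outer);             // evaluated here

        // A view into the result, analogous to table[1:3, :] in numpy.
        auto slab = xt::view(table, xt::range(1, 3), xt::all());
        std::cout << slab << std::endl;
    }

The numpy to xtensor cheat sheet linked above maps these idioms side by side.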


> The docs say:
> "Xtensor operations are continuously benchmarked, and are significantly
> improved at each new version. Current performances on statically
> dimensioned tensors match those of the Eigen library. Dynamically dimensioned
> tensors for which the shape is heap allocated come at a small additional
> cost."
>
> I couldn't find these benchmark results online, though. Could you point
> me to the right page? Google only produced an outdated SO post where
> numpy performed better than xtensor.
>
>
That is because we run the benchmarks on our own hardware. Since xtensor is
explicitly SIMD accelerated for a variety of instruction sets, including
e.g. AVX-512, it is hard to have a consistent environment to run the
benchmarks. We have an i9 machine that runs the benchmarks with various
options, and we run them manually on Raspberry Pis for the NEON
acceleration benchmarks (the NEON instruction set is continuously tested
with an emulator on Travis CI in the xsimd project).

Cheers,

Sylvain



> Best regards,
> Hans
>
> PS: A bit of nitpicking: you use the term "tensor" for an n-dimensional
> block of numbers - a generalisation of "matrix" - but the term "tensor" in
> mathematics and physics is more specific. A tensor has well-defined
> transformation properties when you change the basis of your vector space,
> just like a "vector" (a vector is a one-dimensional tensor), while a
> general block of numbers does not.
>
> https://en.wikipedia.org/wiki/Tensor
>
>
It is clearly a very overloaded term.
