Also, Rasmus, if you are interested, I can provide some use cases that would
be well suited to additional vectorization. I'm not sure I can peer too
deeply into the internals of Eigen, but I'm happy to help around the
margins.

On Mon, May 10, 2021 at 8:09 PM Ian Bell <[email protected]> wrote:

> Rasmus, your commits are these, right:
>
>
> https://gitlab.com/libeigen/eigen/-/commit/88d4c6d4c870f53d129ab5f8b43e01812d9b500e
>
> https://gitlab.com/libeigen/eigen/-/commit/be0574e2159ce3d6a1748ba6060bea5dedccdbc9
>
> Which Array methods pick up these new packet methods?
>
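A minimal sketch (not from the thread) of the Array-level pow calls that
presumably pick up these packet methods for float and double arguments:

#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::ArrayXd x = Eigen::ArrayXd::LinSpaced(8, 1.0, 8.0);
    Eigen::ArrayXd y = Eigen::ArrayXd::Constant(8, 0.5);

    // Coefficient-wise pow with a scalar exponent.
    std::cout << x.pow(2.5) << "\n\n";

    // Coefficient-wise pow with an array of exponents.
    std::cout << Eigen::pow(x, y) << "\n";
}
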
> On Mon, May 10, 2021 at 7:53 PM Rasmus Munk Larsen <[email protected]>
> wrote:
>
>> I recently vectorized the implementation of pow in Eigen for float and
>> double arguments. It does not apply to pow(float, int), but it should
>> give you a significant speedup if you cast your exponents to double. I
>> thought about implementing a more efficient algorithm for the case where
>> the exponents are all integers, but didn't get round to it. Could you
>> please try it and see whether it helps you? The improvements are in the
>> master branch (as well as the 3.4 branch that we are preparing for
>> release).
>>
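A small sketch (not part of Rasmus's message) of the cast-to-double
workaround he describes, applied to integer exponents like those in the
LinSpaced example further down; assumes the master/3.4 branch with the
vectorized pow:

#include <Eigen/Dense>
#include <iostream>

int main()
{
    // Integer exponents 0..10, cast to double so the vectorized
    // pow(double, double) path is used.
    Eigen::ArrayXd e = Eigen::ArrayXi::LinSpaced(11, 0, 10).cast<double>();
    double base = 2.9;

    // Broadcast the scalar base to an array and take the coefficient-wise pow.
    Eigen::ArrayXd p = Eigen::pow(Eigen::ArrayXd::Constant(e.size(), base), e);
    std::cout << p << "\n";
}
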
>> On Mon, May 10, 2021 at 4:28 PM Marc Glisse <[email protected]> wrote:
>> >
>> > On Mon, 10 May 2021, Ian Bell wrote:
>> >
>> > > Of course, shortly after having sent this message I figured it out,
>> > > but sadly it doesn't actually result in an increase in my throughput.
>> > > For posterity:
>> > >
>> > > #include <Eigen/Dense>
>> > > #include <iostream>
>> > >
>> > > using namespace Eigen;
>> > >
>> > > // Unary functor: raises a fixed base to each integer exponent.
>> > > struct myUnaryFunctor {
>> > >   const double m_base;
>> > >   myUnaryFunctor(double base) : m_base(base) {}
>> > >   typedef double result_type;
>> > >   result_type operator()(const int &e) const
>> > >   {
>> > >     return pow(m_base, e);
>> > >   }
>> > > };
>> > >
>> > > int main()
>> > > {
>> > >    auto e = Eigen::ArrayXi::LinSpaced(11, 0, 10).eval();
>> > >    double base = 2.9;
>> > >    std::cout << e.unaryExpr(myUnaryFunctor(base));
>> > > }
>> >
>> > Assuming pow is actually your own function and does the usual repeated
>> > squaring, unlike std::pow, this may do a lot of redundant computation
>> > (in particular base*base is computed many times). Do you know anything
>> > about the integers? In particular, are they always small? I assume the
>> > LinSpaced example doesn't look like the true data. Does your pow
>> > function already cache some results?
>> >
>> > --
>> > Marc Glisse
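
A hypothetical sketch of the kind of caching Marc is hinting at: if the
exponents are just the consecutive integers 0..N, every power can be built
from the previous one with a single multiply instead of a full pow call per
element:

#include <Eigen/Dense>
#include <iostream>

int main()
{
    const int N = 10;
    double base = 2.9;

    Eigen::ArrayXd p(N + 1);
    p(0) = 1.0;                       // base^0
    for (int i = 1; i <= N; ++i)
        p(i) = p(i - 1) * base;       // base^i = base^(i-1) * base
    std::cout << p << "\n";
}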
>> >
>> >
>>
>>
>>
