Actually, the visitor pattern or variants thereof can produce very
performant linear algebra implementations.  You can't usually get quite
down to optimized BLAS performance, but you get pretty darned fast code.

The reason is that the visitor is typically a very simple class which is
immediately inlined by the JIT.  Then it is subject to all of the normal
optimizations exactly as if the code were written as a single concrete
loop.  For many implementations, the bounds checks will be hoisted out of
the loop so you get pretty decent code.
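As a rough illustration of why that works, here is a minimal sketch of the visitor style (hypothetical class and method names, not the actual Mahout API): the visitor is a tiny function object, so once the JIT inlines it the traversal compiles down to the same tight loop you would write by hand.

```java
import java.util.function.DoubleUnaryOperator;

// Hypothetical sketch, not Mahout's real classes: the "visitor" is just a
// small function object the JIT can inline into the traversal loop.
final class DenseVector {
    final double[] data;

    DenseVector(double[] data) {
        this.data = data;
    }

    // After inlining, this is an ordinary concrete loop; the JIT can then
    // hoist the array bounds checks out of it.
    void assign(DoubleUnaryOperator visitor) {
        for (int i = 0; i < data.length; i++) {
            data[i] = visitor.applyAsDouble(data[i]);
        }
    }
}

public class VisitorDemo {
    public static void main(String[] args) {
        DenseVector v = new DenseVector(new double[]{1, 2, 3});
        v.assign(x -> x * x);           // visitor supplied as a lambda
        System.out.println(v.data[2]);  // 9.0
    }
}
```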

More importantly, in many cases visitors allow in-place algorithms.
Combined with view operators that limit visibility to part of a matrix,
and the inlining phenomenon mentioned above, this can have enormous
implications for performance.
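To make the view-plus-visitor combination concrete, here is a hedged sketch (hypothetical classes, not Mahout's actual API) in which a view restricts the visitor to part of the backing storage, so the update happens in place with no copies at all.

```java
import java.util.function.DoubleUnaryOperator;

// Hypothetical sketch: a view over a slice of a backing array. Applying a
// visitor through the view mutates the underlying storage in place.
final class VectorView {
    private final double[] data;
    private final int offset;
    private final int length;

    VectorView(double[] data, int offset, int length) {
        this.data = data;
        this.offset = offset;
        this.length = length;
    }

    // The visitor writes directly into the backing array, but only within
    // the range the view exposes.
    void assign(DoubleUnaryOperator visitor) {
        for (int i = offset; i < offset + length; i++) {
            data[i] = visitor.applyAsDouble(data[i]);
        }
    }
}

public class ViewDemo {
    public static void main(String[] args) {
        double[] row = {1, 1, 1, 1};
        // Scale only the middle two elements, in place, with no copy.
        new VectorView(row, 1, 2).assign(x -> x * 10);
        System.out.println(java.util.Arrays.toString(row));  // [1.0, 10.0, 10.0, 1.0]
    }
}
```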

A great case in point is the Mahout math library.  With no special effort
taken and using the visitor style fairly ubiquitously, I can get about 2
GFLOPS from my laptop.  Using ATLAS as a LINPACK implementation gives me
about 5 GFLOPS.

I agree with the point that linear algebra operators should be used where
possible, but that just isn't feasible for lots of operations in real
applications.  Getting solid performance with simple code in those
applications is a real virtue.

On Sat, Dec 29, 2012 at 3:22 PM, Konstantin Berlin <kber...@gmail.com>wrote:

> While the visitor pattern is a good abstraction, I think it would make for
> terrible linear algebra performance. All operations should be based on
> basic vector operations, which internally can take advantage of sequential
> memory access. For large problems it makes a difference. The visitor
> pattern is a nice add-on, but it should not be the engine driving the
> package under the hood, in my opinion.
>
