`linalg` was developed with fixed-size containers in mind, that is, vectors and 
matrices whose size is known at compile time. This allows better compile-time 
checks, and also makes it possible to erase the size information at runtime, 
which can be a gain for small matrices.

Still, containers of unknown (at compile time) size are essential, and for this 
reason `linalg` has two flavours of everything: static and dynamic. While this 
is neat, it turned out to be a lot of duplicated work. Also, it was not easy to 
make the static-size containers generic over their element type (float32 and 
float64), and this resulted in even more duplication. Templates allowed me to 
share a lot of the logic, but duplication remained.

There are many dimensions along which one wants to move: 32 vs 64 bit, real vs 
complex, dense vs sparse, CPU vs GPU, on the stack vs on the heap, and 
supporting both static and dynamic flavours along all of them turned out to be 
too much work.

I thought that static matrices and vectors could still be useful to someone, 
so instead of ditching them, I ported the dynamic ones to a new library, and 
`neo` was born. I was able to get rid of a lot of code, and then I started 
adding new features to `neo`:

  * the CUDA and LAPACK bindings now live in separate packages and are complete 
(while `linalg` still only embeds the bindings it uses)
  * I started adding some support for sparse vectors and matrices (more to come)
  * matrices and vectors are now generic, and can also contain other types such 
as `int` (even though linear algebra operations are restricted to real and 
complex numbers)
  * I added eigenvalues, eigenvectors and Schur form computations
  * the layout for matrices and vectors has changed slightly, which allows 
slicing and taking rows and columns without copying
  * the new layout also allows for flags, such as a matrix being symmetric, 
that will eventually be used to dispatch more specialized BLAS operations
  * the new layout will also make it possible to perform some operations on 
matrices and vectors stored on the stack or on the shared heap
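The zero-copy slicing mentioned above typically relies on a strided layout: a view stores a shared buffer plus an offset and per-dimension strides, so a row, a column, or a sub-matrix is just new metadata over the same data. Here is a minimal sketch of that technique in Python; the class and method names are purely illustrative and are not `neo`'s actual API:

```python
class MatrixView:
    """A strided view over a flat row-major buffer: slicing never copies data."""

    def __init__(self, data, rows, cols, offset=0, row_stride=None, col_stride=1):
        self.data = data  # flat buffer, shared between all views
        self.rows, self.cols = rows, cols
        self.offset = offset
        self.row_stride = cols if row_stride is None else row_stride
        self.col_stride = col_stride

    def __getitem__(self, rc):
        r, c = rc
        return self.data[self.offset + r * self.row_stride + c * self.col_stride]

    def row(self, r):
        # A row is a 1 x cols view over the same buffer
        return MatrixView(self.data, 1, self.cols,
                          self.offset + r * self.row_stride,
                          self.row_stride, self.col_stride)

    def column(self, c):
        # A column is a rows x 1 view; no element is moved
        return MatrixView(self.data, self.rows, 1,
                          self.offset + c * self.col_stride,
                          self.row_stride, self.col_stride)

    def slice(self, r0, r1, c0, c1):
        # Sub-matrix covering rows r0..r1-1 and columns c0..c1-1
        return MatrixView(self.data, r1 - r0, c1 - c0,
                          self.offset + r0 * self.row_stride + c0 * self.col_stride,
                          self.row_stride, self.col_stride)


m = MatrixView(list(range(12)), 3, 4)  # 3x4 matrix holding 0..11
sub = m.slice(1, 3, 1, 3)              # rows 1-2, columns 1-2
print(sub[0, 0], sub[1, 1])            # 5 10
m.data[5] = 99                         # views share storage,
print(sub[0, 0])                       # so this prints 99
```

Flags like "this matrix is symmetric" or "this is a transposed view" can live alongside this metadata without touching the storage, which is what makes it cheap to dispatch to more specialized BLAS routines later.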

