I'm with SomeDude: this project is doomed to failure from the get-go, and under no circumstances should it be put into std.

First, some background so this doesn't seem too harsh. Numerical linear algebra is easily the single oldest and best-developed area of computing. Period. There's a huge history there, and if you don't know it, whatever you make won't be used and will be universally derided. Second, beyond the background of the implementations themselves, there is the theoretical background of what they're used for and how that influences the API.

You also have to understand that linear algebra algorithms and data structures come in three major varieties: dense, sparse direct, and iterative methods (there's a concrete sketch of all three right after the references below). For just the theoretical background on these and the associated floating-point issues, the absolute minimum is the kind of material covered in these two standard references:

Matrix Computations by Golub and Van Loan: http://www.amazon.com/Computations-Hopkins-Mathematical-Sciences-ebook/dp/B00BD2DVIC/ref=sr_1_6?ie=UTF8&qid=1381614419&sr=8-6&keywords=nick+higham

Accuracy and Stability of Numerical Algorithms by Nick Higham: http://www.amazon.com/Accuracy-Stability-Numerical-Algorithms-Nicholas/dp/0898715210/ref=sr_1_7?ie=UTF8&qid=1381614419&sr=8-7&keywords=nick+higham
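To make the three varieties concrete, here is a minimal sketch of solving the same Ax = b all three ways. I'm using Python/SciPy (mentioned further down) purely because it's the quickest way to show them side by side; the test matrix and its size are made up for illustration:

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import splu, cg

    # Made-up SPD test system: the 1-D Laplacian (tridiagonal,
    # 2 on the diagonal, -1 on the off-diagonals).
    n = 200
    A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    # Dense: store all n^2 entries, O(n^3) factorization.
    # This is the regime BLAS/LAPACK cover.
    x_dense = np.linalg.solve(A.toarray(), b)

    # Sparse direct: an LU factorization that tries to preserve
    # sparsity. This is the regime of SuiteSparse-style packages.
    x_direct = splu(A).solve(b)

    # Iterative: needs only matrix-vector products; conjugate
    # gradients applies because A is SPD. Accuracy is governed by
    # a convergence tolerance, not a factorization.
    x_iter, info = cg(A, b)

    for name, x in [("dense", x_dense), ("sparse direct", x_direct),
                    ("iterative", x_iter)]:
        print(name, np.linalg.norm(A @ x - b))

Each variety has its own storage format, its own cost model, and its own failure modes, which is exactly what makes a single unified API so hard.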

The old, well-tested, and venerable BLAS and LAPACK APIs/packages are just for dense matrices. There are other variants for different kinds of parallelism, and note that numerical linear algebra differs between instruction-level parallelism (like SSE and AVX), shared memory, distributed memory, and coprocessors (like OpenCL or CUDA). In terms of popular existing implementations, there are more recent packages like Magma (http://icl.utk.edu/magma/), PLASMA (http://icl.cs.utk.edu/plasma/), PETSc (http://www.mcs.anl.gov/petsc/), Trilinos (http://trilinos.sandia.gov/), and Elemental (http://libelemental.org/). The most popular sparse direct package, as far as I know, is SuiteSparse (http://www.cise.ufl.edu/research/sparse/SuiteSparse/), which is wrapped by Eigen (http://eigen.tuxfamily.org/index.php?title=Main_Page). Eigen is so popular in part because it does a good job of balancing expression-template magic for dense matrices with wrapping/calling BLAS/LAPACK and SuiteSparse.
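For a sense of what the raw dense BLAS/LAPACK interface that all of these wrap actually looks like, here is a minimal sketch through SciPy's thin bindings (the 2x2 matrix is made up; note the Fortran-era conventions, like returning an info status code instead of raising an error):

    import numpy as np
    from scipy.linalg import blas, lapack

    a = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])

    # Level-3 BLAS: DGEMM computes C = alpha*A*B (+ beta*C).
    c = blas.dgemm(alpha=1.0, a=a, b=a)

    # LAPACK driver: DGESV solves A x = b by LU with partial
    # pivoting, returning the factors, pivots, solution, and a
    # status code.
    lu, piv, x, info = lapack.dgesv(a, b)
    assert info == 0  # nonzero means an illegal argument or singular A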

The very popular NumPy and SciPy packages are also basically wrappers around BLAS and LAPACK. There is also a scipy.sparse package, but its API differences are an ongoing headache.
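To illustrate the headache, a small made-up example using the classic scipy.sparse matrix classes (as they behaved around the time of this thread): the same operator means different things for dense and sparse operands, so you can't write code that's generic over the two.

    import numpy as np
    import scipy.sparse as sp

    D = np.array([[1.0, 2.0], [3.0, 4.0]])
    S = sp.csr_matrix(D)

    # For ndarrays, * is elementwise; for the classic sparse
    # matrix classes, * is matrix multiplication.
    print(D * D)                    # elementwise product
    print((S * S).toarray())        # matrix product -- a different answer
    print(S.multiply(S).toarray())  # elementwise needs a method call

    # Broadcasting diverges too: adding a scalar is fine for dense
    # but unsupported for sparse, since it would destroy sparsity.
    print(D + 1.0)
    try:
        S + 1.0
    except NotImplementedError as err:
        print("sparse:", err)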

All of which is to say: this is a very deep area, the differences in algorithms and storage are fundamental enough to make a unified API very difficult, and in any case the D community simply cannot commit to developing and maintaining such a library, nor to rolling in future developments from ongoing research in the area. This is exactly the sort of thing that should be a third-party extension, as in every other language, rather than part of the standard library.


On Saturday, 12 October 2013 at 08:47:33 UTC, SomeDude wrote:
> On Saturday, 12 October 2013 at 06:24:58 UTC, FreeSlave wrote:
>> For these cases we may let users choose a low-level backend if they need one. A high-level interface and a default implementation are needed anyway.
>>
>> I called it std.linalg because there is a website, http://www.linalg.org/, about a C++ library for exact computational linear algebra. Also, SciD has the module scid.linalg. We can use std.linearalgebra or something else. Names are not really important right now.
>>
>> Ok, things are clearer now. I'll take a look at what I can do today.
>
> There are literally dozens of linear algebra packages: Eigen, Armadillo, Blitz++, IT++, etc.
>
> I was not complaining about the linalg name, but about the fact that you want to make it a std subpackage. I contend that if you want to make it a std package, it must be nearly perfect, i.e. better performing than ALL the other alternatives, even the C++ ones, and really good as an API. Otherwise it will be deprecated because someone will have made a better alternative.
>
> Given the number of past tries, I consider this project very likely doomed to failure. So no std, please.
