Github user mikiobraun commented on the pull request:

    https://github.com/apache/incubator-spark/pull/575#issuecomment-35073967
  
    @fommil @MLnick Hello, I'm the author of jblas, and I'd like to comment 
on what fommil said about netlib-java exceeding the original goals of jblas, 
because I don't quite agree with that statement.
    
    netlib-java provides BLAS/LAPACK for Java. If native libraries are found, 
it uses those; otherwise, it falls back to a (very slow) pure-Java 
implementation which is basically a Java translation of the netlib BLAS and 
LAPACK reference Fortran code (no optimization whatsoever). netlib-java has 
full coverage of the BLAS and LAPACK routines, while jblas only includes what 
I needed. netlib-java also has the fallback mode (which I honestly think you 
wouldn't want to use because it's quite slow).
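
    To give an idea of the level at which netlib-java operates, here is a 
minimal sketch of a dgemm call (assuming netlib-java 1.x, where the entry 
point is `com.github.fommil.netlib.BLAS`; the example is mine, not taken from 
this PR):

```java
import com.github.fommil.netlib.BLAS;

public class DgemmExample {
    public static void main(String[] args) {
        // 2x2 matrices stored in column-major order, as BLAS expects
        double[] a = {1.0, 3.0, 2.0, 4.0};  // A = [[1, 2], [3, 4]]
        double[] b = {5.0, 7.0, 6.0, 8.0};  // B = [[5, 6], [7, 8]]
        double[] c = new double[4];         // C := alpha*A*B + beta*C

        // getInstance() returns the native implementation if one was loaded,
        // otherwise the pure-Java fallback -- the call looks the same either way.
        BLAS.getInstance().dgemm("N", "N", 2, 2, 2, 1.0, a, 2, b, 2, 0.0, c, 2);

        System.out.println(java.util.Arrays.toString(c));  // [19.0, 43.0, 22.0, 50.0]
    }
}
```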
    
    jblas, however, is not just a wrapper around BLAS/LAPACK but a matrix 
library on top of that, so it's wrong to say that netlib-java exceeds the 
goals of jblas, because netlib-java has different goals.
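
    To illustrate what I mean by a matrix library on top of BLAS/LAPACK, here 
is a small sketch using the jblas `DoubleMatrix` and `Solve` classes (the 
native BLAS/LAPACK calls happen behind these methods):

```java
import org.jblas.DoubleMatrix;
import org.jblas.Solve;

public class JblasExample {
    public static void main(String[] args) {
        // A 2x2 matrix built from rows, and a 2-element column vector
        DoubleMatrix a = new DoubleMatrix(new double[][] {{1.0, 2.0}, {3.0, 4.0}});
        DoubleMatrix b = new DoubleMatrix(new double[] {5.0, 6.0});

        DoubleMatrix product = a.mmul(a);    // matrix-matrix multiply (dgemm underneath)
        DoubleMatrix x = Solve.solve(a, b);  // solve A*x = b (LAPACK underneath)

        System.out.println(product);
        System.out.println(x);
    }
}
```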
    
    In my view, the biggest feature of jblas is the way I've packaged the ATLAS 
libraries for Windows, Mac, and Linux, 32-bit and 64-bit, into the jar, so you 
can really just add a maven dependency and get native performance. I don't 
think other libraries have gone through that pain of recompiling ATLAS on all 
those platforms and so on, so I guess jblas is the simplest way to get native 
performance from Java.
    
    It has no support for sparse matrices, however, and I've had relatively 
little time to work on it lately, so I'm not mad if you choose to go another 
way ;)
