One day, Factor's compiler will generate such incomprehensibly fast
code for math.vectors and math.matrices that nobody will even consider
using C or Fortran ever again. In the meantime, you can now call out to
everyone's favorite Fortran fast-math library through a few hopefully
easy-to-use vocabs--check out math.blas.vectors, math.blas.matrices,
and, for purists, math.blas.cblas from git://repo.or.cz/factor/jcg.git .
I've tested with Accelerate.framework on Intel Leopard, and with
refblas and atlas on Intel Linux; presumably it'd also work with
refblas or atlas on Windows or any other platform.
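
For flavor, here's a minimal usage sketch of my own (not from the vocab
docs): convert ordinary nested sequences to BLAS types with
>double-blas-matrix and >double-blas-vector, then use the capitalized
words (M.V here) where you'd use m.v. I'm assuming plain literal arrays
convert the same way as the generated sequences in the benchmark below.

-- 8< --
USING: math.blas.vectors math.blas.matrices prettyprint ;

! ordinary Factor sequences in, BLAS-backed objects out
{ { 1.0 2.0 } { 3.0 4.0 } } >double-blas-matrix
{ 5.0 6.0 } >double-blas-vector
M.V .   ! matrix-vector product computed by BLAS
-- 8< --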

Here's a quick and crude benchmark, multiplying a 4,000 x 4,000 matrix
by a 4,000-element vector, run on my MacBook Pro using
Accelerate.framework:

-- 8< --
USING: float-arrays math.matrices math.blas.vectors
math.blas.matrices tools.time ;

! generic math.matrices m.v on a 4000 x 4000 float matrix
! and a 4000-element float vector
4000 4000 >float-array <array> 4000 >float-array
[ m.v ] time ! 2556 ms, 2592 ms, 2616 ms, 2697 ms, 2684 ms

! the same shapes converted to BLAS types, multiplied with M.V
4000 4000 >array <array> >double-blas-matrix
4000 >double-blas-vector
[ M.V ] time ! 32 ms, 32 ms, 33 ms, 31 ms, 32 ms
-- 8< --

The >double-blas-matrix step takes a while because it's implemented a
bit naively, so the second test still takes a couple of seconds
overall, even though the multiply itself is roughly eighty times faster.
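
If you want to see where the time goes, you can time the conversion by
itself (same setup as above; timings will obviously differ by machine
and BLAS):

-- 8< --
USING: math.blas.matrices tools.time ;

! the naive conversion dominates the second test above
4000 4000 >array <array> [ >double-blas-matrix ] time
-- 8< --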

-Joe
