Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-10-25 Thread Dragan Djuric
WRT their documentation, I do not think it means much to me now, since I do 
not need that library and its functionality. So I guess the integration depends 
on whether
1) it is technically viable
2) someone needs it

I will support whoever wants to take on that task, but I have neither the time 
nor the need to do it myself.

-- 
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to clojure@googlegroups.com
Note that posts from new members are moderated - please be patient with your 
first post.
To unsubscribe from this group, send email to
clojure+unsubscr...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/clojure?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"Clojure" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to clojure+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-10-23 Thread Dragan Djuric
Neanderthal 0.4.0 has just been released with OpenCL-based GPU support and 
pluggable engines.

http://neanderthal.uncomplicate.org
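For readers new to the library, here is a minimal usage sketch. The namespace and function names are taken from the Neanderthal documentation, but exact namespaces and signatures may differ between versions, so treat this as illustrative rather than definitive:

```clojure
(ns example.neanderthal
  (:require [uncomplicate.neanderthal.core :refer [dot axpy mm]]
            [uncomplicate.neanderthal.native :refer [dv dge]]))

;; Double-precision vector and general dense (GE) matrix constructors:
(def x (dv 1 2 3))               ; double vector
(def y (dv 10 20 30))
(def a (dge 2 3 [1 2 3 4 5 6]))  ; 2x3 column-major dense matrix

(dot x y)                     ; BLAS 1 dot product
(axpy 2.0 x y)                ; BLAS 1: scale-and-add, non-destructive variant
(mm a (dge 3 2 (range 1 7)))  ; BLAS 3 matrix multiplication (dgemm)
```

The point of the design is visible here: the names mirror the BLAS routines, so existing BLAS/LAPACK know-how translates directly.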

On Tuesday, June 23, 2015 at 12:39:40 AM UTC+2, Dragan Djuric wrote:
>
> As it is a *sparse matrix* C++ library unavailable on the JVM, I don't 
> consider it relevant for comparison, since these are really apples and 
> pineapples. For now, at least.
>
> On Tue, Jun 23, 2015 at 12:13 AM, A <> wrote:
>
>>
>> Here's another benchmark for comparison: 
>> https://code.google.com/p/redsvd/wiki/English
>>
>> -A
>>
>>
>>
>> On Monday, June 22, 2015 at 12:27:57 PM UTC-7, Dragan Djuric wrote:
>>>
>>> core.matrix claims that it is fast on its project page (with which I 
>>> agree in some cases). From that, and from the last couple of your posts 
>>> in this discussion, I expected that there would be some concrete numbers 
>>> to show, which I can't find.
>>>
>>> My claim to win "ALL benchmarks" (excluding maybe tiny objects) came 
>>> only as a response to Mike's remarks that I had only proven that 
>>> Neanderthal is faster for dgemm etc.
>>>
>>> OK, maybe the point is that other libraries do not care that much about 
>>> speed, or that current speed is enough, or whatever, and I am OK with that. 
>>> I would just like it to be said explicitly, so I do not lose time arguing 
>>> about what is not important. It would also be nice to see some numbers, to 
>>> draw at least a rough picture of what can be expected. I am glad if my 
>>> raising this issue improves the situation, but I do not insist...
>>>
>>> On Monday, June 22, 2015 at 9:16:15 PM UTC+2, Christopher Small wrote:

 Well, we also weren't claiming to win "ALL benchmarks" compared to 
 anything :-)

 But your point is well taken, better benchmarking should be pretty 
 valuable to the community moving forward.

 Chris


 On Mon, Jun 22, 2015 at 12:10 PM, Dragan Djuric  
 wrote:

> So, there are exactly two measurements there: matrix multiplication 
> and vector addition for dimension 100 (which is quite small and should 
> favor vectorz). Here are the results on my machine:
>
> Matrix multiplication results are given on the Neanderthal web site at 
> http://neanderthal.uncomplicate.org/articles/benchmarks.html in much 
> more detail than that, so I won't repeat them here.
>
> Vector addition according to criterium: 124 ns vectorz vs 78 ns 
> Neanderthal on my i7-4790K.
>
> Mind you, the project you pointed to uses rather old library 
> versions; I updated them to the latest. Also, the code does not run 
> properly for either the old or the new versions (it complains about 
> :clatrix), so I had to evaluate it manually in the REPL.
>
> I wonder why you complained that I didn't show more benchmark data 
> for my claims when I had shown much more (and more relevant) data than is 
> available for core.matrix, but I will use the opportunity to appeal to the 
> core.matrix community to improve that.
>
> On Monday, June 22, 2015 at 8:13:29 PM UTC+2, Christopher Small wrote:
>>
>> For benchmarking, there's this: 
>> https://github.com/mikera/core.matrix.benchmark. It's pretty simple 
>> though. It would be nice to see something more robust and composable, 
>> and 
>> with nicer output options. I'll put a little bit of time into that now, 
>> but 
>> again, a bit busy to do as much as I'd like here :-)
>>
>> Chris
>>
>>
>> On Mon, Jun 22, 2015 at 9:14 AM, Dragan Djuric  
>> wrote:
>>
>>>
 As for performance benchmarks, I have to echo Mike that it seemed 
 strange to me that you were claiming you were faster on ALL benchmarks 
 when 
 I'd only seen data on one. Would you mind sharing your full 
 benchmarking 
 analyses?

>>>
>>> I think this might be a very important issue, and I am glad that you 
>>> raised it. Has anyone shared any core.matrix benchmark data? I know about 
>>> a Java benchmark project that includes vectorz, but I think it would help 
>>> core.matrix users to see the actual numbers. One of the main things 
>>> vectorz (and core.matrix) claims is that it is *fast*. Mike seemed a bit 
>>> (pleasantly) surprised when I shared my results for vectorz mmul... 
>>>
>>> So, my proposal would be that you (or anyone else able and willing) 
>>> create a simple Clojure project that simply lists typical core.matrix use 
>>> cases, or just the core procedures in core.matrix that you want to 
>>> measure and that you are interested to see Neanderthal doing. Ready 
>>> criterium infrastructure is cool, but I'm not even asking for that if you 
>>> do not have time. Just a setup with matrix objects and core.matrix

Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-10-23 Thread Gregg Reynolds
I'm glad to see that you and Mike are making a productive dialog out of
what could have gone the other way.  It's a credit to you both.

A simple question wrt documentation, the Great White Whale of open source
software. Suppose core.matrix had insanely great documentation. What
difference would that make regarding your decision to support it or not?
(IOW, if the community were to manage to make good docs, would you change
your mind?)
On Jun 22, 2015 8:34 AM, "Dragan Djuric"  wrote:

> Just a quick addition, if it is still not clear what I mean regarding the
> documentation. Look at this fairly recent question on stack overflow:
>
> http://stackoverflow.com/questions/19982466/matrix-multiplication-in-core-matrix
>
> On Tuesday, January 13, 2015 at 2:13:13 AM UTC+1, Dragan Djuric wrote:
>>
>> I am pleased to announce a first public release of new *very fast *native
>> matrix and linear algebra library for Clojure based on ATLAS BLAS.
>> Extensive *documentation* is at http://neanderthal.uncomplicate.org
>> See the benchmarks at
>> http://neanderthal.uncomplicate.org/articles/benchmarks.html.
>>
>> Neanderthal is a Clojure library for fast native matrix and linear
>> algebra computations.
>>
>> Main project goals are:
>>
>>- Be as fast as native ATLAS even for linear operations, with no
>>copying overhead. It is roughly 2x faster than jBLAS for large matrices,
>>and tens of times faster for small ones. Also faster than core.matrix for
>>small and large matrices!
>>- Fit well into idiomatic Clojure - Clojure programmers should be
>>able to use and understand Neanderthal like any regular Clojure library.
>>- Fit well into numerical computing literature - programmers should
>>be able to reuse existing widespread BLAS and LAPACK programming know-how
>>and easily translate it to Clojure code.
>>
>> Implemented features
>>
>>- Data structures: double vector, double general dense matrix (GE);
>>- BLAS Level 1, 2, and 3 routines;
>>- Various Clojure vector and matrix functions (transpositions,
>>submatrices etc.);
>>- Fast map, reduce and fold implementations for the provided
>>structures.
>>
>> On the TODO list
>>
>>- LAPACK routines;
>>- Banded, symmetric, triangular, and sparse matrices;
>>- Support for complex numbers;
>>- Support for single-precision floats.
>>
>>
>> Call for help:
>> Everything you need for Linux is in Clojars. If you know your way around
>> gcc on OS X, or around gcc and MinGW on Windows, and you are willing to
>> help providing the binary builds for those (or other) systems, please
>> contact me. There is an automatic build script, but gcc, atlas and other
>> build tools need to be properly set up on those systems.
>>



Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-22 Thread Christopher Small
Just a couple of comments:


As I was typing, Mars0i's message came in with much of what I wanted to say
about documentation, but I'll reiterate a couple of key points: You can have
an intuitive API that's poorly documented, just as it's possible to have a
terrible API with great documentation (though certainly it's _easier_ to
document a nice, clean API...). Those two things are relatively orthogonal.
Your comments about documentation came in response to a line of thought about
shortcomings of the core.matrix API, but bad documentation is not actually a
shortcoming of the API proper. And not building around core.matrix because
the API is bad is different from not building around it because the API is
poorly documented (though both are understandable), particularly when the
latter is relatively easy to fix (even after the fact).

However, I have to personally disagree about the state of the
documentation. I think it's actually pretty decent. For example, on the
GitHub page (README) there is a link to a wiki page for implementers that I
think is fairly straightforward. If you go to the protocols namespace (as
suggested in the implementers wiki page), the protocols which _must_ be
implemented are well demarcated, and most of the protocols have pretty good
documentation. There is certainly room for improvement though. The example
of * vs mmul mentioned could have been avoided had
https://github.com/mikera/core.matrix/blob/master/src/main/clojure/clojure/core/matrix/examples.clj
(linked to from the README) more clearly denoted what * does, and added
mmul as well (since I think that's what folks would use more frequently
anyway...). There's probably also room for more thorough/robust high-level
descriptions of key concepts. But I think overall the documentation is far
from _bad_. That's just my opinion though; if other people are having
trouble with the docs, there may be more room for improvement than I
realize.
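The * vs mmul confusion mentioned above can be shown in a few lines. This is a sketch using the documented core.matrix API, with plain nested vectors as the matrix representation:

```clojure
(ns example.mul
  (:require [clojure.core.matrix :as m]
            [clojure.core.matrix.operators :as op]))

(def a [[1 2] [3 4]])
(def b [[5 6] [7 8]])

;; op/* is the element-wise (Hadamard) product:
(op/* a b)    ;; => [[5 12] [21 32]]

;; mmul is the linear-algebra matrix product:
(m/mmul a b)  ;; => [[19 22] [43 50]]
```

This is exactly the distinction the Stack Overflow question linked earlier in the thread tripped over.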


Dragan, you point out that in the Clojure community there are dozens of
Clojure libraries for SQL, http, visualization, etc, and all have their
place. That's certainly true, but you're comparing kittens to koalas here.
Matrices/arrays are *data types*, while the things you mention are
libraries that do things with already standardized data types. Something
common to every single Clojure SQL library out there is that it's likely
going to return a seq of vectors or maps; in either case, these are the
data types around which vast swaths of existing Clojure code have
already been built. So there's less need for standardization here. The
situation around matrix/array programming is very different; here you have
tightly coupled abstract data types and functionality/operations, and there
is a lot that could be built up around them and which benefits from a
standardization. I've talked with folks at some of the big(gish) Clojure
companies doing ML, and they love core.matrix for this very reason.

A perfect example of something similar is graphs; graphs are data types
which are tightly coupled with a number of operations, and loom is a
project which has been around for a while to try and unify an API around
these data types and operations. When Mark Engelberg realized that loom
didn't fit all his needs (which had to do with more robust/general graph
types and operations; much more extensive than just beefing up performance
on a subset of the functionality...), he didn't go and create his own
thing. Well... he did... but he did it in such a way that his constructions
also implemented the loom protocols, so what you wrote would still be
relatively interoperable with loom code. This is a great example because
even though his approach is much more general and I think should/could be
*THE* graph api, he still was able to make things as compatible as he
possibly could. This is the beauty of Clojure at its greatest.
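The loom pattern described above can be sketched roughly like this. The protocol and method names here are abridged from loom.graph and should be treated as illustrative, not a complete implementation:

```clojure
(ns example.graph
  (:require [loom.graph :as lg]))

;; A custom graph type stored as an adjacency map, e.g. {:a #{:b} :b #{}}.
(defrecord AdjGraph [adj])

;; Extending loom's Graph protocol makes the custom type interoperable
;; with any loom algorithm that only needs these operations.
(extend-type AdjGraph
  lg/Graph
  (nodes [g] (set (keys (:adj g))))
  (edges [g] (for [[n succs] (:adj g), s succs] [n s]))
  (has-node? [g n] (contains? (:adj g) n))
  (has-edge? [g n1 n2] (contains? (get (:adj g) n1 #{}) n2))
  (successors* [g n] (get (:adj g) n #{}))
  (out-degree [g n] (count (get (:adj g) n #{})))
  (out-edges [g n] (for [s (get (:adj g) n #{})] [n s])))
```

This is the same move being asked of matrix libraries: implement the shared protocols, and existing code keeps working regardless of the underlying representation.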


As for complex matrices, there's already a project underway for this, with
a rather elegant approach that would allow for any underlying
implementation to be used: https://github.com/mikera/core.matrix.complex.
Of course, a native implementation that supported complex types would be
lovely, and could potentially be a lot faster than doing things this way,
but it's not true that there's no complex story.


As for performance benchmarks, I have to echo Mike that it seemed strange
to me that you were claiming you were faster on ALL benchmarks when I'd
only seen data on one. Would you mind sharing your full benchmarking
analyses?


With all that out of the way... I'm glad that you're willing to play ball
here with the core.matrix community, and thank you for what I think has
been a very productive discussion. I think we all went from talking _past_
each other, to understanding what the issues are and can now hopefully
start moving forward and making things happen. While I think we'd all love
to have you (Dragan) personally working on the core.matrix implementations,
I agree with Mars0i that just 

Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-22 Thread Dragan Djuric



 As for performance benchmarks, I have to echo Mike that it seemed strange 
 to me that you were claiming you were faster on ALL benchmarks when I'd 
 only seen data on one. Would you mind sharing your full benchmarking 
 analyses?


I think this might be a very important issue, and I am glad that you raised 
it. Has anyone shared any core.matrix benchmark data? I know about a Java 
benchmark project that includes vectorz, but I think it would help 
core.matrix users to see the actual numbers. One of the main things vectorz 
(and core.matrix) claims is that it is *fast*. Mike seemed a bit (pleasantly) 
surprised when I shared my results for vectorz mmul... 

So, my proposal would be that you (or anyone else able and willing) create 
a simple Clojure project that simply lists typical core.matrix use cases, 
or just the core procedures in core.matrix that you want to measure 
and that you are interested to see Neanderthal doing. Ready criterium 
infrastructure is cool, but I'm not even asking for that if you do not have 
time. Just a setup with matrix objects and core.matrix function calls that 
you want measured. Share your numbers and that project on GitHub and I will 
contribute comparative code for Neanderthal benchmarks, and results for 
both codebases run on my machine. Of course, those would be micro benchmarks, 
but useful anyway for you, one Neanderthal user (me :) and for all 
core.matrix users.

You interested?
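To make the proposal concrete, here is a sketch of what one entry in such a benchmark project might look like. The project layout and helper name are hypothetical; quick-bench is criterium's fast benchmarking macro:

```clojure
(ns bench.core
  (:require [criterium.core :refer [quick-bench]]
            [clojure.core.matrix :as m]))

;; One hypothetical entry in the proposed suite: time mmul of two n x n
;; random matrices under a chosen core.matrix implementation. A Neanderthal
;; counterpart would run the same shapes through its own API.
(defn bench-mmul [impl n]
  (m/set-current-implementation impl)
  (let [a (m/matrix (repeatedly n (fn [] (repeatedly n rand))))
        b (m/matrix (repeatedly n (fn [] (repeatedly n rand))))]
    (quick-bench (m/mmul a b))))

(comment
  ;; e.g. the dimension-100 case discussed in this thread:
  (bench-mmul :vectorz 100))
```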

With all that out of the way... I'm glad that you're willing to play ball 
 here with the core.matrix community, and thank you for what I think has 
 been a very productive discussion. I think we all went from talking _past_ 
 each other, to understanding what the issues are and can now hopefully 
 start moving forward and making things happen. While I think we'd all love 
 to have you (Dragan) personally working on the core.matrix implementations, 
 I agree with Mars0i that just having you agree to work-with/advise others 
 who would do the actual work is great. I'd personally love to take that on 
 myself, but I already have about a half dozen side projects I'm working on 
 which I barely have time for. Oh, and a four month old baby :scream:! So if 
 there's anyone else who's willing, I may leave it to them :-)


I'm also glad we understand each other better now :) 



Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-22 Thread Dragan Djuric


 (For the record, I don't think it's fair to criticize core.matrix as not
 being an API because the documentation is limited. The API is in the
 protocols, etc.)


The problem with that view (for me, anyway) is that only a portion of an
API can be captured with protocols, and that is the easier portion.
But, hey, I argued too much about that already. I'll let the core.matrix
developers think about whether fixing these issues is worth their time.
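To make the point concrete, here is a minimal sketch with a hypothetical protocol (not from core.matrix): a defprotocol pins down names, arities, and docstrings, while the harder parts of an API contract, such as mutation, error behavior, and performance, remain in prose:

```clojure
;; A protocol fixes only names, arities, and docstrings.
(defprotocol PMatrixMultiply
  (mmul* [a b] "Matrix product of a and b."))

;; Two implementations can both satisfy the protocol while disagreeing on
;; everything the protocol cannot express: whether arguments may be mutated,
;; what happens on a shape mismatch, numeric precision, or expected cost.
(defrecord CopyingImpl []
  PMatrixMultiply
  (mmul* [_ b] b))   ; placeholder: always returns fresh data

(defrecord MutatingImpl []
  PMatrixMultiply
  (mmul* [_ b] b))   ; placeholder: could overwrite b in place; same signature
```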



Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-22 Thread Dragan Djuric
core.matrix claims that it is fast on its project page (with which I agree 
in some cases). From that, and from the last couple of your posts in this 
discussion, I expected that there would be some concrete numbers to show, 
which I can't find.

My claim to win ALL benchmarks (excluding maybe tiny objects) came only 
as a response to Mike's remarks that I had only proven that Neanderthal is 
faster for dgemm etc.

OK, maybe the point is that other libraries do not care that much about 
speed, or that current speed is enough, or whatever, and I am OK with that. 
I would just like it to be said explicitly, so I do not lose time arguing 
about what is not important. It would also be nice to see some numbers, to 
draw at least a rough picture of what can be expected. I am glad if my 
raising this issue improves the situation, but I do not insist...

On Monday, June 22, 2015 at 9:16:15 PM UTC+2, Christopher Small wrote:

 Well, we also weren't claiming to win ALL benchmarks compared to 
 anything :-)

 But your point is well taken, better benchmarking should be pretty 
 valuable to the community moving forward.

 Chris


Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-22 Thread A

Here's another benchmark for comparison: 
https://code.google.com/p/redsvd/wiki/English

-A


On Monday, June 22, 2015 at 12:27:57 PM UTC-7, Dragan Djuric wrote:

 core.matrix claims that it is fast on its project page (with which I agree 
 in some cases). From that, and from the last couple of your posts in this 
 discussion, I expected that there would be some concrete numbers to show, 
 which I can't find.

 My claim to win ALL benchmarks (excluding maybe tiny objects) came only 
 as a response to Mike's remarks that I had only proven that Neanderthal is 
 faster for dgemm etc.

 OK, maybe the point is that other libraries do not care that much about 
 speed, or that current speed is enough, or whatever, and I am OK with that. 
 I would just like it to be said explicitly, so I do not lose time arguing 
 about what is not important. It would also be nice to see some numbers, to 
 draw at least a rough picture of what can be expected. I am glad if my 
 raising this issue improves the situation, but I do not insist...


Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-22 Thread Dragan Djuric
As it is a *sparse matrix* C++ library unavailable on the JVM, I don't
consider it relevant for comparison, since these are really apples and
pineapples. For now, at least.

On Tue, Jun 23, 2015 at 12:13 AM, A aael...@gmail.com wrote:


 Here's another benchmark for comparison:
 https://code.google.com/p/redsvd/wiki/English

 -A



 On Monday, June 22, 2015 at 12:27:57 PM UTC-7, Dragan Djuric wrote:

 core.matrix claims that it is fast on its project page (with which I
 agree in some cases). I expected from that, and from the last couple of
 your posts in this discussion, that there are some concrete numbers to
 show, which I can't find.

 My claim to win ALL benchmarks (excluding maybe tiny objects) came only
 as a response to mike's remarks that I have only proven that neanderthal is
 faster for dgemm etc.

 OK, maybe the point is that other libraries do not care that much about
 speed, or that current speed is enough, or whatever, and I am ok with that.
 I would just like it to be explicitly said, so I do not lose time arguing
 about what is not important. Or it would be nice to see some numbers shown
 to draw at least rough picture of what can be expected. I am glad if my
 raising this issue would improve the situation, but I do not insist...

 On Monday, June 22, 2015 at 9:16:15 PM UTC+2, Christopher Small wrote:

 Well, we also weren't claiming to win ALL benchmarks compared to
 anything :-)

 But your point is well taken, better benchmarking should be pretty
 valuable to the community moving forward.

 Chris


 On Mon, Jun 22, 2015 at 12:10 PM, Dragan Djuric drag...@gmail.com
 wrote:

 So, there are exactly two measurements there: matrix multiplication and
 vector addition for dimension 100 (which is quite small and should favor
 vectorz). Here are the results on my machine:

 Matrix multiplications are given at the neanderthal web site at
 http://neanderthal.uncomplicate.org/articles/benchmarks.html in much
 more detail than that, so I won't repeat them here.

 Vector addition according to criterium: 124ns vectorz vs 78ns
 neanderthal on my i7 4790k

 Mind you, the project you pointed to uses rather old library versions.
 I updated them to the latest versions. Also, the code does not run properly
 for either the old or the new versions (it complains about :clatrix), so I
 had to evaluate it manually in the REPL.

 I wonder why you complained that I didn't show more benchmark data
 about my claims when I had shown much more (and more relevant) data than is
 available for core.matrix, but I will use the opportunity to appeal to the
 core.matrix community to improve that.

 On Monday, June 22, 2015 at 8:13:29 PM UTC+2, Christopher Small wrote:

 For benchmarking, there's this:
 https://github.com/mikera/core.matrix.benchmark. It's pretty simple
 though. It would be nice to see something more robust and composable, and
 with nicer output options. I'll put a little bit of time into that now, 
 but
 again, a bit busy to do as much as I'd like here :-)

 Chris


 On Mon, Jun 22, 2015 at 9:14 AM, Dragan Djuric drag...@gmail.com
 wrote:


 As for performance benchmarks, I have to echo Mike that it seemed
 strange to me that you were claiming you were faster on ALL benchmarks 
 when
 I'd only seen data on one. Would you mind sharing your full benchmarking
 analyses?


 I think this might be a very important issue, and I am glad that you
 raised it. Has anyone shared any core.matrix (or, to be precise,
 core.matrix) benchmark data? I know about a Java benchmark project that
 includes vectorz, but I think it would help core.matrix users to see the
 actual numbers. One main thing vectorz (and core.matrix) claims is
 that it is *fast*. Mike seemed a bit (pleasantly) surprised when I shared
 my results for vectorz mmul...

 So, my proposal would be that you (or anyone else able and willing)
 create a simple Clojure project that simply lists typical core.matrix use
 cases, or just the core procedures in core.matrix code that you want to
 measure and that you would like to see Neanderthal do. Ready criterium
 infrastructure is cool, but I'm not even asking for that if you do not
 have time. Just a setup with matrix objects and core.matrix function
 calls that you want measured. Share your numbers and that project on GitHub
 and I will contribute comparative code for Neanderthal benchmarks, and
 results for both code bases run on my machine. Of course, those would be
 micro benchmarks, but useful anyway for you, one Neanderthal user (me :)
 and for
 all core.matrix users.

 You interested?

 With all that out of the way... I'm glad that you're willing to play
 ball here with the core.matrix community, and thank you for what I think
 has been a very productive discussion. I think we all went from talking
 _past_ each other, to understanding what the issues are and can now
 hopefully start moving forward and making things happen. While I think 
 we'd
 all love to have you (Dragan) personally working on the core.matrix
 

Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-22 Thread Mikera
Hi Dragan,

The situation as I see it:
- You've created a matrix library that performs well on one benchmark 
(dense matrix multiplication). 
- Neanderthal meets your own personal use cases. Great job!
- Neanderthal *doesn't* fit the use cases of many others (e.g. some need a 
portable pure JVM implementation, so Neanderthal is immediately out)
- Fortunately, in the Clojure world we have a unique way for such libraries 
to interoperate smoothly with a common API (core.matrix)
- Neanderthal could fit nicely in this ecosystem (possibly it could even 
replace Clatrix, which as you note hasn't really been maintained for a 
while...)
- For some strange reason, it *appears to me* that you don't want to 
collaborate. If I perceive wrongly, then I apologise.

If you want to work together with the rest of the community, that's great. 
I'm personally happy to help you make Neanderthal into a great matrix 
implementation that works well with core.matrix. I'm 100% sure that is a 
relatively simple and achievable goal, having done it already with 
vectorz-clj.

If on the other hand your intention is to go your own way and build 
something that is totally independent and incompatible, that is of course 
your right but I think that's a really bad idea and would be detrimental to 
the community as a whole. Fragmentation is a likely result. At worst, 
you'll be stuck maintaining a library with virtually no users (the Clojure 
community is fairly small anyway... and it is pretty lonely to be a 
minority within a minority)

I can see from your comments below that you still don't understand 
core.matrix. I'd be happy to help clarify if you are seriously interested 
in being part of the ecosystem. Ultimately I think you have some talent, 
you have obviously put in a decent amount of work and Neanderthal could be 
a great library *if and only if* it works well with the rest of the 
ecosystem and you are personally willing to collaborate. 

Your call.

On Monday, 22 June 2015 10:05:15 UTC+1, Dragan Djuric wrote:



 On Monday, June 22, 2015 at 2:02:19 AM UTC+2, Mikera wrote:


 There is nothing fundamentally wrong with BLAS/LAPACK, it just isn't 
 suitable as a general purpose array programming API. See my comments 
 further below.


 I was discussing it from the *matrix API* perspective. My comments follow:
  

 If you think the core.matrix API is unintuitive and complicated then 
 I'd love to hear specific examples. We're still open to changing things 
 before we hit 1.0


 I will only give a couple of basic ones, but I think they draw a bigger 
 picture. Let's say I am a Clojure programmer with no huge experience in 
 numerical computing. I do have some knowledge about linear algebra and have 
 a textbook or a paper with an algorithm that I need, which is based on some 
 linear algebra operations. I'd say that this is the most common use case 
 for an API such as core.matrix, and I hope you agree. After trying to write 
 my own loops and recursion and failing to do it well, I shop around and find 
 core.matrix with its cool proposal: a lot of numerical stuff in Clojure, 
 with pluggable implementations. Yahooo! My problem is almost solved. Go to 
 the main work right away:

 1. I add it to my project and try the + example from the github page. It 
 works.
 2. Now I start implementing my algorithm. How to add-and-multiply a few 
 matrices? THERE IS NO API DOC. I have to google and find 
 https://github.com/mikera/core.matrix/wiki/Vectors-vs.-matrices so I 
 guess it's mmul, but there is a lot of talk of some loosely related 
 implementation details. Column matrices, slices, ndarrays... What? A lot of 
 implementation-dependent info, almost no info on what I need (API).
 3. I read the mailing list and the source code, and, if I manage to filter 
 API information from a lot of implementation discussion, I can draw a 
 rough sketch of what I need (API).
 4. I implement my algorithm with the default implementation (vectorz) and 
 it works. I measure the performance, and as soon as the data size becomes a 
 little more serious, it's too slow. No problem - pluggable implementations 
 are here. Surely that Clatrix thing must be blazingly fast, it's native. I 
 switch the implementations in no time, and get even poorer performance. 
 WHAT?
 5. I try to find help on the mailing list. I was using the implementation 
 in a wrong way. WHY? It was all right with vectorz! Well, we didn't quite 
 implement it fully. A lot of functions are fallbacks. The implementation 
 is not suitable for that particular call... Seriously? It's featured on the 
 front page!
 6. But, what is the right way to use it? I want to learn. THERE IS NO 
 INFO. But, look at this, you can treat a Clojure vector as a quaternion and 
 multiply it with a JSON hash-map, which is treated as a matrix of 
 characters (OK, I am exaggerating, but not that much :)
 etc, etc... 
  

But it certainly isn't arbitrarily invented. Please note that we have 
 collectively considered 

Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-22 Thread Dragan Djuric


On Monday, June 22, 2015 at 2:02:19 AM UTC+2, Mikera wrote:


 There is nothing fundamentally wrong with BLAS/LAPACK, it just isn't 
 suitable as a general purpose array programming API. See my comments 
 further below.


I was discussing it from the *matrix API* perspective. My comments follow:
 

 If you think the core.matrix API is unintuitive and complicated then I'd 
 love to hear specific examples. We're still open to changing things before 
 we hit 1.0


I will only give a couple of basic ones, but I think they draw a bigger 
picture. Let's say I am a Clojure programmer with no huge experience in 
numerical computing. I do have some knowledge about linear algebra and have 
a textbook or a paper with an algorithm that I need, which is based on some 
linear algebra operations. I'd say that this is the most common use case 
for an API such as core.matrix, and I hope you agree. After trying to write 
my own loops and recursion and failing to do it well, I shop around and find 
core.matrix with its cool proposal: a lot of numerical stuff in Clojure, 
with pluggable implementations. Yahooo! My problem is almost solved. Go to 
the main work right away:

1. I add it to my project and try the + example from the github page. It 
works.
2. Now I start implementing my algorithm. How to add-and-multiply a few 
matrices? THERE IS NO API DOC. I have to google and find 
https://github.com/mikera/core.matrix/wiki/Vectors-vs.-matrices so I guess 
it's mmul, but there is a lot of talk of some loosely related 
implementation details. Column matrices, slices, ndarrays... What? A lot of 
implementation-dependent info, almost no info on what I need (API).
3. I read the mailing list and the source code, and, if I manage to filter 
API information from a lot of implementation discussion, I can draw a 
rough sketch of what I need (API).
4. I implement my algorithm with the default implementation (vectorz) and 
it works. I measure the performance, and as soon as the data size becomes a 
little more serious, it's too slow. No problem - pluggable implementations 
are here. Surely that Clatrix thing must be blazingly fast, it's native. I 
switch the implementations in no time, and get even poorer performance. 
WHAT?
5. I try to find help on the mailing list. I was using the implementation 
in a wrong way. WHY? It was all right with vectorz! Well, we didn't quite 
implement it fully. A lot of functions are fallbacks. The implementation 
is not suitable for that particular call... Seriously? It's featured on the 
front page!
6. But, what is the right way to use it? I want to learn. THERE IS NO INFO. 
But, look at this, you can treat a Clojure vector as a quaternion and 
multiply it with a JSON hash-map, which is treated as a matrix of 
characters (OK, I am exaggerating, but not that much :)
etc, etc... 
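
For reference, the add-and-multiply step from point 2 above looks roughly
like this in core.matrix (a minimal sketch; the setup assumes vectorz-clj
is on the classpath, and the exact printed representation depends on the
chosen implementation):

```clojure
(require '[clojure.core.matrix :as m])

;; pick the pure-JVM vectorz implementation explicitly
(m/set-current-implementation :vectorz)

(def a (m/matrix [[1 2] [3 4]]))
(def b (m/matrix [[5 6] [7 8]]))

;; mmul is matrix multiplication, add is element-wise addition
(m/add (m/mmul a b) a)
;; => a matrix equal to [[20 24] [46 54]]
```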

But it certainly isn't arbitrarily invented. Please note that we have 
 collectively considered a *lot* of previous work in the development of 
 core.matrix. People involved in the design have had experience with BLAS, 
 Fortran, NumPy, R, APL, numerous Java libraries, GPU acceleration, low 
 level assembly coding etc. We'd welcome your contributions too but I 
 hope you will first take the time to read the mailing list history etc. and 
 gain an appreciation for the design decisions.


I read lots of those discussions before. I may or may not agree, fully or 
partially, with what was written, but I see that the result is far from what 
I find recommended in the numerical computing literature that I read, and I 
do not see the core.matrix implementations prove that literature wrong.

 


 In my opinion, the best way to create a standard API is to grow it from 
 successful implementations, instead of writing it first, and then 
 shoehorning the implementations to fit it.


 It is (comparatively) easy to write an API for a specific implementation 
 that supports a few specific operations and/or meets a specific use case. 
 The original Clatrix is an example of one such library.


Can you point me to some of the implementations where switching the 
implementation of an algorithm from vectorz to clatrix shows a performance 
boost?
And, easy? Surely then the Clatrix implementation would have been fully 
implemented and properly supported (and documented) in the 2-3 years since 
it was included?
 

 But that soon falls apart when you realise that the API+implementation 
 doesn't meet  broader requirements, so you quickly get fragmentation e.g.
 - someone else creates a pure-JVM API for those who can't use native code 
 (e.g. vectorz-clj)


So, what is wrong with that? There are dozens of Clojure libraries for SQL, 
HTTP, visualization, etc., and all have their place.
 

 - someone else produces a similar library with a new API that wins on some 
 benchmarks (e.g. Neanderthal)


I get your point, but would just note that Neanderthal wins *ALL* benchmarks 
(that fit the use cases that I need). Not because it is something too clever, 
but because it stands on 

Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-22 Thread Christopher Small
For benchmarking, there's this:
https://github.com/mikera/core.matrix.benchmark. It's pretty simple though.
It would be nice to see something more robust and composable, and with
nicer output options. I'll put a little bit of time into that now, but
again, a bit busy to do as much as I'd like here :-)

Chris


On Mon, Jun 22, 2015 at 9:14 AM, Dragan Djuric draga...@gmail.com wrote:


 As for performance benchmarks, I have to echo Mike that it seemed strange
 to me that you were claiming you were faster on ALL benchmarks when I'd
 only seen data on one. Would you mind sharing your full benchmarking
 analyses?


 I think this might be a very important issue, and I am glad that you
 raised it. Has anyone shared any core.matrix (or, to be precise,
 core.matrix) benchmark data? I know about a Java benchmark project that
 includes vectorz, but I think it would help core.matrix users to see the
 actual numbers. One main thing vectorz (and core.matrix) claims is
 that it is *fast*. Mike seemed a bit (pleasantly) surprised when I shared
 my results for vectorz mmul...

 So, my proposal would be that you (or anyone else able and willing) create
 a simple Clojure project that simply lists typical core.matrix use cases,
 or just the core procedures in core.matrix code that you want to measure
 and that you would like to see Neanderthal do. Ready criterium
 infrastructure is cool, but I'm not even asking for that if you do not have
 time. Just a setup with matrix objects and core.matrix function calls that
 you want measured. Share your numbers and that project on GitHub and I will
 contribute comparative code for Neanderthal benchmarks, and results for
 both code bases run on my machine. Of course, those would be micro
 benchmarks, but useful anyway for you, one Neanderthal user (me :) and for
 all core.matrix users.

 You interested?

 With all that out of the way... I'm glad that you're willing to play ball
 here with the core.matrix community, and thank you for what I think has
 been a very productive discussion. I think we all went from talking _past_
 each other, to understanding what the issues are and can now hopefully
 start moving forward and making things happen. While I think we'd all love
 to have you (Dragan) personally working on the core.matrix implementations,
 I agree with Mars0i that just having you agree to work-with/advise others
 who would do the actual work is great. I'd personally love to take that on
 myself, but I already have about a half dozen side projects I'm working on
 which I barely have time for. Oh, and a four month old baby :scream:! So if
 there's anyone else who's willing, I may leave it to them :-)


 I'm also glad we understand each other better now :)

 --
 You received this message because you are subscribed to the Google
 Groups Clojure group.
 To post to this group, send email to clojure@googlegroups.com
 Note that posts from new members are moderated - please be patient with
 your first post.
 To unsubscribe from this group, send email to
 clojure+unsubscr...@googlegroups.com
 For more options, visit this group at
 http://groups.google.com/group/clojure?hl=en
 ---
 You received this message because you are subscribed to a topic in the
 Google Groups Clojure group.
 To unsubscribe from this topic, visit
 https://groups.google.com/d/topic/clojure/dFPOOw8pSGI/unsubscribe.
 To unsubscribe from this group and all its topics, send an email to
 clojure+unsubscr...@googlegroups.com.
 For more options, visit https://groups.google.com/d/optout.




Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-22 Thread Dragan Djuric
So, there are exactly two measurements there: matrix multiplication and 
vector addition for dimension 100 (which is quite small and should favor 
vectorz). Here are the results on my machine:

Matrix multiplications are given at the neanderthal web site 
at http://neanderthal.uncomplicate.org/articles/benchmarks.html in much 
more detail than that, so I won't repeat them here.

Vector addition according to criterium: 124ns vectorz vs 78ns neanderthal 
on my i7 4790k
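
The measurement above can be reproduced with something along these lines (a
sketch, not verified against these exact library versions; `dv` and `xpy`
are taken from Neanderthal's published API, and exact namespaces may differ
between versions):

```clojure
(require '[criterium.core :refer [quick-bench]]
         '[clojure.core.matrix :as m]
         '[uncomplicate.neanderthal.core :refer [xpy]]
         '[uncomplicate.neanderthal.native :refer [dv]])

;; vectorz via core.matrix, dimension 100
(m/set-current-implementation :vectorz)
(def va (m/array (range 100)))
(def vb (m/array (range 100)))
(quick-bench (m/add va vb))   ;; ~124 ns in the run quoted above

;; Neanderthal native double vectors of the same dimension
(def nx (dv (range 100)))
(def ny (dv (range 100)))
(quick-bench (xpy nx ny))     ;; ~78 ns on the same machine
```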

Mind you, the project you pointed to uses rather old library versions. I 
updated them to the latest versions. Also, the code does not run properly 
for either the old or the new versions (it complains about :clatrix), so I 
had to evaluate it manually in the REPL.

I wonder why you complained that I didn't show more benchmark data about my 
claims when I had shown much more (and more relevant) data than is available 
for core.matrix, but I will use the opportunity to appeal to the core.matrix 
community to improve that.

On Monday, June 22, 2015 at 8:13:29 PM UTC+2, Christopher Small wrote:

 For benchmarking, there's this: 
 https://github.com/mikera/core.matrix.benchmark. It's pretty simple 
 though. It would be nice to see something more robust and composable, and 
 with nicer output options. I'll put a little bit of time into that now, but 
 again, a bit busy to do as much as I'd like here :-)

 Chris


 On Mon, Jun 22, 2015 at 9:14 AM, Dragan Djuric drag...@gmail.com wrote:


 As for performance benchmarks, I have to echo Mike that it seemed 
 strange to me that you were claiming you were faster on ALL benchmarks when 
 I'd only seen data on one. Would you mind sharing your full benchmarking 
 analyses?


 I think this might be a very important issue, and I am glad that you 
 raised it. Has anyone shared any core.matrix (or, to be precise, 
 core.matrix) benchmark data? I know about a Java benchmark project that 
 includes vectorz, but I think it would help core.matrix users to see the 
 actual numbers. One main thing vectorz (and core.matrix) claims is 
 that it is *fast*. Mike seemed a bit (pleasantly) surprised when I shared 
 my results for vectorz mmul... 

 So, my proposal would be that you (or anyone else able and willing) 
 create a simple Clojure project that simply lists typical core.matrix use 
 cases, or just the core procedures in core.matrix code that you want to 
 measure and that you would like to see Neanderthal do. Ready 
 criterium infrastructure is cool, but I'm not even asking for that if you do 
 not have time. Just a setup with matrix objects and core.matrix function 
 calls that you want measured. Share your numbers and that project on GitHub 
 and I will contribute comparative code for Neanderthal benchmarks, and 
 results for both code bases run on my machine. Of course, those would be micro 
 benchmarks, but useful anyway for you, one Neanderthal user (me :) and for 
 all core.matrix users.

 You interested?

 With all that out of the way... I'm glad that you're willing to play ball 
 here with the core.matrix community, and thank you for what I think has 
 been a very productive discussion. I think we all went from talking _past_ 
 each other, to understanding what the issues are and can now hopefully 
 start moving forward and making things happen. While I think we'd all love 
 to have you (Dragan) personally working on the core.matrix implementations, 
 I agree with Mars0i that just having you agree to work-with/advise others 
 who would do the actual work is great. I'd personally love to take that on 
 myself, but I already have about a half dozen side projects I'm working on 
 which I barely have time for. Oh, and a four month old baby :scream:! So if 
 there's anyone else who's willing, I may leave it to them :-)


 I'm also glad we understand each other better now :) 






Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-22 Thread Christopher Small
Well, we also weren't claiming to win ALL benchmarks compared to anything
:-)

But your point is well taken, better benchmarking should be pretty valuable
to the community moving forward.

Chris


On Mon, Jun 22, 2015 at 12:10 PM, Dragan Djuric draga...@gmail.com wrote:

 So, there are exactly two measurements there: matrix multiplication and
 vector addition for dimension 100 (which is quite small and should favor
 vectorz). Here are the results on my machine:

 Matrix multiplications are given at the neanderthal web site at
 http://neanderthal.uncomplicate.org/articles/benchmarks.html in much more
 detail than that, so I won't repeat them here.

 Vector addition according to criterium: 124ns vectorz vs 78ns neanderthal
 on my i7 4790k

 Mind you, the project you pointed to uses rather old library versions. I
 updated them to the latest versions. Also, the code does not run properly
 for either the old or the new versions (it complains about :clatrix), so I
 had to evaluate it manually in the REPL.

 I wonder why you complained that I didn't show more benchmark data about
 my claims when I had shown much more (and more relevant) data than is
 available for core.matrix, but I will use the opportunity to appeal to the
 core.matrix community to improve that.

 On Monday, June 22, 2015 at 8:13:29 PM UTC+2, Christopher Small wrote:

 For benchmarking, there's this:
 https://github.com/mikera/core.matrix.benchmark. It's pretty simple
 though. It would be nice to see something more robust and composable, and
 with nicer output options. I'll put a little bit of time into that now, but
 again, a bit busy to do as much as I'd like here :-)

 Chris


 On Mon, Jun 22, 2015 at 9:14 AM, Dragan Djuric drag...@gmail.com wrote:


 As for performance benchmarks, I have to echo Mike that it seemed
 strange to me that you were claiming you were faster on ALL benchmarks when
 I'd only seen data on one. Would you mind sharing your full benchmarking
 analyses?


 I think this might be a very important issue, and I am glad that you
 raised it. Has anyone shared any core.matrix (or, to be precise,
 core.matrix) benchmark data? I know about a Java benchmark project that
 includes vectorz, but I think it would help core.matrix users to see the
 actual numbers. One main thing vectorz (and core.matrix) claims is
 that it is *fast*. Mike seemed a bit (pleasantly) surprised when I shared
 my results for vectorz mmul...

 So, my proposal would be that you (or anyone else able and willing)
 create a simple Clojure project that simply lists typical core.matrix use
 cases, or just the core procedures in core.matrix code that you want to
 measure and that you would like to see Neanderthal do. Ready
 criterium infrastructure is cool, but I'm not even asking for that if you do
 not have time. Just a setup with matrix objects and core.matrix function
 calls that you want measured. Share your numbers and that project on GitHub
 and I will contribute comparative code for Neanderthal benchmarks, and
 results for both code bases run on my machine. Of course, those would be
 micro benchmarks, but useful anyway for you, one Neanderthal user (me :)
 and for all core.matrix users.

 You interested?

 With all that out of the way... I'm glad that you're willing to play
 ball here with the core.matrix community, and thank you for what I think
 has been a very productive discussion. I think we all went from talking
 _past_ each other, to understanding what the issues are and can now
 hopefully start moving forward and making things happen. While I think we'd
 all love to have you (Dragan) personally working on the core.matrix
 implementations, I agree with Mars0i that just having you agree to
 work-with/advise others who would do the actual work is great. I'd
 personally love to take that on myself, but I already have about a half
 dozen side projects I'm working on which I barely have time for. Oh, and a
 four month old baby :scream:! So if there's anyone else who's willing, I
 may leave it to them :-)


 I'm also glad we understand each other better now :)


Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-22 Thread Dragan Djuric
Hi Mike,

On Monday, June 22, 2015 at 1:56:46 PM UTC+2, Mikera wrote:

 Hi Dragan,

 The situation as I see it:
 - You've created a matrix library that performs well on one benchmark 
 (dense matrix multiplication).

 
It performs well on all benchmarks; even BLAS 1 vector stuff is faster than 
primitive arrays and vectorz for anything but very small vectors. I just 
didn't publish all the benchmarks that I've been running continuously, 
because dgemm is the most telling.
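
For the BLAS 1 claim, a primitive-array baseline is easy to set up for
comparison (again a sketch under stated assumptions; `dot` is Neanderthal's
dot-product function, and the namespaces may differ by version):

```clojure
(require '[criterium.core :refer [quick-bench]]
         '[uncomplicate.neanderthal.core :refer [dot]]
         '[uncomplicate.neanderthal.native :refer [dv]])

;; dot product over primitive double arrays: the plain JVM baseline
(defn array-dot ^double [^doubles xs ^doubles ys]
  (let [n (alength xs)]
    (loop [i 0 acc 0.0]
      (if (< i n)
        (recur (inc i) (+ acc (* (aget xs i) (aget ys i))))
        acc))))

(def xs (double-array (range 10000)))
(def ys (double-array (range 10000)))
(quick-bench (array-dot xs ys))

;; the same dot product through Neanderthal's native BLAS engine
(def nx (dv (range 10000)))
(def ny (dv (range 10000)))
(quick-bench (dot nx ny))
```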
 

 - Neanderthal meets your own personal use cases. Great job!
 - Neanderthal *doesn't* fit the use cases of many others (e.g. some need a 
 portable pure JVM implementation, so Neanderthal is immediately out)
 - Fortunately, in the Clojure world we have a unique way for such 
 libraries to interoperate smoothly with a common API (core.matrix)


And I've already made it clear that I am happy about that. Please do 
continue to use what works for your needs.
 

 - Neanderthal could fit nicely in this ecosystem (possibly it could even 
 replace Clatrix, which as you note hasn't really been maintained for a 
 while...)
 - For some strange reason, it *appears to me* that you don't want to 
 collaborate. If I perceive wrongly, then I apologise.


I would like to collaborate. I would like to offer help to anyone who needs 
core.matrix integration and wants to build and maintain it. I do not have 
the time, resources, knowledge, or need to do that work myself. 
 

 If you want to work together with the rest of the community, that's great. 
 I'm personally happy to help you make Neanderthal into a great matrix 
 implementation that works well with core.matrix. I'm 100% sure that is a 
 relatively simple and achievable goal, having done it already with 
 vectorz-clj 


Thank you for the willingness to help me do that work; I will definitely 
ask for it if/when I need to do it. I hope someone who might need that 
integration will step in and do the work, and we can both help them with 
that.
 

 If on the other hand your intention is to go your own way and build 
 something that is totally independent and incompatible, that is of course 
 your right but I think that's a really bad idea and would be detrimental to 
 the community as a whole. 


Sorry, but I fail to see how something that is open and free and solves the 
problems of some but not all potential users can be detrimental, but I 
respect your opinion.
 

 Fragmentation is a likely result. At worst, you'll be stuck maintaining a 
 library with virtually no users (the Clojure community is fairly small 
 anyway... and it is pretty lonely to be a minority within a minority)


What you see as fragmentation I see as open choice. Neanderthal cannot have 
virtually no users when there is *at least one happy user* - myself, who is, 
incidentally, the most important user for me. It is not as if I am a 
salesman hustling for money. I created a library to satisfy my needs, which 
were unsatisfied by the existing offerings, and I released that library to 
anyone for free. If anyone feels it's not right for them, they can step in 
and adapt it or use something else. That's how open source works. 
 


 I can see from your comments below that you still don't understand 
 core.matrix. I'd be happy to help clarify if you are seriously interested 
 in being part of the ecosystem. 


Yes, apparently I don't understand it. And this may be a major point in all 
this discussion. If I, a pretty informed and experienced user, do not 
understand an API that aspires to be THE API for a domain that has not yet 
been unified in any language on any platform, how are casual users going to 
understand it? I would like the clarification, not only for me, but for all 
the people who might need it in the future.
 

 Ultimately I think you have some talent, you have obviously put in a 
 decent amount of work and Neanderthal could be a great library *if and only 
 if* it works well with the rest of the ecosystem and you are personally 
 willing to collaborate. 

 Your call.


I am willing to help, and to contribute, and all that, but how to RTFM when 
THERE IS NO TFM to read? The call should be on core.matrix people to 
explain the idea clearly and to document the whole thing, so anyone who is 
interested could try to understand it and see whether he/she is able to 
help. The current state of the documentation is far from useful, and only 
the people who designed it can do something about that.

-- 
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to clojure@googlegroups.com
Note that posts from new members are moderated - please be patient with your 
first post.
To unsubscribe from this group, send email to
clojure+unsubscr...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/clojure?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"Clojure" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to clojure+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-22 Thread Dragan Djuric
Just a quick addition, if it is still not clear what I mean regarding the 
documentation. Look at this fairly recent question on stack overflow:
http://stackoverflow.com/questions/19982466/matrix-multiplication-in-core-matrix
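
The linked question is about exactly this kind of discoverability: core.matrix 
distinguishes matrix multiplication (`mmul`) from elementwise multiplication 
(`mul`), and that distinction is easy to miss without documentation. A minimal 
illustration, assuming core.matrix is on the classpath:

```clojure
(require '[clojure.core.matrix :as m])

;; mmul is the matrix (inner) product; mul is the elementwise
;; (Hadamard) product. Confusing the two is the trap behind the
;; Stack Overflow question above.
(m/mmul [[1 2] [3 4]] [[5 6] [7 8]])   ; => [[19 22] [43 50]]
(m/mul  [[1 2] [3 4]] [[5 6] [7 8]])   ; => [[5 12] [21 32]]
```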

On Tuesday, January 13, 2015 at 2:13:13 AM UTC+1, Dragan Djuric wrote:

 I am pleased to announce the first public release of a new *very fast* native 
 matrix and linear algebra library for Clojure, based on ATLAS BLAS.
 Extensive *documentation* is at http://neanderthal.uncomplicate.org
 See the benchmarks at 
 http://neanderthal.uncomplicate.org/articles/benchmarks.html.

 Neanderthal is a Clojure library for fast native matrix and linear algebra 
 computations, based on ATLAS BLAS.

 Main project goals are:

- Be as fast as native ATLAS even for linear operations, with no 
copying overhead. It is roughly 2x faster than jBLAS for large matrices, 
and tens of times faster for small ones. Also faster than core.matrix for 
small and large matrices!
- Fit well into idiomatic Clojure - Clojure programmers should be able 
to use and understand Neanderthal like any regular Clojure library.
- Fit well into numerical computing literature - programmers should be 
able to reuse existing widespread BLAS and LAPACK programming know-how and 
easily translate it to Clojure code.

 Implemented features

- Data structures: double vector, double general dense matrix (GE);
- BLAS Level 1, 2, and 3 routines;
- Various Clojure vector and matrix functions (transpositions, 
submatrices etc.);
- Fast map, reduce and fold implementations for the provided 
structures.
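
 A minimal sketch of what the BLAS-flavored API looks like in practice. The 
 namespaces and function names (dv, dge, dot, mm!) follow the project 
 documentation, but exact names and signatures may differ between versions, so 
 treat this as illustrative rather than authoritative:

```clojure
;; Hedged sketch of Neanderthal usage; names taken from the project docs.
(require '[uncomplicate.neanderthal.core :refer [dot mm!]]
         '[uncomplicate.neanderthal.native :refer [dv dge]])

;; BLAS Level 1: dot product of two double vectors
;; 1*1 + 2*2 + 3*3 = 14.0
(dot (dv 1 2 3) (dv 1 2 3))

;; BLAS Level 3 (dgemm): C <- alpha * A * B + beta * C
(let [a (dge 2 3 [1 2 3 4 5 6])   ; 2x3 matrix, column-major data
      b (dge 3 2 [1 2 3 4 5 6])   ; 3x2 matrix
      c (dge 2 2)]                ; 2x2 zero matrix, overwritten below
  (mm! 1.0 a b 0.0 c))            ; destructive multiply into c
```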

 On the TODO list

- LAPACK routines;
- Banded, symmetric, triangular, and sparse matrices;
- Support for complex numbers;
- Support for single-precision floats.


 Call for help:
 Everything you need for Linux is in Clojars. If you know your way around 
 gcc on OS X, or around gcc and MinGW on Windows, and you are willing to 
 help providing the binary builds for those (or other) systems, please 
 contact me. There is an automatic build script, but gcc, atlas and other 
 build tools need to be properly set up on those systems.





Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-22 Thread Mars0i
It may be that there's agreement on everything that matters.  Dragan, 
you've said that you wouldn't mind others integrating Neanderthal into 
core.matrix, but that you don't want to do that.  That you are willing to 
work with others on this is probably all that's needed.  People may have 
questions, want you to consider pull requests, etc.  I think that 
integrating Neanderthal into core.matrix will be an attractive project for 
someone.

(For the record I don't think it's fair to criticize core.matrix as not 
being an API because the documentation is limited.  The API is in the 
protocols, etc.  It's all there in the source.  Of course anyone using 
core.matrix occasionally encounters frustration at the lack of 
documentation, but core.matrix is a work in progress.  That's the nature of 
the situation.  I have been able to figure out how to do everything I 
needed to do using only the existing documentation, source code, 
docstrings, and asking occasional questions with helpful answers from others.  
If I had more time, maybe I would work on the documentation more than the 
tiny bit I've been able to do.  Similar remarks apply to what clatrix 
doesn't do well yet.  Being unfinished is not a design flaw.)



Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-21 Thread Matt Revelle
Mike (core.matrix author) has been adjusting the API according to his needs 
and those of other users. I don't think anyone will disagree with the 
statement that good APIs are shaped by usage and keeping things simple. 
That said, I see no reason why there shouldn't be a common API which is 
higher level and more convenient than BLAS/LAPACK.

On Saturday, June 20, 2015 at 1:28:09 PM UTC-4, Mars0i wrote:

 Dragan, this just occurred to me--a small comment about the slow speed 
 that I reported 
 https://groups.google.com/forum/#!topic/numerical-clojure/WZ-CRchDyl8 
 from clatrix, which you mentioned earlier.  I'm not sure whether the slow 
 speed I experienced on 500x500 matrices itself provides evidence for 
 general conclusions about using the core.matrix api as an interface to 
 BLAS.  There was still a lot of work to be done on clatrix at that 
 point--maybe there still is.  My understanding is that clatrix supported 
 the core.matrix api at that stage, but it was known that it didn't do so in 
 an optimized way, in many respects.  Optimizing remaining areas was left 
 for future work.


Besides the call overhead involved, there are two places where performance 
is likely to suffer when using core.matrix with a native lib: 
copying/duplicating matrix data, and mismatches between core.matrix operations 
and BLAS/LAPACK operations. The first is fixed by taking the approach 
Dragan has with Neanderthal: using a buffer that is shared by both the 
native lib and JVM code. The second is hypothetical; at least I don't have 
enough familiarity with the BLAS/LAPACK API to quickly identify a problem. 
Examples of these would be helpful.
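
The shared-buffer idea can be sketched in plain Clojure/Java interop: a direct 
ByteBuffer lives outside the JVM heap, so native code (handed the buffer's 
address via JNI) and JVM code can read and write the same memory without 
copying. This is only an illustration of the general technique, not 
Neanderthal's actual internals:

```clojure
(import '[java.nio ByteBuffer ByteOrder])

;; Allocate off-heap memory for four doubles (e.g. a 2x2 matrix in
;; column-major order, as BLAS expects). A native library operating on
;; this buffer would see exactly the same bytes -- no copying needed.
(def ^ByteBuffer buf
  (doto (ByteBuffer/allocateDirect (* 4 Double/BYTES))
    (.order (ByteOrder/nativeOrder))))

;; JVM-side writes...
(dotimes [i 4]
  (.putDouble buf (int (* i Double/BYTES)) (double (inc i))))

;; ...are immediately visible to any reader of the same memory.
(defn entry [^ByteBuffer b i]
  (.getDouble b (int (* i Double/BYTES))))

(println (mapv #(entry buf %) (range 4))) ; prints [1.0 2.0 3.0 4.0]
```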


 I think your general point doesn't depend on my experience with clatrix a 
 year ago, however.  I understand you to be saying that there are some 
 coding strategies that provide efficient code with BLAS and LAPACK, and 
 that are easy to use in Neanderthal, but that are difficult or impossible 
 using the core.matrix api.


Again, there should be examples. I agree with Dragan that better API 
documentation for core.matrix should exist. That said, there's only one 
file you need to look at (matrix.clj, 
https://github.com/mikera/core.matrix/blob/develop/src/main/clojure/clojure/core/matrix.clj)
 
and it should be straightforward for anyone interested to identify 
functions which would introduce inefficiencies for BLAS/LAPACK libraries.



Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-21 Thread Mikera
On Saturday, 20 June 2015 08:43:39 UTC+1, Dragan Djuric wrote:

 On Friday, June 19, 2015 at 11:17:02 PM UTC+2, Christopher Small wrote:

 I see now Dragan; you're concerned not about whether easily implementing 
 and swapping in/out implementations of core.matrix is possible, but whether 
 it can be done while maintaining the performance characteristics of 
 Neanderthal, yes? That did not come through in your earlier comments in 
 this thread.


 This, with the addition that for *any* library, not only Neanderthal, 
 there would be many leaking abstractions. It is easy to define common 
 function/method names and parameters, but there are many things that just 
 flow through the API regardless, and taming this is the hardest part of any 
 API.

 


 Certainly, performance is one of those things that can leak in an 
 abstraction. But I'd like to echo Matt's enquiry: If you think a unified 
 API might be possible but that core.matrix isn't it, I think we'd all love 
 to hear what you think it's missing and/or what would need to be 
 rearchitected in order for it to fit the bill.


 For a unified API, if it is at all feasible, I think there is one place 
 it should be looked at first: BLAS 1, 2, 3 and LAPACK. This is THE de facto 
 standard for matrix computations for dense and banded matrices. Sparse APIs 
 are not that uniform, but in that space, too, there is a lot of previous 
 work. So, what's wrong with BLAS/LAPACK that core.matrix chose not to 
 follow it and arbitrarily invent (in my opinion) unintuitive and 
 complicated API? I am genuinely interested, maybe I don't see something 
 that other people do. 


There is nothing fundamentally wrong with BLAS/LAPACK, it just isn't 
suitable as a general purpose array programming API. See my comments 
further below.

If you think the core.matrix API is unintuitive and complicated then I'd 
love to hear specific examples. We're still open to changing things before 
we hit 1.0

But it certainly isn't arbitrarily invented. Please note that we have 
collectively considered a *lot* of previous work in the development of 
core.matrix. People involved in the design have had experience with BLAS, 
Fortran, NumPy, R, APL, numerous Java libraries, GPU acceleration, low 
level assembly coding etc. We'd welcome your contributions too but I 
hope you will first take the time to read the mailing list history etc. and 
gain an appreciation for the design decisions.

 


 In my opinion, the best way to create a standard API is to grow it from 
 successful implementations, instead of writing it first, and then 
 shoehorning the implementations to fit it.


It is (comparatively) easy to write an API for a specific implementation 
that supports a few specific operations and/or meets a specific use case. 
The original Clatrix is an example of one such library.

But that soon falls apart when you realise that the API+implementation 
doesn't meet broader requirements, so you quickly get fragmentation, e.g.:
- someone else creates a pure-JVM API for those who can't use native code 
(e.g. vectorz-clj)
- someone else produces a similar library with a new API that wins on some 
benchmarks (e.g. Neanderthal)
- someone else needs arrays that support non-numerical scalar types (e.g. 
core.matrix NDArray)
- a library becomes unmaintained and someone forks a replacement
- someone wants to integrate a Java matrix library for legacy reasons
- someone else has a bad case of NIH syndrome and creates a whole new 
library
- etc.

Before long you have a fragmented ecosystem with many libraries, many 
different APIs and many annoyed / confused users who can't easily get their 
tools to work together. Many of us have seen this happen before in other 
contexts, and we don't want to see the same thing to happen for Clojure.

core.matrix solves the problem of library fragmentation by providing a 
common abstract API, while allowing users choice over which underlying 
implementation suits their particular needs best. To my knowledge Clojure 
is the *only* language ecosystem that has developed such a capability, and 
it has already proved extremely useful for many users. 

So if you see people asking for Neanderthal to join the core.matrix 
ecosystem, hopefully this helps to explain why.
 

  


 As for any sort of responsibility to implement core.matrix, I don't 
 think anyone is arguing you have such a responsibility, and I hope our 
 _pleading_ hasn't come across as such. We are simply impressed with your 
 work, and would like to take advantage of it, but also see a drawback you 
 don't: at present Neanderthal is less interoperable with many existing 
 tools, and trying it out on an existing project would require a rewrite 
 (as would migrating away from it if we weren't happy).

 Certainly, a third party library implementing core.matrix with 
 Neanderthal is a possibility, but I'm a bit worried that a) it would add 
 extra burden keeping things in sync and feel a little second class; 

Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-20 Thread Dragan Djuric
On Friday, June 19, 2015 at 11:17:02 PM UTC+2, Christopher Small wrote:

 I see now Dragan; you're concerned not about whether easily implementing 
 and swapping in/out implementations of core.matrix is possible, but whether 
 it can be done while maintaining the performance characteristics of 
 Neanderthal, yes? That did not come through in your earlier comments in 
 this thread.


This, with the addition that for *any* library, not only Neanderthal, there 
would be many leaking abstractions. It is easy to define common 
function/method names and parameters, but there are many things that just 
flow through the API regardless, and taming this is the hardest part of any 
API.
 


 Certainly, performance is one of those things that can leak in an 
 abstraction. But I'd like to echo Matt's enquiry: If you think a unified 
 API might be possible but that core.matrix isn't it, I think we'd all love 
to hear what you think it's missing and/or what would need to be 
 rearchitected in order for it to fit the bill.


For a unified API, if it is at all feasible, I think there is one place it 
should be looked at first: BLAS 1, 2, 3 and LAPACK. This is THE de facto 
standard for matrix computations for dense and banded matrices. Sparse APIs 
are not that uniform, but in that space, too, there is a lot of previous 
work. So, what's wrong with BLAS/LAPACK that core.matrix chose not to 
follow it and arbitrarily invent (in my opinion) unintuitive and 
complicated API? I am genuinely interested, maybe I don't see something 
that other people do.

In my opinion, the best way to create a standard API is to grow it from 
successful implementations, instead of writing it first, and then 
shoehorning the implementations to fit it.
 


 As for any sort of responsibility to implement core.matrix, I don't 
 think anyone is arguing you have such a responsibility, and I hope our 
 _pleading_ hasn't come across as such. We are simply impressed with your 
 work, and would like to take advantage of it, but also see a drawback you 
 don't: at present Neanderthal is less interoperable with many existing 
 tools, and trying it out on an existing project would require a rewrite 
 (as would migrating away from it if we weren't happy).

 Certainly, a third party library implementing core.matrix with Neanderthal 
 is a possibility, but I'm a bit worried that a) it would add extra burden 
 keeping things in sync and feel a little second class; and more importantly 
 b) it might be easier to maintain more of the performance benefits if it's 
 directly integrating (I could imagine less indirection this way, but could 
 be totally wrong). So let me ask you this:

 Assuming a) someone forks Neanderthal and makes a core.matrix 
 implementation with close performance parity to the direct Neanderthal API 
 and/or b) folks working on core.matrix are able to address some of your 
 issues with the core.matrix architecture, would you consider a merge?


a) I would rather see the core.matrix interoperability as an additional 
separate project first, and when/if it shows its value, and there is a 
person willing to maintain that part of the code, consider adding it to 
Neanderthal. I wouldn't see it as a second rate, and no fork is needed 
because of Clojure's extend-type/extend-protocol mechanism.
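
The mechanism Dragan refers to can be sketched as follows. The protocol and 
type names here are placeholders for illustration, not the actual core.matrix 
protocols or Neanderthal types:

```clojure
;; Clojure protocols can be extended to types from another library
;; after the fact, so no fork of either codebase is required.
;; PMatrixMultiply stands in for a core.matrix protocol; DoubleMatrix
;; stands in for a native-backed Neanderthal matrix type.
(defprotocol PMatrixMultiply
  (matrix-multiply [a b]))

(defrecord DoubleMatrix [data])  ; placeholder type

;; A separate interop project could ship just this extension:
(extend-protocol PMatrixMultiply
  DoubleMatrix
  (matrix-multiply [a b]
    ;; a real adapter would delegate to the library's fast routine here
    (->DoubleMatrix (str "product of " (:data a) " and " (:data b)))))

(:data (matrix-multiply (->DoubleMatrix "A") (->DoubleMatrix "B")))
;; => "product of A and B"
```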

b) I am not sure about what's exactly wrong with core.matrix. Maybe 
nothing is wrong. The first thing that I am interested in is what do 
core.matrix team think is wrong with BLAS/LAPACK in the first place to be 
able to form an opinion in that regard.

Best Wishes,
Dragan
 


 With gratitude

 Chris



 On Fri, Jun 19, 2015 at 1:45 PM, Matt Revelle mrev...@gmail.com 
 javascript: wrote:

 On Friday, June 19, 2015 at 4:57:32 AM UTC-4, Dragan Djuric wrote:

 I do not even claim that a unified api is not possible. I think that to 
 some extent it is. I just doubt core.matrix's eligibility for THE api in 
 numerical computing. For it makes easy things easy and hard things 
 impossible.


 Are you saying you don't believe core.matrix should be _the_ abstract API 
 for matrices/arrays in Clojure? If so, what are your concerns? Feel free to 
 point to me a previous post if it's already been stated. It also sounds 
 like you're alluding to the thread in the Numerical Clojure group about a 
 broad numerical computing lib for complex numbers and various math 
 functions, but I'm not following how that matters here.


Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-20 Thread Dragan Djuric
It is difficult to say exactly, since the documentation is almost 
non-existent, which is especially bad for a standard API. Please see my 
answer to Christopher for more elaboration. 

On Friday, June 19, 2015 at 10:45:53 PM UTC+2, Matt Revelle wrote:

 On Friday, June 19, 2015 at 4:57:32 AM UTC-4, Dragan Djuric wrote:

 I do not even claim that a unified api is not possible. I think that to 
 some extent it is. I just doubt core.matrix's eligibility for THE api in 
 numerical computing. For it makes easy things easy and hard things 
 impossible.


 Are you saying you don't believe core.matrix should be _the_ abstract API 
 for matrices/arrays in Clojure? If so, what are your concerns? Feel free to 
 point to me a previous post if it's already been stated. It also sounds 
 like you're alluding to the thread in the Numerical Clojure group about a 
 broad numerical computing lib for complex numbers and various math 
 functions, but I'm not following how that matters here.




Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-20 Thread Mars0i
Dragan, this just occurred to me--a small comment about the slow speed that I 
reported 
https://groups.google.com/forum/#!topic/numerical-clojure/WZ-CRchDyl8 
from clatrix, which you mentioned earlier.  I'm not sure whether the slow 
speed I experienced on 500x500 matrices itself provides evidence for 
general conclusions about using the core.matrix api as an interface to 
BLAS.  There was still a lot of work to be done on clatrix at that 
point--maybe there still is.  My understanding is that clatrix supported 
the core.matrix api at that stage, but it was known that it didn't do so in 
an optimized way, in many respects.  Optimizing remaining areas was left 
for future work.

I think your general point doesn't depend on my experience with clatrix a 
year ago, however.  I understand you to be saying that there are some 
coding strategies that provide efficient code with BLAS and LAPACK, and 
that are easy to use in Neanderthal, but that are difficult or impossible 
using the core.matrix api.



Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-19 Thread Christopher Small
I see now Dragan; you're concerned not about whether easily implementing
and swapping in/out implementations of core.matrix is possible, but whether
it can be done while maintaining the performance characteristics of
Neanderthal, yes? That did not come through in your earlier comments in
this thread.

Certainly, performance is one of those things that can leak in an
abstraction. But I'd like to echo Matt's enquiry: If you think a unified
API might be possible but that core.matrix isn't it, I think we'd all love
to hear what you think it's missing and/or what would need to be
rearchitected in order for it to fit the bill.

As for any sort of responsibility to implement core.matrix, I don't think
anyone is arguing you have such a responsibility, and I hope our _pleading_
hasn't come across as such. We are simply impressed with your work, and
would like to take advantage of it, but also see a drawback you don't: at
present Neanderthal is less interoperable with many existing tools, and
trying it out on an existing project would require a rewrite (as would
migrating away from it if we weren't happy).

Certainly, a third party library implementing core.matrix with Neanderthal
is a possibility, but I'm a bit worried that a) it would add extra burden
keeping things in sync and feel a little second class; and more importantly
b) it might be easier to maintain more of the performance benefits if it's
directly integrating (I could imagine less indirection this way, but could
be totally wrong). So let me ask you this:

Assuming a) someone forks Neanderthal and makes a core.matrix
implementation with close performance parity to the direct Neanderthal API
and/or b) folks working on core.matrix are able to address some of your
issues with the core.matrix architecture, would you consider a merge?


With gratitude

Chris



On Fri, Jun 19, 2015 at 1:45 PM, Matt Revelle mreve...@gmail.com wrote:

 On Friday, June 19, 2015 at 4:57:32 AM UTC-4, Dragan Djuric wrote:

 I do not even claim that a unified api is not possible. I think that to
 some extent it is. I just doubt core.matrix's eligibility for THE api in
 numerical computing. For it makes easy things easy and hard things
 impossible.


 Are you saying you don't believe core.matrix should be _the_ abstract API
 for matrices/arrays in Clojure? If so, what are your concerns? Feel free to
 point to me a previous post if it's already been stated. It also sounds
 like you're alluding to the thread in the Numerical Clojure group about a
 broad numerical computing lib for complex numbers and various math
 functions, but I'm not following how that matters here.





Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-19 Thread Matt Revelle
On Friday, June 19, 2015 at 4:57:32 AM UTC-4, Dragan Djuric wrote:

 I do not even claim that a unified api is not possible. I think that to 
 some extent it is. I just doubt core.matrix's eligibility for THE api in 
 numerical computing. For it makes easy things easy and hard things 
 impossible.


Are you saying you don't believe core.matrix should be _the_ abstract API 
for matrices/arrays in Clojure? If so, what are your concerns? Feel free to 
point to me a previous post if it's already been stated. It also sounds 
like you're alluding to the thread in the Numerical Clojure group about a 
broad numerical computing lib for complex numbers and various math 
functions, but I'm not following how that matters here.



Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-19 Thread Dragan Djuric
I understand the concept core.matrix tries to achieve, and would be extremely 
happy if I thought it would be possible, since I would be able to use it and 
spend time on some other stuff instead of writing C, JNI, OpenCL and such 
low-level code.
Thanks for the pointer to your neural networks experience and benchmark. I have 
taken a look at the thread you started about that issue, and it clearly shows 
what (in my opinion) is wrong with core.matrix: it is extremely easy to shoot 
yourself in the foot with it by (unintentionally) using the backing 
implementation in a wrong way. And, when you need to be specific and exploit 
the strengths of the implementation, core.matrix gets in your way, making it 
more difficult in the best case and impossible in the worst. Moreover, the 
optimizations that you manage to achieve with one implementation often turn 
out to be performance hogs with another, after just one configuration change.
For example, if you look at the benchmark on the neanderthal's web site, you'd 
see that for 512x512 matrices, matrix multiplication is 5x faster with clatrix 
(jblas) than vectorz. Yet, in your implementation, you managed to turn that 5x 
speedup into a 1000x slowdown (in the 500x500 case) without even one change in 
code. Quite impressive on the core.matrix side ;)
I do not even claim that a unified api is not possible. I think that to some 
extent it is. I just doubt core.matrix's eligibility for THE api in numerical 
computing. For it makes easy things easy and hard things impossible.
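
The "one configuration change" Dragan mentions is core.matrix's pluggable 
backend mechanism: the calling code stays identical while the backing 
implementation, and with it the performance profile, is swapped out. A rough 
sketch, assuming core.matrix and the vectorz-clj implementation are on the 
classpath:

```clojure
(require '[clojure.core.matrix :as m])

(def a [[1 2] [3 4]])
(def b [[5 6] [7 8]])

;; With the default :persistent-vector implementation:
(m/mmul a b)   ; => [[19 22] [43 50]]

;; One configuration change swaps the backing implementation globally.
;; The same mmul call now dispatches to vectorz-clj's matrix type,
;; with entirely different performance characteristics:
(m/set-current-implementation :vectorz)
(m/mmul (m/matrix a) (m/matrix b))
```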



Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-19 Thread Mars0i
Neanderthal seems very cool.  You've clearly put a *lot* of work into 
this.  I for one am thankful that you've made available what is a very nice 
tool, as far as I can see.

I don't think there's necessarily a conflict concerning core.matrix, 
though.  You may not want to write a core.matrix wrapper for Neanderthal.  
There's no reason that you must.  Someone else might want to do that; maybe 
I would be able to help with that project.  In that case, those who wanted 
to use Neanderthal via core.matrix could do so, knowing that they lose out 
on any potential advantages of writing directly to Neanderthal, and those 
who want to use Neanderthal in its original form can still do so.  I don't 
see a conflict.

In my case, I have existing code that uses core.matrix.  I wrote to 
core.matrix in part because I didn't want to have to worry about which 
implementation to write to.  I would love to try my code on Neanderthal, 
but I'm not willing to port it.  That's my problem, not yours, though.

For future projects, I could write to Neanderthal, but I also have to 
consider the possibility that there might be situations in which another 
implementation would be better for my code.  Neanderthal looks great, but 
is it always fastest for every application that uses non-tiny matrices?  
Maybe it would be for anything I would write.  I'd rather not have to 
figure that out.  I'm granting that there could be advantages to using 
Neanderthal in its native form rather than via core.matrix, but for me, 
personally, it would be simpler to use it via core.matrix if that was an 
option.  It's not your responsibility to enable that unless you wanted to 
do so, though.  What you've done is already more than enough, Dragan.


On Friday, June 19, 2015 at 3:57:32 AM UTC-5, Dragan Djuric wrote:

 I understand the concept core.matrix tries to achieve, and would be 
 extremely happy if I thought it would be possible, since I would be able to 
 use it and spend time on some other stuff instead of writing C, JNI, OpenCL 
 and such low-level code. 
 Thanks for the pointer to your neural networks experience and benchmark. I 
 have taken a look at the thread you started about that issue, and it 
 clearly shows what (in my opinion) is wrong with core.matrix: it is extremely 
 easy to shoot yourself in the foot by (unintentionally) using the 
 backing implementation in a wrong way. And when you need to be specific and 
 exploit the strengths of the implementation, core.matrix gets in your way, 
 making things more difficult at best and impossible at worst. Moreover, the 
 optimizations that you manage to achieve with one implementation often 
 turn out to be performance hogs with another, after just one configuration change. 
 For example, if you look at the benchmark on Neanderthal's web site, 
 you'd see that for 512x512 matrices, matrix multiplication is 5x faster 
 with clatrix (jblas) than with vectorz. Yet, in your implementation, you managed 
 to turn that 5x speedup into a 1000x slowdown (in the 500x500 case) without 
 even one change in code. Quite impressive on the core.matrix side ;) 
 I do not claim that a unified API is impossible; I think that to 
 some extent it is feasible. I just doubt core.matrix's eligibility to be THE API in 
 numerical computing, for it makes easy things easy and hard things 
 impossible.



Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-19 Thread Dragan Djuric
That is what I have thought from the start of the discussion, and I agree with you.

The only thing is that I am almost certain that the performance gain will
be very easy to reverse, just like in the case of Clatrix.

Because it is built on top of BLAS and ATLAS, which are the de facto
standard and a state-of-the-art implementation, I am also pretty certain
that it will be the fastest library on the CPU even for small matrices. The
only exception would be the very tiny ones, like 3x2 or so (see the
benchmark), and even then Neanderthal is only 2x slower than vectorz
despite the FFI dance.

If I were you, it is not speed that I would be concerned about regarding
Neanderthal. In my opinion, there are currently only two major drawbacks to
using Neanderthal:

1. It is a new and unproven library. The author may win the lottery, decide
to retire to a tropical island, and abandon all his projects. This is less of
a problem if the library becomes popular, since someone else could then take over
its development.

2. You have to install a native library (ATLAS). Even this issue is not
that big, since I (or you, since the build is automatic) could ship a statically
compiled ATLAS like jBLAS does, but then peak performance would
depend on the user having a CPU similar to mine (and even that costs only the last 10
or 20% of performance, not an order of magnitude).


On Friday, June 19, 2015, Mars0i marsh...@logical.net wrote:

 Neanderthal seems very cool.  You've clearly put a *lot* of work into
 this.  I for one am thankful that you've made available what is a very nice
 tool, as far as I can see.

 I don't think there's necessarily a conflict concerning core.matrix,
 though.  You may not want to write a core.matrix wrapper for Neanderthal.
 There's no reason that you must.  Someone else might want to do that; maybe
 I would be able to help with that project.  In that case, those who wanted
 to use Neanderthal via core.matrix could do so, knowing that they lose out
 on any potential advantages of writing directly to Neanderthal, and those
 who want to use Neanderthal in its original form can still do so.  I don't
 see a conflict.

 In my case, I have existing code that uses core.matrix.  I wrote to
 core.matrix in part because I didn't want to have to worry about which
 implementation to write to.  I would love to try my code on Neanderthal,
 but I'm not willing to port it.  That's my problem, not yours, though.

 For future projects, I could write to Neanderthal, but I also have to
 consider the possibility that there might be situations in which another
 implementation would be better for my code.  Neanderthal looks great, but
 is it always fastest for every application that uses non-tiny matrices?
 Maybe it would be for anything I would write.  I'd rather not have to
 figure that out.  I'm granting that there could be advantages to using
 Neanderthal in its native form rather than via core.matrix, but for me,
 personally, it would be simpler to use it via core.matrix if that was an
 option.  It's not your responsibility to enable that unless you wanted to
 do so, though.  What you've done is already more than enough, Dragan.


 On Friday, June 19, 2015 at 3:57:32 AM UTC-5, Dragan Djuric wrote:

 I understand the concept core.matrix tries to achieve, and would be
 extremely happy if I thought it would be possible, since I would be able to
 use it and spend time on some other stuff instead of writing C, JNI, OpenCL
 and such low-level code.
 Thanks for the pointer to your neural networks experience and benchmark.
 I have taken a look at the thread you started about that issue, and it
  clearly shows what (in my opinion) is wrong with core.matrix: it is extremely
  easy to shoot yourself in the foot by (unintentionally) using the
  backing implementation in a wrong way. And when you need to be specific and
  exploit the strengths of the implementation, core.matrix gets in your way,
  making things more difficult at best and impossible at worst. Moreover, the
  optimizations that you manage to achieve with one implementation often
  turn out to be performance hogs with another, after just one configuration change.
  For example, if you look at the benchmark on Neanderthal's web site,
  you'd see that for 512x512 matrices, matrix multiplication is 5x faster
  with clatrix (jblas) than with vectorz. Yet, in your implementation, you managed
  to turn that 5x speedup into a 1000x slowdown (in the 500x500 case) without
  even one change in code. Quite impressive on the core.matrix side ;)
  I do not claim that a unified API is impossible; I think that to
  some extent it is feasible. I just doubt core.matrix's eligibility to be THE API in
  numerical computing, for it makes easy things easy and hard things
  impossible.






Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-18 Thread Mars0i
There are plenty of votes for a core.matrix wrapper to this great project, 
but I'll add one point.  I came to Clojure from Common Lisp.  I had a 
neural network application in Common Lisp that didn't use matrices, and I 
decided I needed to rewrite it from scratch using matrices.   Common Lisp 
has lots of matrix libraries.  However, they have different interfaces, are 
faster or slower in different kinds of contexts, etc.  Trying to figure out 
which library to use would have been a lot of trouble, and if I decided to 
change, then I'd have to rewrite my code--or start by writing my own 
abstraction layer.

I didn't switch to Clojure just for core.matrix--there were other reasons 
that were probably more significant in my mind at the time.  However, once 
I started using core.matrix, switching matrix implementations required a 
one-line change.  This was very helpful.  I initially assumed that clatrix 
would be fastest.  It turned out that for my application, it wasn't 
fastest; vectorz was significantly faster, because my matrices are 
relatively small.  But I don't have to worry--maybe my application will 
change, or there will be a new implementation available for core.matrix 
that's better for my application.  As long as the underlying implementation 
supports the operations that I need, all that I'll need to change, again, 
is a single line of code.
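
The one-line switch described above is core.matrix's `set-current-implementation`. A minimal sketch of the idea follows; it assumes the chosen implementation's artifact (e.g. vectorz-clj or clatrix) is on the classpath:

```clojure
(ns example.switch
  (:require [clojure.core.matrix :as m]))

;; Pick the backing implementation once; subsequent core.matrix
;; calls construct and operate on matrices of that type.
(m/set-current-implementation :vectorz)    ; pure-JVM vectorz-clj
;; (m/set-current-implementation :clatrix) ; native BLAS via jBlas

(def a (m/matrix [[1 2] [3 4]]))
(def b (m/matrix [[5 6] [7 8]]))

;; Identical code regardless of which implementation is active.
(m/mmul a b) ; => [[19 22] [43 50]]
```

Changing the keyword on that single line is the entire migration, which is exactly the portability argument being made here.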



Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-18 Thread Fergal Byrne
Could I suggest that someone who wants a core.matrix integration simply do the
protocol implementation and submit it to Dragan as a PR?

On Thu, Jun 18, 2015 at 1:20 AM, Daniel doubleagen...@gmail.com wrote:

 Just another +1 to include a core.matrix implementation





-- 

Fergal Byrne, Brenter IT @fergbyrne

http://inbits.com - Better Living through Thoughtful Technology
http://ie.linkedin.com/in/fergbyrne/ - https://github.com/fergalbyrne

Founder of Clortex: HTM in Clojure -
https://github.com/nupic-community/clortex
Co-creator @OccupyStartups Time-Bombed Open License http://occupystartups.me

Author, Real Machine Intelligence with Clortex and NuPIC
Read for free or buy the book at https://leanpub.com/realsmartmachines

e:fergalbyrnedub...@gmail.com t:+353 83 4214179
Join the quest for Machine Intelligence at http://numenta.org
Formerly of Adnet edi...@adnet.ie http://www.adnet.ie



Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-17 Thread Daniel
Just another +1 to include a core.matrix implementation



Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-17 Thread Christopher Small
First of all, nice work :-)


In the docs you keep referring to core.matrix as though it were a particular
implementation, saying things like X times faster than core.matrix. This
is misleading; core.matrix is an abstract API, which can have many
implementations, so saying that something is faster than it doesn't mean anything.
In particular, it's misleading to compare Neanderthal to core.matrix by
comparing it to Vectorz and jBlas, since faster implementations could be
written and are underway. Something a little more verbose, like comparing
Neanderthal to the fastest (current?) core.matrix implementations, would
be less misleading.

On a stylistic note, I like the website design overall, but the graph-paper
lines in the background are a bit distracting behind the text.
Perhaps a white background behind the text, with the graph lines only in the
margins, would capture the same effect while being less distracting?


At the risk of being a nag, I'd also like to reiterate the desire of the
community to avoid library fragmentation and see Neanderthal implement the
core.matrix protocols. Mike elegantly explicated the reasons for this in
his earlier post: if we all build around a common abstract API, we maximize
the Clojure dream of composability of anything written towards and for that
API. This is important to a lot of Clojurians.

Earlier in the thread you expressed concerns about whether it would really
be so easy to have Neanderthal serve as a core.matrix implementation. Mike Anderson's
response to me suggests he thought you were questioning whether swapping
core.matrix implementations was really so easy. Just in case you were
doubtful of the former more than the latter, perhaps I can clarify.

Core.matrix is built entirely around abstract protocols which represent the
various matrix operations one might care about for a matrix computation
library. There are protocols for addition, inner product, outer product,
multiply and add, invert, etc. All you would have to do to make Neanderthal
functional as a core.matrix implementation is implement these protocols in
some namespace within your project (perhaps `
uncomplicate.neanderthal.core-matrix`). As Mike pointed out, you wouldn't
even have to implement _all_ the protocols, since core.matrix comes with
default implementations of the things you don't define. Once you'd made the
implementations, you register an example matrix object with core.matrix at
the end of your namespace, and then when a user requires that namespace,
the Neanderthal implementation would be registered, and users could start
instantiating Neanderthal matrices that are 100% compatible with the
core.matrix API. Setting things up this way, you wouldn't even have to
abandon your own API. The two could live in harmony along side each other,
letting folks require and use whichever they like. It really is just that
simple; implement a few protocols, and the API will work. It sounds too
good to be true, and I didn't believe it at first either, but it works.
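
To make the shape of this concrete, here is a hedged sketch of the mechanism. The core.matrix protocol names (`mp/PImplementation`, `mp/PDimensionInfo`, `mp/PIndexedAccess`) and the registration call are real; the `ToyMatrix` type is a stand-in for Neanderthal's matrix, since the point is the wiring, not Neanderthal's internals:

```clojure
(ns example.core-matrix-bridge
  (:require [clojure.core.matrix :as m]
            [clojure.core.matrix.protocols :as mp]
            [clojure.core.matrix.implementations :as imp]))

;; ToyMatrix stands in for Neanderthal's matrix type: a dense,
;; row-major 2D matrix backed by a primitive double array.
(deftype ToyMatrix [^doubles data ^long rows ^long cols])

(extend-protocol mp/PImplementation
  ToyMatrix
  (implementation-key [_] :toy)
  (meta-info [_] {:doc "toy 2D core.matrix implementation"})
  (construct-matrix [_ nested]
    ;; assumes nested Clojure sequences, e.g. [[1 2] [3 4]]
    (ToyMatrix. (double-array (mapcat identity nested))
                (count nested) (count (first nested))))
  (new-vector [_ length] nil)                      ; 2D only
  (new-matrix [_ rows cols]
    (ToyMatrix. (double-array (* rows cols)) rows cols))
  (new-matrix-nd [_ shape] nil)
  (supports-dimensionality? [_ dims] (== dims 2)))

(extend-protocol mp/PDimensionInfo
  ToyMatrix
  (dimensionality [_] 2)
  (get-shape [a] [(.rows ^ToyMatrix a) (.cols ^ToyMatrix a)])
  (is-scalar? [_] false)
  (is-vector? [_] false)
  (dimension-count [a dim]
    (if (zero? (long dim)) (.rows ^ToyMatrix a) (.cols ^ToyMatrix a))))

(extend-protocol mp/PIndexedAccess
  ToyMatrix
  (get-1d [_ _] (throw (UnsupportedOperationException. "2D only")))
  (get-2d [a i j]
    (let [a ^ToyMatrix a]
      (aget ^doubles (.data a) (+ (* (long i) (.cols a)) (long j)))))
  (get-nd [a [i j]] (mp/get-2d a i j)))

;; Registering a canonical instance makes :toy selectable by key.
(imp/register-implementation (ToyMatrix. (double-array 1) 1 1))

;; core.matrix's defaults now supply everything not extended above
;; (mmul, add, ...), albeit without implementation-specific speed:
(def a (m/matrix :toy [[1.0 2.0] [3.0 4.0]]))
(m/mget a 1 0) ; => 3.0
```

A real Neanderthal bridge would additionally extend the hot-path protocols (matrix multiply, add-and-multiply) to Neanderthal's BLAS-backed operations instead of falling through to the defaults.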

I get that you've put a lot of energy into this, and commend you for it. I
personally would love to take advantage of that hard work, and am sure
others would as well. However, I think you're going to find adoption
difficult if Neanderthal risks fragmenting the presently coherent state of
numerical computing in Clojure, and imposes inflexibilities upon those who
would use it in their own code. I personally am not willing to boost
performance at the cost of flexibility.

OK; I'll leave it at that.


Respectfully,

Chris


On Wed, Jun 17, 2015 at 8:07 AM, Dragan Djuric draga...@gmail.com wrote:

 Version 0.2.0 has just been released to Clojars

 New features:

 * implemented BLAS support for floats
 * implemented fmap!, freduce, and fold functions for all existing types of
 matrices and vectors

 Changes:

 No API changes were required for these features.

 On Tuesday, January 13, 2015 at 2:13:13 AM UTC+1, Dragan Djuric wrote:

 I am pleased to announce a first public release of new *very fast *native
 matrix and linear algebra library for Clojure based on ATLAS BLAS.
 Extensive *documentation* is at http://neanderthal.uncomplicate.org
 See the benchmarks at
 http://neanderthal.uncomplicate.org/articles/benchmarks.html.

 Neanderthal is a Clojure library that

 Main project goals are:

- Be as fast as native ATLAS even for linear operations, with no
copying overhead. It is roughly 2x faster than jBLAS for large matrices,
and tens of times faster for small ones. Also faster than core.matrix for
small and large matrices!
- Fit well into idiomatic Clojure - Clojure programmers should be
able to use and understand Neanderthal like any regular Clojure library.
- Fit well into numerical computing literature - programmers should
be able to reuse existing widespread BLAS and LAPACK programming know-how
and easily translate it to Clojure code.

 Implemented features

- Data structures: double vector, double general dense matrix (GE);
- 

Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-06-17 Thread Dragan Djuric
Version 0.2.0 has just been released to Clojars

New features:

* implemented BLAS support for floats
* implemented fmap!, freduce, and fold functions for all existing types of 
matrices and vectors

Changes:

No API changes were required for these features.
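
As a rough illustration of the new functions (the exact constructor namespace varied across early releases, so treat `uncomplicate.neanderthal.native` and `dv` as assumptions; the primitive type hints are the usual idiom for avoiding boxing):

```clojure
(ns example.fmap
  (:require [uncomplicate.neanderthal.core :refer [fmap! freduce]]
            ;; constructor namespace assumed; it differed across versions
            [uncomplicate.neanderthal.native :refer [dv]]))

(let [v (dv 1 2 3)]
  ;; destructive map: square every entry in place
  (fmap! (fn ^double [^double x] (* x x)) v)
  ;; fold the result down to a scalar sum: 1 + 4 + 9
  (freduce (fn ^double [^double acc ^double x] (+ acc x)) 0.0 v))
```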

On Tuesday, January 13, 2015 at 2:13:13 AM UTC+1, Dragan Djuric wrote:

 I am pleased to announce a first public release of new *very fast *native 
 matrix and linear algebra library for Clojure based on ATLAS BLAS.
 Extensive *documentation* is at http://neanderthal.uncomplicate.org
 See the benchmarks at 
 http://neanderthal.uncomplicate.org/articles/benchmarks.html.

 Neanderthal is a Clojure library that 

 Main project goals are:

- Be as fast as native ATLAS even for linear operations, with no 
copying overhead. It is roughly 2x faster than jBLAS for large matrices, 
and tens of times faster for small ones. Also faster than core.matrix for 
small and large matrices!
- Fit well into idiomatic Clojure - Clojure programmers should be able 
to use and understand Neanderthal like any regular Clojure library.
- Fit well into numerical computing literature - programmers should be 
able to reuse existing widespread BLAS and LAPACK programming know-how and 
easily translate it to Clojure code.

 Implemented features

- Data structures: double vector, double general dense matrix (GE);
- BLAS Level 1, 2, and 3 routines;
- Various Clojure vector and matrix functions (transpositions, 
submatrices etc.);
- Fast map, reduce and fold implementations for the provided 
structures.

 On the TODO list

- LAPACK routines;
- Banded, symmetric, triangular, and sparse matrices;
- Support for complex numbers;
- Support for single-precision floats.


 Call for help:
 Everything you need for Linux is in Clojars. If you know your way around 
 gcc on OS X, or around gcc and MinGW on Windows, and you are willing to 
 help providing the binary builds for those (or other) systems, please 
 contact me. There is an automatic build script, but gcc, atlas and other 
 build tools need to be properly set up on those systems.





Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-01-13 Thread Christopher Small
Awesome project!

I'll echo the encouragement towards having Neanderthal implement the 
core.matrix protocols. You'll have much higher adoption if folks know they 
can just plug your tool in by changing a single line setting the underlying 
implementation to Neanderthal. And as Mikera points out, it would be nice 
if we kept the Clojure matrix API space cohesive.



Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-01-13 Thread adrian . medina
Ditto to the others. This looks great, and I have a lot of core.matrix 
compatible code I'd love to test it out against! Thanks for releasing this! 

On Tuesday, January 13, 2015 at 5:07:51 PM UTC-5, Sam Raker wrote:

 I'd like to politely add to the calls for this to become a pluggable 
 core.matrix backend.

 On Tuesday, January 13, 2015 at 4:38:22 PM UTC-5, Dragan Djuric wrote:

 It would be nice if that would be that easy. However, I am sceptical...

 On Tuesday, January 13, 2015 at 8:13:36 PM UTC+1, Christopher Small wrote:

 Awesome project!

 I'll echo the encouragement towards having Neanderthal implement the 
 core.matrix protocols. You'll have much higher adoption if folks know they 
 can just plug your tool in by changing a single line setting the underlying 
 implementation to Neanderthal. And as Mikera points out, it would be nice 
 if we kept the Clojure matrix API space cohesive.





Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-01-13 Thread Sam Raker
I'd like to politely add to the calls for this to become a pluggable 
core.matrix backend.

On Tuesday, January 13, 2015 at 4:38:22 PM UTC-5, Dragan Djuric wrote:

 It would be nice if that would be that easy. However, I am sceptical...

 On Tuesday, January 13, 2015 at 8:13:36 PM UTC+1, Christopher Small wrote:

 Awesome project!

 I'll echo the encouragement towards having Neanderthal implement the 
 core.matrix protocols. You'll have much higher adoption if folks know they 
 can just plug your tool in by changing a single line setting the underlying 
 implementation to Neanderthal. And as Mikera points out, it would be nice 
 if we kept the Clojure matrix API space cohesive.





Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-01-13 Thread Dragan Djuric
It would be nice if that would be that easy. However, I am sceptical...

On Tuesday, January 13, 2015 at 8:13:36 PM UTC+1, Christopher Small wrote:

 Awesome project!

 I'll echo the encouragement towards having Neanderthal implement the 
 core.matrix protocols. You'll have much higher adoption if folks know they 
 can just plug your tool in by changing a single line setting the underlying 
 implementation to Neanderthal. And as Mikera points out, it would be nice 
 if we kept the Clojure matrix API space cohesive.




Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-01-13 Thread Mikera
It works for the development branch of Incanter 2.0 - which is a pretty 
significant project with a lot of matrix code. You can switch between 
Clatrix (native BLAS via JBlas), persistent vectors (regular Clojure 
vectors) and vectorz-clj (pure JVM code) transparently. 

I think it would be the same for Neanderthal - I don't see anything that 
would be inconsistent with core.matrix from an API perspective.

On Wednesday, 14 January 2015 05:38:22 UTC+8, Dragan Djuric wrote:

 It would be nice if that would be that easy. However, I am sceptical...

 On Tuesday, January 13, 2015 at 8:13:36 PM UTC+1, Christopher Small wrote:

 Awesome project!

 I'll echo the encouragement towards having Neanderthal implement the 
 core.matrix protocols. You'll have much higher adoption if folks know they 
 can just plug your tool in by changing a single line setting the underlying 
 implementation to Neanderthal. And as Mikera points out, it would be nice 
 if we kept the Clojure matrix API space cohesive.





Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-01-12 Thread Dragan Djuric



 b) It looks like you are consistently about 2x faster than JBlas for large 
 matrices - wondering what is causing the difference, is that because of 
 copying?

That is the most probable cause, but I am not sure whether it is the only 
one.
 

 c) Would be interesting to see a few other operations: I do a lot of work 
 with stochastic gradient descent for example so addition and 
 multiply-and-add can be even more important than matrix multiply.


Multiply-and-add is already there. See the docs. Neanderthal closely 
follows BLAS, so the default multiply is actually multiply-and-add.
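
For readers unfamiliar with the BLAS convention: GEMM computes C := alpha*A*B + beta*C, so fused multiply-and-add is the primitive, and a plain multiply is just the special case alpha = 1, beta = 0. A hedged sketch of how that reads in Neanderthal (namespaces and constructor names varied across early releases, so treat them as assumptions):

```clojure
(ns example.gemm
  (:require [uncomplicate.neanderthal.core :refer [mm!]]
            ;; constructor namespace assumed; it differed across versions
            [uncomplicate.neanderthal.native :refer [dge]]))

(let [a (dge 2 2 [1 2 3 4])   ; 2x2, column-major, like BLAS
      b (dge 2 2 [5 6 7 8])
      c (dge 2 2 [1 1 1 1])]
  ;; GEMM semantics in one call: c := 2.0 * a * b + 1.0 * c
  (mm! 2.0 a b 1.0 c))
```

This is why an SGD-style accumulate step needs no extra pass over the data: the scaling and the addition ride along with the multiply.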

 






-- 
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to clojure@googlegroups.com
Note that posts from new members are moderated - please be patient with your 
first post.
To unsubscribe from this group, send email to
clojure+unsubscr...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/clojure?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"Clojure" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to clojure+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-01-12 Thread Dragan Djuric
I am pleased to announce the first public release of a new *very fast* native 
matrix and linear algebra library for Clojure based on ATLAS BLAS.
Extensive *documentation* is at http://neanderthal.uncomplicate.org
See the benchmarks at 
http://neanderthal.uncomplicate.org/articles/benchmarks.html.

Neanderthal is a Clojure library for fast, native matrix and linear algebra computations.

Main project goals are:

   - Be as fast as native ATLAS even for linear operations, with no copying 
   overhead. It is roughly 2x faster than jBLAS for large matrices, and tens 
   of times faster for small ones. Also faster than core.matrix for small and 
   large matrices!
   - Fit well into idiomatic Clojure - Clojure programmers should be able 
   to use and understand Neanderthal like any regular Clojure library.
   - Fit well into numerical computing literature - programmers should be 
   able to reuse existing widespread BLAS and LAPACK programming know-how and 
   easily translate it to Clojure code.

Implemented features
   
   - Data structures: double vector, double general dense matrix (GE);
   - BLAS Level 1, 2, and 3 routines;
   - Various Clojure vector and matrix functions (transpositions, 
   submatrices etc.);
   - Fast map, reduce and fold implementations for the provided structures.
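
As a hedged sketch of what the features above look like in use (function and constructor names follow the Neanderthal docs; treat the exact namespaces as an assumption for the 0.1.0 release):

```clojure
;; One example from each implemented BLAS level, plus the vector and
;; matrix constructors. Names from the Neanderthal docs; verify against
;; the version you use.
(require '[uncomplicate.neanderthal.core :refer [dot axpy! mv! mm!]]
         '[uncomplicate.neanderthal.native :refer [dv dge]])

(let [x (dv 1 2 3)
      y (dv 4 5 6)
      a (dge 3 3 (range 9))]
  (dot x y)                     ; Level 1: vector-vector (ddot), 1*4 + 2*5 + 3*6
  (axpy! 1.5 x y)               ; Level 1: y := 1.5x + y (daxpy)
  (mv! 1.0 a x 0.0 (dv 3))      ; Level 2: matrix-vector (dgemv)
  (mm! 1.0 a a 0.0 (dge 3 3)))  ; Level 3: matrix-matrix (dgemm)
```

Each call mirrors the corresponding BLAS routine, which is what makes existing BLAS know-how directly transferable.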

On the TODO list
   
   - LAPACK routines;
   - Banded, symmetric, triangular, and sparse matrices;
   - Support for complex numbers;
   - Support for single-precision floats.


Call for help:
Everything you need for Linux is in Clojars. If you know your way around 
gcc on OS X, or around gcc and MinGW on Windows, and you are willing to 
help providing the binary builds for those (or other) systems, please 
contact me. There is an automatic build script, but gcc, ATLAS, and other 
build tools need to be properly set up on those systems.



Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-01-12 Thread Mikera
Looks cool Dragan, thanks for sharing!

I would strongly encourage you to wrap this up and make it work as a 
core.matrix implementation. That would considerably improve 
interoperability with other libraries / tools in the Clojure numerical 
space; for example, it could be used as a drop-in replacement for Clatrix 
in Incanter 2.0.

The work to do this is fairly simple - you just need to implement a few 
mandatory protocols from clojure.core.matrix.protocols. Most of the 
protocols are optional: you only need to implement them if you want to 
provide a high performance override of the default behaviour.
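
To make the "few mandatory protocols" concrete, here is a hedged sketch (the `ToyMatrix` type is a made-up placeholder for a Neanderthal wrapper; the method names come from `PDimensionInfo`, one of the mandatory protocols in `clojure.core.matrix.protocols`, but verify them against the core.matrix version you target):

```clojure
(require '[clojure.core.matrix.protocols :as mp])

;; ToyMatrix is a stand-in used only for illustration; a real bridge
;; would delegate to Neanderthal's dense GE matrix instead of a
;; vector of row vectors.
(deftype ToyMatrix [rows]
  mp/PDimensionInfo
  (dimensionality [_] 2)
  (get-shape [_] [(count rows) (count (first rows))])
  (is-scalar? [_] false)
  (is-vector? [_] false)
  (dimension-count [_ dim]
    (if (zero? dim) (count rows) (count (first rows)))))

;; (mp/get-shape (->ToyMatrix [[1.0 2.0] [3.0 4.0]])) ;=> [2 2]
```

Implementing the handful of mandatory protocols like this is enough for core.matrix's default implementations to take over everything else.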

I'm particularly keen that we avoid API fragmentation in the Clojure 
numerical space (this has plagued many other language communities, e.g. 
Java). If every such library comes with its own API, then we won't be able 
to build a strong ecosystem of composable tools (which is in my view pretty 
fundamental to the Clojure style of development).

Some quick thoughts on the benchmarks:
a) I'm quite pleased to see that my little pure-Java matrix multiplication 
implementation comes within an order of magnitude of BLAS/ATLAS :-) thanks 
for giving me some further improvements to target!
b) It looks like you are consistently about 2x faster than JBlas for large 
matrices - wondering what is causing the difference, is that because of 
copying?
c) Would be interesting to see a few other operations: I do a lot of work 
with stochastic gradient descent for example so addition and 
multiply-and-add can be even more important than matrix multiply.




