On Thursday, 20 June 2013 08:45:47 UTC+1, Jason Wolfe wrote:
>
> On Saturday, June 15, 2013 4:37:06 AM UTC-7, Mikera wrote:
>>
>> On Friday, 14 June 2013 18:15:34 UTC+1, Jason Wolfe wrote:
>>
>>> Hey Mikera,
>>>
>>> I did look at core.matrix a while ago, but I'll take another look.
>>>
>>> Right now, flop is just trying to make it easy to write *arbitrary*
>>> array operations compactly, while minimizing the chance of getting
>>> worse-than-Java performance. This used to be very tricky to get right
>>> when flop was developed (against Clojure 1.2); the situation has
>>> clearly improved since then, but there still seem to be some
>>> subtleties in going fast with arrays in 1.5.1 that we are trying to
>>> understand and then automate.
>>>
>>> As I understand it, core.matrix has the much more ambitious goal of
>>> abstracting over all matrix types. This is a great goal, but I'm not
>>> sure whether the protocol-based implementation can give users any help
>>> writing new core operations efficiently (say, making a new array with
>>> c[i] = a[i] + b[i]^2 / 2) -- unless there's some clever way of
>>> combining protocols with macros (hmmm).
>>
>> A longer-term objective for core.matrix could be to allow compiling such
>> expressions. Our GSoC student Maik Schünemann is exploring how to
>> represent and optimise mathematical expressions in Clojure, and in theory
>> these could be used to compile down to efficient low-level operations.
>> The API could look something like this:
>>
>> ;; define an expression
>> (def my-expression (expression [a b] (+ a (/ (* b b) 2))))
>>
>> ;; compile the expression for the specified matrix implementation A
>> (def func (compile-expression A my-expression))
>>
>> ;; now the computation can be run using the pre-compiled, optimised
>> ;; function
>> (func A B)
>>
>> In the case that A is a Java double array, perhaps the flop macros
>> could be the engine behind generating the compiled function?
>>
>>> I just benchmarked core.matrix/esum, and on my machine in Clojure
>>> 1.5.1 it's 2.69x slower than the Java version above, and 1.62x slower
>>> than our current best Clojure version.
>>
>> Great - happy to steal your implementation :-)
>>
>> Other core.matrix implementations are probably faster, BTW: vectorz-clj
>> is pure Java and has esum for the general-purpose Vector type implemented
>> in exactly the same way as your fast Java example. Clatrix executes a lot
>> of operations via native code using BLAS.
>
> I should follow up on this and clarify that core.matrix's esum is in fact
> as fast as Java -- I apologize for the false statement (I was unaware
> that new versions of Leiningen disable advanced JIT optimizations by
> default, which led to the numbers I reported).
>
> Nevertheless, I hope there may be room for interesting collaboration on
> more complex operations, or code gen as you mentioned. I'll follow up
> later when we're a bit further along.
Great, thanks for confirming - I was getting worried :-)

On the topic of code gen, we've been thinking a bit about how to represent expressions in the expresso project, and are developing a few potential use-case API examples:

https://github.com/clojure-numerics/expresso/wiki/User-API-examples

If anyone has any additional use cases to think about, please throw them in! There's a rough sketch below of what compilation to a double-array backend might look like.
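To make the code-gen idea a bit more concrete, here is a minimal, hypothetical sketch of what a compile-expression backend for Java double arrays might expand to. The macro name compile-array-expr is made up for illustration, and this ignores shape checking, broadcasting, multiple dimensions etc. - it just shows the core loop:

;; Hypothetical sketch: compile an element-wise expression over two
;; primitive double arrays into a tight, type-hinted loop.
(defmacro compile-array-expr [[a b] expr]
  `(fn [~(vary-meta a assoc :tag 'doubles)
        ~(vary-meta b assoc :tag 'doubles)]
     (let [n#   (alength ~a)
           out# (double-array n#)]
       (dotimes [i# n#]
         ;; rebind the argument names to the scalar elements, so the
         ;; expression body can be spliced in unchanged
         (let [~a (aget ~a i#)
               ~b (aget ~b i#)]
           (aset out# i# (double ~expr))))
       out#)))

;; Jason's example: c[i] = a[i] + b[i]^2 / 2
(def func (compile-array-expr [a b] (+ a (/ (* b b) 2))))
(vec (func (double-array [1 2 3]) (double-array [4 5 6])))
;; => [9.0 14.5 21.0]

Something like flop could presumably generate a much smarter loop than this, but it shows the general shape: the expression is spliced into a primitive loop at macroexpansion time, so there's no per-element protocol dispatch or boxing.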
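Incidentally, on the esum point upthread: for anyone following along, my guess is that the fast pure-Clojure versions being compared are all some variant of the standard type-hinted areduce pattern, something like:

;; Java-speed sum over a primitive double array in pure Clojure,
;; assuming this matches the implementations benchmarked above
(defn esum ^double [^doubles xs]
  (areduce xs i sum 0.0 (+ sum (aget xs i))))

(esum (double-array [1.5 2.5 3.0]))
;; => 7.0

With the hints in place (and Leiningen's JIT-disabling flags turned off), the JVM compiles this down to essentially the same loop as the hand-written Java.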