Numerical Linear Algebra for Programmers - Clojure Book - New Release 0.10.0

2020-09-14 Thread Dragan Djuric
https://aiprobook.com/numerical-linear-algebra-for-programmers/ with a new chapter, Hello World. Numerical Linear Algebra for Programmers: An Interactive Tutorial with GPU, CUDA, OpenCL, MKL, Java, and Clojure is... basically… a book for programmers, interactive & dynamic, a direct link from

Re: Cognitect joins Nubank!

2020-07-23 Thread Dragan Djuric
Congratulations, Rich, Stu, Alex, and the rest of the team! I hope that this is the milestone that will finally convince the broader programming community that Clojure is here to stay, healthy and growing! Well deserved! On Thursday, July 23, 2020 at 2:04:49 PM UTC+2, Rich Hickey wrote: > > We

Re: Deep Learning for Programmers book 0.16.0: new chapter on Multi-class classification and metrics

2020-05-10 Thread Dragan Djuric
It makes the book possible. On Sunday, May 10, 2020 at 8:39:37 PM UTC+2, Ali M wrote: > > Why a subscription model for a book wouldn't that make the book very > expensive ? > > > > On Thursday, April 2, 2020 at 6:06:33 AM UTC-4, Dragan Djuric wrote: >> >&

Deep Learning for Programmers book 0.16.0: new chapter on Multi-class classification and metrics

2020-04-02 Thread Dragan Djuric
Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, DNNL, Java, and Clojure version 0.16.0 is available at https://aiprobook.com/deep-learning-for-programmers?release=1.16.0=cgroups Why? ++ Clojure! ++ For Programmers! + the only AI book that walks the walk + complete,

Deep Learning for Programmers - Parts 2,3,4, and 5 completed

2020-01-27 Thread Dragan Djuric
With the final chapter in the new release, Parts 2, 3, 4, and 5 are complete, and the book runs at 250 pages so far. I hope it won't be more than 350 when finished :) Drafts are already available. https://aiprobook.com/deep-learning-for-programmers/?release=0.15.0=cgroups

Numerical Linear Algebra for Programmers (Clojure book WIP) new release 0.5.0

2019-12-21 Thread Dragan Djuric
learn linear algebra with code examples; explore it on the CPU; run it on the GPU! integrate with Intel’s MKL and Nvidia’s cuBLAS performance libraries; learn the nuts and bolts; understand how to use it to solve practical problems… and much more! Available at

Re: Deep Learning for Programmers 0.11.0 (Clojure AI Book WIP)

2019-11-08 Thread Dragan Djuric
New release, 0.12.0 is available, with additional chapter on using DL for regression and predicting the prices of Boston real estate (a classic regression example). On Friday, October 25, 2019 at 9:53:26 AM UTC+2, Dragan Djuric wrote: > > New release:Deep Learning for Progr

Deep Learning for Programmers 0.11.0 (Clojure AI Book WIP)

2019-10-25 Thread Dragan Djuric
New release:Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, MKL-DNN, Java, and Clojure https://aiprobook.com/deep-learning-for-programmers + Chapter on Adaptive Learning Rates ** no

Numerical Linear Algebra for Programmers - book release 0.4.0

2019-10-18 Thread Dragan Djuric
Linear Algebra for Programmers: An Interactive Tutorial with GPU, CUDA, OpenCL, Java, and Clojure New Release 0.4.0 is available. + chapter on Orthogonalization and Least Squares A book written with

Deep Learning for Programmers - new release 0.10.0 (Clojure AI Book WIP)

2019-10-10 Thread Dragan Djuric
New release of the WIP book Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, MKL-DNN, Java, and Clojure https://aiprobook.com/deep-learning-for-programmers + Chapter on Momentum and Nesterov

Re: Numerical Linear Algebra for Programmers - new book release 0.3.0

2019-09-21 Thread Dragan Djuric
do with Artificial > Intelligence? > > On Wednesday, September 11, 2019 at 11:52:42 AM UTC-5, Dragan Djuric wrote: >> >> Numerical Linear Algebra for Programmers: An Interactive Tutorial with >> GPU, CUDA, OpenCL, MKL, Java and Clojure >> >> new release 0.3.0 is avai

Numerical Linear Algebra for Programmers - new book release 0.3.0

2019-09-11 Thread Dragan Djuric
Numerical Linear Algebra for Programmers: An Interactive Tutorial with GPU, CUDA, OpenCL, MKL, Java and Clojure new release 0.3.0 is available https://aiprobook.com/numerical-linear-algebra-for-programmers basically… - a book for programmers - interactive & dynamic - direct link from

Deep Learning for Programmers 0.8.0 (Clojure AI Book WIP)

2019-09-05 Thread Dragan Djuric
Learn Deep Learning by implementing it from scratch! New release 0.8.0 of the Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, MKL-DNN, Java, and Clojure is ready! Read the drafts as they are released.

[ANN] Deep Learning for Programmers - Clojure Book WIP Release 0.7.0

2019-08-22 Thread Dragan Djuric
Learn Deep Learning by implementing it from scratch! New release 0.7.0 of the Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, MKL-DNN, Java, and Clojure is ready! https://aiprobook.com/deep-learning-for-programmers?release=0.7.0=cgroups - explore on CPU - then

[ANN] Deep Learning for Programmers - New Release 0.6.0

2019-07-25 Thread Dragan Djuric
Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, MKL-DNN, Java, and Clojure basically… - the only DL book for programmers - interactive & dynamic - step-by-step implementation -

[ANN] Numerical Linear Algebra for Programmers - New Release 0.2.0

2019-07-22 Thread Dragan Djuric
Linear Algebra for Programmers: An Interactive Tutorial with GPU, CUDA, OpenCL, Java, and Clojure New Release 0.2.0 is available. A book written with programmers in mind: The only AI book that walks the

[ANN] Deep Learning for Programmers - release 0.5.0 (Clojure book WIP)

2019-07-08 Thread Dragan Djuric
Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, MKL-DNN, Java, and Clojure is the only DL book written with programmers in mind: the only AI book that walks the walk - complete, 100%

[Clojure Book WIP] Numerical Linear Algebra for Programmers

2019-06-25 Thread Dragan Djuric
Numerical Linear Algebra for Programmers: an Interactive Tutorial with GPU, CUDA, OpenCL, MKL, Java and Clojure initial release 0.1.0 https://aiprobook.com/numerical-linear-algebra-for-programmers basically… - a book for programmers - interactive & dynamic - direct link from theory

Re: [Clojure Book WIP] Deep Learning for Programmers: an Interactive Tutorial with CUDA, OpenCL, MKL-DNN, Java and Clojure, New Release 0.4.0

2019-06-18 Thread Dragan Djuric
Thank you for your support! On Tuesday, June 18, 2019 at 4:44:11 PM UTC+2, Daniel Carleton wrote: > > Subscribed! This is the book I've been waiting for. Was going to start > with your reading list, but now I'll start here. > > On Tue, Jun 18, 2019, 2:49 AM Dragan

[Clojure Book WIP] Deep Learning for Programmers: an Interactive Tutorial with CUDA, OpenCL, MKL-DNN, Java and Clojure, New Release 0.4.0

2019-06-18 Thread Dragan Djuric
basically… 1. the only DL book for programmers 2. interactive & dynamic 3. step-by-step implementation 4. incredible speed 5. yet, No C++ hell (!) 6. Nvidia GPU (CUDA and cuDNN) 7. AMD GPU (yes, OpenCL too!) 8. Intel & AMD CPU (MKL-DNN) 9. Clojure (magic!) 10. Java

Adopt a Neanderthal function as your own pet! Support my Clojure work on Patreon.

2018-10-18 Thread Dragan Djuric
https://dragan.rocks/articles/18/Patreon-Announcement-Adopt-a-Function You can become a proud sponsor of a pet Clojure function from one of Uncomplicate projects! I will add your name to the documentation data of a function, so you can follow the project and

Re: Using Clojure for public facing system in a bank - code security scanning - any luck?

2018-04-15 Thread Dragan Djuric
Hi all. Very interesting thread! I guess that not many Clojure developers are in this situation, but I hope many more will be; that would mean that Clojure got its foot in the door of the enterprise. Gregg, I need a little clarification on the last thing you mentioned: Is a dependency treated

Re: Cases of `RT/canSeq`

2017-05-19 Thread Dragan Djuric
To me it looks like a leftover from some Clojure pre-history where it made functional sense. On Friday, May 19, 2017 at 6:54:32 PM UTC+2, Tianxiang Xiong wrote: > > That seems unlikely to be the reason here. ¯\_(ツ)_/¯ > > As you said, the `null` check should come first if performance is the >

Re: slackpocalypse?

2017-05-18 Thread Dragan Djuric
Southeast Europe. On Thu, May 18, 2017 at 10:45 PM Gregg Reynolds <d...@mobileink.com> wrote: > > > On May 18, 2017 3:40 PM, "Dragan Djuric" <draga...@gmail.com> wrote: > > It works for me as always. > > > hmm, maybe it's a hiccup. where

Re: slackpocalypse?

2017-05-18 Thread Dragan Djuric
It works for me as always. On Thursday, May 18, 2017 at 10:34:33 PM UTC+2, Gregg Reynolds wrote: > > > > On May 18, 2017 3:32 PM, "Jason Stewart" > wrote: > > I'm experiencing the same thing, while I am able to connect with my other > slack teams. > > > this is not

Re: How to Create Clojure `defn` Functions automatically?

2017-05-11 Thread Dragan Djuric
e to the combination of eval and memoization. > > > > On Thu, May 11, 2017 at 2:55 AM, Dragan Djuric <drag...@gmail.com > > wrote: > >> What's wrong with (foo :able) => "Adelicious!" and (:able foo) => >> "Adelicious!"? >> >>

Re: How to Create Clojure `defn` Functions automatically?

2017-05-11 Thread Dragan Djuric
What's wrong with (foo :able) => "Adelicious!" and (:able foo) => "Adelicious!"? On Thursday, May 11, 2017 at 9:20:19 AM UTC+2, Alan Thompson wrote: > > A recent question on StackOverflow raised the question of the best way to > automatically generate functions. Suppose you want to automate the
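
A minimal sketch of the point above: in plain Clojure a map already behaves as a function of its keys, and a keyword looks itself up in a map, so no generated defns are needed for this kind of lookup. The map and values here are just illustrative.

(def foo {:able "Adelicious!"
          :baker "Barbecued!"})

(foo :able)            ;; => "Adelicious!"  - the map is invoked as a function of a key
(:able foo)            ;; => "Adelicious!"  - the keyword looks itself up in the map
(foo :charlie :none)   ;; => :none          - maps also take an optional default value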

Re: [ANN] Neanderthal 0.9.0 with major improvements

2017-04-28 Thread Dragan Djuric
Version 0.10.0 is in clojars. On Friday, March 31, 2017 at 4:39:35 PM UTC+2, Dragan Djuric wrote: > > More details in the announcement blog post: > http://dragan.rocks/articles/17/Neanderthal-090-released-Clojure-high-performance-computing > -- You received this message

[ANN] ClojureCUDA: a Clojure library for CUDA GPU computing

2017-04-24 Thread Dragan Djuric
I'll write more in an introductory blog post in a day or two. Until then, there is a website http://clojurecuda.uncomplicate.org that has the details and documentation. It is similar to ClojureCL (http://clojurecl.uncomplicate.org), but targets CUDA and Nvidia GPUs specifically. The

[ANN] Neanderthal 0.9.0 with major improvements

2017-03-31 Thread Dragan Djuric
More details in the announcement blog post: http://dragan.rocks/articles/17/Neanderthal-090-released-Clojure-high-performance-computing -- You received this message because you are subscribed to the Google Groups "Clojure" group. To post to this group, send email to clojure@googlegroups.com

Re: off-topic: stackof developer survey

2017-03-25 Thread Dragan Djuric
der perspective and would like to see employers committed to Clojure > and looking to engage with practitioners.+ > > > On Sunday, March 26, 2017 at 4:49:45 AM UTC+11, Dragan Djuric wrote: >> >> >> Isn't it advantageous in some sense to have access to stuff that your >>

Re: off-topic: stackof developer survey

2017-03-25 Thread Dragan Djuric
Clojure offers full support for GPU computing. See http://clojurecl.uncomplicate.org; as far as I know, Python doesn't have such well-integrated GPU programming. It also supports full high-performance CPU acceleration. Also, although Neanderthal (http://neanderthal.uncomplicate.org) is not yet

Re: off-topic: stackof developer survey

2017-03-25 Thread Dragan Djuric
But why is it bad news if your competition doesn't use the best tool available (if it is true, of course)? I consider it a competitive advantage. On the other hand, it is perfectly clear why everyone uses Python for ML and (almost) nobody uses Clojure: 1) All serious literature is in Python.

Re: The major upgrade to Neanderthal (the matrix library) will be released soon, with lots of new functionality.

2017-03-22 Thread Dragan Djuric
Great! Please follow up with feedback when you do! On Wednesday, March 22, 2017 at 2:55:57 PM UTC+1, Christian Weilbach wrote: > > Am 22.03.2017 um 02:41 schrieb Dragan Djuric: > > More details > > at: http://dragan.rocks/articles/17/Neanderthal-090-is-around-the-cor

The major upgrade to Neanderthal (the matrix library) will be released soon, with lots of new functionality.

2017-03-21 Thread Dragan Djuric
More details at: http://dragan.rocks/articles/17/Neanderthal-090-is-around-the-corner -- You received this message because you are subscribed to the Google Groups "Clojure" group. To post to this group, send email to clojure@googlegroups.com Note that posts from new members are moderated -

Re: structuring parallel code

2017-01-31 Thread Dragan Djuric
1) When you work with numerics, you have to take into account that numerical operations are typically order(s) of magnitude faster and consume far fewer resources per element than any of those concurrency mechanisms. 2) It is important whether the problem you're working on is parallelizable in
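
A small, hedged illustration of point 1, assuming criterium is on the classpath; the exact numbers are machine-dependent, and the only point is the orders-of-magnitude gap between a primitive numeric operation and the bookkeeping of a concurrency mechanism.

(require '[criterium.core :refer [quick-bench]])

;; a single primitive floating-point addition
(quick-bench (+ 1.0 2.0))

;; the same addition dispatched through a concurrency mechanism;
;; thread hand-off and synchronization dominate the arithmetic
(quick-bench @(future (+ 1.0 2.0)))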

Re: Slides for my talk at EuroClojure 2016

2016-10-23 Thread Dragan Djuric
framework. But as I recall, it also used to be in > the order of minutes. Will see if I can finish the upgrade and put it > online. > > On Sunday, October 23, 2016 at 8:45:47 PM UTC+2, Dragan Djuric wrote: > >> Are those hierarchical models? I also suppose the variables are &g

Re: Slides for my talk at EuroClojure 2016

2016-10-23 Thread Dragan Djuric
l phenotype. Don't think I would try that in Anglican :-). > > > On Sunday, October 23, 2016 at 6:06:49 PM UTC+2, Dragan Djuric wrote: >> >> Thanks. I know about Anglican, but it is not even in the same category, >> other than being Bayesian. Anglican also has MCMC, but

Re: Slides for my talk at EuroClojure 2016

2016-10-23 Thread Dragan Djuric
l network classification to > speed things up. Fun stuff.) > > On Thursday, October 20, 2016 at 11:38:25 PM UTC+2, Dragan Djuric wrote: >> >> Hi all, I posted slides for my upcoming EuroClojure talk, so you can >> enjoy the talk without having to take notes: &

Re: Slides for my talk at EuroClojure 2016

2016-10-20 Thread Dragan Djuric
org/ > > > > "If you're not annoying somebody, you're not really alive." > > -- Margaret Atwood > > > > > > > > On 10/20/16, 3:37 PM, "Dragan Djuric" <clo...@googlegroups.com > on behalf of > > drag...@gmail.com > wrot

Re: Slides for my talk at EuroClojure 2016

2016-10-20 Thread Dragan Djuric
really alive." > -- Margaret Atwood > > > > On 10/20/16, 3:37 PM, "Dragan Djuric" <clojure@googlegroups.com on behalf > of draga...@gmail.com> wrote: > > > > Hmm, what browser do you use? The link that I'm being shown in the browser > is http:

Re: Slides for my talk at EuroClojure 2016

2016-10-20 Thread Dragan Djuric
takes you to a 404: > http://talks/EuroClojure2016/clojure-is-not-afraid-of-the-gpu.html > > On 20 October 2016 at 22:38, Dragan Djuric <drag...@gmail.com > > wrote: > > Hi all, I posted slides for my upcoming EuroClojure talk, so you can > enjoy > >

Slides for my talk at EuroClojure 2016

2016-10-20 Thread Dragan Djuric
Hi all, I posted slides for my upcoming EuroClojure talk, so you can enjoy the talk without having to take notes: http://dragan.rocks/articles/16/Clojure-is-not-afraid-of-the-GPU-slides-EuroClojure -- You received this message because you are subscribed to the Google Groups "Clojure" group. To

Re: Clojure with Tensorflow, Torch etc (call for participation, brainstorming etc)

2016-10-20 Thread Dragan Djuric
Please note that this is really a TOY nn project, actually a direct translation of gigasquid's nn hello-world. It is ridiculous to compare it with a library that delegates nn work to cuDNN. And it is a really old version of affe, which ddosic has improved in the meantime, but has not yet

Re: [ANN] CIDER 0.14 (Berlin) released

2016-10-14 Thread Dragan Djuric
Thank you for the great tool. Happy birthday! On Friday, October 14, 2016 at 9:08:50 AM UTC+2, Bozhidar Batsov wrote: > > Hey everyone, > > Yesterday I released CIDER 0.14 (Berlin). It's a rather small CIDER > update, mostly focusing on bug fixes. Thanks to everyone who contributed to > this

Re: Neanderthal 0.8.0 released - includes Windows build

2016-10-10 Thread Dragan Djuric
nnot find what it is. > However, some interesting routines like axpy (I've no experience on > BLAS/LAPACK) are fresh (positively) to me :-) > > Thank you. > > On Monday, October 10, 2016 at 2:10:27 PM UTC+9, Dragan Djuric wrote: >> >> Thank you for reporting b

Re: Neanderthal 0.8.0 released - includes Windows build

2016-10-09 Thread Dragan Djuric
mple linear algebra code of my own such as > inverting matrix, however, I cannot figure > out how to do this with neanderthal. Are there any docs on this simple > subjects? > > On Monday, October 10, 2016 at 1:40:19 AM UTC+9, Dragan Djuric wrote: >> >> Windows users

Neanderthal 0.8.0 released - includes Windows build

2016-10-09 Thread Dragan Djuric
Windows users should not feel left out from high-performance computing experience in Clojure. Neanderthal now comes ready for Linux, OS X, AND Windows, on all CPUs and AMD, Nvidia, and Intel GPUs! Greatest thanks go to Dejan Dosic (https://github.com/ddosic), who wrestled Windows peculiarities

Re: Clojure with Tensorflow, Torch etc (call for participation, brainstorming etc)

2016-10-07 Thread Dragan Djuric
> > > If people build on the 'wrong' api, thats a good problem to have. The > field is so in flux anyway. The problem can also be mitigated through > minimalism in what is released in the beginning. > > This. -- You received this message because you are subscribed to the Google Groups

Re: Clojure with Tensorflow, Torch etc (call for participation, brainstorming etc)

2016-10-07 Thread Dragan Djuric
Hi Mike, Thanks for the update. > Opening the source is not entirely my decision, this is a collaboration > with the Thinktopic folks (Jeff Rose et al.). I'm personally in favour of > being pretty open about this stuff but I do think that it would be a > mistake if people build too much

Re: Clojure with Tensorflow, Torch etc (call for participation, brainstorming etc)

2016-10-07 Thread Dragan Djuric
Hi Kovas, > One question: > > Is it possible to feed Neanderthal's matrix representation (the underlying > bytes) into one of these other libraries, to obtain > computations Neanderthal doesn't support? > There are two parts to that question, I think: 1) How can you make Neanderthal work

Re: Clojure with Tensorflow, Torch etc (call for participation, brainstorming etc)

2016-10-06 Thread Dragan Djuric
Just a small addition: I looked at BidMat's code, and even at the JNI/C level they are doing some critical things that work on a small scale but break unexpectedly when the JVM needs to rearrange memory, and may also trigger copying. On Thursday, October 6, 2016 at 10:46:04 PM UTC+2, Dragan Djuric

Re: Clojure with Tensorflow, Torch etc (call for participation, brainstorming etc)

2016-10-06 Thread Dragan Djuric
Hi Kovas, > By the way, I'd love to see matrix/tensor benchmarks of Neanderthal and > Vectorz vs ND4J, MXNet's NDArray, and BidMat.. :) > I don't have exact numbers, but will try to give you a few pointers to help you if you decide to investigate this further: 0. Neanderthal's scope is

Re: Clojure with Tensorflow, Torch etc (call for participation, brainstorming etc)

2016-10-06 Thread Dragan Djuric
Hey Mike, A friend asked me if I know of any good (usable) deep learning libraries for Clojure. I remembered you had some earlier neural networks library that was at least OK for experimenting, but seems abandoned for your current work in a similar domain. A bit of digging led me to this

Re: Neanderthal (fast CPU & GPU matrix library for Clojure) will also support Windows out of the box

2016-10-05 Thread Dragan Djuric
e file is in the libnd4j > repository: > > https://github.com/deeplearning4j/libnd4j > > Do you have some cooperation with the dl4j people? > > Cheers, > Christian > > On 04.10.2016 17:53, Dragan Djuric wrote: > > Hi all, > > > > I've just s

Neanderthal (fast CPU & GPU matrix library for Clojure) will also support Windows out of the box

2016-10-04 Thread Dragan Djuric
Hi all, I've just spent some time building ATLAS for Windows, and created a Windows binary of the current snapshot version. There seems to be no performance tax on Windows, or at least it is not large. I wasn't able to compare it on the same machine, but on my i7 laptop (2.4 GHz) with

Re: parallel sequence side-effect processor

2016-09-24 Thread Dragan Djuric
(quick-bench (foldmap (double-fn +) 0 side-effect-4 sv1 sv2))) Execution time mean : 16.436371 µs On Saturday, September 24, 2016 at 8:07:21 PM UTC+2, Dragan Djuric wrote: > > Francis, > > The times you got also heavily depend on the actual side-effect function, > which in th

Re: parallel sequence side-effect processor

2016-09-24 Thread Dragan Djuric
Francis, The times you got also heavily depend on the actual side-effect function, which in this case is much faster when called with one arg instead of with varargs, which fluokitten needs here. If we give fluokitten a function that does not create a sequence for multiple arguments, it is
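
To make the one-arg vs. varargs point concrete, here is a sketch with hypothetical functions (not code from the thread): the rest-argument version has to build a seq of its arguments on every call, which the fixed-arity version avoids.

(require '[criterium.core :refer [quick-bench]])

(defn add-fixed [^double x ^double y]   ; fixed arity: arguments passed directly
  (+ x y))

(defn add-varargs [& xs]                ; varargs: every call allocates an args seq
  (reduce + 0.0 xs))

(quick-bench (add-fixed 1.0 2.0))
(quick-bench (add-varargs 1.0 2.0))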

Re: parallel sequence side-effect processor

2016-09-24 Thread Dragan Djuric
That's why I still think that in this particular case, there is no new sequence creation (by fluokitten). Yes, it does call first/next, but those do not require (significant) new memory or copying. They reuse the memory of the underlying vectors, if I understood that well. Or there is something

Re: parallel sequence side-effect processor

2016-09-24 Thread Dragan Djuric
Now I am on the REPL, and the solution is straightforward: (foldmap op nil println [1 2 3] [4 5 6]) gives: 1 4 2 5 3 6 nil The first function is a folding function. In this case we can use op, a monoid operation. Since nil is also a monoid, everything will be folded to nil. The second part
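
For completeness, a sketch of the same REPL session with the require included, assuming fluokitten's public functions live in uncomplicate.fluokitten.core as in the library's documentation:

(require '[uncomplicate.fluokitten.core :refer [foldmap op]])

;; op is the monoid operation; since nil is a monoid, everything folds to nil,
;; while println is applied to the corresponding elements of both vectors
(foldmap op nil println [1 2 3] [4 5 6])
;; prints 1 4, then 2 5, then 3 6, and returns nil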

Re: parallel sequence side-effect processor

2016-09-23 Thread Dragan Djuric
access to the repl :) On Saturday, September 24, 2016 at 1:44:10 AM UTC+2, Dragan Djuric wrote: > > A couple of things: > 1. How fold/foldmap and any other function works, depends on the actual > type. For example, if you look at > https://github.com/uncomplicate/neanderthal/

Re: parallel sequence side-effect processor

2016-09-23 Thread Dragan Djuric
t;> iteration so you can get all the items at an index together. >> >> The ideal for multi-collection would probably be something that >> internally looks like clojure.core/sequence but doesn't accumulate the >> results. (Unfortunately some of the classes necessary t

Re: parallel sequence side-effect processor

2016-09-23 Thread Dragan Djuric
w Haskell does rfold vs lfold, but one > of those does create allocations, they may be in the form of closures, but > they are allocations none the less. So does fold use iterators or something? > > Timothy > > On Fri, Sep 23, 2016 at 4:23 PM, Dragan Djuric <drag...@gmail.com

Re: parallel sequence side-effect processor

2016-09-23 Thread Dragan Djuric
AND the folding would be optimized per the type of a. On Friday, September 23, 2016 at 10:56:00 PM UTC+2, tbc++ wrote: > > How is fluokitten's fold any better than using seqs like (map f a b) > would? Both create intermediate collections. > > On Fri, Sep 23, 2016 at 11:40 AM, Drag

Re: parallel sequence side-effect processor

2016-09-23 Thread Dragan Djuric
If you do not insist on vanilla clojure, but can use a library, fold from fluokitten might enable you to do this. It is similar to reduce, but accepts multiple arguments. Give it a vararg folding function that prints what you need and ignores the first parameter, and you'd get what you asked
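
A rough sketch of that suggestion, with hypothetical names and the namespace assumed to be uncomplicate.fluokitten.core; the exact arities are as described in the message above, so treat this as illustrative rather than definitive:

(require '[uncomplicate.fluokitten.core :refer [fold]])

;; a vararg folding function that ignores the accumulator,
;; prints the items at each position, and keeps folding to nil
(defn print-step [acc & items]
  (apply println items)
  acc)

(fold print-step nil [1 2 3] [4 5 6])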

Re: Preparing a proposal for EuroClojure presentation about Clojure and GPU, high-performance computing - suggestions welcome

2016-07-14 Thread Dragan Djuric
noted On Thursday, July 14, 2016, Ashish Negi wrote: > Not any specific request.. but > i would be highly interested in showing ML in clojure landscape.. > showing something end to end.. debugging and optimization tips would be > great. > > I will be waiting.. > > --

Re: Preparing a proposal for EuroClojure presentation about Clojure and GPU, high-performance computing - suggestions welcome

2016-07-13 Thread Dragan Djuric
l>, > > but as far as I know, none of it in Clojure. > > > It would be wonderful to see a minimal example of how to take a minimal GP > system (I'd be happy to provide code) and to exploit GPUs to do bigger runs > more quickly. > > > -Lee > > On Wednesday,

Preparing a proposal for EuroClojure presentation about Clojure and GPU, high-performance computing - suggestions welcome

2016-07-13 Thread Dragan Djuric
I'm preparing a presentation proposal for EuroClojure 2016 about Clojure and GPU computing, high-performance computing, data analysis, and machine learning. If you are interested in that area, I am open to suggestions about specific stuff that you would like to be covered (regardless of

Re: [ANN] Neanderthal 0.6.0: new support for AMD, Nvidia, and Intel GPUs on Linux, Windows and OS X (fast matrix library)

2016-07-07 Thread Dragan Djuric
Neanderthal 0.7.0 released: Nvidia, AMD, and Intel GPUs on Linux, Windows, and OS X http://neanderthal.uncomplicate.org#matrix #Clojure #GPU #GPGPU On Monday, May 23, 2016 at 9:57:00 PM UTC+2, Dragan Djuric wrote: > > This is a major release of Neanderthal, a fast native &

[ANN] Neanderthal 0.6.0: new support for AMD, Nvidia, and Intel GPUs on Linux, Windows and OS X (fast matrix library)

2016-05-23 Thread Dragan Djuric
This is a major release of Neanderthal, a fast native & GPU matrix library. In this release, the spotlight is on the new GPU engine, which: * Works on all three major hardware platforms: AMD, Nvidia, and Intel * Works on all three major operating systems: Linux, Windows, and OS X * Is even faster, so

Re: [ANN] ClojureCL now supports Linux, Windows and OS X (GPGPU and high performance parallel computing)

2016-05-20 Thread Dragan Djuric
FT (on audio) in parallel. > > > On Wednesday, May 18, 2016 at 1:37:50 AM UTC+2, Dragan Djuric wrote: >> >> http://clojurecl.uncomplicate.org >> >> >> https://www.reddit.com/r/Clojure/comments/4jtqhm/clojurecl_gpu_programming_now_works_on_linux/ >> >> Clojure

[ANN] ClojureCL now supports Linux, Windows and OS X (GPGPU and high performance parallel computing)

2016-05-17 Thread Dragan Djuric
http://clojurecl.uncomplicate.org https://www.reddit.com/r/Clojure/comments/4jtqhm/clojurecl_gpu_programming_now_works_on_linux/ ClojureCL is a library for OpenCL high-performance numerical computing that supports GPU and CPU optimizations. ClojureCL supports OpenCL 2.0 and 1.2 standards.

Re: Porting Clojure to Native Platforms

2016-04-26 Thread Dragan Djuric
On Tuesday, April 26, 2016 at 5:53:42 AM UTC+2, puzzler wrote: > > On Mon, Apr 25, 2016 at 1:50 PM, Timothy Baldridge > wrote: > >> As someone who has spent a fair amount of time playing around with such >> things, I'd have to say people vastly misjudge the raw speed you

Re: [ANN] Quil 2.4.0 and improved live editor

2016-03-24 Thread Dragan Djuric
Thank you for keeping this fantastic library alive :) On Thursday, March 24, 2016 at 9:03:42 PM UTC+1, Nikita Beloglazov wrote: > > Happy to announce Quil 2.4.0 release. > > Quil is a Clojure/ClojureScript library for creating interactive drawings > and animations. > > The release available on

Re: Similar lisps and emacs reimplementations?

2016-03-19 Thread Dragan Djuric
I understand your position (you have bosses that you have to answer to), but I would like to thank you for reminding me to thank Rich for choosing the license that is unfriendly to software patent litigators. Here is how I look at this: there is wonderful free software that lots of people

Re: New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-16 Thread Dragan Djuric
Christopher, Bobby, Mars0i, I agree with you! core.matrix works fine for what you need, and it is quite sane and wise to continue happily using it. Whether you want to touch Neanderthal or not is up to you, and it is the same to me one way or the other - I earn exactly $0 if you do and lose the

Re: Neanderthal 0.5.0 - much easier installation and out of the box Mac OS X support

2016-03-15 Thread Dragan Djuric
Forgot to add the link: http://neanderthal.uncomplicate.org/articles/getting_started.html On Tuesday, March 15, 2016 at 6:26:29 PM UTC+1, Dragan Djuric wrote: > > Most notable new features: > >- Streamlined dependencies: no longer need 2 dependencies in project >files.

Re: [ANN] ClojureCL - OpenCL 2.0 Clojure library (GPGPU and high performance parallel computing)

2016-03-15 Thread Dragan Djuric
A few minor changes so Neanderthal can work better. New version of ClojureCL 0.5.0 has been released to Clojars. http://clojurecl.uncomplicate.org On Tuesday, March 15, 2016 at 1:32:34 AM UTC+1, Dragan Djuric wrote: > > New version of ClojureCL 0.4.0 has been released to Clojars C

Neanderthal 0.5.0 - much easier installation and out of the box Mac OS X support

2016-03-15 Thread Dragan Djuric
Most notable new features:
- Streamlined dependencies: no longer need 2 dependencies in project files. The dependency on uncomplicate/neanderthal is enough
- Comes with a Mac OS X build out of the box. No need even for external ATLAS.
- release and with-release moved from
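
A hedged sketch of what release/with-release management looks like in practice, assuming with-release ends up in uncomplicate.commons.core and that dv and sum are the native vector constructor and sum operation from Neanderthal (names taken from the project's docs, not from this message):

(require '[uncomplicate.commons.core :refer [with-release]]
         '[uncomplicate.neanderthal.core :refer [sum]]
         '[uncomplicate.neanderthal.native :refer [dv]])

;; bind a native vector, use it, and release its off-heap memory when the body exits
(with-release [x (dv [1 2 3])]
  (sum x))
;; => 6.0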

Re: [ANN] ClojureCL - OpenCL 2.0 Clojure library (GPGPU and high performance parallel computing)

2016-03-14 Thread Dragan Djuric
New version of ClojureCL 0.4.0 has been released to Clojars. http://clojurecl.uncomplicate.org On Wednesday, October 21, 2015 at 7:18:27 PM UTC+2, Dragan Djuric wrote: > > New version of ClojureCL 0.3.0 is out in Clojars. > http://clojurecl.uncomplicate.org > > On Wed

Re: New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-14 Thread Dragan Djuric
At least for the JNI part - there are many Java libraries that generate JNI, but my experience is that it is easier to just write JNI by hand (it is simple if you know what you are doing) than to learn to use one of those, usually poorly documented, tools. As for code generation - OpenCL as a

Re: New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-14 Thread Dragan Djuric
> > > Its a fact that the JVM is not the state of the art for numerical > computing, including big swaths of data science/machine learning. There is > 0 chance of this changing until at least Panama and Valhalla come to > fruition (5 year timeline). > I agree, but I would not dismiss even

Re: New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-14 Thread Dragan Djuric
Thank you for the encouragement, Sergey. As I mentioned in one of the articles, a decent vectorized/GPU support is not a solution on its own. It is a foundation for writing your own custom GPU or SSE algorithms. For that, you'll have to drop to the native level for some parts of the code,

Re: New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-14 Thread Dragan Djuric
On Monday, March 14, 2016 at 4:56:19 PM UTC+1, tbc++ wrote: > > Just a side comment, Dragan, if you don't want to be compared against some > other tech, it might be wise to not make the subtitle of a release "X times > faster than Y". Make the defining feature of a release that it's better >

Re: New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-14 Thread Dragan Djuric
> > > 2) I disagree with. Most real world data science is about data > manipulation and transformation, not raw computation. 1% of people need to > optimise the hell out of a specific algorithm for a specific use case, 99% > just want convenient tools and the ability to get an answer "fast

Re: New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-14 Thread Dragan Djuric
> > >> > There is a set of BLAS-like API functions in core.matrix already. See: > https://github.com/mikera/core.matrix/blob/develop/src/main/clojure/clojure/core/matrix/blas.cljc > GitHub history says they were added 7 days ago. Nevermind that they just delegate, so the only BLAS-y thing is

Re: New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-14 Thread Dragan Djuric
> > > Please remember that core.matrix is primarily intended as an API, not a > matrix implementation itself. The point is that different matrix > implementations can implement the standard protocols, and users and library > writers can then code to a standard API while maintaining flexibility

Re: New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-13 Thread Dragan Djuric
ATLAS or NumPy + ATLAS or R + ATLAS guide for instructions. Many people did that installation, so I doubt it'd be a real obstacle for you. > On Sunday, 13 March 2016 23:34:23 UTC+8, Dragan Djuric wrote: >> >> I am soon going to release a new version of Neanderthal. >> >> I re

New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-13 Thread Dragan Djuric
I am soon going to release a new version of Neanderthal. I reinstalled ATLAS, so I decided to also update benchmarks with threaded ATLAS bindings. The results are still the same for doubles on one core: 10x faster than Vectorz and 2x faster than JBlas. The page now covers some more cases:

Re: Is there any desire or need for a Clojure DataFrame? (X-POST from Numerical Clojure mailing list)

2016-03-10 Thread Dragan Djuric
> > This is already working well for the array programming APIs (it's easy to > mix and match Clojure data structures, Vectorz Java-based arrays, GPU > backed arrays in computations). > While we could agree to some extent on the other parts of your post, the GPU part is *NOT* true: I

Re: [ANN] Fluokitten - Category theory concepts in Clojure - Functors, Applicatives, Monads, Monoids and more

2016-03-10 Thread Dragan Djuric
tion of both > fluokitten and morph and understood it. The functionality certainly > seems useful. > > Phil > > Dragan Djuric <drag...@gmail.com > writes: > > > If Clojure has all of the Haskell's type features, I guess there would > be > > only one Clojure

Re: Any chance of core.logic getting extended with probKanren?

2015-11-25 Thread Dragan Djuric
I am working on something related to probabilistic programming/inference/learning. Not yet ready for use, but I hope to get it there next year. Although some key building blocks are the same (MCMC), I really cannot see how such things could be integrated into core.logic, and even why. So, I

Re: Any chance of core.logic getting extended with probKanren?

2015-11-25 Thread Dragan Djuric
but probability can be > thought of as a continuous extension of logic where 0 = False and 1 = > True (and .5 = "half true", etc.). > > This extension is unique given a few natural conditions. > > Carl > > > On Wed, Nov 25, 2015 at 5:49 AM, Dragan Djuri

Re: Any chance of core.logic getting extended with probKanren?

2015-11-25 Thread Dragan Djuric
"even why". Perhaps I misinterpreted ... > > > On Wed, Nov 25, 2015 at 9:20 AM, Dragan Djuric <drag...@gmail.com > > wrote: > > I know well how probabilistic logic works. It is a superset of the > > true/false logic, and goes much beyond "half true&quo

Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-10-25 Thread Dragan Djuric
WRT their documentation, I do not think it means much for me now, since I do not need that library and its functionality. So I guess the integration depends on whether 1) it is technically viable 2) someone needs it. I will support whoever wants to take on that task, but have no time and need

Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-10-23 Thread Dragan Djuric
Neanderthal 0.4.0 has just been released with OpenCL-based GPU support and pluggable engines. http://neanderthal.uncomplicate.org On Tuesday, June 23, 2015 at 12:39:40 AM UTC+2, Dragan Djuric wrote: > > As it is a *sparse matrix*, C++ library unavailable on JVM, I don't > consider it

Re: [ANN] ClojureCL - OpenCL 2.0 Clojure library (GPGPU and high performance parallel computing)

2015-10-21 Thread Dragan Djuric
New version of ClojureCL 0.3.0 is out in Clojars. http://clojurecl.uncomplicate.org On Wednesday, June 17, 2015 at 4:59:02 PM UTC+2, Dragan Djuric wrote: > > Certainly, but that is not a priority, since I do not use (nor need) > OpenGL myself. I would be very interested to include cont

Re: The Reading List

2015-10-14 Thread Dragan Djuric
Is there a living person who has read all, or even a majority, of these books? If there is (although I doubt it), what does he/she think about them? I am genuinely interested. In my opinion, there is too much Kool-Aid magic in the list, and too little more tangible stuff. It's not that the books are

[ANN] Neanderthal 0.3.0 with GPU matrix operations released

2015-08-07 Thread Dragan Djuric
New and noteworthy: 1. GPU engine now available (OpenCL 2.0 required, works superfast on AMD Radeons and FirePros) 2. Support for pluggable engines and data structures (so a pure Java engine would be relatively easy to add) *** New, very detailed tutorials with benchmarks available *** Discuss at

Re: [ANN] Clojure 1.8.0-alpha4

2015-08-05 Thread Dragan Djuric
Trying to compile an application that uses the ztellman/vertigo 1.3.0 library. It worked with Clojure 1.7.0, but Clojure 1.8.0-alpha4 raises the following exception in the Clojure compiler: java.lang.NoClassDefFoundError:
