Numerical Linear Algebra for Programmers - Clojure Book - New Release 0.10.0

2020-09-14 Thread Dragan Djuric
https://aiprobook.com/numerical-linear-algebra-for-programmers/

with a new chapter, Hello World

Numerical Linear Algebra for Programmers: An Interactive Tutorial with GPU, 
CUDA, OpenCL, MKL, Java, and Clojure is...

...basically…

- a book for programmers
- interactive & dynamic
- a direct link from theory to implementation
- incredible performance
- Intel & AMD CPUs (MKL)
- Nvidia GPUs (CUDA and cuBLAS)
- AMD GPUs (yes, OpenCL too!)
- Clojure (it’s magic!)
- Java Virtual Machine (without Java boilerplate!)
- complete source code
- beautiful typesetting (see the sample chapters)



Re: Cognitect joins Nubank!

2020-07-23 Thread Dragan Djuric
Congratulations, Rich, Stu, Alex, and the rest of the team! 

I hope that this is the milestone that will finally convince the broader 
programming community that Clojure is here to stay, healthy and growing! 
Well deserved!

On Thursday, July 23, 2020 at 2:04:49 PM UTC+2, Rich Hickey wrote:
>
> We are excited to announce that Cognitect is joining the Nubank family of 
> companies: 
>
> https://cognitect.com/blog/2020/07/23/Cognitect-Joins-Nubank 
>
> Clojure remains independent, and development and stewardship will continue 
> as it has, with more resources and a much larger sponsoring organization in 
> Nubank. Rich, Stu, Alex et al are all on board. We are dedicated to growing 
> the Clojure community, and helping companies adopt and succeed with 
> Clojure. 
>
> Rich



Re: Deep Learning for Programmers book 0.16.0: new chapter on Multi-class classification and metrics

2020-05-10 Thread Dragan Djuric
It makes the book possible.

On Sunday, May 10, 2020 at 8:39:37 PM UTC+2, Ali M wrote:
>
> Why a subscription model for a book? Wouldn't that make the book very 
> expensive?
>
>
>
> On Thursday, April 2, 2020 at 6:06:33 AM UTC-4, Dragan Djuric wrote:
>>
>> Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, 
>> DNNL, Java, and Clojure
>> version 0.16.0 is available at 
>> https://aiprobook.com/deep-learning-for-programmers?release=1.16.0=cgroups
>>
>> Why?
>>
>> ++ Clojure!
>> ++ For Programmers!
>>
>> + the only AI book that walks the walk
>> + complete, 100% executable code
>> + step-by-step instructions
>> + full path from theory to implementation in actual code
>> + superfast implementation
>>
>> Other books are math-only monographs for academics, or are written for 
>> non-technical readers.
>>
>



Deep Learning for Programmers book 0.16.0: new chapter on Multi-class classification and metrics

2020-04-02 Thread Dragan Djuric
Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, 
DNNL, Java, and Clojure
version 0.16.0 is available at 
https://aiprobook.com/deep-learning-for-programmers?release=1.16.0=cgroups

Why?

++ Clojure!
++ For Programmers!

+ the only AI book that walks the walk
+ complete, 100% executable code
+ step-by-step instructions
+ full path from theory to implementation in actual code
+ superfast implementation

Other books are math-only monographs for academics, or are written for 
non-technical readers.



Deep Learning for Programmers - Parts 2,3,4, and 5 completed

2020-01-27 Thread Dragan Djuric
With the final chapter in the new release, Parts 2, 3, 4, and 5 are 
complete, and the book runs at 250 pages so far. I hope it won't be more 
than 350 when finished :)

Drafts are already available.

https://aiprobook.com/deep-learning-for-programmers/?release=0.15.0=cgroups 


Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, 
DNNL, Java, and Clojure (part of the Interactive Programming for Artificial 
Intelligence series)


WHY?
basically…

- this is the only DL book for programmers
- interactive & dynamic
- step-by-step implementation
- incredible performance, yet no C++ hell (!)
- Intel & AMD CPUs (DNNL)
- Nvidia GPUs (CUDA and cuDNN)
- AMD GPUs (yes, OpenCL too!)
- Clojure (it’s magic!)
- Java Virtual Machine (without Java boilerplate!)
- complete source code
- beautiful typesetting (see the sample chapters below)

No middleman!
100% of the revenue goes towards my open-source work!



Numerical Linear Algebra for Programmers (Clojure book WIP) new release 0.5.0

2019-12-21 Thread Dragan Djuric
- learn linear algebra with code examples
- explore it on the CPU
- run it on the GPU!
- integrate with Intel’s MKL and Nvidia’s cuBLAS performance libraries
- learn the nuts and bolts
- understand how to use it to solve practical problems
- …and much more!

Available at https://aiprobook.com/numerical-linear-algebra-for-programmers 
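
For a quick hands-on taste of the CPU part, here is a minimal sketch with 
Neanderthal (the dge, dv, mv, mm, and dot functions below come from the 
uncomplicate.neanderthal library that the book uses; treat this as an 
illustration, not an excerpt from the book):

;; Assumes the uncomplicate.neanderthal library is on the classpath.
(require '[uncomplicate.neanderthal.core :refer [mv mm dot]]
         '[uncomplicate.neanderthal.native :refer [dge dv]])

;; A 2x3 double matrix (column-major source) and a 3-element vector.
(def a (dge 2 3 [1 2 3 4 5 6]))
(def x (dv 1 2 3))

(mv a x)                     ;; matrix-vector product => 2-element vector
(mm a (dge 3 2 (range 6)))   ;; matrix-matrix product => 2x2 matrix
(dot x (dv 10 20 30))        ;; vector dot product => 140.0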




Re: Deep Learning for Programmers 0.11.0 (Clojure AI Book WIP)

2019-11-08 Thread Dragan Djuric
New release 0.12.0 is available, with an additional chapter on using DL for 
regression: predicting the prices of Boston real estate (a classic 
regression example).

On Friday, October 25, 2019 at 9:53:26 AM UTC+2, Dragan Djuric wrote:
>
> New release: Deep Learning for Programmers: An Interactive Tutorial with 
> CUDA, OpenCL, MKL-DNN, Java, and Clojure
>
> https://aiprobook.com/deep-learning-for-programmers 
> <https://aiprobook.com/deep-learning-for-programmers?release=0.11.0=cgroups>
>
> + Chapter on Adaptive Learning Rates
>
> No middleman!
> 100% of the revenue goes towards my open-source work!
>
> This is the only DL book for programmers:
>
> - interactive & dynamic
> - step-by-step implementation
> - incredible performance, yet no C++ hell (!)
> - Intel & AMD CPUs (MKL-DNN)
> - Nvidia GPUs (CUDA and cuDNN)
> - AMD GPUs (yes, OpenCL too!)
> - Clojure (it’s magic!)
> - Java Virtual Machine (without Java boilerplate!)
> - complete source code
> - beautiful typesetting (see the sample chapters)
>
> Current status:
>
> ## Table of Contents
>
> ### Part 1: Getting Started
>
> 4-6 chapters (TO BE DETERMINED)
>
> ### Part 2: Inference ([AVAILABLE])
>
> #### Representing layers and connections ([AVAILABLE])
>
> #### Bias and activation function ([AVAILABLE])
>
> #### Fully connected inference layers ([AVAILABLE])
>
> #### Increasing performance with batch processing ([AVAILABLE])
>
> #### Sharing memory ([AVAILABLE])
>
> #### GPU computing with CUDA and OpenCL ([AVAILABLE])
>
> ### Part 3: Learning ([AVAILABLE])
>
> #### Gradient descent and backpropagation ([AVAILABLE])
>
> #### The forward pass ([AVAILABLE])
>
> #### The activation and its derivative ([AVAILABLE])
>
> #### The backward pass ([AVAILABLE])
>
> ### Part 4: A simple neural networks API ([AVAILABLE])
>
> #### Inference API ([AVAILABLE])
>
> #### Training API ([AVAILABLE])
>
> #### Initializing weights ([AVAILABLE])
>
> #### Regression: learning a known function ([AVAILABLE])
>
> ### Part 5: Training optimizations (IN PROGRESS)
>
> #### Weight decay ([AVAILABLE])
>
> #### Momentum and Nesterov momentum ([AVAILABLE])
>
> #### Adaptive learning rates ([AVAILABLE])
>
> #### Regression: Boston housing prices (SOON)
>
> #### Dropout (SOON)
>
> #### Stochastic gradient descent (SOON)
>
> #### Classification: IMDB sentiments (SOON)
>
> ### Part 6: Tensors (TO BE DETERMINED, BUT SOON ENOUGH)
>
> #### Tensors, Matrices, and ND-arrays (TBD)
>
> #### Tensors on the CPU with MKL-DNN (TBD)
>
> #### Tensors on the GPU with cuDNN (TBD)
>
> #### Tensor API (TBD)
>
> ### Part 7: Convolutional layers (TBD)
>
> 4-6 Chapters (TBD)
>
> ### Part 8: Recurrent networks (TBD)
>
> 4-6 Chapters (TBD)
>
>



Deep Learning for Programmers 0.11.0 (Clojure AI Book WIP)

2019-10-25 Thread Dragan Djuric
New release: Deep Learning for Programmers: An Interactive Tutorial with 
CUDA, OpenCL, MKL-DNN, Java, and Clojure

https://aiprobook.com/deep-learning-for-programmers 


+ Chapter on Adaptive Learning Rates

No middleman!
100% of the revenue goes towards my open-source work!

This is the only DL book for programmers:

- interactive & dynamic
- step-by-step implementation
- incredible performance, yet no C++ hell (!)
- Intel & AMD CPUs (MKL-DNN)
- Nvidia GPUs (CUDA and cuDNN)
- AMD GPUs (yes, OpenCL too!)
- Clojure (it’s magic!)
- Java Virtual Machine (without Java boilerplate!)
- complete source code
- beautiful typesetting (see the sample chapters)

Current status:

## Table of Contents

### Part 1: Getting Started

4-6 chapters (TO BE DETERMINED)

### Part 2: Inference ([AVAILABLE])

#### Representing layers and connections ([AVAILABLE])

#### Bias and activation function ([AVAILABLE])

#### Fully connected inference layers ([AVAILABLE])

#### Increasing performance with batch processing ([AVAILABLE])

#### Sharing memory ([AVAILABLE])

#### GPU computing with CUDA and OpenCL ([AVAILABLE])

### Part 3: Learning ([AVAILABLE])

#### Gradient descent and backpropagation ([AVAILABLE])

#### The forward pass ([AVAILABLE])

#### The activation and its derivative ([AVAILABLE])

#### The backward pass ([AVAILABLE])

### Part 4: A simple neural networks API ([AVAILABLE])

#### Inference API ([AVAILABLE])

#### Training API ([AVAILABLE])

#### Initializing weights ([AVAILABLE])

#### Regression: learning a known function ([AVAILABLE])

### Part 5: Training optimizations (IN PROGRESS)

#### Weight decay ([AVAILABLE])

#### Momentum and Nesterov momentum ([AVAILABLE])

#### Adaptive learning rates ([AVAILABLE])

#### Regression: Boston housing prices (SOON)

#### Dropout (SOON)

#### Stochastic gradient descent (SOON)

#### Classification: IMDB sentiments (SOON)

### Part 6: Tensors (TO BE DETERMINED, BUT SOON ENOUGH)

#### Tensors, Matrices, and ND-arrays (TBD)

#### Tensors on the CPU with MKL-DNN (TBD)

#### Tensors on the GPU with cuDNN (TBD)

#### Tensor API (TBD)

### Part 7: Convolutional layers (TBD)

4-6 Chapters (TBD)

### Part 8: Recurrent networks (TBD)

4-6 Chapters (TBD)
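
To give a flavor of the inference part without spoiling the book, here is a 
rough fully connected layer in plain Clojure (h = activation(Wx + b)); the 
names and numbers are only illustrative, and the book replaces these nested 
loops with Neanderthal, MKL-DNN, and cuDNN calls:

;; Plain-Clojure sketch of fully connected inference; not the book's API.
(defn dot-product [xs ys]
  (reduce + (map * xs ys)))

(defn fully-connected
  "weights is a seq of rows, bias a seq of numbers; activation is applied
  element-wise to W*x + b."
  [activation weights bias x]
  (mapv (fn [row b] (activation (+ (dot-product row x) b)))
        weights bias))

(defn sigmoid [z]
  (/ 1.0 (+ 1.0 (Math/exp (- z)))))

;; Two inputs, three outputs.
(fully-connected sigmoid
                 [[0.3 0.6] [0.1 2.0] [0.9 3.7]]
                 [0.7 0.2 1.1]
                 [0.3 0.9])
;; => a vector of three activations between 0 and 1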



Numerical Linear Algebra for Programmers - book release 0.4.0

2019-10-18 Thread Dragan Djuric
Numerical Linear Algebra for Programmers: An Interactive Tutorial with GPU, 
CUDA, OpenCL, Java, and Clojure 


New Release 0.4.0 is available.

+ chapter on Orthogonalization and Least Squares

A book written with programmers in mind:

The only AI book that walks the walk


   - complete, 100% executable code inside
   - step-by-step instructions
   - full path from theory to implementation in actual code
   - superfast implementation

Other books are either math-only monographs for academics, or written for 
non-technical end users.

You can subscribe to access the drafts immediately.

All proceeds go towards funding my work on open-source Clojure libraries.


https://aiprobook.com/numerical-linear-algebra-for-programmers/ 






Deep Learning for Programmers - new release 0.10.0 (Clojure AI Book WIP)

2019-10-10 Thread Dragan Djuric
New release of the WIP book Deep Learning for Programmers: An Interactive 
Tutorial with CUDA, OpenCL, MKL-DNN, Java, and Clojure

https://aiprobook.com/deep-learning-for-programmers 


+ Chapter on Momentum and Nesterov Momentum

No middleman!
100% of the revenue goes towards my open-source work!

This is the only DL book for programmers:

- interactive & dynamic
- step-by-step implementation
- incredible performance, yet no C++ hell (!)
- Intel & AMD CPUs (MKL-DNN)
- Nvidia GPUs (CUDA and cuDNN)
- AMD GPUs (yes, OpenCL too!)
- Clojure (it’s magic!)
- Java Virtual Machine (without Java boilerplate!)
- complete source code
- beautiful typesetting (see the sample chapters)



Re: Numerical Linear Algebra for Programmers - new book release 0.3.0

2019-09-21 Thread Dragan Djuric
The statement refers to the whole series at https://aiprobook.com, of which
NLAFP is an important foundation stone. So, LA in itself is clearly not AI,
but LA is the basic foundation of AI software implementations, and it is
usually discussed in AI books, sadly without much reference to actual
implementations.

On Sat, Sep 21, 2019 at 10:46 PM Gary Schiltz 
wrote:

> I checked out your link. On the page it says "the only AI book that walks
> the walk." As an AI guy from the 1980s, I have to ask: Is Numerical Linear
> Algebra these days considered to have something to do with Artificial
> Intelligence?
>
> On Wednesday, September 11, 2019 at 11:52:42 AM UTC-5, Dragan Djuric wrote:
>>
>> Numerical Linear Algebra for Programmers: An Interactive Tutorial with
>> GPU, CUDA, OpenCL, MKL, Java and Clojure
>>
>> new release 0.3.0 is available
>>
>> https://aiprobook.com/numerical-linear-algebra-for-programmers
>>
>> basically…
>>
>>- a book for programmers
>>- interactive & dynamic
>>- direct link from theory to implementation
>>- incredible speed
>>- Nvidia GPU (CUDA and cuBLAS)
>>- AMD GPU (yes, OpenCL too!)
>>- Intel & AMD CPU (MKL)
>>- Clojure (magic!)
>>- Java Virtual Machine (without Java boilerplate!)
>>- complete source code
>>- beautiful typesetting (see sample chapters below)
>>



Numerical Linear Algebra for Programmers - new book release 0.3.0

2019-09-11 Thread Dragan Djuric
Numerical Linear Algebra for Programmers: An Interactive Tutorial with GPU, 
CUDA, OpenCL, MKL, Java and Clojure

new release 0.3.0 is available

https://aiprobook.com/numerical-linear-algebra-for-programmers

basically…

   - a book for programmers
   - interactive & dynamic
   - direct link from theory to implementation
   - incredible speed
   - Nvidia GPU (CUDA and cuBLAS)
   - AMD GPU (yes, OpenCL too!)
   - Intel & AMD CPU (MKL)
   - Clojure (magic!)
   - Java Virtual Machine (without Java boilerplate!)
   - complete source code
   - beautiful typesetting (see sample chapters below)



Deep Learning for Programmers 0.8.0 (Clojure AI Book WIP)

2019-09-05 Thread Dragan Djuric
Learn Deep Learning by implementing it from scratch!

New release 0.8.0 of the Deep Learning for Programmers: An Interactive 
Tutorial with CUDA, OpenCL, MKL-DNN, Java, and Clojure is ready!

Read the drafts as they are released. 

https://aiprobook.com/deep-learning-for-programmers?release=0.8.0=cgroups


   - explore on CPU
   - then on GPU
   - design elegant neural networks API
   - build tensor support
   - integrate with Intel’s MKL-DNN performance lib
   - integrate with Nvidia’s CUDA and cuDNN performance lib


100% of your support goes towards funding Clojure open-source libraries for 
high-performance computing!



[ANN] Deep Learning for Programmers - Clojure Book WIP Release 0.7.0

2019-08-22 Thread Dragan Djuric
Learn Deep Learning by implementing it from scratch!

New release 0.7.0 of the Deep Learning for Programmers: An Interactive 
Tutorial with CUDA, OpenCL, MKL-DNN, Java, and Clojure is ready!

https://aiprobook.com/deep-learning-for-programmers?release=0.7.0=cgroups


   - explore on CPU
   - then on GPU
   - design elegant neural networks API
   - build tensor support
   - integrate with Intel’s MKL-DNN performance lib
   - integrate with Nvidia’s CUDA and cuDNN performance lib


Read the drafts as they are released. 100% of your support goes towards 
funding Clojure open-source libraries for high-performance computing!



[ANN] Deep Learning for Programmers - New Release 0.6.0

2019-07-25 Thread Dragan Djuric
Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, 
MKL-DNN, Java, and Clojure 


basically…

   - the only DL book for programmers
   - interactive & dynamic
   - step-by-step implementation
   - incredible speed
   - yet, No C++ hell (!)
   - Nvidia GPU (CUDA and cuDNN)
   - AMD GPU (yes, OpenCL too!)
   - Intel & AMD CPU (MKL-DNN)
   - Clojure (magic!)
   - Java Virtual Machine (without Java boilerplate!)
   - complete source code
   - beautiful typesetting (see sample chapters)
You can subscribe to access the drafts immediately. All proceeds go towards 
funding my work on open-source Clojure libraries.


https://aiprobook.com/deep-learning-for-programmers/?release=0.6.0=cgroups



[ANN] Numerical Linear Algebra for Programmers - New Release 0.2.0

2019-07-22 Thread Dragan Djuric
Numerical Linear Algebra for Programmers: An Interactive Tutorial with GPU, 
CUDA, OpenCL, Java, and Clojure 


New Release 0.2.0 is available.

A book written with programmers in mind:

The only AI book that walks the walk


   - complete, 100% executable code inside
   - step-by-step instructions
   - full path from theory to implementation in actual code
   - superfast implementation

Other books are either math-only monographs for academics, or written for 
non-technical end users.

You can subscribe to access the drafts immediately. All proceeds go towards 
funding my work on open-source Clojure libraries.


https://aiprobook.com/numerical-linear-algebra-for-programmers/ 








[ANN] Deep Learning for Programmers - release 0.5.0 (Clojure book WIP)

2019-07-08 Thread Dragan Djuric
Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, 
MKL-DNN, Java, and Clojure 


is the only DL book written with programmers in mind:

the only AI book that walks the walk

   - complete, 100% executable code inside
   - step-by-step instructions
   - full path from theory to implementation in actual code
   - superfast implementation

Other books are either math-only monographs for academics, or written for 
non-technical end users.

You can subscribe to access the drafts immediately. All proceeds go towards 
funding my work on open-source Clojure libraries.

https://aiprobook.com/deep-learning-for-programmers/?release=0.5.0=cgroups



[Clojure Book WIP] Numerical Linear Algebra for Programmers

2019-06-25 Thread Dragan Djuric
Numerical Linear Algebra for Programmers: an Interactive Tutorial with GPU, 
CUDA, OpenCL, MKL, Java and Clojure

initial release 0.1.0

https://aiprobook.com/numerical-linear-algebra-for-programmers

basically…

   - a book for programmers
   - interactive & dynamic
   - direct link from theory to implementation
   - incredible speed
   - Nvidia GPU (CUDA and cuBLAS)
   - AMD GPU (yes, OpenCL too!)
   - Intel & AMD CPU (MKL)
   - Clojure (magic!)
   - Java Virtual Machine (without Java boilerplate!)
   - complete source code
   - beautiful typesetting (see sample chapters below)



Re: [Clojure Book WIP] Deep Learning for Programmers: an Interactive Tutorial with CUDA, OpenCL, MKL-DNN, Java and Clojure, New Release 0.4.0

2019-06-18 Thread Dragan Djuric
Thank you for your support!

On Tuesday, June 18, 2019 at 4:44:11 PM UTC+2, Daniel Carleton wrote:
>
> Subscribed! This is the book I've been waiting for. Was going to start 
> with your reading list, but now I'll start here. 
>
> On Tue, Jun 18, 2019, 2:49 AM Dragan Djuric wrote:
>
>> basically…
>>
>>1. the only DL book for programmers
>>2. interactive & dynamic
>>3. step-by-step implementation
>>4. incredible speed
>>5. yet, No C++ hell (!)
>>6. Nvidia GPU (CUDA and cuDNN)
>>7. AMD GPU (yes, OpenCL too!)
>>8. Intel & AMD CPU (MKL-DNN)
>>9. Clojure (magic!)
>>10. Java Virtual Machine (without Java boilerplate!)
>>11. complete source code
>>12. beautiful typesetting (see sample chapters)
>>
>>
>> https://aiprobook.com/deep-learning-for-programmers/?release=0.4.0
>>



[Clojure Book WIP] Deep Learning for Programmers: an Interactive Tutorial with CUDA, OpenCL, MKL-DNN, Java and Clojure, New Release 0.4.0

2019-06-18 Thread Dragan Djuric
basically…

   1. the only DL book for programmers
   2. interactive & dynamic
   3. step-by-step implementation
   4. incredible speed
   5. yet, No C++ hell (!)
   6. Nvidia GPU (CUDA and cuDNN)
   7. AMD GPU (yes, OpenCL too!)
   8. Intel & AMD CPU (MKL-DNN)
   9. Clojure (magic!)
   10. Java Virtual Machine (without Java boilerplate!)
   11. complete source code
   12. beautiful typesetting (see sample chapters)


https://aiprobook.com/deep-learning-for-programmers/?release=0.4.0



Adopt a Neanderthal function as your own pet! Support my Clojure work on Patreon.

2018-10-18 Thread Dragan Djuric
https://dragan.rocks/articles/18/Patreon-Announcement-Adopt-a-Function

You can become a proud sponsor of a pet Clojure function from one of the 
Uncomplicate projects! I will add your name to the documentation data of the 
function, so you can follow the project and take care that your function is 
happy and free of bugs!
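
For the curious: "documentation data" simply means the var's metadata, right 
next to the docstring. A toy sketch of what that could look like (the 
:sponsor key and the function below are made up for illustration):

;; Illustrative only; the real credits live in the Uncomplicate projects.
(defn scale-add
  "Toy stand-in: scales x by alpha and adds it to y, element by element."
  {:sponsor "Your Name Here"}
  [alpha x y]
  (mapv (fn [xi yi] (+ (* alpha xi) yi)) x y))

(select-keys (meta #'scale-add) [:doc :sponsor])
;; => {:doc "Toy stand-in: ...", :sponsor "Your Name Here"}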



Re: Using Clojure for public facing system in a bank - code security scanning - any luck?

2018-04-15 Thread Dragan Djuric
Hi all. Very interesting thread! I guess that not many Clojure developers 
are in this situation, but I hope many more will be; that would mean that 
Clojure has got a foot in the door of the enterprise.

Gregg, I need a little clarification on the last thing you mentioned: Is a 
dependency treated as secure and given the green checkmark in the usual 
security procedures if there is a (community) security audit that 
systematically listed vulnerabilities and recommended ways to avoid them? 
What is (in your experience with banking) the minimum amount of "burden" 
necessary so that an artifact is given a passing mark? Is there a broader 
standard, or does each client have its own checklist? How well defined are 
those procedures? Do they update at a glacial pace, or are good and honest 
efforts on a case-by-case basis accepted (such as hiring a security expert 
to audit the code with not-so-standard procedures)?

On Friday, April 13, 2018 at 11:24:54 PM UTC+2, Gregg Reynolds wrote:
>
>
>
> On Fri, Apr 13, 2018, 4:09 PM Aaron Bedra wrote:
>
>> Penetration testing is something performed on an application, but a 
>> source code review of the language is certainly an interesting idea. My 
>> company does these all the time. I ran this by my folks and there was 
>> certainly interest. If we could publish the results and create a healthy 
>> discussion my company would be happy to participate and do this at a fixed 
>> and heavily discounted price.
>>
>
> Naive question from the clueless peanut gallery: are you talking about a 
> security audit of clojure core (& etc) source, which could then be cited as 
> evidence by app developers?
>
> E.g. I build an app against a signed version of clojure which is 
> "certified" in some sense? Then I only have to audit my code (and lib 
> dependencies)?
>
> Gregg
>



Re: Cases of `RT/canSeq`

2017-05-19 Thread Dragan Djuric
To me it looks like a leftover from some Clojure pre-history where it made 
functional sense.

On Friday, May 19, 2017 at 6:54:32 PM UTC+2, Tianxiang Xiong wrote:
>
> That seems unlikely to be the reason here. ¯\_(ツ)_/¯
>
> As you said, the `null` check should come first if performance is the 
> driving concern. Besides, why check for a subinterface *and* its 
> superinterface only for this case? There are more cases that could be 
> checked.
>
> On Friday, May 19, 2017 at 7:09:00 AM UTC-7, Mikera wrote:
>>
>> One of the things that JVMs can do is create a small cache for the most 
>> recently seen classes in instanceof checks. I believe both OpenJDK and the 
>> Oracle JVM do this.
>>
>> So if you check for both ISeq and Seqable, you may find that you get 
>> twice as many classes cached, and therefore see a performance benefit. Of 
>> course this is implementation dependant so YMMV.
>>
>> On Friday, 19 May 2017 11:28:27 UTC+8, Tianxiang Xiong wrote:
>>>
>>> But if something is `ISeq`, it's `Seqable`, so checking for `Seqable` 
>>> alone should be sufficient. 
>>>
>>> Am I missing something here? Is there a performance benefit of checking 
>>> for `ISeq` *and* `Seqable` that I'm not aware of?
>>>
>>> On Wednesday, May 3, 2017 at 2:19:42 AM UTC-7, Mikera wrote:

 Clearly not necessary from a functional perspective.

 However I believe the ordering of these tests will affect JVM 
 optimisations. You want to test the common/fast cases first. And the JVM 
 does some clever things with caching most recently used lookups, which 
 will 
 again behave differently if you test things in different orders.

 Benchmarking on realistic workloads would typically be required to 
 determine the optimal order.

 FWIW I find it odd that the null check is third. This is extremely fast 
 (certainly faster than instance checks) and is a very common case given 
 the 
 amount of nil usage in idiomatic Clojure code (as an empty seq), so I 
 would 
 probably put it first.

 On Wednesday, 3 May 2017 11:59:29 UTC+8, Tianxiang Xiong wrote:
>
> Why does `clojure.lang.RT/canSeq` need to check both `ISeq` _and_ 
> `Seqable` when `ISeq <- IPersistentCollection <- Seqable`?
>
> static public boolean canSeq(Object coll){
>     return coll instanceof ISeq
>            || coll instanceof Seqable
>            || coll == null
>            || coll instanceof Iterable
>            || coll.getClass().isArray()
>            || coll instanceof CharSequence
>            || coll instanceof Map;
> }
>
>
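
For anyone following along at the REPL, the hierarchy point is easy to check 
with stock Clojure (nothing specific to this thread's code):

;; ISeq extends IPersistentCollection, which extends Seqable, so every ISeq
;; also passes the Seqable test; the extra ISeq check can only matter for
;; dispatch order and speed, not for correctness.
(import '[clojure.lang ISeq Seqable])

(let [s (seq [1 2 3])]
  [(instance? ISeq s) (instance? Seqable s)])           ;=> [true true]

;; The reverse does not hold: a vector is Seqable but not an ISeq.
[(instance? ISeq [1 2 3]) (instance? Seqable [1 2 3])]  ;=> [false true]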



Re: slackpocalypse?

2017-05-18 Thread Dragan Djuric
Southeast Europe.

On Thu, May 18, 2017 at 10:45 PM Gregg Reynolds <d...@mobileink.com> wrote:

>
>
> On May 18, 2017 3:40 PM, "Dragan Djuric" <draga...@gmail.com> wrote:
>
> It works for me as always.
>
>
> hmm, maybe it's a hiccup.  where are you located, if you don't mind my
> asking.
>
>
> On Thursday, May 18, 2017 at 10:34:33 PM UTC+2, Gregg Reynolds wrote:
>>
>>
>>
>> On May 18, 2017 3:32 PM, "Jason Stewart" <jste...@fusionary.com> wrote:
>>
>> I'm experiencing the same thing, while I am able to connect with my other
>> slack teams.
>>
>>
>> this is not looking good.  https://davechen.net/2017/01/slack-user-limit/
>>
>>
>> On Thu, May 18, 2017 at 4:17 PM, Kenny Williams <kenny...@gmail.com>
>> wrote:
>>
>>> I am not able to connect via the web UI or Slack app either.
>>>
>>>
>>> On Thursday, May 18, 2017 at 1:15:17 PM UTC-7, Gregg Reynolds wrote:
>>>>
>>>> is it just me? i've been unable to connect to clojurians (by cellphone)
>>>> for about 30 minutes, but i can connect to other slack groups.
>>>>
>>>> have we hit
>>>> https://github.com/clojurians/clojurians-chat/wiki/Slackpocalypse?
>>>>  we're almost to 10K subscribers.
>>>>
>>>> g
>>>>
>>>>



Re: slackpocalypse?

2017-05-18 Thread Dragan Djuric
It works for me as always.

On Thursday, May 18, 2017 at 10:34:33 PM UTC+2, Gregg Reynolds wrote:
>
>
>
> On May 18, 2017 3:32 PM, "Jason Stewart" wrote:
>
> I'm experiencing the same thing, while I am able to connect with my other 
> slack teams.
>
>
> this is not looking good.  https://davechen.net/2017/01/slack-user-limit/
>
>
> On Thu, May 18, 2017 at 4:17 PM, Kenny Williams wrote:
>
>> I am not able to connect via the web UI or Slack app either.
>>
>>
>> On Thursday, May 18, 2017 at 1:15:17 PM UTC-7, Gregg Reynolds wrote:
>>>
>>> is it just me? i've been unable to connect to clojurians (by cellphone) 
>>> for about 30 minutes, but i can connect to other slack groups.
>>>
>>> have we hit 
>>> https://github.com/clojurians/clojurians-chat/wiki/Slackpocalypse? 
>>>  we're almost to 10K subscribers.
>>>
>>> g
>>>
>>>



Re: How to Create Clojure `defn` Functions automatically?

2017-05-11 Thread Dragan Djuric
That's why I avoided answering the main question. In my experience, 
whenever I thought I needed some weirdly complicated stuff like the one in 
the example, a much simpler solution that used regular techniques existed. 
So, when I encounter similar problems, the first thing I do is try to 
redefine the problem to avoid such hazardous coding practices. I do not 
claim that it is always possible, but I would bet it is possible in *most* 
cases, especially those where it is difficult to think up a demo of the 
problem.

On Thursday, May 11, 2017 at 4:58:48 PM UTC+2, tbc++ wrote:
>
> This is a somewhat weird answer to an overcomplicated problem. As 
> mentioned, the data is a map to start with, and maps are functions so 
> treating the maps as data is probably the best approach. And like Dragan, 
> I'm unsure why this example doesn't use `(data :able)`.
>
> When I do need to generate functions at runtime, and I can't use macros 
> (for the reasons mentioned), I'll either use a macro that creates a var, or 
> use eval perhaps in conjunction with a memoize. I used this a lot in my 
> work with JavaFx. Do some reflection, generate some code, eval the code and 
> return a function, memoize that process so we can get the generated 
> function via name. So the interface looks like this:
>
> ((get-setter button :text) "hey")
>
> Get-setter does a ton of reflection, but calling the returned function 
> remains fast due to the combination of eval and memoization. 
>
>
>
> On Thu, May 11, 2017 at 2:55 AM, Dragan Djuric <drag...@gmail.com> wrote:
>
>> What's wrong with (foo :able) => "Adelicious!" and (:able foo) => 
>> "Adelicious!"?
>>
>>
>> On Thursday, May 11, 2017 at 9:20:19 AM UTC+2, Alan Thompson wrote:
>>>
>>> A recent question on StackOverflow raised the question of the best way 
>>> to automatically generate functions. Suppose you want to automate the 
>>> creation of code like this: 
>>>
>>>
>>>
>>> (def foo
>>>   {:able    "Adelicious!"
>>>    :baker   "Barbrallicious!"
>>>    :charlie "Charlizable"})
>>> (def bar
>>>   {:able    "Apple"
>>>    :baker   "Berry"
>>>    :charlie "Kumquat"})
>>>
>>> (defn manual-my-foo [item] (get foo item))
>>> (defn manual-my-bar [item] (get bar item))
>>>
>>> (manual-my-foo :able) => "Adelicious!"
>>> (manual-my-bar :charlie) => "Kumquat"
>>>
>>>
>>> You could write a macro to generate one of these at a time, but you 
>>> can't pass a macro to a higher-order function like `map`, so while this 
>>> would work:
>>>
>>>
>>> (generate-fn :foo)  ;=> creates `my-foo` w/o hand-writing it
>>>
>>>
>>> this wouldn't work:
>>>
>>>
>>> (map generate-fn [:foo :bar :baz])  
>>>
>>> While one could write a 2nd macro to replace `map`, this is a symptom of 
>>> the "Turtles All the Way Down" problem. One workaround is to avoid macros 
>>> altogether and use only functions to generate the required `my-foo` and 
>>> `my-bar` functions.  The trick is to make use of the built-in Clojure 
>>> function `intern`  both to save the newly generated functions into the 
>>> global environment and to retrieve the pre-existing maps `foo` and `bar`.  
>>> Full details are available at the StackOverflow post 
>>> <http://stackoverflow.com/questions/43904628/how-to-create-clojure-defn-functions-automatically/43904717#43904717>
>>> .
>>>
>>> Enjoy,
>>> Alan
>>>
>
>
>
> -- 
> “One of the main causes of the fall of the Roman Empire was that–lacking 
> zero–they had no way to indicate successful termination of their C 
> programs.”
> (Robert Firth) 
>
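
For reference, here is a rough sketch of the eval-plus-memoize idea Timothy 
describes above; the real JavaFX version does reflection to find setters, 
which is elided here, and get-getter is just an illustrative name:

;; Generate a lookup function at runtime with eval, and memoize the
;; generation so each keyword is compiled only once.
(defn- getter-form [k]
  `(fn [m#] (get m# ~k)))

(def get-getter
  (memoize (fn [k] (eval (getter-form k)))))

((get-getter :text) {:text "hey" :size 12})   ;=> "hey"
;; Later calls with :text reuse the already-compiled function.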



Re: How to Create Clojure `defn` Functions automatically?

2017-05-11 Thread Dragan Djuric
What's wrong with (foo :able) => "Adelicious!" and (:able foo) => 
"Adelicious!"?

On Thursday, May 11, 2017 at 9:20:19 AM UTC+2, Alan Thompson wrote:
>
> A recent question on StackOverflow raised the question of the best way to 
> automatically generate functions. Suppose you want to automate the creation 
> of code like this: 
>
>
>
> (def foo
>   {:able"Adelicious!"
>:baker   "Barbrallicious!"
>:charlie "Charlizable"})
> (def bar
>   {:able"Apple"
>:baker   "Berry"
>:charlie "Kumquat"})
>
> (defn manual-my-foo [item] (get foo item))
> (defn manual-my-bar [item] (get bar item))
>
> (manual-my-foo :able) => "Adelicious!"
> (manual-my-bar :charlie) => "Kumquat"
>
>
> You could write a macro to generate one of these at a time, but you can't 
> pass a macro to a higher-order function like `map`, so while this would 
> work:
>
>
> (generate-fn :foo)  ;=> creates `my-foo` w/o hand-writing it
>
>
> this wouldn't work:
>
>
> (map generate-fn [:foo :bar :baz])  
>
> While one could write a 2nd macro to replace `map`, this is a symptom of 
> the "Turtles All the Way Down" problem. One workaround is to avoid macros 
> altogether and use only functions to generate the required `my-foo` and 
> `my-bar` functions.  The trick is to make use of the built-in Clojure 
> function `intern`  both to save the newly generated functions into the 
> global environment and to retrieve the pre-existing maps `foo` and `bar`.  
> Full details are available Q at the StackOverflow post 
> 
> .
>
> Enjoy,
> Alan
>
>
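[Editorial aside: a minimal, self-contained sketch of the intern-based approach described in the quoted post. The map data comes from the post; generate-fn's body is a reconstruction, not the StackOverflow answer verbatim.]

(def foo {:able "Adelicious!" :baker "Barbrallicious!" :charlie "Charlizable"})
(def bar {:able "Apple" :baker "Berry" :charlie "Kumquat"})

(defn generate-fn
  "For a keyword like :foo, resolves the existing map var (#'foo) in the
   current namespace and interns a new var #'my-foo holding a lookup function."
  [map-kw]
  (let [map-var (ns-resolve *ns* (symbol (name map-kw)))]
    (intern *ns*
            (symbol (str "my-" (name map-kw)))
            (fn [item] (get (deref map-var) item)))))

;; Because generate-fn is an ordinary function, it composes with map/run!:
(run! generate-fn [:foo :bar])

(my-foo :able)    ;=> "Adelicious!"
(my-bar :charlie) ;=> "Kumquat"
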



Re: [ANN] Neanderthal 0.9.0 with major improvements

2017-04-28 Thread Dragan Djuric
Version 0.10.0 is in clojars.

On Friday, March 31, 2017 at 4:39:35 PM UTC+2, Dragan Djuric wrote:
>
> More details in the announcement blog post: 
> http://dragan.rocks/articles/17/Neanderthal-090-released-Clojure-high-performance-computing
>



[ANN] ClojureCUDA: a Clojure library for CUDA GPU computing

2017-04-24 Thread Dragan Djuric
I'll write more in an introductory blog post in a day or two. Until that, 
there is a website http://clojurecuda.uncomplicate.org, that has the 
details and documentation.

It is similar to ClojureCL (http://clojurecl.uncomplicate.org), but is 
targeted to CUDA and Nvidia GPUs specifically. The main benefit over 
ClojureCL is that ClojureCUDA will make possible to call various Nvidia 
libraries such as cuBLAS, cuFFT or cuDNN from your Clojure code.

Fast matrix library Neanderthal (http://neanderthal.uncomplicate.org) will 
soon have a cuBLAS GPU backend in addition to MKL CPU, and OpenCL GPU 
backends. 



[ANN] Neanderthal 0.9.0 with major improvements

2017-03-31 Thread Dragan Djuric
More details in the announcement blog 
post: 
http://dragan.rocks/articles/17/Neanderthal-090-released-Clojure-high-performance-computing



Re: off-topic: stackof developer survey

2017-03-25 Thread Dragan Djuric
Sure, I agree. I'm just saying that whoever would like to see whatever,
they'll have to commit some work or money into making it happen, or change
their preferences to other stuff :). For example, if someone is a
programmer who would like to learn ML to be able to land a better job, and
this employability is the main criterion, Python is the obvious choice. If
that programmer does not like Python and prefers Clojure, he or she has the
option to commit some work to making Clojure's ecosystem better, or to
learn to love Python. Likewise, if a company prefers Clojure, they can
open-source their libraries, or fund some work on Clojure open source...

Many other small contributions can be made, even the simplest things like
writing getting-started guides, or improving the documentation and tests
of existing tools, but the vast majority of people don't do even that. All in
all, if everyone just waits for the perfect solution to appear out of
nowhere, I'm afraid they will wait for a long time...

On Sat, Mar 25, 2017 at 9:23 PM, craig worrall <
craig.worr...@transacumen.com> wrote:

>
> Yes, if you have a 'product' perspective, but others will have a service
> provider perspective and would like to see employers committed to Clojure
> and looking to engage with practitioners.+
>
>
> On Sunday, March 26, 2017 at 4:49:45 AM UTC+11, Dragan Djuric wrote:
>>
>>
>> Isn't it advantageous in some sense to have access to stuff that your
>> competition doesn't have?
>>
>> On Friday, March 24, 2017 at 11:05:24 PM UTC+1, piast...@gmail.com wrote:
>>>
>>>
>>>
>>> > This did get me thinking though. If the community *did* want to score
>>> highly
>>> > on some of these metrics, what would those be?
>>>
>>> I'll be happy so long as Clojure is the popular choice for doing the
>>> things where it's advantages should matter: machine learning, AI, NLP,
>>> concurrent programming.
>>>
>>> It drives me crazy that Python is doing so well in all of the areas
>>> where Clojure should be winning. There are such beautiful libraries for
>>> working with vectors and matrices with Clojure, which should obviously help
>>> with NLP, yet people use Python instead. Likewise, so much of machine
>>> learning should be done as work in parallel, and Clojure makes that easy,
>>> yet Python is preferred. Drives me crazy.
>>>
>>> These last few years I've been at a lot of NLP startups, and the choice
>>> of Python makes me sad.
>>>
>>>
>>>
>>>
>>> On Wednesday, March 22, 2017 at 7:17:10 PM UTC-4, Luke Burton wrote:
>>>>
>>>>
>>>> On Mar 22, 2017, at 2:26 PM, Gregg Reynolds <d...@mobileink.com> wrote:
>>>>
>>>> very interesting stuff, esp. the sociological bits:
>>>>
>>>> http://stackoverflow.com/insights/survey/2017
>>>>
>>>> sadly, clojure does not even rank in popularity.  but it's number 1 in
>>>> pay worldwide.  o sweet vengeance!
>>>>
>>>>
>>>> Some fun reading in there, Clojure features a couple of times. It would
>>>> be fun to watch for spikes in traffic to Clojure related resources, because
>>>> I'm sure that landing "most highly paid" will cause a few people to sit up
>>>> and take notice.
>>>>
>>>> This did get me thinking though. If the community *did* want to score
>>>> highly on some of these metrics, what would those be? Or do none of them
>>>> adequately capture what is valued by the Clojure community?
>>>>
>>>> I think I'd claim that popularity is a terrible metric, even though it
>>>> can be gratifying to be popular. The fact that lots of people do a
>>>> particular thing doesn't mean that thing is inherently good, or worth
>>>> striving for. Some very popular things are bad lifestyle choices, like
>>>> smoking, a diet high in sugary foods, and writing JavaScript.
>>>>
>>>> Conversely some very, very good things can die from even the perception
>>>> of being unpopular. We often get people asking on the subreddit why they
>>>> find so many "abandoned" libraries in Clojure. The fact a piece of software
>>>> might have been written years ago, and still be perfectly usable, is such
>>>> an anomaly in more "popular" languages that people assume we've all curled
>>>> up and died. I recently had a project steered away from Clojure (suffice to
>>>> say it was a very good fit, I thought) due to concerns around the 
>>>> availability of Clojure programmers in the long term.

Re: off-topic: stackof developer survey

2017-03-25 Thread Dragan Djuric
Clojure offers full support for GPU computing. See 
http://clojurecl.uncomplicate.org; as far as I know, Python doesn't have such 
well-integrated GPU programming. Clojure also supports full high-performance CPU 
acceleration. Also, although Neanderthal 
(http://neanderthal.uncomplicate.org) is not yet on par feature-wise with 
what Python offers with NumPy and other libraries, it is gaining features 
rapidly (a major update comes next week), and what's there is, IMHO, much 
better than what Python offers. I'll also create ClojureCUDA and integrate 
it into Neanderthal. A cuDNN Clojure library is then relatively 
straightforward to create. 

Lots of stuff is there, and even more is coming up, but what can you do... we 
have the best features no one knows about :)

On Saturday, March 25, 2017 at 7:23:40 PM UTC+1, Didier wrote:
>
> Is Clojure so great at AI, ML, NLP and concurrent programming?
>
> It seems to me the libraries are lacking. I also know there's a race for 
> performance, and it looks like CPU parallelization isn't even fast enough, 
> so distributed or GPU based solutions are being built, which I'm also not 
> sure Clojure offers much support for.
>



Re: off-topic: stackof developer survey

2017-03-25 Thread Dragan Djuric
But why is it bad news if your competition doesn't use the best tool 
available (if that is true, of course)? I consider it a competitive advantage.

On the other hand, it is perfectly clear why everyone uses Python for ML 
and (almost) nobody uses Clojure:
1) All serious literature is in Python. There is almost no literature for 
Clojure. No, Packt books do not count as good literature in my opinion.
2) There are numerous turnkey solutions in Python. There are only partial 
solutions in Clojure. It doesn't matter whether Clojure is better or not; 
most people, naively in my opinion, want to become ML masters in a weekend 
or so, and Python gives them that promise.
3) I'm sure other people in the thread can come up with more reasons...

What is the solution, then? Well, other than creating and releasing good 
software and writing about it, I am not sure. To return to the starting 
point: is it so important? Isn't it advantageous in some sense to have 
access to stuff that your competition doesn't have?

On Friday, March 24, 2017 at 11:05:24 PM UTC+1, piast...@gmail.com wrote:
>
>
>
> > This did get me thinking though. If the community *did* want to score 
> highly 
> > on some of these metrics, what would those be?
>
> I'll be happy so long as Clojure is the popular choice for doing the 
> things where it's advantages should matter: machine learning, AI, NLP, 
> concurrent programming. 
>
> It drives me crazy that Python is doing so well in all of the areas where 
> Clojure should be winning. There are such beautiful libraries for working 
> with vectors and matrices with Clojure, which should obviously help with 
> NLP, yet people use Python instead. Likewise, so much of machine learning 
> should be done as work in parallel, and Clojure makes that easy, yet Python 
> is preferred. Drives me crazy. 
>
> These last few years I've been at a lot of NLP startups, and the choice of 
> Python makes me sad. 
>
>
>
>
> On Wednesday, March 22, 2017 at 7:17:10 PM UTC-4, Luke Burton wrote:
>>
>>
>> On Mar 22, 2017, at 2:26 PM, Gregg Reynolds  wrote:
>>
>> very interesting stuff, esp. the sociological bits:
>>
>> http://stackoverflow.com/insights/survey/2017
>>
>> sadly, clojure does not even rank in popularity.  but it's number 1 in 
>> pay worldwide.  o sweet vengeance!
>>
>>
>> Some fun reading in there, Clojure features a couple of times. It would 
>> be fun to watch for spikes in traffic to Clojure related resources, because 
>> I'm sure that landing "most highly paid" will cause a few people to sit up 
>> and take notice.
>>
>> This did get me thinking though. If the community *did* want to score 
>> highly on some of these metrics, what would those be? Or do none of them 
>> adequately capture what is valued by the Clojure community?
>>
>> I think I'd claim that popularity is a terrible metric, even though it 
>> can be gratifying to be popular. The fact that lots of people do a 
>> particular thing doesn't mean that thing is inherently good, or worth 
>> striving for. Some very popular things are bad lifestyle choices, like 
>> smoking, a diet high in sugary foods, and writing JavaScript.
>>
>> Conversely some very, very good things can die from even the perception 
>> of being unpopular. We often get people asking on the subreddit why they 
>> find so many "abandoned" libraries in Clojure. The fact a piece of software 
>> might have been written years ago, and still be perfectly usable, is such 
>> an anomaly in more "popular" languages that people assume we've all curled 
>> up and died. I recently had a project steered away from Clojure (suffice to 
>> say it was a very good fit, I thought) due to concerns around the 
>> availability of Clojure programmers in the long term. In Silicon Valley. 
>> Where you can throw a rock in the air and be certain it will hit a 
>> programmer on the way down.
>>
>> Anyway, my personal metric for Clojure success would be: "for projects 
>> where Clojure is an appropriate technical fit, how often are you able to 
>> choose Clojure?" It's a selfish metric but the higher it goes, the happier 
>> I am ;)
>>
>> Luke.
>>
>



Re: The major upgrade to Neanderthal (the matrix library) will be released soon, with lots of new functionality.

2017-03-22 Thread Dragan Djuric
Great! Please follow up with feedback when you do!

On Wednesday, March 22, 2017 at 2:55:57 PM UTC+1, Christian Weilbach wrote:
>
> On 22.03.2017 at 02:41, Dragan Djuric wrote: 
> > More details 
> > at: http://dragan.rocks/articles/17/Neanderthal-090-is-around-the-corner 
>
> Nice work! Hopefully I can play with it soon :). 
>
>
>
>



The major upgrade to Neanderthal (the matrix library) will be released soon, with lots of new functionality.

2017-03-21 Thread Dragan Djuric
More details 
at: http://dragan.rocks/articles/17/Neanderthal-090-is-around-the-corner



Re: structuring parallel code

2017-01-31 Thread Dragan Djuric
1) When you work with numerics, you have to take into account that 
numerical operations are typically order(s) of magnitude faster and consume 
far fewer resources per element than any of those concurrency mechanisms.
2) It is important whether the problem you're working on is parallelizable in 
itself. Some algorithms are embarrassingly parallelizable, some are 
inherently sequential. The first step is to find out where your algorithm 
falls on this spectrum, and whether there is a more parallelizable 
algorithm, or any clever trick that could help with your problem.
3) Most of the mechanisms you mention are intended to work with heavier 
objects, not primitive numbers one by one. If you want to work with 
millions of numbers in parallel, you either separate them into batches, as 
Alex suggested (see the sketch below), or go with optimized CPU solutions or GPUs.
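[Editorial aside: a rough sketch of the batching/striping idea; the stripe count, sizes, and names are illustrative, not from this thread. One logical vector is split across a few atoms so that updates to different regions rarely contend.]

(def stripe-count 16)

(def stripes
  "16 atoms, each holding a plain Clojure vector of 1000 counters."
  (vec (repeatedly stripe-count #(atom (vec (repeat 1000 0.0))))))

(defn update-entry!
  "Applies f (plus args) to the value at global index i; only one stripe is
   touched, so unrelated updates mostly proceed without retries."
  [i f & args]
  (let [stripe (nth stripes (mod i stripe-count))
        local  (quot i stripe-count)]
    (swap! stripe #(apply update % local f args))))

;; Many threads can update different indices mostly independently:
(dorun (pmap #(update-entry! % + 1.0) (range 4000)))
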

On Tuesday, January 31, 2017 at 4:44:06 AM UTC+1, Brian Craft wrote:
>
> I think java locks may be the only good answer. I can't usefully divide 
> the vector, because the distribution of updates is uniform along the length 
> of it.
>
> Perhaps there's a solution with queues, with multiple threads generating 
> potential placements, and a single thread updating the vector and 
> re-queuing any conflicting placements.
>
> On Monday, January 30, 2017 at 7:11:03 PM UTC-8, Alex Miller wrote:
>>
>> One technique is to batch locks at a coarser granularity. You've explored 
>> both ends of the spectrum - 1 lock and N locks. You can also divide the 
>> overall vector into any group of refs between 1 and N.
>>
>> If refs are too heavy, there are several other locking mechanisms on the 
>> JVM. You could try Clojure atoms or Java locks. Atoms can only be used to 
>> protect a single value so you would need a protocol for locking acquisition 
>> to deal with that. For something like that, you'd probably end up using a 
>> mutable data structure like a Java array.
>>
>>



Re: Slides for my talk at EuroClojure 2016

2016-10-23 Thread Dragan Djuric
Cool. Do you use any well-known textbook? It would be best if we could test 
some well-designed hierarchical model in Anglican. I prefer the book Doing 
Bayesian Data Analysis, since it has some decently serious models, 
yet is self-contained and approachable for users. I also have practically 
all of the other most popular textbooks, but they do not seem as well balanced as 
this one. Even some really good books tend to use somewhat "easy" models 
that work well and fast while you are learning, even with not-so-fast 
tools, but once you try something more demanding, they get stuck. It is 
not even a problem with the number of data points (I think 100 is not so few at all), 
because Bayesian tools are more useful the less data you have :) I am 
talking about model complexity due to hierarchy, which tends to explode 
rather quickly... 

On Sunday, October 23, 2016 at 10:13:48 PM UTC+2, Boris V. Schmid wrote:
>
> Not hierarchical, but continuous variables. It is our first foray into 
> bayesian inference, so we keep things somewhat simple. 
>
> Can't give an exact comparison, but to run a model simulating a single 
> city (rats and fleas and human populations, no spatial component) is in the 
> order of minutes for my student working with PyMC, and fitting a mortality 
> curve based on ~100 datapoints. Myself, I was mostly playing along while 
> supervising, and that model in Anglican is stuck halfway an upgrade to use 
> clojure.as a testing framework. But as I recall, it also used to be in 
> the order of minutes. Will see if I can finish the upgrade and put it 
> online.
>
> On Sunday, October 23, 2016 at 8:45:47 PM UTC+2, Dragan Djuric wrote:
>
>> Are those hierarchical models? I also suppose the variables are 
>> continuous? What are typical running times for your analysis with Anglican, 
>> and what with PyMC?
>>
>> On Sunday, October 23, 2016 at 8:17:16 PM UTC+2, Boris V. Schmid wrote:
>>>
>>> I am using Anglican for estimating parameters of epidemiological models, 
>>> generally in the shape of limited (mortality) data, and less than a dozen 
>>> parameters that need to be simultaneously estimated. Works fine for that. A 
>>> good example of that type of problem is here: 
>>> http://www.smallperturbation.com/epidemic-with-real-data (but with 
>>> PyMC, a similar package for python).
>>>
>>> But you might be right that it won't hold in high-dimensional problems. 
>>> People in genomics are running models with many thousands of parameters 
>>> when trying to figure out how different genes contribute to a particular 
>>> cell phenotype. Don't think I would try that in Anglican :-).
>>>
>>>
>>> On Sunday, October 23, 2016 at 6:06:49 PM UTC+2, Dragan Djuric wrote:
>>>>
>>>> Thanks. I know about Anglican, but it is not even in the same category, 
>>>> other than being Bayesian. Anglican also has MCMC, but, looking at the 
>>>> implementation, it seems it is useful only on smaller problems with 
>>>> straightforward and low-dimensional basic distributions, or discrete 
>>>> problems/distributions. I do not see how it can be used to solve even 
>>>> standard textbook examples in "real" bayesian data analysis. Otherwise, 
>>>> I'd 
>>>> use/improve Anglican, although its GPL license is a bit of a showstopper.
>>>>
>>>> I would loved to have been able to see how far Anglican can go 
>>>> performance-wise, and stretch it to its limits, though. However, it wasn't 
>>>> obvious how to construct any of more serious data analysis problems. 
>>>> Having 
>>>> seen its implementation, I expect the performance comparison would make 
>>>> Bayadera shine, so I hope I'll be able to construct some examples that can 
>>>> be implemented in both environments :)
>>>>
>>>> On Sunday, October 23, 2016 at 3:47:50 PM UTC+2, Boris V. Schmid wrote:
>>>>>
>>>>> Thanks Dragan.
>>>>>
>>>>> Interesting slides, and interesting section on Bayadera.  Incanter, as 
>>>>> far as I know indeed doesn't support MCMC, but there is a fairly large 
>>>>> project based on clojure that does a lot of bayesian inference.
>>>>>
>>>>> Just in case you haven't run into it:
>>>>> http://www.robots.ox.ac.uk/~fwood/anglican/examples/index.html
>>>>>
>>>>> (for the far future, there are some interesting developments happening 
>>>>> with approximate bayesian inference using neural network classification 
>>>>> to speed things up. Fun stuff.)

Re: Slides for my talk at EuroClojure 2016

2016-10-23 Thread Dragan Djuric
Are those hierarchical models? I also suppose the variables are continuous? 
What are typical running times for your analysis with Anglican, and what 
with PyMC?

On Sunday, October 23, 2016 at 8:17:16 PM UTC+2, Boris V. Schmid wrote:
>
> I am using Anglican for estimating parameters of epidemiological models, 
> generally in the shape of limited (mortality) data, and less than a dozen 
> parameters that need to be simultaneously estimated. Works fine for that. A 
> good example of that type of problem is here: 
> http://www.smallperturbation.com/epidemic-with-real-data (but with PyMC, 
> a similar package for python).
>
> But you might be right that it won't hold in high-dimensional problems. 
> People in genomics are running models with many thousands of parameters 
> when trying to figure out how different genes contribute to a particular 
> cell phenotype. Don't think I would try that in Anglican :-).
>
>
> On Sunday, October 23, 2016 at 6:06:49 PM UTC+2, Dragan Djuric wrote:
>>
>> Thanks. I know about Anglican, but it is not even in the same category, 
>> other than being Bayesian. Anglican also has MCMC, but, looking at the 
>> implementation, it seems it is useful only on smaller problems with 
>> straightforward and low-dimensional basic distributions, or discrete 
>> problems/distributions. I do not see how it can be used to solve even 
>> standard textbook examples in "real" bayesian data analysis. Otherwise, I'd 
>> use/improve Anglican, although its GPL license is a bit of a showstopper.
>>
>> I would loved to have been able to see how far Anglican can go 
>> performance-wise, and stretch it to its limits, though. However, it wasn't 
>> obvious how to construct any of more serious data analysis problems. Having 
>> seen its implementation, I expect the performance comparison would make 
>> Bayadera shine, so I hope I'll be able to construct some examples that can 
>> be implemented in both environments :)
>>
>> On Sunday, October 23, 2016 at 3:47:50 PM UTC+2, Boris V. Schmid wrote:
>>>
>>> Thanks Dragan.
>>>
>>> Interesting slides, and interesting section on Bayadera.  Incanter, as 
>>> far as I know indeed doesn't support MCMC, but there is a fairly large 
>>> project based on clojure that does a lot of bayesian inference.
>>>
>>> Just in case you haven't run into it:
>>> http://www.robots.ox.ac.uk/~fwood/anglican/examples/index.html
>>>
>>> (for the far future, there are some interesting developments happening 
>>> with approximate bayesian inference using neural network classification to 
>>> speed things up. Fun stuff.)
>>>
>>> On Thursday, October 20, 2016 at 11:38:25 PM UTC+2, Dragan Djuric wrote:
>>>>
>>>> Hi all, I posted slides for my upcoming EuroClojure talk, so you can 
>>>> enjoy the talk without having to take notes: 
>>>> http://dragan.rocks/articles/16/Clojure-is-not-afraid-of-the-GPU-slides-EuroClojure
>>>>
>>>



Re: Slides for my talk at EuroClojure 2016

2016-10-23 Thread Dragan Djuric
Thanks. I know about Anglican, but it is not even in the same category, 
other than being Bayesian. Anglican also has MCMC, but, looking at the 
implementation, it seems it is useful only on smaller problems with 
straightforward and low-dimensional basic distributions, or discrete 
problems/distributions. I do not see how it can be used to solve even 
standard textbook examples in "real" Bayesian data analysis. Otherwise, I'd 
use/improve Anglican, although its GPL license is a bit of a showstopper.

I would have loved to see how far Anglican can go 
performance-wise, and to stretch it to its limits, though. However, it wasn't 
obvious how to construct any of the more serious data analysis problems. Having 
seen its implementation, I expect the performance comparison would make 
Bayadera shine, so I hope I'll be able to construct some examples that can 
be implemented in both environments :)

On Sunday, October 23, 2016 at 3:47:50 PM UTC+2, Boris V. Schmid wrote:
>
> Thanks Dragan.
>
> Interesting slides, and interesting section on Bayadera.  Incanter, as far 
> as I know indeed doesn't support MCMC, but there is a fairly large project 
> based on clojure that does a lot of bayesian inference.
>
> Just in case you haven't run into it:
> http://www.robots.ox.ac.uk/~fwood/anglican/examples/index.html
>
> (for the far future, there are some interesting developments happening 
> with approximate bayesian inference using neural network classification to 
> speed things up. Fun stuff.)
>
> On Thursday, October 20, 2016 at 11:38:25 PM UTC+2, Dragan Djuric wrote:
>>
>> Hi all, I posted slides for my upcoming EuroClojure talk, so you can 
>> enjoy the talk without having to take notes: 
>> http://dragan.rocks/articles/16/Clojure-is-not-afraid-of-the-GPU-slides-EuroClojure
>>
>



Re: Slides for my talk at EuroClojure 2016

2016-10-20 Thread Dragan Djuric
Fixed it to be a hardcoded absolute address. Should work everywhere now. 
Thanks for reporting.

On Friday, October 21, 2016 at 12:46:25 AM UTC+2, Colin Yates wrote:
>
> Chrome manages to interpret it correctly. 
>
> Any one fancy a diversion of the 'fail fast/help the user' dilemma? :-) 
>
> On 20 October 2016 at 23:40, Sean Corfield <se...@corfield.org 
> > wrote: 
> > The source of your blog post has 
> > 
> > The slides are available <a 
> > href="http:/talks/EuroClojure2016/clojure-is-not-afraid-of-the-gpu.html">here</a>, 
> > 
> > I’m surprised any browser manages to make a legal hyperlink out of that… :) 
> > 
> > 
> > 
> > Sean Corfield -- (970) FOR-SEAN -- (904) 302-SEAN 
> > An Architect's View -- http://corfield.org/ 
> > 
> > "If you're not annoying somebody, you're not really alive." 
> > -- Margaret Atwood 
> > 
> > 
> > 
> > On 10/20/16, 3:37 PM, "Dragan Djuric" <clo...@googlegroups.com 
>  on behalf of 
> > drag...@gmail.com > wrote: 
> > 
> > 
> > 
> > Hmm, what browser do you use? The link that I'm been shown in the 
> browser is 
> > 
> http://dragan.rocks/talks/EuroClojure2016/clojure-is-not-afraid-of-the-gpu.html
>  
> > 
> > and it works... 
> > 
> > On Thursday, October 20, 2016 at 11:43:05 PM UTC+2, Colin Yates wrote: 
> > 
> > Unfortunately clicking on the 'here' link takes you to a 404: 
> > http://talks/EuroClojure2016/clojure-is-not-afraid-of-the-gpu.html 
> > 
> > On 20 October 2016 at 22:38, Dragan Djuric <drag...@gmail.com> wrote: 
> >> Hi all, I posted slides for my upcoming EuroClojure talk, so you can 
> enjoy 
> >> the talk without having to take notes: 
> >> 
> >> 
> http://dragan.rocks/articles/16/Clojure-is-not-afraid-of-the-GPU-slides-EuroClojure
>  
> >> 
> > 
> > 
> > 
> > 
> > 
>



Re: Slides for my talk at EuroClojure 2016

2016-10-20 Thread Dragan Djuric
It seems that some javascript doesn't get executed in Safari... I'll have
to see what's happening, but for now I'll update that to use a hardcoded
link.

On Fri, Oct 21, 2016 at 12:40 AM, Sean Corfield <s...@corfield.org> wrote:

> The source of your blog post has
>
> The slides are available <a href="http:/talks/EuroClojure2016/clojure-is-not-afraid-of-the-gpu.html">here</a>,
>
> I’m surprised any browser manages to make a legal hyperlink out of that… :)
>
>
>
> Sean Corfield -- (970) FOR-SEAN -- (904) 302-SEAN
> An Architect's View -- http://corfield.org/
>
> "If you're not annoying somebody, you're not really alive."
> -- Margaret Atwood
>
>
>
> On 10/20/16, 3:37 PM, "Dragan Djuric" <clojure@googlegroups.com on behalf
> of draga...@gmail.com> wrote:
>
>
>
> Hmm, what browser do you use? The link that I'm been shown in the browser
> is http://dragan.rocks/talks/EuroClojure2016/clojure-is-
> not-afraid-of-the-gpu.html
>
> and it works...
>
> On Thursday, October 20, 2016 at 11:43:05 PM UTC+2, Colin Yates wrote:
>
> Unfortunately clicking on the 'here' link takes you to a 404:
> http://talks/EuroClojure2016/clojure-is-not-afraid-of-the-gpu.html
>
> On 20 October 2016 at 22:38, Dragan Djuric <drag...@gmail.com> wrote:
> > Hi all, I posted slides for my upcoming EuroClojure talk, so you can
> enjoy
> > the talk without having to take notes:
> > http://dragan.rocks/articles/16/Clojure-is-not-afraid-of-
> the-GPU-slides-EuroClojure
> >
>
>
>
>
>
>



Re: Slides for my talk at EuroClojure 2016

2016-10-20 Thread Dragan Djuric
Hmm, what browser do you use? The link that I'm being shown in the browser 
is 
http://dragan.rocks/talks/EuroClojure2016/clojure-is-not-afraid-of-the-gpu.html
and it works...

On Thursday, October 20, 2016 at 11:43:05 PM UTC+2, Colin Yates wrote:
>
> Unfortunately clicking on the 'here' link takes you to a 404: 
> http://talks/EuroClojure2016/clojure-is-not-afraid-of-the-gpu.html 
>
> On 20 October 2016 at 22:38, Dragan Djuric <drag...@gmail.com 
> > wrote: 
> > Hi all, I posted slides for my upcoming EuroClojure talk, so you can 
> enjoy 
> > the talk without having to take notes: 
> > 
> http://dragan.rocks/articles/16/Clojure-is-not-afraid-of-the-GPU-slides-EuroClojure
>  
> > 
>



Slides for my talk at EuroClojure 2016

2016-10-20 Thread Dragan Djuric
Hi all, I posted slides for my upcoming EuroClojure talk, so you can enjoy 
the talk without having to take notes: 
http://dragan.rocks/articles/16/Clojure-is-not-afraid-of-the-GPU-slides-EuroClojure



Re: Clojure with Tensorflow, Torch etc (call for participation, brainstorming etc)

2016-10-20 Thread Dragan Djuric
Please note that this is really a TOY NN project, actually a direct 
translation of gigasquid's NN hello-world. It is ridiculous to compare it 
with a library that delegates NN work to cuDNN. And it is a really old 
version of affe, which ddosic has improved in the meantime but has not yet 
released.

What could be fair, for example, is to compare affe with another toy 
library built with core.matrix, or to compare cuDNN bindings built with 
help from Neanderthal's CUDA engine (not yet available) with Cortex, or to 
compare Cortex with something that delegates to one of the experimental 
OpenCL-based "real" NN libraries...

On Thursday, October 20, 2016 at 3:13:40 PM UTC+2, Boris V. Schmid wrote:
>
> Small addition to this post:
>
> There is a tiny library (toy project) of ddosic, who build a neural 
> network with neanderthal. It might be interesting as a benchmark of what 
> speed neanderthal can reach (although it might not currently be a good 
> reflection of neanderthal), versus larger packages with more overhead.
>
> https://github.com/ddosic/affe
>
> On Monday, May 30, 2016 at 8:34:41 PM UTC+2, kovasb wrote:
>>
>> Anyone seriously working on deep learning with Clojure?
>>
>> I'm working with Torch at the day job, and have done work integrating 
>> Tensorflow into Clojure, so I'm fairly familiar with the challenges of what 
>> needs to be done. A bit too much to bite off on my own in my spare time. 
>>
>> So is anyone out there familiar enough with these tools to have a 
>> sensible conversation of what could be done in Clojure?
>>
>> The main question on my mind is: what level of abstraction would be 
>> useful?
>>
>> All the existing tools have several layers of abstraction. In Tensorflow, 
>> at the bottom theres the DAG of operations, and above that a high-level 
>> library of python constructs to build the DAG (and now of course libraries 
>> going higher still). In Torch, its more complicated: there's the excellent 
>> tensor library at the bottom; the NN modules that are widely used; and 
>> various non-orthogonal libraries and modules stack on top of those. 
>>
>> One could try to integrate at the bottom layer, and then re-invent the 
>> layers above that in Clojure. Or one could try to integrate at the higher 
>> layers, which is more complicated, but gives more leverage from the 
>> existing ecosystem. 
>>
>> Any thoughts?
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>



Re: [ANN] CIDER 0.14 (Berlin) released

2016-10-14 Thread Dragan Djuric
Thank you for the great tool. Happy birthday!

On Friday, October 14, 2016 at 9:08:50 AM UTC+2, Bozhidar Batsov wrote:
>
> Hey everyone,
>
> Yesterday I released CIDER 0.14 (Berlin). It's a rather small CIDER 
> update, mostly focusing on bug fixes. Thanks to everyone who contributed to 
> this release!
>
> The release notes are here 
> https://github.com/clojure-emacs/cider/releases/tag/v0.14.0
>
> Enjoy!
>
> P.S. Once again I'd like to solicit more contributions to the project from 
> the CIDER community. I haven't been able to find a lot of time for the 
> project lately (and this has been the case with other prominent team 
> members as well). As a result a ton of issues have piled on (
> https://github.com/clojure-emacs/cider/issues). Help me clean up the 
> tracker and make CIDER better! :-)
>



Re: Neanderthal 0.8.0 released - includes Windows build

2016-10-10 Thread Dragan Djuric
You're right - matrix inversion is on the TODO list. However, in most cases 
when you think you need the matrix inverse (which is very computationally 
expensive), you actually need some factorization (which is computationally 
expensive, but less so than the whole inversion). Factorizations, though, are 
also on the TODO list. Basically, BLAS 1, 2, and 3 are implemented now, while 
LAPACK is on the list of things I'll add next.
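[Editorial aside: for context, generic numerical linear algebra, not Neanderthal API, which did not yet expose factorizations at the time of this post. To solve A x = b you factor once and then do two cheap triangular solves, never forming the inverse.]

\begin{align*}
  A   &= L U  && \text{LU factorization, about } \tfrac{2}{3}n^{3} \text{ flops}\\
  L y &= b    && \text{forward substitution, about } n^{2} \text{ flops}\\
  U x &= y    && \text{back substitution, about } n^{2} \text{ flops}
\end{align*}

Forming A^{-1} explicitly costs about 2 n^{3} flops (roughly three times the factorization) and is typically less accurate; computing A^{-1} b gives the same x at the higher cost.
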

On Monday, October 10, 2016 at 11:44:12 AM UTC+2, Sungjin Chun wrote:
>
> It seems that matrix inversion routine is not in the current version of 
> neanderthal or I cannot find what it is.
> However, some interesting routines like axpy (I've no experience on 
> BLAS/LAPACK) are fresh (positively) to me :-)
>
> Thank you.
>
> On Monday, October 10, 2016 at 2:10:27 PM UTC+9, Dragan Djuric wrote:
>>
>> Thank you for reporting broken links. They are fixed now. Anyway, it was 
>> broken only on the news release page, everything worked ok from anywhere 
>> else on the web site. Regarding the tutorials and docs, just follow the 
>> main page or the links in the menu.
>>
>> On Monday, October 10, 2016 at 1:55:32 AM UTC+2, Sungjin Chun wrote:
>>>
>>> Hi, 
>>>
>>> First, thank you for your great library.
>>> Second, the tutorial link 
>>> http://neanderthal.uncomplicate.org/articles/news/articles/getting_started.html
>>>  
>>> in the release
>>> page is missing.
>>> Third, I'd tried to test some simple linear algebra code of my own such 
>>> as inverting matrix, however, I cannot figure
>>> out how to do this with neanderthal. Are there any docs on this simple 
>>> subjects?
>>>
>>> On Monday, October 10, 2016 at 1:40:19 AM UTC+9, Dragan Djuric wrote:
>>>>
>>>> Windows users should not feel left out from high-performance computing 
>>>> experience in Clojure. Neanderthal now comes ready for Linux, OS X, AND 
>>>> Windows, on all CPUs and AMD, Nvidia, and Intel GPUs!
>>>>
>>>> Greatest thanks go to Dejan Dosic (https://github.com/ddosic), who 
>>>> wrestled Windows peculiarities and found how to install GNU toolchain 
>>>> properly to make ATLAS build smooth on Windows.
>>>>
>>>> http://neanderthal.uncomplicate.org/articles/news/release-0.8.0.html
>>>>
>>>



Re: Neanderthal 0.8.0 released - includes Windows build

2016-10-09 Thread Dragan Djuric
Thank you for reporting the broken links. They are fixed now. Anyway, they were 
broken only on the news release page; everything worked OK from anywhere 
else on the website. Regarding the tutorials and docs, just follow the 
main page or the links in the menu.

On Monday, October 10, 2016 at 1:55:32 AM UTC+2, Sungjin Chun wrote:
>
> Hi, 
>
> First, thank you for your great library.
> Second, the tutorial link 
> http://neanderthal.uncomplicate.org/articles/news/articles/getting_started.html
>  
> in the release
> page is missing.
> Third, I'd tried to test some simple linear algebra code of my own such as 
> inverting matrix, however, I cannot figure
> out how to do this with neanderthal. Are there any docs on this simple 
> subjects?
>
> On Monday, October 10, 2016 at 1:40:19 AM UTC+9, Dragan Djuric wrote:
>>
>> Windows users should not feel left out from high-performance computing 
>> experience in Clojure. Neanderthal now comes ready for Linux, OS X, AND 
>> Windows, on all CPUs and AMD, Nvidia, and Intel GPUs!
>>
>> Greatest thanks go to Dejan Dosic (https://github.com/ddosic), who 
>> wrestled Windows peculiarities and found how to install GNU toolchain 
>> properly to make ATLAS build smooth on Windows.
>>
>> http://neanderthal.uncomplicate.org/articles/news/release-0.8.0.html
>>
>



Neanderthal 0.8.0 released - includes Windows build

2016-10-09 Thread Dragan Djuric
Windows users should not feel left out of the high-performance computing 
experience in Clojure. Neanderthal now comes ready for Linux, OS X, AND 
Windows, on all CPUs and on AMD, Nvidia, and Intel GPUs!

Greatest thanks go to Dejan Dosic (https://github.com/ddosic), who wrestled 
with Windows peculiarities and figured out how to install the GNU toolchain 
properly to make the ATLAS build smooth on Windows.

http://neanderthal.uncomplicate.org/articles/news/release-0.8.0.html



Re: Clojure with Tensorflow, Torch etc (call for participation, brainstorming etc)

2016-10-07 Thread Dragan Djuric

>
>
> If people build on the 'wrong' api, thats a good problem to have. The 
> field is so in flux anyway. The problem can also be mitigated through 
> minimalism in what is released in the beginning. 
>
> This. 



Re: Clojure with Tensorflow, Torch etc (call for participation, brainstorming etc)

2016-10-07 Thread Dragan Djuric
Hi Mike,

Thanks for the update.


> Opening the source is not entirely my decision, this is a collaboration 
> with the Thinktopic folks (Jeff Rose et al.). I'm personally in favour of 
> being pretty open about this stuff but I do think that it would be a 
> mistake if people build too much stuff on top of the wrong API which is 
> likely to happen if we release prematurely.
>
I believe that the number of users who would try this is small anyway, and 
they will use it exclusively for evaluation and hobby projects at first, so, as Kovas 
said, having people build too much on it is something we can only hope for! 
:) I hope the Thinktopic folks will support opening it...
 

> Things I'm paricularly keen to have nailed down in particular before we go 
> public:
> 1. A tensorflow-like DAG model that allows arbitrary operations to be 
> composed compose into (possibly nested) graphs
> 2. Enough abstraction that different backends work (ClojureScript, 
> Pure-JVM, Native, GPU etc.). core.matrix provides most of this, but there 
> are still some deep-learning specific operations that need tuning and can 
> be quite backend-specific, e.g. cuDNN has some specific ways of dealing 
> with mini-batches which we need to get right. I'd love to try Neanderthal 
> with this too if we can get the core.matrix integration working.
> 3. Decent, stable APIs for the algorithms that you typically want to run 
> for mini-batch training, cross-validation etc.
> 4. Pluggable gradient optimisation methods (we currently have stuff like 
> ADADELTA, SDG, ADAM etc. but would like to make sure this is sufficiently 
> general to support any optimisation method)
>
> I'll have a talk with the Thinktopic folks and see if we can come up with 
> a timeline for a open source release. In the meantime, if anyone is 
> *really* interested then we may be able to arrange collaboration on a 
> private basis.
>

It's not urgent, so any rough estimate would be great! 



Re: Clojure with Tensorflow, Torch etc (call for participation, brainstorming etc)

2016-10-07 Thread Dragan Djuric
Hi Kovas,


> One question:
>
> Is it possible to feed Neanderthal's matrix representation (the underlying 
> bytes) into one of these other libraries, to obtain 
> computations Neanderthal doesn't support? 
>

There are two parts to that question, I think: 1) How can you make 
Neanderthal work with any other library that you need to interoperate with? 
2) Does Neanderthal have the operation that you need, and what to do if it 
doesn't?
I'll start with 2:

2) Currently, Neanderthal supports all BLAS 1, 2 and 3 operations for 
vectors and dense matrices on CPU & GPU. The ultimate goal is to support 
all other standard matrix formats (TR, sparse, etc.) AND LAPACK, which has 
extensive support for linear algebra. The good news is that what is there 
works really, really well, because I concentrated my efforts on solving the 
hardest problem first (and succeeded!). Now it is a matter of putting in the 
grunt work to repeat what's done for the existing things to cover more of 
CBLAS and LAPACK API, and even to do the integration with CUDA in a similar 
way I did the OpenCL integration. I could have even done it by now, but I 
preferred to work on other things, one of those being a bayesian data 
analysis library Bayadera, that puts what Neanderthal offers to great use 
:) I have also seen that the Yieldbot people forked Neanderthal and 
implemented some part of LAPACK, but did not release anything nor issue a 
PR. So, if the methods you need fall into the scope of matrices and linear 
algebra (BLAS + LAPACK), there is a good chance it will be supported, 
either by you or some other user providing it, or bugging me often enough 
that I realize it is urgent that I add it :)
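
To make this a bit more concrete, here is a minimal sketch of the kind of 
BLAS 1/2/3 calls that are already covered (I am writing this from memory, so 
treat the factory and function names as illustrative and check them against 
the version you use):

(require '[uncomplicate.neanderthal.core :refer [axpy mv mm]]
         '[uncomplicate.neanderthal.native :refer [dv dge]])

(def x (dv [1 2 3]))
(def y (dv [4 5 6]))
(def a (dge 3 3 (range 9)))

(axpy 2.0 x y) ;; BLAS 1: scales x by 2 and adds y, returning a new vector
(mv a x)       ;; BLAS 2: matrix-vector product
(mm a a)       ;; BLAS 3: matrix-matrix product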

1) There are at least two parts to the interoperation story - API for 
operations (like mm vs mmult or whatever) and that is the very easy part. 
The hard part is a multitude of matrix formats and those formats' 
representations in memory. This is what makes or breaks your performance, 
and not by a few percent but by a few orders of magnitude. The sad part is 
that almost all focus is always on the easy part, completely ignoring the 
hard part or just thinking that it will magically solve itself. So, suppose 
that you have data laid out in memory in the format A. That format may or 
may not be suitable for operation X, and if it is not, it is often a bad 
idea to shoehorn it in for convenience, instead of thinking harder about 
data flow and transition data to format B to be used appropriately. That 
means that even inside the same library, you often need to do the data 
transformations to best suit what you want to do with the data. Long story 
short, you need to do data transformations anyway, so having Neanderthal 
and ND4J support core.matrix mmult operation won't help you a bit here. 
You'll have to transform data from the one to the other. If you are lucky, 
they use the same underlying format, so the transformation is easy or even 
automatic, or can be, but the point is that someone needs to create 
explicit transformations to ensure the optimal way instead of relying on a 
generic interoperability layer (at least for now).
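
As a small illustration of what such an explicit boundary can look like (just 
a sketch; the constructors and accessors are the ones I believe are in the 
released API, so verify them for your version): keep the data in plain Clojure 
structures at the edges, transform it into Neanderthal's format once, do the 
heavy work there, and only pass primitive results back out:

(require '[uncomplicate.neanderthal.core :refer [dot entry]]
         '[uncomplicate.neanderthal.native :refer [sv]])

(def data-a [1.0 2.0 3.0]) ;; plain Clojure data coming from some other library
(def data-b [4.0 5.0 6.0])

(def va (sv data-a)) ;; the explicit transformation into Neanderthal's memory layout
(def vb (sv data-b))

(dot va vb)  ;; the heavy lifting runs on the optimized representation
(entry va 0) ;; primitive results cross the boundary back out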
 

> My situation: Neanderthal covers some of the algorithms I'd like to 
> implement. It looks easier to get started with and understand than the 
> alternatives. But down the line I'll likely want to do convolution, 
> sigmoid, the typical Neural Network layers. Note, I don't care about 
> 'tensors' exactly; the bookkeeping to simulate a tensor can be automated, 
> but the fundamental operations cannot. So there is a question of whether to 
> just absorb the learning curve and shortcomings of these more general 
> libraries rather than to risk switching horses midstream. 
>

I think it is important to note that the operations you mentioned are not 
in the scope of a matrix library, but in the scope of a neural networks 
library. Neanderthal is simply not a library that should have those 
operations, nor can it have all operations for all ML techniques (which are 
countless :)
 
On the other hand, what I created Neanderthal for is exactly to be a building 
block for such libraries. The focus is this: if you need to build a NN 
library, Neanderthal should (ideally) give you standard matrix methods 
for computations and data shuffling, Clojure should enable you to create a 
great interface layer, and (if needed) ClojureCL should help you write 
custom optimized low-level algorithms for GPU and CPU. 
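
For example, a rough sketch of one fully-connected layer built on top of these 
pieces might look like the following (assuming mv from 
uncomplicate.neanderthal.core and fmap! from Fluokitten, which Neanderthal 
supports; the names and sizes are only illustrative):

(require '[uncomplicate.neanderthal.core :refer [mv]]
         '[uncomplicate.neanderthal.native :refer [sv sge]]
         '[uncomplicate.fluokitten.core :refer [fmap!]])

(defn sigmoid! [v]
  ;; destructively applies the logistic function to every entry of v
  (fmap! (fn ^double [^double t] (/ 1.0 (+ 1.0 (Math/exp (- t))))) v))

(defn forward [w x]
  ;; one layer: the activation of the weighted input
  (sigmoid! (mv w x)))

(forward (sge 2 3 [0.1 0.2 0.3 0.4 0.5 0.6]) (sv [1 2 3]))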

> I imagine I'm not alone in this.. if there was a straightforward story for 
> how to interop Neanderthal when necessary with some more general library 
> that would be powerful. Unfortunately I'm not sufficiently expert to 
> evaluate which of the choices would be most pragmatic and how to pull it 
> off. 
>

Today's state (in Clojure, Java, and probably elsewhere) is, IMO: if you 
need a ready-made solution for NNs/DL, you have to pick one library that 
has the stuff that you need, and go with what they recommend 

Re: Clojure with Tensorflow, Torch etc (call for participation, brainstorming etc)

2016-10-06 Thread Dragan Djuric
Just a small addition: I looked at BidMat's code, and even at the JNI/C 
level they are doing some critical things that work on a small scale but bite 
unexpectedly when the JVM needs to rearrange memory, and may also trigger 
copying.

On Thursday, October 6, 2016 at 10:46:04 PM UTC+2, Dragan Djuric wrote:
>
> Hi Kovas,
>
>
>> By the way, I'd love to see matrix/tensor benchmarks of Neanderthal and 
>> Vectorz vs ND4J, MXNet's NDArray, and BidMat..  :)
>>
>
> I don't have exact numbers, but will try to give you a few pointers to 
> help you if you decide to investigate this further:
>
> 0. Neanderthal's scope is matrices and linear algebra. NNs and other stuff 
> is something that could be built on top of it (assuming that the features 
> needed are implemented, which may or may not be true yet), but certainly 
> not in Neanderthal.
>
> 1. Neanderthal is a 100% Clojure solution. One of the main goals, other 
> than the speed, of course, is that it is simple and straightforward, with 
> no overhead. That means that you always know what backend you are using, 
> and you get exactly the speed of that backend. If it works, you are sure it 
> works at the full speed, with no slow fallback. Theoretically, of course, 
> there is always some overhead of FFI, but in Neanderthal it is so miniscule 
> that you can ignore it for all uses that come to my mind. So, basically, 
> Neanderthal is as fast as ATLAS on CPU and CLBlast on GPU (both offer 
> state-of-the-art speed) or any (not yet existing) pure java engine that I 
> might plug in in the future if necessary.
>
> 2. All those other libraries, besides not targeting Clojure at all except 
> for general "you can call Java from Clojure", are trying to be everything 
> for everyone. That has its strengths, because you are, generally, able to 
> accommodate more use cases. On the other hand, it complicates things too 
> much, and can lead to overblown beasts. For example, it might seem good to 
> support MKL, and, ATLAS, and OpenBLAS, and netlib BLAS, and some imaginary 
> fallback solution, like ND4J does (or tries to do), but what's the point of 
> it when today they have more or less the same performance (MKL being a bit 
> faster - in percentages only - but requires $$), and supporting all that 
> stuff makes code AND the installation much, much, more compelex. BLAS is so 
> mature that I think it is better to choose one solution and offer it out of 
> the box. Technically, neanderthal can support all other native blas 
> libraries too, but I intentionally restricted that option because I think 
> fiddling with it does more harm than good. I prefer to give users one Ford 
> model T, than let them choose between 20 different horse carriages. And, if 
> they can even choose the color, provided that their choice is black :)
>
> 3. ND4J is, in my opinion, a typical overblown solution. deeplearning4j 
> guys got the investment dollars, and have to rush to the market with 
> business-friendly solution, which usually favors having a checklist of 
> features regardless of whether those features make sense for the little 
> guy. I hope they succeed in the business sense, but the code I'm seeing 
> from them does not seem promising to me regarding Java getting a great 
> DL/NN library.
>
> 4. NDArray is actually, as the name suggests, a nd array library, and not 
> a matrix library. Why is this important?  Vectors and matrices are 
> something that has been very well researched through decades. The ins and 
> outs of algorithm/architecture fit are known and implemented in BLAS 
> libraries, so you are sure that you get the full performance. N-dimensional 
> vectors (sometime referred as tensors, although that name is not accurate 
> IMO) not so much. So, it is easy that an operation that looks convenient 
> does not lead to a good performance. I do not say it is bad, because if you 
> need that operation, it is better to have something than nothing, but for 
> now I decided to not support 3+ dimensions. This is something that might 
> belong to Neanderthal or on top of it. A long term goal, to be sure. 
> Another aspect of that story is knowledge: most books that I read from the 
> fields of ML/AI give all formulas as vectors of matrices. Basically, 
> matrices are at least 95% (if not more) of potential users need or even 
> understand!
>
> 5. BidiMat seems to have much larger scope. For example, at their 
> benchmark page, I see benchmarks for machine learning algorithms, but for 
> nothing matrix-y.
>
> The speed comparison; it boils down to this: both Neanderthal and those 
> libraries use (or can be linked to use) the same native BLAS libraries. I 
> took great care to make sure Neanderthal does not incur any copying or 
> calling o

Re: Clojure with Tensorflow, Torch etc (call for participation, brainstorming etc)

2016-10-06 Thread Dragan Djuric
Hi Kovas,


> By the way, I'd love to see matrix/tensor benchmarks of Neanderthal and 
> Vectorz vs ND4J, MXNet's NDArray, and BidMat..  :)
>

I don't have exact numbers, but will try to give you a few pointers to help 
you if you decide to investigate this further:

0. Neanderthal's scope is matrices and linear algebra. NNs and other stuff 
is something that could be built on top of it (assuming that the features 
needed are implemented, which may or may not be true yet), but certainly 
not in Neanderthal.

1. Neanderthal is a 100% Clojure solution. One of the main goals, other 
than the speed, of course, is that it is simple and straightforward, with 
no overhead. That means that you always know what backend you are using, 
and you get exactly the speed of that backend. If it works, you are sure it 
works at the full speed, with no slow fallback. Theoretically, of course, 
there is always some overhead of FFI, but in Neanderthal it is so miniscule 
that you can ignore it for all uses that come to my mind. So, basically, 
Neanderthal is as fast as ATLAS on CPU and CLBlast on GPU (both offer 
state-of-the-art speed) or any (not yet existing) pure java engine that I 
might plug in in the future if necessary.

2. All those other libraries, besides not targeting Clojure at all except 
for general "you can call Java from Clojure", are trying to be everything 
for everyone. That has its strengths, because you are, generally, able to 
accommodate more use cases. On the other hand, it complicates things too 
much, and can lead to overblown beasts. For example, it might seem good to 
support MKL, and, ATLAS, and OpenBLAS, and netlib BLAS, and some imaginary 
fallback solution, like ND4J does (or tries to do), but what's the point of 
it when today they have more or less the same performance (MKL being a bit 
faster - in percentages only - but requires $$), and supporting all that 
stuff makes code AND the installation much, much more complex. BLAS is so 
mature that I think it is better to choose one solution and offer it out of 
the box. Technically, neanderthal can support all other native blas 
libraries too, but I intentionally restricted that option because I think 
fiddling with it does more harm than good. I prefer to give users one Ford 
model T, than let them choose between 20 different horse carriages. And, if 
they can even choose the color, provided that their choice is black :)

3. ND4J is, in my opinion, a typical overblown solution. deeplearning4j 
guys got the investment dollars, and have to rush to the market with 
business-friendly solution, which usually favors having a checklist of 
features regardless of whether those features make sense for the little 
guy. I hope they succeed in the business sense, but the code I'm seeing 
from them does not seem promising to me regarding Java getting a great 
DL/NN library.

4. NDArray is actually, as the name suggests, a nd array library, and not a 
matrix library. Why is this important?  Vectors and matrices are something 
that has been very well researched through decades. The ins and outs of 
algorithm/architecture fit are known and implemented in BLAS libraries, so 
you are sure that you get the full performance. N-dimensional vectors 
(sometime referred as tensors, although that name is not accurate IMO) not 
so much. So, it is easy that an operation that looks convenient does not 
lead to a good performance. I do not say it is bad, because if you need 
that operation, it is better to have something than nothing, but for now I 
decided to not support 3+ dimensions. This is something that might belong 
to Neanderthal or on top of it. A long term goal, to be sure. Another 
aspect of that story is knowledge: most books that I read from the fields 
of ML/AI give all formulas in terms of vectors and matrices. Basically, matrices are 
at least 95% (if not more) of what potential users need or even understand!

5. BidMat seems to have a much larger scope. For example, at their benchmark 
page, I see benchmarks for machine learning algorithms, but nothing 
matrix-y.

The speed comparison; it boils down to this: both Neanderthal and those 
libraries use (or can be linked to use) the same native BLAS libraries. I 
took great care to make sure Neanderthal does not incur any copying or 
calling overhead. From what I saw glancing at the code of other libraries, 
they didn't. They might support that if you set up everything well among 
lots of options, or they don't if you do not know how to ensure this. So I 
doubt any of those could be noticeably faster, and they can be much slower 
if you slip somewhere.
I would also love to see straightforward numbers, but I was unable to find 
anything like that for those libraries. BidMat, for example, gives 
benchmarks of K-Means on the MNIST dataset - I do not know how this can be used 
to discern how fast it is with matrices, other than that it is, generally, fast 
at K-Means.
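
If someone wants to produce such straightforward numbers, a minimal sketch 
with criterium would be enough (the matrix size and factory names here are 
just an example, so adjust them to the API version you benchmark):

(require '[uncomplicate.neanderthal.core :refer [mm]]
         '[uncomplicate.neanderthal.native :refer [dge]]
         '[criterium.core :refer [quick-bench]])

(def n 1024)
(def a (dge n n (repeatedly (* n n) rand))) ;; two random double-precision n x n matrices
(def b (dge n n (repeatedly (* n n) rand)))

(quick-bench (mm a b)) ;; reports the mean time of one matrix-matrix multiplication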


Re: Clojure with Tensorflow, Torch etc (call for participation, brainstorming etc)

2016-10-06 Thread Dragan Djuric
Hey Mike,

A friend asked me if I know of any good (usable) deep learning libraries 
for Clojure. I remembered you had some earlier neural networks library that 
was at least OK for experimenting, but seems abandoned for your current 
work in a similar domain. A bit of digging lead me to this post.

I understand that this library may not be completely ready yet, but I 
wondered whether you are now able to give a better estimate of where it 
stands in comparison with other DL offerings, like what the deeplearning4j guys 
are doing, or even with the established non-Java libraries such as Theano, 
Torch, Caffe, and TensorFlow. What is the chance of you releasing it even 
if it is not 100% ready? 

I get the reluctance to commit to a certain API, but I don't think everyone 
will rush to commit their code to the API you release anyway, and the open 
development will certainly help both the (potential) users and your team 
(by returning free testing & feedback).


On Tuesday, May 31, 2016 at 7:17:35 AM UTC+2, Mikera wrote:
>
> I've been working with a number of collaborators on a deep learning 
> library for Clojure. 
>
> Some key features:
> - An abstract API for key machine learning functionality
> - Ability to declare graphs / stacks of operations (somewhat analogous to 
> tensorflow)
> - Support for multiple underlying implementations (ClojureScript, JVM, 
> CPU, GPU)
> - Integration with core.matrix for N-dimensional data processing
>
> We intend to release as open source. We haven't released yet because we 
> want to get the API right first but it is looking very promising.
>
> On Tuesday, 31 May 2016 02:34:41 UTC+8, kovasb wrote:
>>
>> Anyone seriously working on deep learning with Clojure?
>>
>> I'm working with Torch at the day job, and have done work integrating 
>> Tensorflow into Clojure, so I'm fairly familiar with the challenges of what 
>> needs to be done. A bit too much to bite off on my own in my spare time. 
>>
>> So is anyone out there familiar enough with these tools to have a 
>> sensible conversation of what could be done in Clojure?
>>
>> The main question on my mind is: what level of abstraction would be 
>> useful?
>>
>> All the existing tools have several layers of abstraction. In Tensorflow, 
>> at the bottom theres the DAG of operations, and above that a high-level 
>> library of python constructs to build the DAG (and now of course libraries 
>> going higher still). In Torch, its more complicated: there's the excellent 
>> tensor library at the bottom; the NN modules that are widely used; and 
>> various non-orthogonal libraries and modules stack on top of those. 
>>
>> One could try to integrate at the bottom layer, and then re-invent the 
>> layers above that in Clojure. Or one could try to integrate at the higher 
>> layers, which is more complicated, but gives more leverage from the 
>> existing ecosystem. 
>>
>> Any thoughts?
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>



Re: Neanderthal (fast CPU & GPU matrix library for Clojure) will also support Windows out of the box

2016-10-05 Thread Dragan Djuric
No, I do not have any cooperation with them, nor did I know they use 
Neanderthal.

Well, I guess good software finds its way to the most unexpected places :)

On Wednesday, October 5, 2016 at 12:42:00 PM UTC+2, Christian Weilbach 
wrote:
>
> Nice! I have seen that the neanderthal licence file is in the libnd4j 
> repository: 
>
> https://github.com/deeplearning4j/libnd4j 
>
> Do you have some cooperation with the dl4j people? 
>
> Cheers, 
> Christian 
>
> On 04.10.2016 17:53, Dragan Djuric wrote: 
> > Hi all, 
> > 
> > I've just spent some time building ATLAS for windows, and created a 
> > windows binary of the current snapshot version. 
> > 
> > There seems to be no performance tax on windows, or at least it is not 
> > large. I wasn't unable to compare it on the same machine, but on my i7 
> > laptop (2.4 GHz) with Windows, some brief tests that I run are at about 
> > 60% of the speed I get on i7 4770K at 4.4 GHz on Linux (single-thread 
> > mode). 
> > 
> > I guess this works as expected, and you can expect speedups as 
> > in http://neanderthal.uncomplicate.org/articles/benchmarks.html 
> > 
> > So, from the version 0.8.0 and on, Neanderthal will support Linux, OSX, 
> > AND Windows out of the box! OSX will always run optimized (since it 
> > comes with its own native BLAS), and on Linux and Windows you'll be able 
> > to use a generic ATLAS binary or to compile your own optimized version. 
> > Either of those would be much faster than any pure Java alternative. 
> > There's something for everyone now (except for the ClojureScript folks - 
> > that may come in future versions of neanderthal) :) 
> > 
>
>
>



Neanderthal (fast CPU & GPU matrix library for Clojure) will also support Windows out of the box

2016-10-04 Thread Dragan Djuric
Hi all,

I've just spent some time building ATLAS for windows, and created a windows 
binary of the current snapshot version. 

There seems to be no performance tax on Windows, or at least it is not 
large. I wasn't able to compare it on the same machine, but on my i7 
laptop (2.4 GHz) with Windows, some brief tests that I ran are at about 60% 
of the speed I get on i7 4770K at 4.4 GHz on Linux (single-thread mode). 

I guess this works as expected, and you can expect speedups as in 
http://neanderthal.uncomplicate.org/articles/benchmarks.html

So, from the version 0.8.0 and on, Neanderthal will support Linux, OSX, AND 
Windows out of the box! OSX will always run optimized (since it comes with 
its own native BLAS), and on Linux and Windows you'll be able to use a 
generic ATLAS binary or to compile your own optimized version. Either of 
those would be much faster than any pure Java alternative. There's 
something for everyone now (except for the ClojureScript folks - that may 
come in future versions of neanderthal) :)
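
Once the jar and the native binary are on the classpath, a minimal smoke test 
could be as simple as this (a sketch; the same calls should work unchanged on 
Linux, OSX, and Windows):

(require '[uncomplicate.neanderthal.core :refer [dot]]
         '[uncomplicate.neanderthal.native :refer [sv]])

(dot (sv [1 2 3]) (sv [1 2 3])) ;; should evaluate to 14.0 if the native backend loaded correctly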



Re: parallel sequence side-effect processor

2016-09-24 Thread Dragan Djuric
Just for reference, this is the code with Neanderthal, so we can see 
the difference when running with primitives. No native code has been used 
here, just pure Clojure.

(def sv1 (sv v1))
(def sv2 (sv v2))

(defn side-effect-4 ^double [^double a ^double b] 1.0)

(with-progress-reporting (quick-bench  (foldmap (double-fn +) 0 
side-effect-4 sv1 sv2)))

 Execution time mean : 16.436371 µs



On Saturday, September 24, 2016 at 8:07:21 PM UTC+2, Dragan Djuric wrote:
>
> Francis,
>
> The times you got also heavily depend on the actual side-effect function, 
> which in this case is much faster when called with one arg, instead of with 
> varargs, that fluokitten need here.
>
> If we give fluokitten a function that does not create a sequence for 
> multiple arguments, it is much faster than the fastest example that you 
> give (although it does not do any numerical optimizations).
>
> For example, your code runs the following times on my machine (measured by 
> criterium):
>
> (def v1 (vec (range 1)))
>
> (def v2 (vec (range 1)))
>
> (def side-effect (constantly nil))
>
> (defn side-effect-2 [a b] nil)
>
> (with-progress-reporting (quick-bench  (foldmap op nil side-effect-2 v1 v2
> )))
>
> Execution time mean : 290.822794 µs
>
> (with-progress-reporting (quick-bench  (run! side-effect (mapv vector v1 
> v2
>
> Execution time mean : 847.292924 µs
>
> (defn side-effect-3 [a] nil)
>
> (with-progress-reporting (quick-bench  (run! side-effect-3 (mapv vector 
> v1 v2
>
>  Execution time mean : 847.554524 µs
>
> So, in your previous example, all the "normal" clojure code seems to be 
> optimally written, but fluokitten was handicapped by the side-effect's use 
> of varargs for only 2 arguments. When side-effects is written "better, 
> fluokitten is 3 times faster than the fastest "vanilla" solution (of those 
> mentioned).
>
> Of course, most of the time in real life, except with numerics, the 
> side-effect function would be much slower than in this example, so I expect 
> all of those examples to have similar performance. The actual issue becomes 
> memory allocation, and it seems to me that marsi0 asked the question with 
> this in mind.
>
> On Saturday, September 24, 2016 at 4:59:35 PM UTC+2, Francis Avila wrote:
>>
>> BTW I noticed that sequence got hotter and eventually became the fastest, 
>> but it took many more runs:
>>
>> (time (dorun (sequence (map side-effect) col1 col2)))
>> "Elapsed time: 31.321698 msecs"
>> => nil
>> (time (dorun (sequence (map side-effect) col1 col2)))
>> "Elapsed time: 15.492247 msecs"
>> => nil
>> (time (dorun (sequence (map side-effect) col1 col2)))
>> "Elapsed time: 10.9549 msecs"
>> => nil
>> (time (dorun (sequence (map side-effect) col1 col2)))
>> "Elapsed time: 9.122967 msecs"
>> => nil
>> (time (dorun (sequence (map side-effect) col1 col2)))
>> "Elapsed time: 18.056823 msecs"
>> => nil
>> (time (dorun (sequence (map side-effect) col1 col2)))
>> "Elapsed time: 9.381068 msecs"
>> => nil
>>
>>
>> This is likely close to the theoretical maximum (using loop+recur plus a 
>> mutable iterator over multiple colls). The only thing faster would get rid 
>> of the iterator, which would require using a memory-indexable data 
>> structure like an array, and now you are in core.matrix and neanderthal 
>> territory.
>>
>>
>> On Thursday, September 22, 2016 at 11:02:09 PM UTC-5, Mars0i wrote:
>>>
>>> This is almost the same as an issue I raised in this group over a year 
>>> and a half ago, here 
>>> <https://groups.google.com/forum/#!msg/clojure/bn5QmxQF7vI/vO1XLSyPXC8J;context-place=forum/clojure>.
>>>   
>>> I suggested that Clojure should include function with map's syntax but that 
>>> was executed only for side-effects, without constructing sequences.  No one 
>>> else was interested--no problem.   It's still bugging me.
>>>
>>> It's not map syntax that I care about at this point.  What's bugging me 
>>> is that there's no standard, built-in way to process multiple sequences for 
>>> side effects without (a) constructing unnecessary sequences or (b) rolling 
>>> my own function with loop/recur or something else.
>>>
>>> If I want to process multiple sequences for side-effects in the the way 
>>> that 'for' does, Clojure gives me 'doseq'.  Beautiful.  I can operate on 
>>> the cross product of the sequences, or filter them in various ways.
>>>
>>> If

Re: parallel sequence side-effect processor

2016-09-24 Thread Dragan Djuric
Francis,

The times you got also heavily depend on the actual side-effect function, 
which in this case is much faster when called with one arg, instead of with 
the varargs that fluokitten needs here.

If we give fluokitten a function that does not create a sequence for 
multiple arguments, it is much faster than the fastest example that you 
give (although it does not do any numerical optimizations).

For example, your code runs the following times on my machine (measured by 
criterium):

(def v1 (vec (range 1)))

(def v2 (vec (range 1)))

(def side-effect (constantly nil))

(defn side-effect-2 [a b] nil)

(with-progress-reporting (quick-bench  (foldmap op nil side-effect-2 v1 v2
)))

Execution time mean : 290.822794 µs

(with-progress-reporting (quick-bench  (run! side-effect (mapv vector v1 v2))))


Execution time mean : 847.292924 µs

(defn side-effect-3 [a] nil)

(with-progress-reporting (quick-bench  (run! side-effect-3 (mapv vector v1 v2))))

 Execution time mean : 847.554524 µs

So, in your previous example, all the "normal" Clojure code seems to be 
optimally written, but fluokitten was handicapped by the side-effect's use 
of varargs for only 2 arguments. When the side effect is written "better", 
fluokitten is 3 times faster than the fastest "vanilla" solution (of those 
mentioned).

Of course, most of the time in real life, except with numerics, the 
side-effect function would be much slower than in this example, so I expect 
all of those examples to have similar performance. The actual issue becomes 
memory allocation, and it seems to me that marsi0 asked the question with 
this in mind.

On Saturday, September 24, 2016 at 4:59:35 PM UTC+2, Francis Avila wrote:
>
> BTW I noticed that sequence got hotter and eventually became the fastest, 
> but it took many more runs:
>
> (time (dorun (sequence (map side-effect) col1 col2)))
> "Elapsed time: 31.321698 msecs"
> => nil
> (time (dorun (sequence (map side-effect) col1 col2)))
> "Elapsed time: 15.492247 msecs"
> => nil
> (time (dorun (sequence (map side-effect) col1 col2)))
> "Elapsed time: 10.9549 msecs"
> => nil
> (time (dorun (sequence (map side-effect) col1 col2)))
> "Elapsed time: 9.122967 msecs"
> => nil
> (time (dorun (sequence (map side-effect) col1 col2)))
> "Elapsed time: 18.056823 msecs"
> => nil
> (time (dorun (sequence (map side-effect) col1 col2)))
> "Elapsed time: 9.381068 msecs"
> => nil
>
>
> This is likely close to the theoretical maximum (using loop+recur plus a 
> mutable iterator over multiple colls). The only thing faster would get rid 
> of the iterator, which would require using a memory-indexable data 
> structure like an array, and now you are in core.matrix and neanderthal 
> territory.
>
>
> On Thursday, September 22, 2016 at 11:02:09 PM UTC-5, Mars0i wrote:
>>
>> This is almost the same as an issue I raised in this group over a year 
>> and a half ago, here 
>> .
>>   
>> I suggested that Clojure should include function with map's syntax but that 
>> was executed only for side-effects, without constructing sequences.  No one 
>> else was interested--no problem.   It's still bugging me.
>>
>> It's not map syntax that I care about at this point.  What's bugging me 
>> is that there's no standard, built-in way to process multiple sequences for 
>> side effects without (a) constructing unnecessary sequences or (b) rolling 
>> my own function with loop/recur or something else.
>>
>> If I want to process multiple sequences for side-effects in the the way 
>> that 'for' does, Clojure gives me 'doseq'.  Beautiful.  I can operate on 
>> the cross product of the sequences, or filter them in various ways.
>>
>> If I want to process multiple sequences by applying an n-ary function to 
>> the first element of each of n sequences, then to the second element of 
>> each sequence, and so on, I can use 'map' or 'mapv', but that means 
>> constructing unnecessary collections.
>>
>> Or I can splice my n sequences together, and make a single sequence of 
>> n-tuples, and use doseq to process it.  (There's an example like this on 
>> the doseq doc page.)  Or I can process such a sequence with 'reduce'.  More 
>> unnecessary collections, though.
>>
>> Or I can use 'dotimes', and index into each of the collections, which is 
>> OK if they're vectors, but ... ugh. why?
>>
>> Or I can construct my own function using first and rest or first and next 
>> on each of my sequences, via loop/recur, for example.   But that seems odd 
>> to me.  
>>
>> Isn't this a common use case?  Is processing multiple sequences for 
>> side-effects with corresponding elements in each application so unusual?  
>> (Am I the only one?)  Isn't it odd that we have doseq and map but nothing 
>> that processes multiple sequences for side-effects, in sequence, rather 
>> than as a cross-product?  
>>
>> (I admit that in my current use case, the sequences are small, so 
>> 

Re: parallel sequence side-effect processor

2016-09-24 Thread Dragan Djuric
That's why I still think that in this particular case, there is no new
sequence creation (by fluokitten). Yes, it does call first/next, but those
do not require (significant) new memory or copying. They reuse the memory
of the underlying vectors, if I understood that well. Or there is something
else that you referred to, Tim?

An additional note is that this is the current implementation. If someone points
me to a better way to implement this case, I'll improve it. The API would
stay the same. foldmap, in any case, is more general than this use case,
and also does the folding (which is unneeded here, but very useful in
general).

And, as Francis said, if the priority is speed, I would use Neanderthal
here, or maybe even ClojureCL if side effects are also numerical and there
is enough data that GPU or CPU vectorization can make a difference.
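
For instance, if the "side effect" in the hot loop is really just a numeric 
accumulation over the two collections, it can often be rephrased as a single 
vectorized call instead (a sketch, reusing the v1 and v2 defined earlier in 
the thread and the sv constructor as it was used above):

(require '[uncomplicate.neanderthal.core :refer [dot]]
         '[uncomplicate.neanderthal.native :refer [sv]])

(dot (sv v1) (sv v2)) ;; one optimized call instead of element-by-element side effects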

On Saturday, September 24, 2016, Alex Miller  wrote:

> Just a general note of caution. range is a highly specialized (and
> optimized) impl for both reducing and seq traversal. Also note that seqs on
> vecs are about the next-most optimized seq (because they also don't
> allocate or cache - rather they just pull from the vector nodes as the
> backing store for elements). So just be cautious about making general
> conclusions about reduce and seq performance from these two specific cases.
>
>



Re: parallel sequence side-effect processor

2016-09-24 Thread Dragan Djuric
Now I am on the REPL, and the solution is straightforward:

(foldmap op nil println [1 2 3] [4 5 6])

gives:

1 4
2 5
3 6
nil

The first function is a folding function. In this case we can use op, a 
monoid operation. Since nil is also a monoid, everything will be folded to 
nil. The second part is a function that maps inputs to the monoid that we 
need (nil in this case). println does just this, and does the side effect 
that you asked for. No extra sequence building involved, and we do not need 
to write extra functions. 

On Saturday, September 24, 2016 at 2:42:16 AM UTC+2, Mars0i wrote:
>
> Thanks everyone--I'm very much appreciating the discussion, though I'm not 
> sure I follow every point.  
>
> Dragan, thank you very much for suggesting (and writing) foldmap.  Very 
> nice.  I certainly don't mind using a library, though I still think there 
> ought to be something like what I've described in the language core.
>
> Francis, thanks for separating out the different types of intermediate 
> collections.  I'm not entirely clear on type 1.  It sounds like those are 
> just collections that are already there before processing them.  Or are you 
> saying that Clojure has to convert them to a seq as such?  Why would doseq 
> have to do that, for example?
>
> It's an understatement to say that my understanding of monads, monoids, 
> etc. is weak.   Haven't used Haskell in years, and I never understood 
> monads, etc..  They seem trivial or/and or deeply mysterious.  One day I'll 
> have to sit down and figure it out.  For foldmap, I don't understand what 
> the first function argument to foldmap is supposed to do, but it's easy 
> enough to put something there that does nothing.
>



Re: parallel sequence side-effect processor

2016-09-23 Thread Dragan Djuric
There are a few typos and defn is missing from fb in my last message, but I 
hope it is still readable. Sorry, I am typing this on a mobile device while 
watching the last episode of The Man in the High Castle :) Also, I am 
talking about the code I wrote years ago off the top of my head, without 
access to the repl :)

On Saturday, September 24, 2016 at 1:44:10 AM UTC+2, Dragan Djuric wrote:
>
> A couple of things:
> 1. How fold/foldmap and any other function works, depends on the actual 
> type. For example, if you look at 
> https://github.com/uncomplicate/neanderthal/blob/master/src/clojure/uncomplicate/neanderthal/impl/fluokitten.clj#L396
>  
> you can see that there no intermediate allocations, and everything is 
> primitive.
> 2. Now, if you give foldmap a sequence (or a vector), it goes to the 
> implementation that you pointed to. Now, the difference from map: if I 
> understand well, map would produce an unnecessary resulting sequence. 
> foldmap does not. your accumulating function does need to return 1, or 
> sequences - why not return nil? Also, the accumulator is nil, and can use 
> any dummy function that just nils everything. There is only a matter of 
> calling first/next. Do they really produce any new instance objects? That 
> depends on the implementation of seq, I believe, but it's the same even if 
> we used loop/recur, I believe? 
> 3. The sequences in your printout results are the result of how clojure 
> treat varargs, or I am missing something. So, if I give it a function such 
> as (fb [_ a b] (println a b)), what is exactly allocated, that is not 
> allocated even when using loop/recur directly with first/next?
>
> On Saturday, September 24, 2016 at 1:25:14 AM UTC+2, tbc++ wrote:
>>
>> Yeah, I have to call you out on this one Dragan. I ran the following 
>> code: 
>>
>> (ns fold-test
>> (:require [uncomplicate.fluokitten.core :refer [foldmap]]))
>>
>> (defn fa [& args]
>> (println "fa " args)
>> 1)
>>
>> (defn fb [& args]
>> (println "fb " args)
>> 1)
>>
>> (defn test-fold []
>> (foldmap fa nil fb [1 2 3] [4 5 6]))
>>
>> (test-fold)
>>
>>
>> This code produced: 
>>
>> fb  (1 4)
>> fa  (nil 1)
>> fb  (2 5)
>> fa  (1 1)
>> fb  (3 6)
>> fa  (1 1)
>>
>> So I put a breakpoint in `fb` and ran it again. The stacktrace says it 
>> ends up in algo/collection-foldmap which we can see here: 
>> https://github.com/uncomplicate/fluokitten/blob/master/src/uncomplicate/fluokitten/algo.clj#L415-L443
>>
>> That function is creating seqs out of all its arguments! So it really is 
>> not better than clojure.core/map as far as allocation is concerned. 
>>
>> Timothy
>>
>> On Fri, Sep 23, 2016 at 5:15 PM, Francis Avila <fav...@breezeehr.com> 
>> wrote:
>>
>>> There are a few intermediate collections here:
>>>
>>>
>>>1. The source coll may produce a seq object. How costly this is 
>>>depends on the type of coll and the quality of its iterator/ireduce/seq 
>>>implementations.
>>>2. You may need to collect multiple source colls into a tuple-like 
>>>thing to produce a single object for the side-effecting function
>>>3. You may have an intermediate seq/coll of these tuple-like things.
>>>4. You may have a useless seq/coll of "output" from the 
>>>side-effecting function
>>>
>>> In the single-coll case:
>>>
>>> (map f col1) pays 1,4.
>>> (doseq [x col1] (f x)) pays 1.
>>> (run! f col1) pays 1 if coll has an inefficient IReduce, otherwise it 
>>> pays nothing.
>>> (fold f col1) is the same (using reducers r/fold protocol for vectors, 
>>> which ultimately uses IReduce)
>>>
>>> In the multi-coll case:
>>>
>>> (map f coll1 col2) pays all four. 
>>> (run! (fn [[a b]] (f a b)) (map vector col1 col2)) pays 1, 2, and 3.
>>> (doseq [[a b] (map vector col1 col2)] (f a b)) pays 1, 2, 3.
>>> (fold f col1 col2) pays 1 from what I can see? (It uses first+next to 
>>> walk over the items stepwise? There's a lot of indirection so I'm not 100% 
>>> sure what the impl is for vectors that actually gets used.)
>>>
>>> There is no way to avoid 1 in the multi-step case (or 2 if you are fully 
>>> variadic), all you can do is use the most efficient-possible intermediate 
>>> object to track the traversal. Iterators are typically cheaper than seqs, 
>>> so the ideal case would be a loop-recur over multiple iterators.
>>>

Re: parallel sequence side-effect processor

2016-09-23 Thread Dragan Djuric
A couple of things:
1. How fold/foldmap (or any other function) works depends on the actual 
type. For example, if you look at 
https://github.com/uncomplicate/neanderthal/blob/master/src/clojure/uncomplicate/neanderthal/impl/fluokitten.clj#L396 
you can see that there are no intermediate allocations, and everything is 
primitive.
2. Now, if you give foldmap a sequence (or a vector), it goes to the 
implementation that you pointed to. Now, the difference from map: if I 
understand correctly, map would produce an unnecessary resulting sequence. 
foldmap does not. Your accumulating function does not need to return 1, or 
sequences - why not return nil? Also, the accumulator is nil, and you can use 
any dummy function that just nils everything. There is only the matter of 
calling first/next. Do they really produce any new instance objects? That 
depends on the implementation of seq, I believe, but it's the same even if 
we used loop/recur, I believe? 
3. The sequences in your printout results are the result of how Clojure 
treats varargs, unless I am missing something. So, if I give it a function such 
as (defn fb [_ a b] (println a b)), what exactly is allocated that is not 
allocated even when using loop/recur directly with first/next?

On Saturday, September 24, 2016 at 1:25:14 AM UTC+2, tbc++ wrote:
>
> Yeah, I have to call you out on this one Dragan. I ran the following code: 
>
> (ns fold-test
> (:require [uncomplicate.fluokitten.core :refer [foldmap]]))
>
> (defn fa [& args]
> (println "fa " args)
> 1)
>
> (defn fb [& args]
> (println "fb " args)
> 1)
>
> (defn test-fold []
> (foldmap fa nil fb [1 2 3] [4 5 6]))
>
> (test-fold)
>
>
> This code produced: 
>
> fb  (1 4)
> fa  (nil 1)
> fb  (2 5)
> fa  (1 1)
> fb  (3 6)
> fa  (1 1)
>
> So I put a breakpoint in `fb` and ran it again. The stacktrace says it 
> ends up in algo/collection-foldmap which we can see here: 
> https://github.com/uncomplicate/fluokitten/blob/master/src/uncomplicate/fluokitten/algo.clj#L415-L443
>
> That function is creating seqs out of all its arguments! So it really is 
> not better than clojure.core/map as far as allocation is concerned. 
>
> Timothy
>
> On Fri, Sep 23, 2016 at 5:15 PM, Francis Avila <fav...@breezeehr.com 
> > wrote:
>
>> There are a few intermediate collections here:
>>
>>
>>1. The source coll may produce a seq object. How costly this is 
>>depends on the type of coll and the quality of its iterator/ireduce/seq 
>>implementations.
>>2. You may need to collect multiple source colls into a tuple-like 
>>thing to produce a single object for the side-effecting function
>>3. You may have an intermediate seq/coll of these tuple-like things.
>>4. You may have a useless seq/coll of "output" from the 
>>side-effecting function
>>
>> In the single-coll case:
>>
>> (map f col1) pays 1,4.
>> (doseq [x col1] (f x)) pays 1.
>> (run! f col1) pays 1 if coll has an inefficient IReduce, otherwise it 
>> pays nothing.
>> (fold f col1) is the same (using reducers r/fold protocol for vectors, 
>> which ultimately uses IReduce)
>>
>> In the multi-coll case:
>>
>> (map f coll1 col2) pays all four. 
>> (run! (fn [[a b]] (f a b)) (map vector col1 col2)) pays 1, 2, and 3.
>> (doseq [[a b] (map vector col1 col2)] (f a b)) pays 1, 2, 3.
>> (fold f col1 col2) pays 1 from what I can see? (It uses first+next to 
>> walk over the items stepwise? There's a lot of indirection so I'm not 100% 
>> sure what the impl is for vectors that actually gets used.)
>>
>> There is no way to avoid 1 in the multi-step case (or 2 if you are fully 
>> variadic), all you can do is use the most efficient-possible intermediate 
>> object to track the traversal. Iterators are typically cheaper than seqs, 
>> so the ideal case would be a loop-recur over multiple iterators.
>>
>> In the multi-coll case there is also no way IReduce can help. IReduce is 
>> a trade: you give up the power to see each step of iteration in order to 
>> allow the collection to perform the overall reduction operation more 
>> efficiently. However with multi-coll you really do need to control the 
>> iteration so you can get all the items at an index together.
>>
>> The ideal for multi-collection would probably be something that 
>> internally looks like clojure.core/sequence but doesn't accumulate the 
>> results. (Unfortunately some of the classes necessary to do this 
>> (MultiIterator) are private.)
>>
>> Fluokitten could probably do it with some tweaking to its 
>> algo/collection-foldmap to use iterators where possible inste

Re: parallel sequence side-effect processor

2016-09-23 Thread Dragan Djuric
fluokitten does not have anything to do with Haskell, or how Haskell does 
those things. It is completely tailored to Clojure. Regarding marsi0's 
task, now I see that, actually, foldmap is what should be used, and not 
fold. So, it's something like (foldmap dummy-nil-accumulating-fn nil 
side-effect-fn a b).

It does not create intermediate allocations.

If you use neanderthal vectors/matrices for the data, even better! They 
support fluokitten, and keep everything primitive, so even objects are kept 
in check.

I think that the best way is to try it in a simple project and see how it 
works. Maybe I am missing something, but it seems to me this is more or 
less what marsi0 is looking for.
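
A minimal sketch of that, with fixed-arity functions instead of varargs (the 
helper names here are mine, purely for illustration):

(require '[uncomplicate.fluokitten.core :refer [foldmap]])

(defn nil-acc [_ _] nil)              ;; dummy accumulating fn: folds everything into nil
(defn print-pair [a b] (println a b)) ;; the side effect, one fixed 2-arg call per pair

(foldmap nil-acc nil print-pair [1 2 3] [4 5 6])
;; prints 1 4, 2 5, 3 6 and returns nil, without building a result sequence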

On Saturday, September 24, 2016 at 12:52:48 AM UTC+2, tbc++ wrote:
>
> How is this done, I searched the source, but there was so much indirection 
> I can't find it. I'm familiar with how Haskell does rfold vs lfold, but one 
> of those does create allocations, they may be in the form of closures, but 
> they are allocations none the less. So does fold use iterators or something?
>
> Timothy
>
> On Fri, Sep 23, 2016 at 4:23 PM, Dragan Djuric <drag...@gmail.com 
> > wrote:
>
>> fluokitten's fold is MUCH better than (map f a b) because it does NOT 
>> create intermediate collections. just use (fold f a b) and it would fold 
>> everything into one thing (in this case nil). If f is a function with side 
>> effects, it will invoke them. No intermediate collection is created AND the 
>> folding would be optimized per the type of a.
>>
>> On Friday, September 23, 2016 at 10:56:00 PM UTC+2, tbc++ wrote:
>>>
>>> How is fluokitten's fold any better than using seqs like (map f a b) 
>>> would? Both create intermediate collections.
>>>
>>> On Fri, Sep 23, 2016 at 11:40 AM, Dragan Djuric <drag...@gmail.com> 
>>> wrote:
>>>
>>>> If you do not insist on vanilla clojure, but can use a library, fold 
>>>> from fluokitten might enable you to do this. It is similar to reduce, but 
>>>> accepts multiple arguments. Give it a vararg folding function that prints 
>>>> what you need and ignores the first parameter, and you'd get what you 
>>>> asked 
>>>> for.
>>>>
>>>>
>>>> On Friday, September 23, 2016 at 7:15:42 PM UTC+2, Mars0i wrote:
>>>>>
>>>>> On Friday, September 23, 2016 at 11:11:07 AM UTC-5, Alan Thompson 
>>>>> wrote:
>>>>>>
>>>>>> ​Huh.  I was also unaware of the run! function.​
>>>>>>
>>>>>> I suppose you could always write it like this:
>>>>>>
>>>>>> (def x (vec (range 3)))
>>>>>> (def y (vec (reverse x)))
>>>>>>
>>>>>> (run!
>>>>>>   (fn [[x y]] (println x y))
>>>>>>
>>>>>>   (map vector x y))
>>>>>>
>>>>>>
>>>>>>  > lein run
>>>>>> 0 2
>>>>>> 1 1
>>>>>> 2 0
>>>>>>
>>>>>>
>>>>> Yes.  But that's got the same problem.  Doesn't matter with a toy 
>>>>> example, but the (map vector ...) could be undesirable with large 
>>>>> collections in performance-critical code.
>>>>>
>>>>> although the plain old for loop with dotimes looks simpler:
>>>>>>
>>>>>> (dotimes [i (count x) ]
>>>>>>   (println (x i) (y i)))
>>>>>>
>>>>>>
>>>>>> maybe that is the best answer? It is hard to beat the flexibility of 
>>>>>> a a loop and an explicit index.
>>>>>>
>>>>>
>>>>> I agree that this is clearer, but it kind of bothers me to index 
>>>>> through a vector sequentially in Clojure.  We need indexing In Clojure 
>>>>> because sometimes you need to access a vector more arbitrarily.  If 
>>>>> you're 
>>>>> just walking the vector in order, we have better methods--as long as we 
>>>>> don't want to walk multiple vectors in the same order for side effects.
>>>>>
>>>>> However, the real drawback of the dotimes method is that it's not 
>>>>> efficient for the general case; it could be slow on lists, lazy 
>>>>> sequences, 
>>>>> etc. (again, on non-toy examples).  Many of the most convenient Clojure 
>>>>> functions return lazy sequences.  Even the non-lazy sequences returned by 
>>

Re: parallel sequence side-effect processor

2016-09-23 Thread Dragan Djuric
fluokitten's fold is MUCH better than (map f a b) because it does NOT 
create intermediate collections. just use (fold f a b) and it would fold 
everything into one thing (in this case nil). If f is a function with side 
effects, it will invoke them. No intermediate collection is created AND the 
folding would be optimized per the type of a.

On Friday, September 23, 2016 at 10:56:00 PM UTC+2, tbc++ wrote:
>
> How is fluokitten's fold any better than using seqs like (map f a b) 
> would? Both create intermediate collections.
>
> On Fri, Sep 23, 2016 at 11:40 AM, Dragan Djuric <drag...@gmail.com 
> > wrote:
>
>> If you do not insist on vanilla clojure, but can use a library, fold from 
>> fluokitten might enable you to do this. It is similar to reduce, but 
>> accepts multiple arguments. Give it a vararg folding function that prints 
>> what you need and ignores the first parameter, and you'd get what you asked 
>> for.
>>
>>
>> On Friday, September 23, 2016 at 7:15:42 PM UTC+2, Mars0i wrote:
>>>
>>> On Friday, September 23, 2016 at 11:11:07 AM UTC-5, Alan Thompson wrote:
>>>>
>>>> ​Huh.  I was also unaware of the run! function.​
>>>>
>>>> I suppose you could always write it like this:
>>>>
>>>> (def x (vec (range 3)))
>>>> (def y (vec (reverse x)))
>>>>
>>>> (run!
>>>>   (fn [[x y]] (println x y))
>>>>
>>>>   (map vector x y))
>>>>
>>>>
>>>>  > lein run
>>>> 0 2
>>>> 1 1
>>>> 2 0
>>>>
>>>>
>>> Yes.  But that's got the same problem.  Doesn't matter with a toy 
>>> example, but the (map vector ...) could be undesirable with large 
>>> collections in performance-critical code.
>>>
>>> although the plain old for loop with dotimes looks simpler:
>>>>
>>>> (dotimes [i (count x) ]
>>>>   (println (x i) (y i)))
>>>>
>>>>
>>>> maybe that is the best answer? It is hard to beat the flexibility of a 
>>>> a loop and an explicit index.
>>>>
>>>
>>> I agree that this is clearer, but it kind of bothers me to index through 
>>> a vector sequentially in Clojure.  We need indexing In Clojure because 
>>> sometimes you need to access a vector more arbitrarily.  If you're just 
>>> walking the vector in order, we have better methods--as long as we don't 
>>> want to walk multiple vectors in the same order for side effects.
>>>
>>> However, the real drawback of the dotimes method is that it's not 
>>> efficient for the general case; it could be slow on lists, lazy sequences, 
>>> etc. (again, on non-toy examples).  Many of the most convenient Clojure 
>>> functions return lazy sequences.  Even the non-lazy sequences returned by 
>>> transducers aren't efficiently indexable, afaik.  Of course you can always 
>>> throw any sequence into 'vec' and get out a vector, but that's an 
>>> unnecessary transformation if you just want to iterate through the 
>>> sequences element by element.
>>>
>>> If I'm writing a function that will plot points or that will write data 
>>> to a file, it shouldn't be a requirement for the sake of efficiency that 
>>> the data come in the form of vectors.  I should be able to pass in the data 
>>> in whatever form is easiest.  Right now, if I wanted efficiency for walking 
>>> through sequences in the same order, without creating unnecessary data 
>>> structures, I'd have to write the function using loop/recur.  On the other 
>>> hand, if I wanted the cross product of the sequences, I'd use doseq and be 
>>> done a lot quicker with clearer code.
>>>
>>
>
>
>

Re: parallel sequence side-effect processor

2016-09-23 Thread Dragan Djuric
If you do not insist on vanilla clojure, but can use a library, fold from 
fluokitten might enable you to do this. It is similar to reduce, but 
accepts multiple arguments. Give it a vararg folding function that prints 
what you need and ignores the first parameter, and you'd get what you asked 
for.

On Friday, September 23, 2016 at 7:15:42 PM UTC+2, Mars0i wrote:
>
> On Friday, September 23, 2016 at 11:11:07 AM UTC-5, Alan Thompson wrote:
>>
>> ​Huh.  I was also unaware of the run! function.​
>>
>> I suppose you could always write it like this:
>>
>> (def x (vec (range 3)))
>> (def y (vec (reverse x)))
>>
>> (run!
>>   (fn [[x y]] (println x y))
>>
>>   (map vector x y))
>>
>>
>>  > lein run
>> 0 2
>> 1 1
>> 2 0
>>
>>
> Yes.  But that's got the same problem.  Doesn't matter with a toy example, 
> but the (map vector ...) could be undesirable with large collections in 
> performance-critical code.
>
> although the plain old for loop with dotimes looks simpler:
>>
>> (dotimes [i (count x) ]
>>   (println (x i) (y i)))
>>
>>
>> maybe that is the best answer? It is hard to beat the flexibility of a 
>> loop and an explicit index.
>>
>
> I agree that this is clearer, but it kind of bothers me to index through a 
> vector sequentially in Clojure.  We need indexing In Clojure because 
> sometimes you need to access a vector more arbitrarily.  If you're just 
> walking the vector in order, we have better methods--as long as we don't 
> want to walk multiple vectors in the same order for side effects.
>
> However, the real drawback of the dotimes method is that it's not 
> efficient for the general case; it could be slow on lists, lazy sequences, 
> etc. (again, on non-toy examples).  Many of the most convenient Clojure 
> functions return lazy sequences.  Even the non-lazy sequences returned by 
> transducers aren't efficiently indexable, afaik.  Of course you can always 
> throw any sequence into 'vec' and get out a vector, but that's an 
> unnecessary transformation if you just want to iterate through the 
> sequences element by element.
>
> If I'm writing a function that will plot points or that will write data to 
> a file, it shouldn't be a requirement for the sake of efficiency that the 
> data come in the form of vectors.  I should be able to pass in the data in 
> whatever form is easiest.  Right now, if I wanted efficiency for walking 
> through sequences in the same order, without creating unnecessary data 
> structures, I'd have to write the function using loop/recur.  On the other 
> hand, if I wanted the cross product of the sequences, I'd use doseq and be 
> done a lot quicker with clearer code.
>
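For comparison, a minimal loop/recur sketch (mine, not part of the quoted 
message) that walks two sequences in lockstep for side effects, allocating no 
intermediate collection and working on any seqable inputs:

;; Stops when either input is exhausted; only the input seqs themselves are realized.
(defn run-pairwise! [f xs ys]
  (loop [xs (seq xs) ys (seq ys)]
    (when (and xs ys)
      (f (first xs) (first ys))
      (recur (next xs) (next ys)))))

(run-pairwise! println (range 3) (reverse (range 3)))
;; 0 2
;; 1 1
;; 2 0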



Re: Preparing a proposal for EuroClojure presentation about Clojure and GPU, high-performance computing - suggestions welcome

2016-07-14 Thread Dragan Djuric
noted

On Thursday, July 14, 2016, Ashish Negi  wrote:

> Not any specific request.. but
> i would be highly interested in showing ML in clojure landscape..
> showing something end to end.. debugging and optimization tips would be
> great.
>
> I will be waiting..
>



Re: Preparing a proposal for EuroClojure presentation about Clojure and GPU, high-performance computing - suggestions welcome

2016-07-13 Thread Dragan Djuric
Hi Lee,

Although converting a CPU system to GPU is not a straightforward task, I 
think that the theme I already intend to discuss would be highly beneficial 
(although not directly related) regarding the GP thing that you are asking 
for - Markov Chain Monte Carlo simulation. And, I sped it up tens of 
thousands of times (depending on the actual problem, it could be more, but 
also less!) compared to the state-of-the-art software (JAGS, Stan). Now, 
since the talk is not long enough to offer time for deep detail digging, 
I'll also look for examples that are much easier for the audience to digest 
- in that regard vectors and matrices are much more appropriate.
Also, I intend to take special care of the stuff that is often overlooked: 
how to generally structure the code and do data manipulation in an 
efficient way on the three fronts that you need to cover: JVM, native CPU, 
and GPU.
I'll be glad to give you pointers if you'd like to try implementing a GP 
hello world in ClojureCL and Neanderthal.

On Wednesday, July 13, 2016 at 11:41:28 PM UTC+2, Lee wrote:
>
> Dragan,
>
>
> I would personally be interested in anything you might show about using 
> GPUs to speed up up genetic programming in Clojure. 
>
>
> A fair bit has been done using GPUs for GP (some can be found by searching 
> for GPU here 
> <http://liinwww.ira.uka.de/bibliography/Ai/genetic.programming.html>, 
> but as far as I know, none of it in Clojure.
>
>
> It would be wonderful to see a minimal example of how to take a minimal GP 
> system (I'd be happy to provide code) and to exploit GPUs to do bigger runs 
> more quickly.
>
>
> -Lee
>
> On Wednesday, July 13, 2016 at 4:33:37 PM UTC-4, Dragan Djuric wrote:
>>
>> I'm preparing a presentation proposal for EuroClojure 2016 about Clojure 
>> and GPU computing, high-performance computing, data analysis, and machine 
>> learning. If you are interested in that area, I am open to suggestions 
>> about specific stuff that you would like to be covered (regardless of 
>> whether you plan to attend the conference itself), so I can better tailor 
>> the proposal to what would potentially be most interesting to the audience. 
>> The tools/libraries that the (proposed) talk will be based on are 
>> uncomplicate.org (clojurecl, neanderthal, bayadera), but I can also 
>> cover other aspects of the topic.
>>
>



Preparing a proposal for EuroClojure presentation about Clojure and GPU, high-performance computing - suggestions welcome

2016-07-13 Thread Dragan Djuric
I'm preparing a presentation proposal for EuroClojure 2016 about Clojure 
and GPU computing, high-performance computing, data analysis, and machine 
learning. If you are interested in that area, I am open to suggestions 
about specific stuff that you would like to be covered (regardless of 
whether you plan to attend the conference itself), so I can better tailor 
the proposal to what would potentially be most interesting to the audience. 
The tools/libraries that the (proposed) talk will be based on are 
uncomplicate.org (clojurecl, neanderthal, bayadera), but I can also cover 
other aspects of the topic.



Re: [ANN] Neanderthal 0.6.0: new support for AMD, Nvidia, and Intel GPUs on Linux, Windows and OS X (fast matrix library)

2016-07-07 Thread Dragan Djuric
Neanderthal 0.7.0 released: Nvidia, AMD, and Intel GPU on Linux, Windows, 
and OS X: http://neanderthal.uncomplicate.org #matrix #Clojure #GPU #GPGPU

On Monday, May 23, 2016 at 9:57:00 PM UTC+2, Dragan Djuric wrote:
>
> This is a major release of Neanderthal, a fast native & GPU matrix library:
>
> In this release, spotlight is on the new GPU engine, that:
>
> * Works on all three major hardware platforms: AMD, Nvidia, and Intel
> * Works on all three major operating systems: Linux, Windows, and OS X
> * Is even faster, so it is now more than 1000x faster than the optimized 
> Java libraries for 8192x8192 matrix multiplication.
>
> Version 0.6.0 is in clojars
> Documentation and the tutorials can be found at the usual place: 
> http://neanderthal.uncomplicate.org
>



[ANN] Neanderthal 0.6.0: new support for AMD, Nvidia, and Intel GPUs on Linux, Windows and OS X (fast matrix library)

2016-05-23 Thread Dragan Djuric
This is a major release of Neanderthal, a fast native & GPU matrix library:

In this release, spotlight is on the new GPU engine, that:

* Works on all three major hardware platforms: AMD, Nvidia, and Intel
* Works on all three major operating systems: Linux, Windows, and OS X
* Is even faster, so it is now more than 1000x faster than the optimized 
Java libraries for 8192x8192 matrix multiplication.

Version 0.6.0 is in clojars
Documentation and the tutorials can be found at the usual place: 
http://neanderthal.uncomplicate.org
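For a taste of the API being benchmarked, a minimal sketch based on the 
getting-started tutorial (dge builds a native, column-major double matrix and 
mm multiplies two matrices; exact signatures are in the linked docs):

(require '[uncomplicate.neanderthal.core :refer [mm]]
         '[uncomplicate.neanderthal.native :refer [dge]])

;; 2x3 times 3x2, entries given in column-major order; mm returns a new 2x2 matrix.
(let [a (dge 2 3 [1 2 3 4 5 6])
      b (dge 3 2 [1 2 3 4 5 6])]
  (mm a b))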



Re: [ANN] ClojureCL now supports Linux, Windows and OS X (GPGPU and high performance parallel computing)

2016-05-20 Thread Dragan Djuric
OpenCL in Action even contains a chapter on (home-made) FFT. That might be 
a first step if you've never done this.

On Friday, May 20, 2016 at 8:32:27 AM UTC+2, Terje Dahl wrote:
>
> YESS!  
> Just what I wanted to try.  Now to figure out how to use this to do lots 
> of FFT (on audio) in parallel.
>
>
> On Wednesday, May 18, 2016 at 1:37:50 AM UTC+2, Dragan Djuric wrote:
>>
>> http://clojurecl.uncomplicate.org
>>
>>
>> https://www.reddit.com/r/Clojure/comments/4jtqhm/clojurecl_gpu_programming_now_works_on_linux/
>>
>> ClojureCL is a library for OpenCL high-performance numerical computing 
>> that supports GPU and CPU optimizations.
>>
>> ClojureCL supports OpenCL 2.0 and 1.2 standards.
>>
>> Major news is that now it works out of the box on all major operating 
>> systems (if you have installed the drivers for your GPU, of course).
>>
>>



[ANN] ClojureCL now supports Linux, Windows and OS X (GPGPU and high performance parallel computing)

2016-05-17 Thread Dragan Djuric
http://clojurecl.uncomplicate.org

https://www.reddit.com/r/Clojure/comments/4jtqhm/clojurecl_gpu_programming_now_works_on_linux/

ClojureCL is a library for OpenCL high-performance numerical computing that 
supports GPU and CPU optimizations.

ClojureCL supports OpenCL 2.0 and 1.2 standards.

Major news is that now it works out of the box on all major operating 
systems (if you have installed the drivers for your GPU, of course).
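As a first smoke test, a minimal sketch that only assumes the platforms and 
devices functions from uncomplicate.clojurecl.core, as shown in the 
getting-started guide:

;; Enumerate the OpenCL platforms and count the devices each one exposes,
;; just to check that the installed drivers are visible from Clojure.
(require '[uncomplicate.clojurecl.core :refer [platforms devices]])

(doseq [p (platforms)]
  (println (count (devices p)) "OpenCL device(s) on platform" p))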



Re: Porting Clojure to Native Platforms

2016-04-26 Thread Dragan Djuric
 
On Tuesday, April 26, 2016 at 5:53:42 AM UTC+2, puzzler wrote:
>
> On Mon, Apr 25, 2016 at 1:50 PM, Timothy Baldridge  > wrote:
>
>> As someone who has spent a fair amount of time playing around with such 
>> things, I'd have to say people vastly misjudge the raw speed you get from 
>> the JVM's JIT and GC. In fact, I'd challenge someone to come up with a 
>> general use, dynamic language that is not based on the JVM and comes even 
>> close to the speed of Clojure. 
>>
>>
> Julia reportedly outperforms Clojure (I haven't benchmarked it myself).  
> Julia is designed primarily for numeric computation, but unlike a lot of 
> other "math languages" like Octave, R, and Matlab whose general programming 
> constructs are horribly slow, the overall general-purpose machinery of 
> Julia (function calls, looping, data structures, dispatch, etc.) is said to 
> be quite fast.  
>
> Speaking of numeric computation, it is, in my opinion, a real weak point 
> of Clojure.  There is a huge perf. penalty for boxed numbers, and since 
> anything in a Clojure data structure gets boxed, it's rather difficult to 
> work around Clojure's poor numeric performance for anything more 
> algorithmically complicated than a super-simple loop.  A number of dynamic 
> languages use tagged numbers (primitive types with a couple bits set aside 
> to mark the type of the number), and these are going to do much better than 
> Clojure for a wide variety of math-oriented computational tasks.  I've 
> benchmarked some specific programs in Clojure vs Racket and Racket tended 
> to come out ahead when there was a lot of arithmetic. 
>

On the other hand, Julia relies on external native libraries for 
performance (BLAS, GPU, whatever). The speed of those libraries is not due 
to the language (usually C or ASM) but to the fact that they use algorithms 
that are heavily optimized for each hardware architecture. If you'd write 
vector and matrix functions in plain everyday C, you'd get the same speed 
as with Java arrays in Clojure or Java, sometimes even slower. In Clojure 
and Java, you can link to those same libraries that Julia links to, with 
the same tradeoffs.



Re: [ANN] Quil 2.4.0 and improved live editor

2016-03-24 Thread Dragan Djuric
Thank you for keeping this fantastic library alive :)

On Thursday, March 24, 2016 at 9:03:42 PM UTC+1, Nikita Beloglazov wrote:
>
> Happy to announce Quil 2.4.0 release.
>
> Quil is a Clojure/ClojureScript library for creating interactive drawings 
> and animations.
>
> The release available on clojars: https://clojars.org/quil. List of 
> changes:
>
>- Updated cheatsheet. Thanks to @SevereOverfl0w.
>- Added support for some new processing functions: pixel-density, 
>display-density, clip, no-clip, key-modifiers.
>- Fixes: #170, #179.
>- Upgraded to ProcessingJS 1.4.16 and Processing 3.0.2.
>- Drop support of Clojure 1.6.0.
>- Migrated from cljx to cljc and various refactorings to make Quil 
>compile with self-hosted cljs.
>
> Documentation on http://quil.info has been updated as well.
>
> *Live editor*
>
> Editor  on quil.info was revamped and 
> migrated to self-hosted ClojureScript! That means that now you can evaluate 
> code ~instantly in browser and even reevaluate parts of code. Editor 
> provides following features:
>
>
>- full client-side compilation including macros from quil.core;
>- partial evaluation: use Ctrl+Enter to evaluate selected code or form 
>under cursor
>- warning and error messages;
>- easy sketch sharing via URL;
>
> Feedback and bug reports are welcome! Feel free to reply to this thread or 
> file a bug in Quil repo . 
>
>
> Happy hacking!
>
> Nikita
>
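For anyone who has not tried it yet, a minimal sketch of a Quil program (my 
own example, not from the release notes): a setup and a draw function wired 
into a window by q/defsketch.

(ns hello.quil-example
  (:require [quil.core :as q]))

(defn setup []
  (q/frame-rate 30)
  (q/background 240))

(defn draw []
  ;; draw a small circle wherever the mouse is
  (q/ellipse (q/mouse-x) (q/mouse-y) 20 20))

(q/defsketch hello-circles
  :title "Hello Quil"
  :size [300 300]
  :setup setup
  :draw draw)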



Re: Similar lisps and emacs reimplementations?

2016-03-19 Thread Dragan Djuric
I understand your position (you have bosses that you have to answer to), but I 
would like to thank you for reminding me to thank Rich for choosing the license 
that is unfriendly to software patent litigators. Here is how I look at it: 
there is wonderful free software that lots of people contribute to. Even if 
you are a corporation, you can use it to your benefit. But evil corporations 
that promulgate software patents should be screwed!



Re: New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-16 Thread Dragan Djuric
Christopher, Bobby, Mars0i,

I agree with you! core.matrix works fine for what you need and it is quite 
sane and wise to continue happily using it.

Whether you want to touch Neanderthal or not is up to you, and it is the same 
to me one way or the other - I earn exactly $0 if you do and lose the same 
amount if you don't. I haven't ever said anywhere that you (or anyone else) 
should switch from core.matrix to Neanderthal, so I am quite puzzled by 
your insistence on telling me why you wouldn't. I created Neanderthal 
because I needed state of the art capabilities for my other projects, and 
open-sourced it because I know that there are other people (no matter if 
only 1%, 10%, or 100%) who need exactly that. My main goal is not to 
convert everyone (or even anyone) to it, but to attract like minded people 
*who need the same stuff* to use it, and hopefully contribute to it, or 
build other useful open-source libraries on top of it that *I* can use too. 

It seems to me that your main argument is that, whether good or not, 
core.matrix is a unification point that somehow makes the ecosystem and 
community thrive, while you'd like to avoid the "lisp curse" of fragmented 
libraries. That's a plausible hypothesis (and quite logical), but if we 
look around we could see it is simply not true. Often it is the opposite, 
even in Clojure land. 

How many competing web libraries, SQL bindings, network libs, and various 
other infrastructure projects are there for Clojure? Tons! There are even 
several competing wrappers for React! And - all of them thrive! They build 
useful stuff on top of those that often match what is available elsewhere. 
And no one is going around scolding them for their heresy.

We can see a similar situation even in the state-of-the-art tools 
elsewhere. BLAS itself has at least 2 competitive, mutually incompatible, 
state of the art open source implementations, and tons of proprietary ones. 
In Java, there are tons of competitive libraries, ditto for Python, R, etc. 
And all of them have several tools in each area that have arisen out of 
thousands of experiments and thrived.

Clojure's "matrix" (or NDarray, or whatever we should call it) space seems 
to be unified, or so I've heard. The reasoning is that it will produce a 
thriving ecosystem of data analysis, machine learning, numerical computing, 
and other tools. But what are the results by now? Here and there, there is 
some upstart project or a library that is a good learning exercise for 
their author(s), but otherwise completely unusable for anything beyond 
that. Even the tools that had received some love from their authors are far 
from anything available in other languages. I asked a few times to be 
pointed to any competitive data-science/ML library in Clojure, and no one 
stepped in with examples.

The most telling example of this state is maybe the most serious of those 
libraries - Incanter. Six years ago, it looked like an interesting project that 
was actively maintained and somewhat usable. If not comparable to the 
alternatives in other languages, it was at least actively developed, with 
potential and bits of "Clojure's awesomeness will solve everything" promise. 
But, the author left for greener pastures long ago, and Incanter started 
stalling. A few maintainers helped it stay afloat for some time, until 
the main focus of the project became the porting to the one true lord and 
savior core.matrix. That took a lot of time and resources, and did it help? 
Judging by the commit history (master and develop), the main thing that is 
now happening for Incanter is the occasional bump of the core.matrix dependency 
to a new version.

I am glad that Incanter, however dead it is, is still usable for (many?) 
people, or that core.matrix is usable and used in closed-source projects. 
But that is obviously not what I need, and I know better than to follow the 
peer pressure to accept the dogma. I do not mind being reminded every time 
about our true lord and savior core.matrix, but please understand 
me when I reserve my right not to believe in it.

PS. I really think that this discussion, however heated it may be, is still 
a technical discussion, so whatever I say about various technologies is 
not meant to say anything bad about their authors and users. I think these 
are amazing people who give their work for free so we can all freely use it 
and learn from it.




On Wednesday, March 16, 2016 at 12:55:40 AM UTC+1, Christopher Small wrote:
>
> Ditto; same boat as Mars0i.
>
> I think the work Dragan has done is wonderful, and that he bears no 
> responsibility to implement the core.matrix api himself. I'm also 
> personally not interested in touching it without core.matrix support, 
> because similarly for me, the performance/flexibility tradeoff isn't worth 
> it. I wish I had time to work on this myself, since it would be an awesome 
> contribution to the community, but I just don't have the time right now 
> with everything else on 

Re: Neanderthal 0.5.0 - much easier installation and out of the box Mac OS X support

2016-03-15 Thread Dragan Djuric
Forgot to add the link: 
http://neanderthal.uncomplicate.org/articles/getting_started.html

On Tuesday, March 15, 2016 at 6:26:29 PM UTC+1, Dragan Djuric wrote:
>
> Most notable new features:
>
>- Streamlined dependencies: no longer need 2 dependencies in project 
>files. The dependency on uncomplicate/neanderthal is enough
>- Comes with Mac OS X build out of the box. No need even for external 
>ATLAS.
>- release and with-release moved from ClojureCL to uncomplicate/commons
>- Support for Fluokitten's fmap!, fmap, fold, foldmap, op...
>
>



Re: [ANN] ClojureCL - OpenCL 2.0 Clojure library (GPGPU and high performance parallel computing)

2016-03-15 Thread Dragan Djuric
A few minor changes so Neanderthal can work better. New version of 
ClojureCL 0.5.0 has been released to Clojars. 
http://clojurecl.uncomplicate.org

On Tuesday, March 15, 2016 at 1:32:34 AM UTC+1, Dragan Djuric wrote:
>
> New version of ClojureCL 0.4.0 has been released to Clojars. 
> http://clojurecl.uncomplicate.org
>
> On Wednesday, October 21, 2015 at 7:18:27 PM UTC+2, Dragan Djuric wrote:
>>
>> New version of ClojureCL 0.3.0 is out in Clojars. 
>> http://clojurecl.uncomplicate.org
>>
>> On Wednesday, June 17, 2015 at 4:59:02 PM UTC+2, Dragan Djuric wrote:
>>>
>>> Certainly, but that is not a priority, since I do not use (nor need) 
>>> OpenGL myself. I would be very interested to include contributions as soon 
>>> as I get them, and the foundations are already there (in JOCL), so it is 
>>> not as hard as it might look at first glance - I just do not have time to be 
>>> sure that I do it properly now.
>>>
>>> On Wednesday, June 17, 2015 at 4:54:45 PM UTC+2, Bobby Bobble wrote:
>>>>
>>>> superb!
>>>>
>>>> are there any plans to include opengl context sharing for visualisation 
>>>> ?
>>>>
>>>



Neanderthal 0.5.0 - much easier installation and out of the box Mac OS X support

2016-03-15 Thread Dragan Djuric


Most notable new features:

   - Streamlined dependencies: no longer need 2 dependencies in project 
   files. The dependency on uncomplicate/neanderthal is enough
   - Comes with Mac OS X build out of the box. No need even for external 
   ATLAS.
   - release and with-release moved from ClojureCL to uncomplicate/commons
   - Support for Fluokitten's fmap!, fmap, fold, foldmap, op...
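To give a feel for the last item, a minimal sketch using plain Clojure 
collections (the release implements the same Fluokitten protocols for 
Neanderthal's vectors and matrices; see the Fluokitten and Neanderthal docs 
for the exact behavior on native structures):

(require '[uncomplicate.fluokitten.core :refer [fmap fold]])

(fmap inc [1 2 3])   ;; => [2 3 4]  - maps a function, keeps the structure type
(fold + 0 [1 2 3])   ;; => 6        - reduces the structure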



Re: [ANN] ClojureCL - OpenCL 2.0 Clojure library (GPGPU and high performance parallel computing)

2016-03-14 Thread Dragan Djuric
New version of ClojureCL 0.4.0 has been released to Clojars. 
http://clojurecl.uncomplicate.org

On Wednesday, October 21, 2015 at 7:18:27 PM UTC+2, Dragan Djuric wrote:
>
> New version of ClojureCL 0.3.0 is out in Clojars. 
> http://clojurecl.uncomplicate.org
>
> On Wednesday, June 17, 2015 at 4:59:02 PM UTC+2, Dragan Djuric wrote:
>>
>> Certainly, but that is not a priority, since I do not use (nor need) 
>> OpenGL myself. I would be very interested to include contributions as soon 
>> as I get them, and the foundations are already there (in JOCL), so it is 
>> not as hard as it might look at first glance - I just do not have time to be 
>> sure that I do it properly now.
>>
>> On Wednesday, June 17, 2015 at 4:54:45 PM UTC+2, Bobby Bobble wrote:
>>>
>>> superb!
>>>
>>> are there any plans to include opengl context sharing for visualisation ?
>>>
>>



Re: New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-14 Thread Dragan Djuric
At least for the JNI part - there are many Java libraries that generate 
JNI, but my experience is that it is easier to just write JNI by hand (it 
is simple if you know what you are doing) than to learn to use one of 
those, usually poorly documented, tools.

As for code generation - OpenCL as a standard is moving towards SPIR 
intermediate format in recent releases (I think they already provided an 
implementation for C++ translation), which looks to me as a kind of a 
native bytecode, whose purpose is just that - to make possible for higher 
level languages to generate native kernels for CPUs, GPUs and various 
hardware accelerators. Unfortunately, language compilers is not an area 
that interests me, so I am unable to provide that for Clojure.

On Monday, March 14, 2016 at 6:46:53 PM UTC+1, raould wrote:
>
> Awesome would be a way for Cojure to generate C (perhaps with e.g. 
> Boehm–Demers–Weiser GC to get it kicked off) and JNI bindings all 
> automagically. 
>



Re: New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-14 Thread Dragan Djuric

>
>
> Its a fact that the JVM is not the state of the art for numerical 
> computing, including big swaths of data science/machine learning. There is 
> 0 chance of this changing until at least Panama and Valhalla come to 
> fruition (5 year timeline). 
>

I agree, but I would not dismiss even today's JVM (+ JNI). Python and R are 
even worse in that regard, but they have a huge piece of the data-science pie, in my 
opinion, because of their pragmatism. In Java land, almost all people avoid 
native like the plague, while Python and R people almost exclusively use 
native for number crunching while building a pleasant (to them :) user 
interface around that.

I too shunned JNI as a dirty, clunky, ugly slime, and believed that pure 
Java is fast enough, until I came across problems that are slow even in 
native land, and take an eternity in Java. So I started measuring more 
diligently and, instead of following the gossip and avoiding JNI, I took 
time to learn that stuff and saw that it is not that ugly, and most of the 
ugliness could be hidden from the user, so I am not that pessimistic about 
the JVM. It is a good mule :) 



Re: New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-14 Thread Dragan Djuric
Thank you for the encouragement, Sergey.

As I mentioned in one of the articles, decent vectorized/GPU support is 
not a solution on its own. It is a foundation for writing your own custom 
GPU or SSE algorithms. For that, you'll have to drop to the native level 
for some parts of the code, which is fortunately approachable in a 
not-so-un-clojure way through ClojureCL: http://clojurecl.uncomplicate.org

On Monday, March 14, 2016 at 5:46:40 PM UTC+1, Sergey Didenko wrote:
>
> Dragan, thank you for your library and detailed explanations!
>
> Beeing close to state of the art FORTAN libraries and GPU is important for 
> long calculations. 
>
> You give me hope to use Clojure more for data science. Last time when I 
> benchmarked Incanter's  vs Octave I decided to pause using Clojure for data 
> science and move to other languages.
>



Re: New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-14 Thread Dragan Djuric


On Monday, March 14, 2016 at 4:56:19 PM UTC+1, tbc++ wrote:
>
> Just a side comment, Dragan, if you don't want to be compared against some 
> other tech, it might be wise to not make the subtitle of a release "X times 
> faster than Y". Make the defining feature of a release that it's better 
> than some other tech, and the proponents of that tech will probably start 
> to get a bit irritated. 
>

I agree with you, and I am not against Neanderthal being compared to 
Vectorz or core.matrix, or Clatrix or Y. I only commented that it is funny 
that every time Neanderthal is mentioned anywhere, Mike jumps in with the 
comment that I should step off the heretic work immediately and convert 
Neanderthal to the only true core.matrix api. I am sorry if it came out as 
if I was offended by that, *which I am not*.
 

> And I think the distinction made here about floats vs doubles is very 
> important. We can all construct benchmarks that show tech X being faster 
> than tech Y. But if I am evaluating those things I really want to know why 
> X is faster than Y. I want to know what tradeoffs were made, what 
> restrictions I have in using the faster tech, etc. Free performance gains 
> are very rare, so the moment I see "we are X times faster" I immediately 
> start looking for a caveats section or a rationale on why something is 
> faster. Not seeing that, or benchmark documentation, I almost immediately 
> consider the perf numbers to be hyperbole. 
>
>
Sure.
1) The benchmark page links to the benchmark source code on github, and all 
3 projects are open-source, so I expect any potential user would be wise 
enough to evaluate libraries himself/herself. I took care to mention 
drawbacks more times than is usual. I do not want to give people false hope.
2) The reason I am posting the benchmarks is mainly that in Java (and 
Clojure) land there is much superstition about what can be done and what 
performance to expect. People read some comment somewhere and tend to 
either expect superspeed for free (and be disappointed when their naive 
approach doesn't work well) or to dismiss an approach because "it can't be 
done" or "calling the native library is slow" (it isn't, if done right).



Re: New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-14 Thread Dragan Djuric

>
>
> 2) I disagree with. Most real world data science is about data 
> manipulation and transformation, not raw computation. 1% of people need to 
> optimise the hell out of a specific algorithm for a specific use case, 99% 
> just want convenient tools and the ability to get an answer "fast enough".
>

And most of those people are quite happy with R and Python and don't care 
about Clojure. I, on the other hand, am perhaps in the other 1%, but my 
needs are also valid (IMHO), especially when I am prepared to do the 
required work to satisfy them myself.
 

> But if you only offer native and GPU dependencies then you aren't really 
> offering an agnostic API. Won't even work on my machine. could you add 
> Windows support? Maybe you want to add pure JVM implementations as well? 
> And for Clojure data structures? What about ClojureScript support? Sparse 
> arrays? Oh, could you maybe support n-dimensional arrays? Datasets?
>

Native and GPU are already available *now* and work well. Windows support 
is there, you (or someone else) need to compile the Windows binaries and 
contribute them. Pure JVM implementation is quite easy to add, since the 
majority of Neanderthal is not only pure Java, but  Clojure; the only thing 
I haven't implement (yet) is a pure Java BLAS engine. I already wrote a 
(much harder) GPU BLAS implementation from scratch, so compared to that 
pure Clojure BLAS is like a walk in the park. If/when I need those, the 
infrastructure is already there to support it quite easily, without API 
changes. ClojureScript support is more complex, but mostly a matter of 
putting work weeks into it, rather than as a technical challenge, since by 
now I know what causes most problems and how to solve them. Of course, I 
won't do that just as an exercise, but only when the need arises (if ever). 
Of course, I'll welcome quality PRs.
 

> Interesting. I don't recall you posting such benchmarks, apologies if I 
> missed these.
>
> I'd be happy to benchmark myself if I could get Neanderthal to build. I 
> don't believe 9x though... this operation is usually bound by memory 
> bandwidth IIRC
>

OK, to be more precise it is around 8.5x with floats. With double precision 
(which I do not need, and probably neither do you in your work on deep 
learning) it  is "just" 4x :)
 

>  
>
>>  
>>
>>> If anyone has other ideas / a better strategy I'd love to hear, and I 
>>> welcome a good constructive debate. I'm not precious about any of my own 
>>> contributions. But I do genuinely think this is the best way forward for 
>>> Clojure data science overall, based on where we are right now.
>>>
>>
>> I would like to propose a strategy where more love is given to the actual 
>> libraries (incanter is rather indisposed and stagnant IMO) that solve 
>> actual problems instead of trying to unify what does not exist (yet!). 
>> Then, people will use what works best, and what does not work will not be 
>> important. That's how things go in open-source...
>>  
>>
>
> core.matrix already exists, is widely used and already unifies several 
> different implementations that cover a wide variety of use cases. It 
> provides an extensible toolkit that can be used either directly or by 
> library / tool implementers. It's really very powerful, and it's solving 
> real problems for a lot of people right now. It has the potential to make 
> Clojure one of the best languages for data science.
>

I agree that this sounds great, but, come on... It sounds like a marketing 
pitch. Clojure currently doesn't offer a single data science library that 
would be even a distant match to the state of the art. Nice toys - sure. I 
hope you didn't mean Incanter?...
 

> Don't get me wrong, I think Neanderthal is a great implementation with a 
> lot of good ideas. I'd just like to see it work well *with* the core.matrix 
> API, not be presented as an alternative. The Clojure data science ecosystem 
> as a whole will benefit if we can make that work.
>

That is up to people that would find that useful. I support them. 

BTW, each Neanderthal structure is a sequence, so, technically, it already 
supports core.matrix.
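A minimal sketch of that point, assuming dv from uncomplicate.neanderthal.native 
(since the structures are seqable, ordinary sequence functions apply):

(require '[uncomplicate.neanderthal.native :refer [dv]])

(seq (dv 1 2 3))        ;; => (1.0 2.0 3.0)
(reduce + (dv 1 2 3))   ;; => 6.0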

BBTW, I wish only the best to core.matrix, and think you do a great work 
with it. Also, I suppose we are leading a technical discussion here, so I 
am not getting you wrong and I appreciate your appreciation of Neanderthal 
technical aspects.


Re: New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-14 Thread Dragan Djuric

>
>
>>
> There is a set of BLAS-like API functions in core.matrix already. See: 
> https://github.com/mikera/core.matrix/blob/develop/src/main/clojure/clojure/core/matrix/blas.cljc
>

GitHub history says they were added 7 days ago. Never mind that they just 
delegate, so the only BLAS-y thing is the 4 method names taken out of 
Neanderthal (BLAS has a bit more stuff than that), but why did you reinvent 
the wheel instead of just creating a core.matrix (or Vectorz) implementation of 
Neanderthal's API? 
 

> Having said that, I don't personally think the BLAS API is a particularly 
> good fit for Clojure (it depends on mutability, and I think it is a pretty 
> clumsy design by modern API standards). But if you simply want to copy the 
> syntax, it's certainly trivial to do in core.matrix.
>

If you look at Neanderthal's API you'll see that I took great care to 
make it fit into Clojure, and I think I succeeded. 
Regarding mutability:
1) Neanderthal provides both mutable and pure functions (see the sketch below)
2) Trying to do numeric computing without mutability (and primitives) for 
anything other than toy problems is... well, sometimes it is better to plant a 
Sequoia seed, wait for the tree to grow, cut it, make an abacus and compute 
with it... 
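To make point 1) concrete, a minimal sketch contrasting a pure call with its 
destructive counterpart (axpy/axpy! names are from the Neanderthal core API; 
exact signatures are in its documentation):

(require '[uncomplicate.neanderthal.core :refer [axpy axpy!]]
         '[uncomplicate.neanderthal.native :refer [dv]])

(let [x (dv 1 2 3)
      y (dv 10 20 30)]
  (axpy 2.0 x y)    ;; pure: returns a fresh vector holding 2x + y, y untouched
  (axpy! 2.0 x y))  ;; destructive: overwrites y with 2x + y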


> An important point to note is that they don't do the same thing at all: 
> core.matrix is an API providing an general purpose array programming 
> abstraction with pluggable implementation support. Neanderthal is a 
> specific implementation tied to native BLAS/ATLAS. They should ideally work 
> in harmony, not be seen as alternatives.
>

* Neanderthal has an agnostic api and it is not in any way tied to 
BLAS/ATLAS *
Neanderthal also has pluggable implementation support - and it already 
provides two high-performance implementations that elegantly unify two very 
different *hardware* platforms: CPU and GPU. And it does it quite 
transparently (more about that can be read here: 
http://neanderthal.uncomplicate.org/articles/tutorial_opencl.html)

>
> Neanderthal is more closely comparable to Vectorz, which *is* a matrix 
> implementation (and I think it matches or beats Neanderthal in performance 
> for virtually every operation *apart* from large matrix multiplication for 
> which ATLAS is obviously fantastic for).
>
 
You think that without having tried it. I tried it, and *Neanderthal is 
faster for virtually ALL operations, even 1D*. Yesterday I did a quick 
measurement of asum (a 1D vector operation), for example, and Neanderthal was, 
if I remember correctly, *9x faster than Vectorz in that simple summing*. 
I even pointed out to you that Neanderthal is faster in ALL those cases 
when you raised that argument the last time, but you seem to ignore it.
 

> If anyone has other ideas / a better strategy I'd love to hear, and I 
> welcome a good constructive debate. I'm not precious about any of my own 
> contributions. But I do genuinely think this is the best way forward for 
> Clojure data science overall, based on where we are right now.
>

I would like to propose a strategy where more love is given to the actual 
libraries (incanter is rather indisposed and stagnant IMO) that solve 
actual problems instead of trying to unify what does not exist (yet!). 
Then, people will use what works best, and what does not work will not be 
important. That's how things go in open-source...
 



Re: New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-14 Thread Dragan Djuric

>
>
> Please remember that core.matrix is primarily intended as an API, not a 
> matrix implementation itself. The point is that different matrix 
> implementations can implement the standard protocols, and users and library 
> writers can then code to a standard API while maintaining flexibility to 
> use the implementation that best suits their use cases (of which 
> Neanderthal could certainly be one). 
>

Exactly the same could be said about Neanderthal. It also has an abstract 
API that could be implemented quite flexibly, and is even better than 
core.matrix (IMO, of course, so I won't try to argue about that since it is 
a matter of personal preferences and needs and arguing about that leads 
nowhere).
 

>
>> I understand your emotions about core.matrix, and I empathize with you. I 
>> support your contributions to Clojure open-source space, and am glad if 
>> core.matrix is a fine solution for a number of people. Please also 
>> understand that it is not a solution to every problem, and that it can also 
>> be an obstacle, when it falls short in a challenge.
>>
>
> Interested to understand that statement. Please let me know what use cases 
> you think don't work for core.matrix. A lot of people have worked on the 
> API to make it suitable for a large class of problems, so I'm interested to 
> know if there are any we have missed. 
>
> For any point you have here, I'm happy to either:
> a) Explain how it *does* work
> b) Take it as an issue to address in the near future.
>

I do not say that core.matrix is a bad API. I just think BLAS is even more 
mature and battle-tested.

 

>  
>
>>  
>>
>>> In the absence of that, we'll just need to develop separate BLAS 
>>> implementations for core.matrix. 
>>>
>>
>> I support you. If you do a good job, I might even learn something now and 
>> improve Neanderthal.
>>  
>>
>>> Would be great if you could implement the core.matrix protocols and 
>>> solve this issue. It really isn't much work, I'd even be happy to do it 
>>> myself if Neanderthal worked on Windows (last time I tried it doesn't).
>>>
>>
>> I am happy that it is not much work, since it will be easy for you or 
>> someone else to implement it ;) Contrary to what you said on slack, I am 
>> *not against it*. I said that many times. Go for it. The only thing that I 
>> said is that *I* do not have time for that nor I have any use of 
>> core.matrix.
>>
>> Regarding Windows - Neanderthal works on Windows. I know this because a 
>> student of mine compiled it (he's experimenting with an alternative GPU 
>> backend for Neanderthal and prefers to work on Windows). As I explained to 
>> you in the issue that you raised on GitHub last year, you have to install 
>> ATLAS on your machine, and Neanderthal has nothing un-Windowsy in its code. 
>> There is nothing Neanderthal-specific there, it is all about compiling 
>> ATLAS. Follow any ATLAS or NumPy + ATLAS or R + ATLAS guide for 
>> instructions. Many people did that installation, so I doubt it'd be a real 
>> obstacle for you.
>>
>
> Every time I have tried it has failed on my machine. I'm probably doing 
> something wrong, but it certainly isn't obvious how to fix it. Can you 
> point me to a canonical guide and binary distribution that works "out of 
> the box"? 
>

Googling "numpy atlas install windows" gave me thousands of results, here 
is the first: 
http://www.scipy.org/scipylib/building/windows.html#atlas-and-lapack 



Re: New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-13 Thread Dragan Djuric
On Monday, March 14, 2016 at 12:28:24 AM UTC+1, Mikera wrote:

> It would be great if Neanderthal simply implemented the core.matrix 
> protocols, then people could use it as a core.matrix implementation for 
> situations where it makes sense. I really think it is an architectural 
> dead-end for Neanderthal to develop a separate API. You'll simply get less 
> users for Neanderthal and fragment the Clojure library ecosystem which 
> doesn't help anyone.
>

Mike, I explained many times in detail what's wrong with core.matrix, 
and I think it is a bit funny that you jump in every time Neanderthal is 
mentioned with the same dreams about core.matrix, without even trying 
Neanderthal, or discussing the issues that I raised. Every time your answer 
is that core.matrix is fine for *YOUR* use cases. That's fine with me and I 
support your choice, but core.matrix fell short for *MY* use cases, and 
after detailed inspection I decided it was unsalvageable. If I thought I 
could improve it, it would have been easier for me to do that than to spend 
my time fiddling with JNI and GPU minutiae.

I understand your emotions about core.matrix, and I empathize with you. I 
support your contributions to Clojure open-source space, and am glad if 
core.matrix is a fine solution for a number of people. Please also 
understand that it is not a solution to every problem, and that it can also 
be an obstacle, when it falls short in a challenge.
 

> In the absence of that, we'll just need to develop separate BLAS 
> implementations for core.matrix. 
>

I support you. If you do a good job, I might even learn something now and 
improve Neanderthal.
 

> Would be great if you could implement the core.matrix protocols and solve 
> this issue. It really isn't much work, I'd even be happy to do it myself if 
> Neanderthal worked on Windows (last time I tried it doesn't).
>

I am happy that it is not much work, since it will be easy for you or 
someone else to implement it ;) Contrary to what you said on Slack, I am 
*not against it*. I have said that many times. Go for it. The only thing 
that I said is that *I* do not have time for that, nor do I have any use 
for core.matrix.

Regarding Windows - Neanderthal works on Windows. I know this because a 
student of mine compiled it (he's experimenting with an alternative GPU 
backend for Neanderthal and prefers to work on Windows). As I explained to 
you in the issue that you raised on GitHub last year, you have to install 
ATLAS on your machine, and Neanderthal has nothing un-Windowsy in its code. 
There is nothing Neanderthal-specific there; it is all about compiling 
ATLAS. Follow any ATLAS, NumPy + ATLAS, or R + ATLAS guide for 
instructions. Many people have done that installation, so I doubt it'd be a 
real obstacle for you.

 

> On Sunday, 13 March 2016 23:34:23 UTC+8, Dragan Djuric wrote:
>>
>> I am soon going to release a new version of Neanderthal.
>>
>> I reinstalled ATLAS, so I decided to also update benchmarks with threaded 
>> ATLAS bindings.
>>
>> The results are still the same for doubles on one core: 10x faster than 
>> Vectorz and 2x faster than JBlas.
>>
>> The page now covers some more cases: multicore ATLAS (on my 4-core 
>> i7-4790k) and floats. Neanderthal is 60 times faster with multi-threaded 
>> ATLAS and floats than Vectorz (core.matrix).
>>
>> For the rest of the results, please follow the link. This will work with 
>> older versions of Neanderthal.
>>
>>
>> https://www.reddit.com/r/Clojure/comments/4a8o9n/new_matrix_multiplication_benchmarks_neanderthal/
>> http://neanderthal.uncomplicate.org/articles/benchmarks.html
>>
>



New Matrix Multiplication benchmarks - Neanderthal up to 60 times faster than core.matrix with Vectorz

2016-03-13 Thread Dragan Djuric


I am soon going to release a new version of Neanderthal.

I reinstalled ATLAS, so I decided to also update the benchmarks with 
multi-threaded ATLAS bindings.

The results are still the same for doubles on one core: 10x faster than 
Vectorz and 2x faster than JBlas.

The page now covers some more cases: multicore ATLAS (on my 4-core 
i7-4790k) and floats. With multi-threaded ATLAS and floats, Neanderthal is 
60 times faster than Vectorz (core.matrix).

For the rest of the results, please follow the links below. They also work 
with older versions of Neanderthal.

https://www.reddit.com/r/Clojure/comments/4a8o9n/new_matrix_multiplication_benchmarks_neanderthal/
http://neanderthal.uncomplicate.org/articles/benchmarks.html
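
If you want a rough idea on your own machine, a minimal benchmark sketch 
looks something like the one below. The namespaces and constructors 
(uncomplicate.neanderthal.native/dge, uncomplicate.neanderthal.core/mm, and 
criterium.core/quick-bench) are written from the documentation as I recall 
it, so treat the exact names as assumptions and check them against the 
versions you actually install.

;; Rough sketch of a 100x100 double matrix multiplication benchmark.
;; The namespaces and constructors are assumptions taken from the docs;
;; verify them against the Neanderthal and criterium versions you use.
(require '[uncomplicate.neanderthal.core :refer [mm]]
         '[uncomplicate.neanderthal.native :refer [dge]]
         '[criterium.core :refer [quick-bench]])

(let [a (dge 100 100 (repeatedly (* 100 100) rand))
      b (dge 100 100 (repeatedly (* 100 100) rand))]
  (quick-bench (mm a b)))

;; For the core.matrix/Vectorz side of the comparison, the analogous
;; measurement would use clojure.core.matrix/mmul with the :vectorz
;; implementation selected.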



Re: Is there any desire or need for a Clojure DataFrame? (X-POST from Numerical Clojure mailing list)

2016-03-10 Thread Dragan Djuric

>
> This is already working well for the array programming APIs (it's easy to 
> mix and match Clojure data structures, Vectorz Java-based arrays, GPU 
> backed arrays in computations). 
>

While we could agree to some extent on the other parts of your post, the 
GPU part is *NOT* true: I would like you to point me to a single 
implementation anywhere (Clojure or otherwise) that (easily or not) mixes 
and matches arrays in RAM and arrays on a GPU backend. It simply does not 
work that way.
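
To illustrate the point, here is a deliberately hypothetical sketch; 
gpu-array, transfer!, and release! are placeholder names I am making up for 
illustration, not any real API:

;; Hypothetical pseudocode (gpu-array, transfer!, and release! are made-up
;; placeholders): data that lives in GPU memory cannot be touched like an
;; ordinary RAM array; every host<->device interaction is an explicit copy.
(let [host   (double-array (range 1000000))   ; ordinary JVM/RAM array
      device (gpu-array 1000000)]             ; buffer in GPU memory
  ;; (aget device 0) is meaningless here; the data has to be moved first:
  (transfer! host device)   ; explicit host -> device copy over the PCIe bus
  ;; ... enqueue kernels that compute on `device` ...
  (transfer! device host)   ; explicit device -> host copy to read results
  (release! device))        ; GPU buffers are not managed by the JVM GC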



Re: [ANN] Fluokitten - Category theory concepts in Clojure - Functors, Applicatives, Monads, Monoids and more

2016-03-10 Thread Dragan Djuric
*New version, 0.4.0 released:*

http://fluokitten.uncomplicate.org/ has lots of documentation and 
tutorials. Source at: https://github.com/uncomplicate/fluokitten

New features:

   - Added PseudoFunctor, PseudoApplicative, and PseudoMonad to support 
   destructive operations in Neanderthal.
   - Better support for functions and curried functions.
   - fold, foldmap, and op are much improved, with variadic versions.
   - Varargs versions of pure, return, and unit.

Changes:

   - The fmap implementation for functions changed to be in line with bind; 
   it supports multi-arity functions and offers super-comp.
   - Collections use reducers where appropriate.
   - op, fold, and foldmap support multiple arguments and have better 
   implementations.
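
A few REPL-sized examples of what this looks like in practice; the expected 
results follow the documentation, so please double-check them in your own 
REPL:

(require '[uncomplicate.fluokitten.core :refer [fmap fold op]])

;; variadic fmap: apply a function across several functors at once
(fmap + [1 2 3] [10 20 30])   ;; expected => [11 22 33]

;; fold uses the elements' monoid, so for numbers it sums them
(fold [1 2 3])                ;; expected => 6

;; variadic op: the monoid operation, here vector concatenation
(op [1 2] [3 4] [5 6])        ;; expected => [1 2 3 4 5 6]

;; fmap on functions composes them ("super-comp"), in line with bind
((fmap inc +) 1 2 3)          ;; expected => 7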




On Monday, July 22, 2013 at 4:33:48 PM UTC+2, Phillip Lord wrote:
>
>
>
> That's a good answer! I've enjoyed reading the documentation of both 
> fluokitten and morph and understood it. The functionality certainly 
> seems useful. 
>
> Phil 
>
> Dragan Djuric <drag...@gmail.com > writes: 
>
> > If Clojure has all of the Haskell's type features, I guess there would 
> be 
> > only one Clojure monad library, more or less a direct port of Haskell's. 
> As 
> > Clojure is different, there are different ways to approach monads from 
> > neither of which can be the same as Haskell's, each having its pros and 
> > cons, so there are many libraries. Additional motivation in my case is 
> that 
> > the other libraries (except morph, which is also a newcomer) were poorly 
> > documented or not documented at all, and that even simple examples from 
> > Haskell literature were not simple at all in those libraries, and in 
> many 
> > cases, not even supported (many of them don't even define functors and 
> > monoids, let alone applicative functors). 
> > 
> > What I've not yet understood is what the difference is between all of 
> >> these libraries? 
> >> 
> >> 
> > 
> > -- 
>
> -- 
> Phillip Lord,   Phone: +44 (0) 191 222 7827 
> Lecturer in Bioinformatics, Email: philli...@newcastle.ac.uk 
>  
> School of Computing Science,
> http://homepages.cs.ncl.ac.uk/phillip.lord 
> Room 914 Claremont Tower,   skype: russet_apples 
> Newcastle University,   twitter: phillord 
> NE1 7RU 
>



Re: Any chance of core.logic getting extended with probKanren?

2015-11-25 Thread Dragan Djuric
I am working on something related to probabilistic 
programming/inference/learning. It is not yet ready for use, but I hope to 
get it there next year.
Although some key building blocks are the same (MCMC), I really cannot see 
how such things could be integrated into core.logic, or even why. So, I 
would very much like to hear some of the needed use cases for this (without 
reading hundreds of pages of someone's PhD dissertation, please :)

Can you write a few "hello world" examples of what you would like to see in 
core.logic related to probabilistic programming?
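
For concreteness, here is the kind of toy conditional query I have in mind, 
written in plain Clojure with naive rejection sampling (this is only my own 
illustration, not core.logic or probKanren): two fair coin flips, 
conditioned on at least one of them being heads - what is the probability 
that both are heads? The estimate should come out around 1/3.

(defn flip [] (< (rand) 0.5))

(defn sample []
  (let [a (flip) b (flip)]
    (when (or a b)      ; condition: at least one heads, otherwise reject
      (and a b))))      ; query: are both heads?

(let [kept (remove nil? (repeatedly 100000 sample))]
  (double (/ (count (filter true? kept)) (count kept))))
;; => approximately 0.333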

On Friday, November 20, 2015 at 2:51:05 PM UTC+1, Henrik Larsson wrote:
>
> I have started to play around with ProbLog2 and find the concept of 
> probabilistic logic programming to be super fun. When googeling miniKanren 
> and probabilistic logic programming the following came up:
> https://github.com/webyrd/probKanren
>
> So my question now is what are the chances that something like probKanren 
> getting implemented in core.logic and how advance is probKanren vs 
> ProbLog2? What im after is the conditional probabilites that ProbLog2 can 
> handle.
>
> There are some documentation on core.logic (
> https://github.com/clojure/core.logic/wiki/CLP(Prob)) but it is dated at 
> 2013 and im not sure what the roadmap is for core.logic or if it even has a 
> roadmap.
>
>
> Thanks for any input regarding this.
>
> Best regards Henrik
>



Re: Any chance of core.logic getting extended with probKanren?

2015-11-25 Thread Dragan Djuric
I know well how probabilistic logic works. It is a superset of true/false 
logic, and it goes far beyond "half true", etc. My question was about how 
this would look when integrated with core.logic vs. just using a full-blown 
probabilistic-logic-based library or language.
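
To make that concrete, here is a tiny plain-Clojure illustration (my own toy 
example, nothing to do with core.logic): in a truth-functional system the 
value of (and a b) would be fixed by the values of a and b, but in 
probability P(A and B) also depends on the joint distribution. Both joints 
below give P(A) = P(B) = 0.5, yet P(A and B) differs.

(defn prob [joint pred]
  (reduce + (for [[world p] joint :when (pred world)] p)))

(def independent
  {{:a true  :b true}  0.25, {:a true  :b false} 0.25,
   {:a false :b true}  0.25, {:a false :b false} 0.25})

(def exclusive
  {{:a true :b false} 0.5, {:a false :b true} 0.5})

(prob independent :a)                     ;; => 0.5
(prob exclusive   :a)                     ;; => 0.5
(prob independent #(and (:a %) (:b %)))   ;; => 0.25
(prob exclusive   #(and (:a %) (:b %)))   ;; => 0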

On Wednesday, November 25, 2015 at 3:15:11 PM UTC+1, Carl Cotner wrote:
>
> > Although some key building blocks are the same (MCMC), I really cannot 
> see 
> > how such things could be integrated to core.logic, and even why. 
>
> This is not a concrete answer to your question, but probability can be 
> thought of as a continuous extension of logic where 0 = False and 1 = 
> True (and .5 = "half true", etc.). 
>
> This extension is unique given a few natural conditions. 
>
> Carl 
>
>
> On Wed, Nov 25, 2015 at 5:49 AM, Dragan Djuric <drag...@gmail.com 
> > wrote: 
> > I am working on something related to probabilistic 
> > programming/inference/learning. Not yet ready for use but I hope to get 
> it 
> > there the next year. 
> > Although some key building blocks are the same (MCMC), I really cannot 
> see 
> > how such things could be integrated to core.logic, and even why. So, I 
> would 
> > like very much to hear some of the needed use cases for this (without 
> > reading hundreds of pages of someone's PhD dissertation, please :) 
> > 
> > Can you write a few "hello world" examples of what you would like to see 
> in 
> > core.logic related to probabilistic programming? 
> > 
> > 
> > On Friday, November 20, 2015 at 2:51:05 PM UTC+1, Henrik Larsson wrote: 
> >> 
> >> I have started to play around with ProbLog2 and find the concept of 
> >> probabilistic logic programming to be super fun. When googeling 
> miniKanren 
> >> and probabilistic logic programming the following came up: 
> >> https://github.com/webyrd/probKanren 
> >> 
> >> So my question now is what are the chances that something like 
> probKanren 
> >> getting implemented in core.logic and how advance is probKanren vs 
> ProbLog2? 
> >> What im after is the conditional probabilites that ProbLog2 can handle. 
> >> 
> >> There are some documentation on core.logic 
> >> (https://github.com/clojure/core.logic/wiki/CLP(Prob)) but it is dated 
> at 
> >> 2013 and im not sure what the roadmap is for core.logic or if it even 
> has a 
> >> roadmap. 
> >> 
> >> 
> >> Thanks for any input regarding this. 
> >> 
> >> Best regards Henrik 
> > 
>



Re: Any chance of core.logic getting extended with probKanren?

2015-11-25 Thread Dragan Djuric
I am sorry if my reply sounded like I was offended. I was not; I was just 
going straight to the point without beating around the bush :)

On Wednesday, November 25, 2015 at 3:28:14 PM UTC+1, Carl Cotner wrote:
>
> Sorry, didn't mean to offend. I was just offering a reason for the 
> "even why". Perhaps I misinterpreted ... 
>
>
> On Wed, Nov 25, 2015 at 9:20 AM, Dragan Djuric <drag...@gmail.com 
> > wrote: 
> > I know well how probabilistic logic works. It is a superset of the 
> > true/false logic, and goes much beyond "half true" etc. My question was 
> > about how this would look like when integrated with core.logic vs just 
> using 
> > full-blown probabilistic logic-based library/language. 
> > 
> > On Wednesday, November 25, 2015 at 3:15:11 PM UTC+1, Carl Cotner wrote: 
> >> 
> >> > Although some key building blocks are the same (MCMC), I really 
> cannot 
> >> > see 
> >> > how such things could be integrated to core.logic, and even why. 
> >> 
> >> This is not a concrete answer to your question, but probability can be 
> >> thought of as a continuous extension of logic where 0 = False and 1 = 
> >> True (and .5 = "half true", etc.). 
> >> 
> >> This extension is unique given a few natural conditions. 
> >> 
> >> Carl 
> >> 
> >> 
> >> On Wed, Nov 25, 2015 at 5:49 AM, Dragan Djuric <drag...@gmail.com> 
> wrote: 
> >> > I am working on something related to probabilistic 
> >> > programming/inference/learning. Not yet ready for use but I hope to 
> get 
> >> > it 
> >> > there the next year. 
> >> > Although some key building blocks are the same (MCMC), I really 
> cannot 
> >> > see 
> >> > how such things could be integrated to core.logic, and even why. So, 
> I 
> >> > would 
> >> > like very much to hear some of the needed use cases for this (without 
> >> > reading hundreds of pages of someone's PhD dissertation, please :) 
> >> > 
> >> > Can you write a few "hello world" examples of what you would like to 
> see 
> >> > in 
> >> > core.logic related to probabilistic programming? 
> >> > 
> >> > 
> >> > On Friday, November 20, 2015 at 2:51:05 PM UTC+1, Henrik Larsson 
> wrote: 
> >> >> 
> >> >> I have started to play around with ProbLog2 and find the concept of 
> >> >> probabilistic logic programming to be super fun. When googeling 
> >> >> miniKanren 
> >> >> and probabilistic logic programming the following came up: 
> >> >> https://github.com/webyrd/probKanren 
> >> >> 
> >> >> So my question now is what are the chances that something like 
> >> >> probKanren 
> >> >> getting implemented in core.logic and how advance is probKanren vs 
> >> >> ProbLog2? 
> >> >> What im after is the conditional probabilites that ProbLog2 can 
> handle. 
> >> >> 
> >> >> There are some documentation on core.logic 
> >> >> (https://github.com/clojure/core.logic/wiki/CLP(Prob)) but it is 
> dated 
> >> >> at 
> >> >> 2013 and im not sure what the roadmap is for core.logic or if it 
> even 
> >> >> has a 
> >> >> roadmap. 
> >> >> 
> >> >> 
> >> >> Thanks for any input regarding this. 
> >> >> 
> >> >> Best regards Henrik 
> >> > 

Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-10-25 Thread Dragan Djuric
WRT their documentation, I do not think it means much for me right now, 
since I do not need that library and its functionality. So I guess the 
integration depends on whether
1) it is technically viable
2) someone needs it

I will support whoever wants to take on that task, but I have neither the 
time nor the need to do it myself.



Re: [ANN] Neanderthal, a fast, native matrix and linear algebra library for Clojure released + call for help

2015-10-23 Thread Dragan Djuric
Neanderthal 0.4.0 has just been released with OpenCL-based GPU support and 
pluggable engines.

http://neanderthal.uncomplicate.org

On Tuesday, June 23, 2015 at 12:39:40 AM UTC+2, Dragan Djuric wrote:
>
> As it is a *sparse matrix*, C++ library unavailable on JVM, I don't 
> consider it relevant for comparison as these are really apples and 
> pineapples. For now, at least.
>
> On Tue, Jun 23, 2015 at 12:13 AM, A <> wrote:
>
>>
>> Here's another benchmark for comparison: 
>> https://code.google.com/p/redsvd/wiki/English
>>
>> -A
>>
>>
>>
>> On Monday, June 22, 2015 at 12:27:57 PM UTC-7, Dragan Djuric wrote:
>>>
>>> core.matrix claims that it is fast on its project page (with which I 
>>> agree in some cases). I expected from that, and from the last couple of 
>>> your posts in this discussion, that there are some concrete numbers to 
>>> show, which I can't find.
>>>
>>> My claim to win "ALL benchmarks" (excluding maybe tiny objects) came 
>>> only as a response to mike's remarks that I have only proven that 
>>> neanderthal is faster for dgemm etc.
>>>
>>> OK, maybe the point is that other libraries do not care that much about 
>>> speed, or that current speed is enough, or whatever, and I am ok with that. 
>>> I would just like it to be explicitly said, so I do not lose time arguing 
>>> about what is not important. Or it would be nice to see some numbers shown 
>>> to draw at least rough picture of what can be expected. I am glad if my 
>>> raising this issue would improve the situation, but I do not insist...
>>>
>>> On Monday, June 22, 2015 at 9:16:15 PM UTC+2, Christopher Small wrote:
>>>>
>>>> Well, we also weren't claiming to win "ALL benchmarks" compared to 
>>>> anything :-)
>>>>
>>>> But your point is well taken, better benchmarking should be pretty 
>>>> valuable to the community moving forward.
>>>>
>>>> Chris
>>>>
>>>>
>>>> On Mon, Jun 22, 2015 at 12:10 PM, Dragan Djuric <drag...@gmail.com> 
>>>> wrote:
>>>>
>>>>> So, there are exactly two measurements there: matrix multiplication 
>>>>> and vector addition for dimension 100 (which is quite small and should 
>>>>> favor vectorz). Here are the results on my machine:
>>>>>
>>>>> Matrix multiplications are given at the neanderthal web site at 
>>>>> http://neanderthal.uncomplicate.org/articles/benchmarks.html in much 
>>>>> more details than that, so I won't repeat that here.
>>>>>
>>>>> Vector addition according to criterium: 124ns vectorz vs 78ns 
>>>>> neanderthal on my i7 4790k
>>>>>
>>>>> Mind you that the project you pointed uses rather old library 
>>>>> versions. I updated them to the latest versions. Also, the code does not 
>>>>> run for both old and new versions properly (it complains about :clatrix) 
>>>>> so 
>>>>> I had to evaluate it manually in the repl.
>>>>>
>>>>> I wonder why you complained that I didn't show more benchmark data 
>>>>> about my claims when I had shown much more (and relevant) data than it is 
>>>>> available for core.matrix, but I would use the opportunity to appeal to 
>>>>> core.matrix community to improve that.
>>>>>
>>>>> On Monday, June 22, 2015 at 8:13:29 PM UTC+2, Christopher Small wrote:
>>>>>>
>>>>>> For benchmarking, there's this: 
>>>>>> https://github.com/mikera/core.matrix.benchmark. It's pretty simple 
>>>>>> though. It would be nice to see something more robust and composable, 
>>>>>> and 
>>>>>> with nicer output options. I'll put a little bit of time into that now, 
>>>>>> but 
>>>>>> again, a bit busy to do as much as I'd like here :-)
>>>>>>
>>>>>> Chris
>>>>>>
>>>>>>
>>>>>> On Mon, Jun 22, 2015 at 9:14 AM, Dragan Djuric <drag...@gmail.com> 
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>>> As for performance benchmarks, I have to echo Mike that it seemed 
>>>>>>>> strange to me that you were claiming you were faster on ALL benchmarks 
>>>>>>>> when 
>>>>>>>

Re: [ANN] ClojureCL - OpenCL 2.0 Clojure library (GPGPU and high performance parallel computing)

2015-10-21 Thread Dragan Djuric
A new version of ClojureCL, 0.3.0, is out on Clojars. 
http://clojurecl.uncomplicate.org
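
As a quick smoke test, something like the following should list the OpenCL 
platforms and devices that ClojureCL sees. The names 
(uncomplicate.clojurecl.core/platforms, devices, and 
uncomplicate.clojurecl.info/info) follow the documentation as I remember it; 
treat them as assumptions and check the docs for the release you use.

(require '[uncomplicate.clojurecl.core :refer [platforms devices]]
         '[uncomplicate.clojurecl.info :refer [info]])

;; Print an info map for every device of every available OpenCL platform.
(doseq [p (platforms)
        d (devices p)]
  (println (info d)))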

On Wednesday, June 17, 2015 at 4:59:02 PM UTC+2, Dragan Djuric wrote:
>
> Certainly, but that is not a priority, since I do not use (nor need) 
> OpenGL myself. I would be very interested to include contributions as soon 
> as I get them, and the foundations are already there (in JOCL), so it is 
> not that hard at might look at first glance - I just do not have time to be 
> sure that I do it properly now.
>
> On Wednesday, June 17, 2015 at 4:54:45 PM UTC+2, Bobby Bobble wrote:
>>
>> superb!
>>
>> are there any plans to include opengl context sharing for visualisation ?
>>
>



Re: The Reading List

2015-10-14 Thread Dragan Djuric
Is there a living person who has read all, or even a majority, of these 
books? If there is (although I doubt it), what does he/she think about them? 
I am genuinely interested. In my opinion, there is too much Kool-Aid magic 
in the list, and too little more tangible stuff. It is not that the books 
are useless; I just think that many such books are really a few good points 
mixed with too much fluff to stretch into full books, unless you want to 
specialize in the book's topic. Some are outright off-topic in the context 
of Clojure, like Design Patterns or Growing OO Software...

I won't post my list, since I do not think it would be widely applicable, 
but when it comes to "THE" lists, I think those lists must be much, much 
shorter and to the point.

On Wednesday, October 14, 2015 at 9:20:04 PM UTC+2, Alan Thompson wrote:
>
> Hi,
>
> Just saw a good presentation on InfoQ by Jason McCreary  (
> http://www.infoq.com/presentations/the-reading-list) where he mentions 
> several topics from "The Reading List", which is an informal list of books 
> that are considered by many in the industry to be "required reading" for 
> software engineers.  He has a blog post with the full list 
> , reproduced 
> below.  Lots of good stuff to think about.
>
> Alan
>
>
> 
>
> A few years ago I continued my journey to become a true software engineer 
> . I 
> interviewed with big tech names and worked with brilliant people. Along the 
> way, I asked for ways to improve my craft.
>
> The following is the intersection of a long list of books. It has been 
> culled through cross-reference and repeated recommendation. I call this *The 
> Reading List*. Some believe it’s what every software engineer must read.
> The Reading List
>
>- Agile Software Development 
>
> 
>- Agile Testing 
>
>- Analysis Patterns 
>
> 
>- Art of Capacity Planning 
>
> 
>- Art of Software Testing 
>
>- Clean Code 
>
> 
>- Code Complete 2 
>
> 
>- Continuous Delivery 
>
> 
>- Continuous Integration 
>
> 
>- Design Patterns 
>
> 
>- Domain Driven Design 
>
> 
>- Even Faster Web Sites 
>
>- Experiences of Test Automation 
>
> 
>- Extreme Programming Explained: Embrace Change 
>
> 
>- Founders at Work 
>- Fundamentals of Object Oriented Design in UML 
>
> 
>- Growing Object-Oriented Software Guided by Tests 
>
> 
>- High Performance MySQL 
>
> 
>- High Performance Web Sites 
>
>- Implementation Patterns 
>
>- JavaScript The Good Parts 
>
> 
>- Joe Celko’s SQL for Smarties 
>
>- Joe Celko’s Thinking in Sets 
>
>- Large-Scale C++ Software Design 
>
> 
>- Lean Architecture 
>
> 
>- Lean Startup 
>

[ANN] Neanderthal 0.3.0 with GPU matrix operations released

2015-08-07 Thread Dragan Djuric
New and noteworthy:
1. GPU engine now available (OpenCL 2.0 required; works super fast on AMD 
Radeons and FirePros)
2. Support for pluggable engines and data structures (so a pure Java engine 
would be relatively easy to add)

*** New, very detailed tutorials with benchmarks available ***
Discuss at https://news.ycombinator.com/item?id=10022776

http://neanderthal.uncomplicate.org



Re: [ANN] Clojure 1.8.0-alpha4

2015-08-05 Thread Dragan Djuric
Trying to compile an application that uses the ztellman/vertigo 1.3.0 
library. It worked with Clojure 1.7.0, but Clojure 1.8.0-alpha4 raises the 
following exception in the Clojure compiler:
java.lang.NoClassDefFoundError: IllegalName: 
compile__stub.vertigo.bytes.vertigo.bytes/ByteSeq, 
compiling:(vertigo/bytes.clj:90:1)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6894)
at clojure.lang.Compiler.analyze(Compiler.java:6688)
at clojure.lang.Compiler.analyze(Compiler.java:6649)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:6025)
at clojure.lang.Compiler$LetExpr$Parser.parse(Compiler.java:6343)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6887)
at clojure.lang.Compiler.analyze(Compiler.java:6688)
at clojure.lang.Compiler.analyze(Compiler.java:6649)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:6025)
at clojure.lang.Compiler$FnMethod.parse(Compiler.java:5401)
at clojure.lang.Compiler$FnExpr.parse(Compiler.java:3975)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6885)
at clojure.lang.Compiler.analyze(Compiler.java:6688)
at clojure.lang.Compiler.analyze(Compiler.java:6649)
at clojure.lang.Compiler$InvokeExpr.parse(Compiler.java:3769)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6889)
at clojure.lang.Compiler.analyze(Compiler.java:6688)
at clojure.lang.Compiler.analyze(Compiler.java:6649)
at clojure.lang.Compiler$LetExpr$Parser.parse(Compiler.java:6255)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:6887)
at clojure.lang.Compiler.analyze(Compiler.java:6688)
at clojure.lang.Compiler.analyze(Compiler.java:6649)
at clojure.lang.Compiler.compile1(Compiler.java:7483)
at clojure.lang.Compiler.compile(Compiler.java:7555)
at clojure.lang.RT.compile(RT.java:406)
at clojure.lang.RT.load(RT.java:451)
at clojure.lang.RT.load(RT.java:419)
at clojure.core$load$fn__5445.invoke(core.clj:5871)
at clojure.core$load.invokeStatic(core.clj:5870)
at clojure.core$load_one.invokeStatic(core.clj:5671)
at clojure.core$load_one.invoke(core.clj)
at clojure.core$load_lib$fn__5394.invoke(core.clj:5716)
at clojure.core$load_lib.invokeStatic(core.clj:5715)
at clojure.core$load_lib.doInvoke(core.clj)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invokeStatic(core.clj:635)
at clojure.core$load_libs.invokeStatic(core.clj:5753)
at clojure.core$load_libs.doInvoke(core.clj)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invokeStatic(core.clj:635)
at clojure.core$require.invokeStatic(core.clj:5775)
at clojure.core$require.doInvoke(core.clj)
at clojure.lang.RestFn.invoke(RestFn.java:457)
at vertigo.structs$loading__5337__auto3313.invoke(structs.clj:1)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3654)
at clojure.lang.Compiler.compile1(Compiler.java:7488)
at clojure.lang.Compiler.compile1(Compiler.java:7478)
at clojure.lang.Compiler.compile(Compiler.java:7555)
at clojure.lang.RT.compile(RT.java:406)
at clojure.lang.RT.load(RT.java:451)
at clojure.lang.RT.load(RT.java:419)
at clojure.core$load$fn__5445.invoke(core.clj:5871)
at clojure.core$load.invokeStatic(core.clj:5870)
at clojure.core$load_one.invokeStatic(core.clj:5671)
at clojure.core$load_one.invoke(core.clj)
at clojure.core$load_lib$fn__5394.invoke(core.clj:5716)
at clojure.core$load_lib.invokeStatic(core.clj:5715)
at clojure.core$load_lib.doInvoke(core.clj)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invokeStatic(core.clj:635)
at clojure.core$load_libs.invokeStatic(core.clj:5753)
at clojure.core$load_libs.doInvoke(core.clj)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invokeStatic(core.clj:635)
at clojure.core$require.invokeStatic(core.clj:5775)
at clojure.core$require.doInvoke(core.clj)
at clojure.lang.RestFn.invoke(RestFn.java:551)
at vertigo.core$loading__5337__auto1375.invoke(core.clj:1)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3654)
at clojure.lang.Compiler.compile1(Compiler.java:7488)
at clojure.lang.Compiler.compile1(Compiler.java:7478)
at clojure.lang.Compiler.compile(Compiler.java:7555)
at clojure.lang.RT.compile(RT.java:406)
at clojure.lang.RT.load(RT.java:451)
at clojure.lang.RT.load(RT.java:419)
at clojure.core$load$fn__5445.invoke(core.clj:5871)
at clojure.core$load.invokeStatic(core.clj:5870)
at clojure.core$load_one.invokeStatic(core.clj:5671)
at clojure.core$load_one.invoke(core.clj)
at clojure.core$load_lib$fn__5394.invoke(core.clj:5716)
at clojure.core$load_lib.invokeStatic(core.clj:5715)
at clojure.core$load_lib.doInvoke(core.clj)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invokeStatic(core.clj:635)
at clojure.core$load_libs.invokeStatic(core.clj:5757)
at clojure.core$load_libs.doInvoke(core.clj)
at 
