Bill

>That would seem to be pretty pessimistic with regard to machine learning, which 
>almost requires the compute resources of a GPU. Has anyone harnessed, say, a 
>Titan within J, or nearby?

greg
~krsnadas.org

--

from: Henry Rich <[email protected]>
to: [email protected]
date: 21 June 2016 at 18:55
subject: Re: [Jgeneral] GPU APL compiler work

>Actually, I think matrix multiplication IS exceptional: at least it's 
>different from vector addition or multiplication.  The operation takes O(n^3) 
>arithmetic operations to produce a result of O(n^2) atoms, so even if the 
>data-transfer is relatively expensive, a big reduction in time spent on 
>arithmetic may outweigh the cost of data management.  For n>100 I'd expect the 
>GPU to be a winner.
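Henry's ratio argument can be sketched numerically. A hypothetical back-of-envelope model (the function name and the 2-flops-per-multiply-add convention are illustrative assumptions, not anything from the J implementation):

```python
# Back-of-envelope arithmetic intensity of n x n matrix multiplication:
# O(n^3) arithmetic on O(n^2) data, so flops per byte moved grows with n.

def matmul_intensity(n, bytes_per_elem=8):
    flops = 2 * n ** 3                      # n^3 multiply-adds, 2 flops each
    moved = 3 * n * n * bytes_per_elem      # two input matrices + one result
    return flops / moved                    # grows linearly with n
```

Since the intensity grows linearly with n, a fixed per-byte transfer cost is eventually dwarfed by the arithmetic saved, which is consistent with expecting the GPU to win for large enough n.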

--

from: bill lam <[email protected]>
to: 'Pascal Jasmin' via General <[email protected]>
date: 21 June 2016 at 18:43
subject: Re: [Jgeneral] GPU APL compiler work

>The main difficulty in using GPU is memory, not just memory bandwidth, but 
>also how to pipe data into GPU and fine tuning block size so that memory 
>reference can be localized within each core. matrix multiplication is no 
>exception.
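Bill's blocking point can be illustrated with a minimal sketch in plain Python (not GPU code; square matrices evenly divisible by the block size are assumptions made for brevity):

```python
# Blocked C = A * B: the k-loop over one bs x bs tile pair touches only a
# small window of A, B, and C, which is what lets GPU code keep a tile
# resident in each core's local memory instead of re-reading global memory.

def blocked_matmul(a, b, bs):
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, bs):
        for j0 in range(0, n, bs):
            for k0 in range(0, n, bs):
                # accumulate the contribution of one tile pair
                for i in range(i0, i0 + bs):
                    for j in range(j0, j0 + bs):
                        s = c[i][j]
                        for k in range(k0, k0 + bs):
                            s += a[i][k] * b[k][j]
                        c[i][j] = s
    return c
```

The result is identical to the naive triple loop; only the traversal order changes, and tuning bs to the local memory size is exactly the fine-tuning described above.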

--

from: 'Pascal Jasmin' via General <[email protected]>
to: "[email protected]" <[email protected]>
date: 21 June 2016 at 18:11
subject: Re: [Jgeneral] GPU APL compiler work

in my tests with ArrayFire (bindings here:
https://github.com/Pascal-J/Jfire )

>what I found annoying was the JIT compilation step.  I think Futhark does away 
>with this step, or at least provides a saveable version.

>all recent Intel/AMD chips have decent built in GPUs with low latency.

>Even on faster dedicated cards though, you can keep data/results there if 
>there is further processing to do.

>things like matrix multiplication and other similar tasks are 10x to 100x 
>faster (IIRC), including the round trip back to the CPU.

--

from: bill lam <[email protected]>
to: 'Pascal Jasmin' via General <[email protected]>
date: 21 June 2016 at 17:26
subject: Re: [Jgeneral] GPU APL compiler work

>IMO the benefit of using a GPU for implementing APL (and J) primitives
>is questionable. Most primitives are simple, and the efficiency
>of APL/J comes from processing large arrays. The time needed to
>read/write GPU memory for large arrays is not justified
>unless the job is highly looped, e.g. encoding/decoding JPEG.

--

from: 'Pascal Jasmin' via General <[email protected]>
to: "[email protected]" <[email protected]>
date: 21 June 2016 at 08:58
subject: Re: [Jgeneral] GPU APL compiler work

I think GNU APL is a full APL.

These projects, I believe, implement 30 or so primitives.

>It's for writing GPU code with an APL-snippet subvocabulary, not a full APL 
>environment.

--

from: Marc Simpson <[email protected]> via gmail.com
to: general <[email protected]>
date: 20 June 2016 at 17:27
subject: Re: [Jgeneral] GPU APL compiler work

Thanks.

Q: How do these projects relate to GNU APL? As far as I can tell,
they're independent research.

--

from: 'Pascal Jasmin'
reply-to: [email protected]
to: General Forum <[email protected]>
date: 20 June 2016 at 08:58
subject: [Jgeneral] GPU APL compiler work

Interesting recent projects,

TAIL - typed array intermediate language
http://www.elsman.com/pdf/array14_final.pdf

>uses structures very similar to J's internal noun format.  (all of the items 
>are the same anyway, though it perhaps only has int and double data types)

>Semantics for core operations are similar to J (take with negative index takes 
>from the end)
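That take semantics can be mimicked with a small Python helper (the name `take`, and padding overtakes with a fill of 0 as in APL/J numeric arrays, are illustrative assumptions, not part of TAIL itself):

```python
# J-style take on a list: a non-negative count takes from the front,
# a negative count takes from the end; overtaking pads with the fill.

def take(n, xs, fill=0):
    if n >= 0:
        return (xs + [fill] * n)[:n]        # pad on the right if overtaking
    return ([fill] * (-n) + xs)[n:]         # pad on the left if overtaking
```

So `take(2, [1, 2, 3])` gives the first two items while `take(-2, [1, 2, 3])` gives the last two, matching J's `2 {. y` and `_2 {. y`.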

used with a SML apl to TAIL compiler

https://github.com/melsman/apltail/

>A more interesting project is the Futhark language, which leverages the 
>above two projects to target GPUs and extends the datatypes to char, bool, and 
>tuples.

Futhark feels higher level and cleaner than TAIL.

spec paper: http://futhark-lang.org/publications/fhpc16.pdf

more general overview/benchmark/example site:

http://futhark-lang.org/index.html

pretty much every link there is interesting.
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm