I would imagine that for most sentences, once data was on the GPU, it
would stay there (until some operation, such as a foreign, required
moving it back out).

That said, this would not be a small project, and GPU constraints may
require imposing some seemingly arbitrary restrictions - especially in
early drafts.

Thanks,

-- 
Raul


On Wed, Dec 13, 2017 at 7:02 PM, bill lam <[email protected]> wrote:
> I had tried making use of OpenCL inside the J engine. From my experience:
>
> 1. Double precision can be slower than single precision by an order of
> magnitude, because GPUs have very few ALUs for double-precision
> operations. More expensive ones designed for scientific computing
> instead of gaming support double precision better.
>
> 2. For most primitive operations such as + - * % *. +/ etc., the
> overhead of moving data to/from GPU memory is too large compared with
> the actual computation. Using AVX SIMD is much simpler and more
> effective.
>
> 3. OpenCL is useful for complex or highly loopy operations where the
> overhead is justified, e.g. matrix multiplication, though tuning is
> still required and can be important. Again, the issue of double
> precision applies: it is unlikely to achieve TFLOPS for
> double-precision matrix multiplication on a commodity gaming GPU.
>
> 4. Apart from J's built-in operations, users should be allowed to write
> kernels in J scripts to solve their specific problems, e.g. some
> image-processing routines.
>
> Mathematica supports OpenCL, and J may learn something from there about
> how to use it.
>
>
> On Dec 14, 2017 3:17 AM, "TongKe Xue" <[email protected]> wrote:
>
>> Hi,
>>
>>
>>   1. There are two questions in this email, but they're closely
>> related. Feel free to reply to either or both.
>>
>>   2. "J-like" here means NumPy/MatLab, but with J syntax. It only
>> needs to support tensors of floats -- no need for Complex, Chars,
>> Boxes, file IO, GUI, ...
>>
>>   3. I know about https://github.com/arcfide/Co-dfns but I don't think
>> it's what I want. (I'm not familiar with Dyalog APL.)
>>
>>   Actual Questions:
>>
>>   Is there a J-like that is backed by CUDA or OpenCL? I want the
>> tensors to be stored in GPU memory. Lexing/parsing can happen either
>> on CPU or GPU.
>>
>>   Is there a J-like that is backed by WebAssembly? I want the tensors
>> to be stored in WebAssembly-controlled memory, and JS to submit J
>> code.
>>
>>
>> Thanks,
>> --TongKe
>> ----------------------------------------------------------------------
>> For information about J forums see http://www.jsoftware.com/forums.htm
