This particular JIRA is only partially related. Niketan and Nakul
worked out the details; the only reason I show up as the reporter is
that, if I remember correctly, we split a larger-scoped JIRA for
low-level optimizations (GPU, codegen, compression) into individual
JIRAs and created the
Hi Matthias,
Was this related to the long-term plan for GPU codegen?
Thank you,
Janardhan
Thanks, Matthias! This will be great for passing the model to the paramserv
function.
Regards,
Guobao
2018-05-10 21:47 GMT+02:00 Matthias Boehm:
> just FYI: we now have support for list and named-list data types in
> SystemML, which allow passing the entire model as a single
just FYI: we now have support for list and named-list data types in
SystemML, which allow passing the entire model as a single handle. For
example, you can define the following:
l1 = list(W1, b1, W2, b2, W3, b3, W4, b4), or
l2 = list(a=W1, b=b1, c=W2, d=b2, e=W3, f=b3, g=W4, h=b4)
and access the individual elements by position (e.g., l1[1]) or by name
(e.g., l2["a"]).
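A minimal DML sketch of how such lists might be constructed and accessed; the small random matrices and the use of as.matrix() to unpack a single-element list are assumptions for illustration, not taken from the original message:

```dml
# toy model parameters (shapes chosen arbitrarily for illustration)
W1 = rand(rows=4, cols=4); b1 = rand(rows=4, cols=1);
W2 = rand(rows=4, cols=4); b2 = rand(rows=4, cols=1);

# unnamed and named lists holding the whole model as one handle
l1 = list(W1, b1, W2, b2);
l2 = list(a=W1, b=b1, c=W2, d=b2);

# positional access: indexing yields a list, as.matrix unpacks the matrix
W1r = as.matrix(l1[1]);

# named access via the attribute name
b1r = as.matrix(l2["b"]);

print("nrow(W1r)=" + nrow(W1r) + ", nrow(b1r)=" + nrow(b1r));
```

With this, a paramserv-style function can take one list argument instead of one parameter per matrix.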
Hi Janardhan,
>> 1. Can you help me estimate how much effort it would take to implement
block-sparse kernels in practice?
This is a difficult question to answer, as it depends on how comfortable you
are with writing and optimizing sparse kernels. To implement a block-sparse
kernel as described in your document,