[
https://issues.apache.org/jira/browse/SYSTEMML-1140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15847630#comment-15847630
]
Matthias Boehm commented on SYSTEMML-1140:
------------------------------------------
Could you please specify the workloads to reproduce the mentioned issues?
> Sparse/Caching performance bugs related to deep learning scripts
> ----------------------------------------------------------------
>
> Key: SYSTEMML-1140
> URL: https://issues.apache.org/jira/browse/SYSTEMML-1140
> Project: SystemML
> Issue Type: Bug
> Affects Versions: SystemML 1.0
> Reporter: Niketan Pansare
> Priority: Blocker
>
> We have identified two performance bugs that frequently occur in deep
> learning scripts.
> First, we repeatedly perform unnecessary conversions to sparse format.
> Moreover, operations such as matrix multiplication (including the BLAS and
> CuBLAS backends) are optimized for the dense format.
>
> Second, even with a large memory budget, we sometimes spend almost 20-30% of
> the execution time in caching.
> [~mboehm7] [~reinwald] [[email protected]] I am labeling this bug as a
> blocker for SystemML 1.0. Please feel free to assign this issue to yourself.
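
For reference, below is a minimal sketch of the kind of dense, deep-learning-style
workload described above, written against the legacy SystemML Python MLContext API.
The script, matrix sizes, and iteration count are illustrative assumptions only and
not a confirmed reproduction of the reported slowdowns.

    from pyspark.sql import SparkSession
    from systemml import MLContext, dml

    # Assumed setup: a local Spark session and the legacy `systemml` Python package.
    spark = SparkSession.builder.appName("SYSTEMML-1140-sketch").getOrCreate()
    ml = MLContext(spark.sparkContext)
    ml.setStatistics(True)  # per-operator statistics should surface conversion/caching time

    # Feed-forward-style DML script: repeated dense matrix multiplications, where any
    # intermediate conversion to sparse format would only add overhead.
    script_str = """
    X = rand(rows=1024, cols=1024, min=0.0, max=1.0)          # dense input batch
    W = rand(rows=1024, cols=1024, min=0.0, max=1.0) / 1024    # scaled dense weights
    out = X
    for (i in 1:20) {
      out = max(out %*% W, 0.0)                                # dense matmul + ReLU
    }
    s = sum(out)
    """
    script = dml(script_str).output("s")
    print(ml.execute(script).get("s"))

Matrix sizes and the loop count are placeholders; an actual reproduction would come
from the deep learning scripts referenced in the description.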
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)