Mike Dusenberry commented on SYSTEMML-946:

The DataFrames were previously computed and saved to HDFS, so one thing I will 
try is re-computing & saving them again with an explicit repartition to 
60,000 partitions.  I checked, and the largest partition is 215 rows * 256*256*3 
features * 8 bytes / 1024 / 1024 = 322.5 MB, so increasing the partition count 
from 20,000 to 60,000 should relieve memory pressure per partition.
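As a back-of-the-envelope check, the arithmetic above works out as follows (plain Python; the row count and feature shape are the measured values quoted above):

```python
# Per-partition memory for dense double-precision data.
ROWS = 215                 # rows in the largest observed partition
FEATURES = 256 * 256 * 3   # one 256x256 RGB image per row
BYTES_PER_VALUE = 8        # double precision

partition_mb = ROWS * FEATURES * BYTES_PER_VALUE / 1024 / 1024
print(round(partition_mb, 1))  # 322.5 (MB in the largest partition)

# Tripling the partition count (20,000 -> 60,000) roughly cuts the
# per-partition footprint to a third:
print(round(partition_mb / 3, 1))  # 107.5
```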

> OOM on spark dataframe-matrix / csv-matrix conversion
> -----------------------------------------------------
>                 Key: SYSTEMML-946
>                 URL: https://issues.apache.org/jira/browse/SYSTEMML-946
>             Project: SystemML
>          Issue Type: Bug
>          Components: Runtime
>            Reporter: Matthias Boehm
>         Attachments: mnist_lenet.dml
> The decision on dense/sparse block allocation in our dataframeToBinaryBlock 
> and csvToBinaryBlock data converters is based purely on sparsity. This 
> works very well for the common case of tall & skinny matrices. However, in 
> scenarios with dense data but a huge number of columns, a single partition 
> will rarely contain 1000 rows to fill an entire row of blocks. This leads to 
> unnecessary allocation and dense-sparse conversion, as well as potential 
> out-of-memory errors, because the temporary memory requirement can be up to 
> 1000x larger than the input partition.
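
To make the quoted 1000x figure concrete, here is a hedged sketch of the worst case (assuming SystemML's default block size of 1000 rows and reusing the 256*256*3 feature count from the comment above):

```python
# Worst case from the issue description: a partition holding a single
# row of a wide dense matrix still forces allocation of a full
# 1000-row row of binary blocks (assumed default blocksize = 1000).
BLOCKSIZE = 1000           # rows per block row
COLS = 256 * 256 * 3       # 196,608 columns, mirroring the numbers above
BYTES_PER_VALUE = 8        # double precision

rows_in_partition = 1      # degenerate but possible with many partitions
input_mb = rows_in_partition * COLS * BYTES_PER_VALUE / 1024**2
temp_mb = BLOCKSIZE * COLS * BYTES_PER_VALUE / 1024**2

print(round(input_mb, 1))   # 1.5  (MB of input in the partition)
print(round(temp_mb, 1))    # 1500.0 (MB temporarily allocated)
print(temp_mb / input_mb)   # 1000.0 (x blow-up)
```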

This message was sent by Atlassian JIRA
