Sure, that can be made configurable. Kick off the work by opening a JIRA.

Thanks
Yang

On Tue, Apr 17, 2018 at 10:21 AM, <[email protected]> wrote:

> Currently in Spark cubing, the StorageLevel is set to
> StorageLevel.MEMORY_AND_DISK_SER, which takes up a lot of memory if
> the RDD of the layer is large. Can we make the StorageLevel configurable? That
> way, for a large cube, the user can set it to disk-only to avoid OOM errors.
>
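For reference, a minimal sketch of what a configurable version could look like, assuming a hypothetical property name (kylin.engine.spark.storage-level is not an existing Kylin option here) and resolving it with Spark's StorageLevel.fromString, defaulting to the current MEMORY_AND_DISK_SER behaviour when the property is not set:

  import java.util.Map;
  import org.apache.spark.api.java.JavaPairRDD;
  import org.apache.spark.storage.StorageLevel;

  public class StorageLevelConfig {

      // Hypothetical property key for illustration only
      static final String STORAGE_LEVEL_PROP = "kylin.engine.spark.storage-level";

      static StorageLevel resolveStorageLevel(Map<String, String> jobConf) {
          // Fall back to today's hardcoded behaviour when the property is absent
          String name = jobConf.getOrDefault(STORAGE_LEVEL_PROP, "MEMORY_AND_DISK_SER");
          // fromString accepts names such as DISK_ONLY, MEMORY_ONLY, MEMORY_AND_DISK_SER
          return StorageLevel.fromString(name);
      }

      static <K, V> JavaPairRDD<K, V> persistLayer(JavaPairRDD<K, V> layerRdd,
                                                   Map<String, String> jobConf) {
          // For a large cube, the user could set DISK_ONLY here to avoid OOM during cubing
          return layerRdd.persist(resolveStorageLevel(jobConf));
      }
  }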
