[
https://issues.apache.org/jira/browse/KYLIN-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112848#comment-17112848
]
wangrupeng commented on KYLIN-4450:
-----------------------------------
If "spark.driver.memory" is not set, Kylin will automatically adjust the driver
memory based on the number of cuboids.
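The idea can be sketched as a simple tiered lookup: pick a driver memory size from the cuboid count when the user has not set "spark.driver.memory" explicitly. The class name, tier boundaries, and memory values below are illustrative assumptions, not Kylin's actual implementation.
{code:java}
// Illustrative sketch only: tier boundaries and values are assumptions,
// not Kylin's real adaptive-memory logic.
public class DriverMemorySketch {

    // Return a driver memory size in MB chosen from the cuboid count.
    static int adjustDriverMemory(int cuboidCount) {
        int base = 1024; // assumed 1 GB base allocation
        if (cuboidCount <= 100) {
            return base;          // small cube: base memory is enough
        } else if (cuboidCount <= 500) {
            return base * 2;      // medium cube
        } else if (cuboidCount <= 1000) {
            return base * 3;      // large cube
        }
        return base * 4;          // very large cube: cap at 4x base
    }

    public static void main(String[] args) {
        System.out.println(adjustDriverMemory(50));   // 1024
        System.out.println(adjustDriverMemory(2000)); // 4096
    }
}
{code}
In practice such a computed value would only be applied when the user has not set "spark.driver.memory" themselves, so explicit configuration always wins.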
> Add the feature that adjusting spark driver memory adaptively
> -------------------------------------------------------------
>
> Key: KYLIN-4450
> URL: https://issues.apache.org/jira/browse/KYLIN-4450
> Project: Kylin
> Issue Type: Improvement
> Components: Storage - Parquet
> Reporter: xuekaiqi
> Assignee: wangrupeng
> Priority: Major
> Fix For: v4.0.0-beta
>
> Original Estimate: 16h
> Remaining Estimate: 16h
>
> For now the cubing job can adaptively adjust the following Spark properties
> to use resources rationally, but the driver memory of the Spark job
> submitted to the cluster is not yet adjusted.
>
> {code:java}
> spark.executor.memory
> spark.executor.cores
> spark.executor.memoryOverhead
> spark.executor.instances
> spark.sql.shuffle.partitions
> {code}
--
This message was sent by Atlassian Jira
(v8.3.4#803005)