[
https://issues.apache.org/jira/browse/SPARK-3071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344186#comment-14344186
]
Joseph K. Bradley edited comment on SPARK-3071 at 3/3/15 1:00 AM:
------------------------------------------------------------------
+1 for increasing default driver memory. The default value should be >= the
amount of memory used in unit tests (2GB).
My personal interest: This increase would fix an issue (in local mode) with an
MLlib example in the docs caused by needing to save a Parquet file whose schema
has a large number of columns (about 13 when flattened):
[https://issues.apache.org/jira/browse/SPARK-6120]
was (Author: josephkb):
+1000 for increasing default driver memory. The default value should be >= the
amount of memory used in unit tests (2GB).
My personal interest: This increase would fix an issue with an MLlib example in
the docs caused by needing to save a Parquet file whose schema has a large
number of columns (about 13 when flattened):
[https://issues.apache.org/jira/browse/SPARK-6120]
> Increase default driver memory
> ------------------------------
>
> Key: SPARK-3071
> URL: https://issues.apache.org/jira/browse/SPARK-3071
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Reporter: Xiangrui Meng
>
> The current default is 512M, which is usually too small because the user also
> uses the driver to do some computation. In local mode, the executor memory
> setting is ignored and only driver memory is used, which provides more
> incentive to increase the default driver memory.
> I suggest:
> 1. 2GB in local mode, and warn users if executor memory is set to a bigger value
> 2. the same as worker memory on an EC2 standalone server
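Until the default changes, users can raise driver memory themselves; a minimal sketch of the two usual ways to do so (the 2g value mirrors the suggestion above and is illustrative, not a mandated setting):

```shell
# Option 1: set it persistently in conf/spark-defaults.conf
#   spark.driver.memory   2g

# Option 2: set it per job at submit time
spark-submit --driver-memory 2g --class com.example.MyApp myapp.jar
```

Note that in local mode the driver JVM hosts the executors as well, so this single setting governs the memory available to the whole application.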
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]