Github user srowen commented on the issue:
https://github.com/apache/spark/pull/16913
You should not set a global initial heap size like this, and this is pretty
much exactly why. There is no need for the launcher to immediately request
that much memory otherwise. Please close this.
Github user Pashugan commented on the issue:
https://github.com/apache/spark/pull/16913
There must be some misunderstanding. May I ask you to have a look at
my micro-patch? It has nothing to do with the driver and its options. In
fact, it fixes the call of the "launcher
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/16913
You should not be using JAVA_TOOL_OPTIONS. Set Spark's JVM properties
directly with spark.driver.extraJavaOptions, for example.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user Pashugan commented on the issue:
https://github.com/apache/spark/pull/16913
JAVA_TOOL_OPTIONS doesn't take precedence over command-line options (as
opposed to _JAVA_OPTIONS, which does). Thus, the JVM tries to start with the
initial heap size set in JAVA_TOOL_OPTIONS and a max heap
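The precedence difference can be sketched with a small model (this is not JVM code; it only mirrors the JDK's documented pickup order, and the heap values are made up for illustration):

```python
# Model of the JVM's option pickup order (a sketch, not JVM code):
# JAVA_TOOL_OPTIONS is read *before* the command line, so explicit
# command-line flags win; _JAVA_OPTIONS is read *after*, so it
# overrides the command line.

def effective_xms(java_tool_options, cmdline, underscore_java_options):
    """Last-wins resolution of -Xms across the three option sources."""
    xms = None
    for opt in java_tool_options + cmdline + underscore_java_options:
        if opt.startswith("-Xms"):
            xms = opt[len("-Xms"):]
    return xms

# A command-line -Xms overrides JAVA_TOOL_OPTIONS ...
print(effective_xms(["-Xms4g"], ["-Xms256m"], []))  # 256m
# ... but _JAVA_OPTIONS overrides the command line:
print(effective_xms([], ["-Xms256m"], ["-Xms4g"]))  # 4g
```

So when the launcher passes no -Xms of its own, the value from JAVA_TOOL_OPTIONS is the one that takes effect.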
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/16913
Well, you need to set a larger max heap too. I can't see how this is a
Spark problem.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16913
Can one of the admins verify this patch?