Hi Alex,

You seem to have hit SPARK-26606 [1], which was fixed in 2.4.1. Could you
try it out with the latest version?
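If upgrading isn't immediately possible, a runtime workaround may be worth a try: on a HotSpot JVM, HeapDumpOnOutOfMemoryError is a manageable flag, so the driver can enable it from application code via HotSpotDiagnosticMXBean. This is only a sketch for this particular flag, not a general fix (it won't help with non-writable options like -Xmx):

```scala
import java.lang.management.ManagementFactory
import com.sun.management.HotSpotDiagnosticMXBean

object EnableHeapDump {
  def main(args: Array[String]): Unit = {
    // HotSpot-only diagnostic bean; lets us flip manageable -XX flags at runtime
    val diag = ManagementFactory.getPlatformMXBean(classOf[HotSpotDiagnosticMXBean])

    // Equivalent of passing -XX:+HeapDumpOnOutOfMemoryError on the command line
    diag.setVMOption("HeapDumpOnOutOfMemoryError", "true")

    // Confirm the flag is now set
    println(diag.getVMOption("HeapDumpOnOutOfMemoryError").getValue)
  }
}
```

Calling something like this early in the driver's main method should give you heap dumps on OOM even while the extraJavaOptions bug is in play.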

Thanks,
Jungtaek Lim (HeartSaVioR)

1. https://issues.apache.org/jira/browse/SPARK-26606

On Tue, Aug 20, 2019 at 3:43 AM Alex Landa <metalo...@gmail.com> wrote:

> Hi,
>
> We are using Spark Standalone 2.4.0 in production and publishing our Scala
> app using cluster mode.
> I noticed that extra Java options passed to the driver don't actually
> take effect.
> A submit example:
>
> spark-submit --deploy-mode cluster --master spark://<master ip>:7077
> --driver-memory 512mb --conf
> "spark.driver.extraJavaOptions=-XX:+HeapDumpOnOutOfMemoryError" --class
> App app.jar
>
> This doesn't pass -XX:+HeapDumpOnOutOfMemoryError as a JVM argument, but
> instead passes
> -Dspark.driver.extraJavaOptions=-XX:+HeapDumpOnOutOfMemoryError
> I created a test app for it:
>
> import java.lang.management.ManagementFactory
> import scala.collection.JavaConverters._
> import org.apache.spark.sql.SparkSession
>
> val spark = SparkSession.builder()
>   .master("local")
>   .appName("testApp")
>   .getOrCreate()
> import spark.implicits._
>
> // get a RuntimeMXBean reference
> val runtimeMxBean = ManagementFactory.getRuntimeMXBean
>
> // get the JVM's input arguments as a list of strings
> val listOfArguments = runtimeMxBean.getInputArguments
>
> // print the arguments
> listOfArguments.asScala.foreach(a => println(s"ARG: $a"))
>
>
> I see that for client mode I get:
> ARG: -XX:+HeapDumpOnOutOfMemoryError
> while in cluster mode I get:
> ARG: -Dspark.driver.extraJavaOptions=-XX:+HeapDumpOnOutOfMemoryError
>
> Would appreciate your help working around this issue.
> Thanks,
> Alex
>
>

-- 
Name : Jungtaek Lim
Blog : http://medium.com/@heartsavior
Twitter : http://twitter.com/heartsavior
LinkedIn : http://www.linkedin.com/in/heartsavior
