[ https://issues.apache.org/jira/browse/SPARK-31028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun resolved SPARK-31028.
-----------------------------------
    Resolution: Invalid

Thank you for reporting, but adding this flag unconditionally makes Spark fail 
on older JVMs, as shown below.

{code}
$ docker run -it --rm openjdk:8u171-jre-alpine java -XX:ActiveProcessorCount=1
Unrecognized VM option 'ActiveProcessorCount=1'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
{code}

Since Apache Spark still supports Java 8u92+, and this flag only exists in 8u191 and later, I'll close this as `Invalid`.

- https://spark.apache.org/docs/3.0.0-preview2/

bq. Java 8 prior to version 8u92 support is deprecated as of Spark 3.0.0

Please use the existing configurations, such as `spark.driver.extraJavaOptions` 
and `spark.executor.extraJavaOptions`, instead.
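For example, on a JVM that already supports the flag (8u191+), it can be passed through those configurations at submit time. A minimal sketch; the core counts are illustrative and should match `spark.driver.cores` / `spark.executor.cores`:

{code}
# Opt in explicitly via the existing configurations (requires JVM 8u191+);
# replace the counts and the trailing "..." with your application's values.
spark-submit \
  --conf "spark.driver.extraJavaOptions=-XX:ActiveProcessorCount=2" \
  --conf "spark.executor.extraJavaOptions=-XX:ActiveProcessorCount=4" \
  ...
{code}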

> Add "-XX:ActiveProcessorCount" to Spark driver and executor in Yarn mode
> ------------------------------------------------------------------------
>
>                 Key: SPARK-31028
>                 URL: https://issues.apache.org/jira/browse/SPARK-31028
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 2.4.5
>            Reporter: shanyu zhao
>            Priority: Major
>
> When starting Spark drivers and executors on a YARN cluster, the JVM process 
> discovers all CPU cores on the system and sizes its thread pools and GC 
> threads based on that value. We should limit the number of cores the JVM 
> sees to the value set by the user (spark.driver.cores or 
> spark.executor.cores) via "-XX:ActiveProcessorCount", which was introduced 
> in Java 8u191.
> This is especially important when running Spark on YARN inside a Kubernetes 
> container, where the number of CPU cores discovered is sometimes 1, which 
> means the default thread pools and GC always use a single thread.
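For reference, the effect of the flag can be checked on a JVM that recognizes it. A sketch assuming the openjdk:8u212-jre-alpine image (any 8u191+ JVM should behave the same): the flag caps the processor count the JVM reports, which in turn drives ergonomic defaults such as the parallel GC thread count.

{code}
$ docker run --rm openjdk:8u212-jre-alpine \
    java -XX:ActiveProcessorCount=2 -XX:+PrintFlagsFinal -version | grep ParallelGCThreads
{code}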


