dongjoon-hyun commented on a change in pull request #30903:
URL: https://github.com/apache/spark/pull/30903#discussion_r547782377



##########
File path: docs/core-migration-guide.md
##########
@@ -30,6 +30,8 @@ license: |
 
 - In Spark 3.0 and below, `SparkContext` can be created in executors. Since Spark 3.1, an exception will be thrown when creating `SparkContext` in executors. You can allow it by setting the configuration `spark.executor.allowSparkContext` when creating `SparkContext` in executors.
 
+- In Spark 3.0 and below, it propagated the Hadoop class path from `yarn.application.classpath` and `mapreduce.application.classpath` into the Spark application submitted to YARN when Spark distribution is with the built-in Hadoop. Since Spark 3.1, it does not propagate anymore when the Spark distribution is with the built-in Hadoop in order to prevent the failure from the different transitive dependencies picked up from the Hadoop cluster such as Guava and Jackson. To restore the behavior before Spark 3.1, you can set `spark.yarn.populateHadoopClasspath` to `true`.

Review comment:
       `it propagated` -> `Spark propagated`?
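   Independent of the wording nit, a minimal sketch (not from the PR) of the restore path the added paragraph describes; the app name is illustrative, and on YARN the property has to be visible when the application is submitted, e.g. via `--conf` on `spark-submit` or `spark-defaults.conf`:

   ```scala
   import org.apache.spark.{SparkConf, SparkContext}

   // Opt back in to the pre-3.1 behavior of propagating the Hadoop class path
   // from yarn.application.classpath / mapreduce.application.classpath.
   val conf = new SparkConf()
     .setAppName("legacy-hadoop-classpath") // illustrative name
     .set("spark.yarn.populateHadoopClasspath", "true")
   val sc = new SparkContext(conf)
   ```

   Equivalently at submit time: `spark-submit --conf spark.yarn.populateHadoopClasspath=true ...`.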



