ulysses-you commented on a change in pull request #32184:
URL: https://github.com/apache/spark/pull/32184#discussion_r692598890



##########
File path: docs/job-scheduling.md
##########
@@ -252,10 +252,11 @@ properties:
 
 The pool properties can be set by creating an XML file, similar to `conf/fairscheduler.xml.template`,
 and either putting a file named `fairscheduler.xml` on the classpath, or setting the `spark.scheduler.allocation.file` property in your
-[SparkConf](configuration.html#spark-properties).
+[SparkConf](configuration.html#spark-properties). The file path can be either a local file path or an HDFS file path.
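For context, the pool file being discussed follows the `conf/fairscheduler.xml.template` format. A minimal sketch (the pool name and property values here are illustrative, not from this PR):

```xml
<?xml version="1.0"?>
<allocations>
  <!-- one <pool> element per fair scheduler pool -->
  <pool name="production">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>2</minShare>
  </pool>
</allocations>
```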

Review comment:
       @HyukjinKwon sorry, I'm not sure I get your point.
   
   > So, if users from old Spark versions use a path like /path/to/file, the files will be written into HDFS after the upgrade.
   
   Why would we need to write files into HDFS? This PR adds support for reading a remote file as the scheduler pool configuration. I think there is no behavior change, just a new feature.
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


