mridulm commented on code in PR #39703:
URL: https://github.com/apache/spark/pull/39703#discussion_r1083660245


##########
conf/fairscheduler-default.xml.template:
##########
@@ -0,0 +1,26 @@
+<?xml version="1.0"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<allocations>
+  <pool name="default">
+    <schedulingMode>FAIR</schedulingMode>
+    <weight>1</weight>
+    <minShare>0</minShare>
+  </pool>
+</allocations>

Review Comment:
   There is already a `conf/fairscheduler.xml.template` - why do we need this one?
   If it is only needed for testing, should it be moved to the test resources instead of `conf/`? (See the sketch below.)
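   To make that concrete, here is a minimal sketch (not from the PR) of what the suggestion could look like: the file would live under `core/src/test/resources/` and be read from the test classpath rather than from `conf/`. The resource name `fairscheduler-default.xml` is assumed for illustration.
   
   ```scala
   // Hypothetical sketch: place the XML at
   // core/src/test/resources/fairscheduler-default.xml (assumed name)
   // and load it from the classpath in tests instead of reading conf/.
   import java.io.InputStream
   
   val is: InputStream = Thread.currentThread().getContextClassLoader
     .getResourceAsStream("fairscheduler-default.xml")
   assert(is != null, "test resource not found on the classpath")
   ```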



##########
core/src/main/scala/org/apache/spark/scheduler/SchedulableBuilder.scala:
##########
@@ -86,10 +87,17 @@ private[spark] class FairSchedulableBuilder(val rootPool: Pool, sc: SparkContext
          logInfo(s"Creating Fair Scheduler pools from default file: $DEFAULT_SCHEDULER_FILE")
          Some((is, DEFAULT_SCHEDULER_FILE))
         } else {
-          logWarning("Fair Scheduler configuration file not found so jobs will 
be scheduled in " +
-            s"FIFO order. To use fair scheduling, configure pools in 
$DEFAULT_SCHEDULER_FILE or " +
-            s"set ${SCHEDULER_ALLOCATION_FILE.key} to a file that contains the 
configuration.")
-          None
+          val is = 
Utils.getSparkClassLoader.getResourceAsStream(DEFAULT_SCHEDULER_TEMPLATE_FILE)
+          if (is != null) {
+            logInfo("Creating Fair Scheduler pools from default template file: 
" +
+              s"$DEFAULT_SCHEDULER_TEMPLATE_FILE.")
+            Some((is, DEFAULT_SCHEDULER_TEMPLATE_FILE))
+          } else {
+            logWarning("Fair Scheduler configuration file not found so jobs 
will be scheduled in " +
+              s"FIFO order. To use fair scheduling, configure pools in 
$DEFAULT_SCHEDULER_FILE " +
+              s"or set ${SCHEDULER_ALLOCATION_FILE.key} to a file that 
contains the configuration.")
+            None
+          }

Review Comment:
   We should not rely on the template file - in deployments the template file can be invalid, and admins are not expecting it to be read by Spark.
   
   Instead, why not simply return `None` here?
   
   Note - if this is only needed for testing, we can special-case it via `spark.testing` (see the sketch below).
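   For illustration, a minimal sketch of what this comment suggests, assuming the surrounding `FairSchedulableBuilder` members (`Utils`, `logWarning`, `DEFAULT_SCHEDULER_FILE`, `DEFAULT_SCHEDULER_TEMPLATE_FILE`, `SCHEDULER_ALLOCATION_FILE`) are in scope; the helper name is hypothetical and this is not the PR's actual code:
   
   ```scala
   // Sketch only: fall back to the bundled template solely when the
   // spark.testing system property mentioned above is set; otherwise keep
   // the existing warning + None behaviour for real deployments.
   private def templateFallback(): Option[(java.io.InputStream, String)] = {
     if (sys.props.contains("spark.testing")) {
       val is = Utils.getSparkClassLoader.getResourceAsStream(DEFAULT_SCHEDULER_TEMPLATE_FILE)
       Option(is).map(s => (s, DEFAULT_SCHEDULER_TEMPLATE_FILE))
     } else {
       logWarning("Fair Scheduler configuration file not found so jobs will be scheduled in " +
         s"FIFO order. To use fair scheduling, configure pools in $DEFAULT_SCHEDULER_FILE or " +
         s"set ${SCHEDULER_ALLOCATION_FILE.key} to a file that contains the configuration.")
       None
     }
   }
   ```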



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

