HeartSaVioR commented on a change in pull request #23743: [SPARK-26843][MESOS] 
Use ConfigEntry for hardcoded configs for "mesos" resource manager
URL: https://github.com/apache/spark/pull/23743#discussion_r255358549
 
 

 ##########
 File path: 
resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala
 ##########
 @@ -80,9 +81,9 @@ class MesosFineGrainedSchedulerBackendSuite
   }
 
   test("Use configured mesosExecutor.cores for ExecutorInfo") {
-    val mesosExecutorCores = 3
+    val mesosExecutorCores = 3.0
 
 Review comment:
  I'm honestly not an expert on this, but the documentation of `spark.mesos.mesosExecutor.cores` clarifies that it can be a floating point number, so the type still makes sense.
   
   https://github.com/apache/spark/blob/master/docs/running-on-mesos.md
   
   ```
   <tr>
     <td><code>spark.mesos.mesosExecutor.cores</code></td>
     <td><code>1.0</code></td>
     <td>
       (Fine-grained mode only) Number of cores to give each Mesos executor. 
This does not
       include the cores used to run the Spark tasks. In other words, even if 
no Spark task
       is being run, each Mesos executor will occupy the number of cores 
configured here.
       The value can be a floating point number.
     </td>
   </tr>
   ```
   
  The default value is also documented as `1.0` rather than `1`. I'll update the config entry to keep it in sync with the docs.
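  For reference, a config entry declared with `doubleConf` would make the default read as `1.0` and accept fractional values. This is only a sketch assuming Spark's internal `ConfigBuilder` API; the actual entry name and placement in the PR may differ:
  
  ```scala
  // Sketch: declaring the entry as a double so it matches the documented
  // default of 1.0 and accepts fractional core counts.
  private[spark] val MESOS_EXECUTOR_CORES =
    ConfigBuilder("spark.mesos.mesosExecutor.cores")
      .doc("(Fine-grained mode only) Number of cores to give each Mesos " +
        "executor. The value can be a floating point number.")
      .doubleConf
      .createWithDefault(1.0)
  ```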

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]

