Github user attilapiros commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21068#discussion_r185447393
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala ---
    @@ -170,8 +170,7 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, val rpcEnv: Rp
             if (executorDataMap.contains(executorId)) {
               executorRef.send(RegisterExecutorFailed("Duplicate executor ID: " + executorId))
               context.reply(true)
    -        } else if (scheduler.nodeBlacklist != null &&
    -          scheduler.nodeBlacklist.contains(hostname)) {
    +        } else if (scheduler.nodeBlacklistWithExpiryTimes.contains(hostname)) {
    --- End diff --
    
    I like the current solution. In production nodeBlacklist really cannot 
be null; that null only came from the tests. So we adapted the tests to be 
closer to the production case (and not the other way around: adapting the 
production code to guard against a side effect of the testing).
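    
    For illustration, a minimal sketch of what I mean by adapting the tests 
(the mock setup below is hypothetical; it assumes Mockito is used and that 
nodeBlacklistWithExpiryTimes returns a Map[String, Long] of hostname to 
expiry time, as the diff above suggests):
    
    ```scala
    import org.mockito.Mockito.{mock, when}
    
    import org.apache.spark.scheduler.TaskSchedulerImpl
    
    // Instead of leaving the blacklist null (a state production code never
    // sees), stub the mocked scheduler to report an empty blacklist, which
    // is what a real TaskSchedulerImpl reports when no node is blacklisted.
    val scheduler = mock(classOf[TaskSchedulerImpl])
    when(scheduler.nodeBlacklistWithExpiryTimes).thenReturn(Map.empty[String, Long])
    ```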


---
