Github user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13603#discussion_r67943657
  
    --- Diff: core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala ---
    @@ -96,22 +120,13 @@ class TaskSchedulerImplSuite extends SparkFunSuite with LocalSparkContext with L
         taskDescriptions = taskScheduler.resourceOffers(multiCoreWorkerOffers).flatten
         assert(1 === taskDescriptions.length)
         assert("executor0" === taskDescriptions(0).executorId)
    +    assert(!failedTaskSet)
       }
     
       test("Scheduler does not crash when tasks are not serializable") {
    -    sc = new SparkContext("local", "TaskSchedulerImplSuite")
    --- End diff --
    
    mentioned this below as well, but just to be clear -- I was mistaken: that
    bug doesn't affect the case where the tasks aren't serializable. That case
    still correctly fails with a serialization error. The error I was
    encountering is in a different case ("multiple CPUs per task", since there
    you never add the executors, just the hosts), and it still needs a
    workaround for now, which I've added.
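    
    To make that concrete, here's a rough sketch of that case (illustrative, not
    the exact test code from this PR; it assumes the suite's usual
    TaskSchedulerImpl setup with a stub backend/DAGScheduler and
    spark.task.cpus set to 2):
    
        // Sketch of the "multiple CPUs per task" case (illustrative values).
        // The scheduler only ever hears about these executors and hosts through
        // the offers below -- nothing registers the executors up front, which is
        // why the workaround is still needed.
        val taskCpus = 2
        val multiCoreWorkerOffers = IndexedSeq(
          WorkerOffer("executor0", "host0", taskCpus),
          WorkerOffer("executor1", "host1", 1))
        val taskDescriptions = taskScheduler.resourceOffers(multiCoreWorkerOffers).flatten
        // Only the first offer has >= spark.task.cpus cores, so exactly one task
        // is scheduled, and it lands on executor0.
        assert(taskDescriptions.length === 1)
        assert(taskDescriptions(0).executorId === "executor0")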

