Github user JoshRosen commented on the pull request:

    https://github.com/apache/spark/pull/4155#issuecomment-71727094
  
    > I don't exactly know how to test the full stack (as now this requires 
actual executors to throw the TaskEndReason back to the driver) while having 
the scheduler somehow submit two copies of a task in speculation-like fashion. 
Any suggestions here?
    
    This is a tricky problem: I don't think we have any end-to-end tests 
of task speculation with non-mocked components, since Spark won't schedule a 
speculative copy of a task on the same host as the original attempt.  Maybe we 
could add a flag that disables this "same host exclusion" in tests.
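    To make the idea concrete, a minimal sketch of what such a test-only flag might look like — `allowSameHost` and the object name are hypothetical, not an existing Spark API:

    ```scala
    // Hypothetical sketch: gate the "same host exclusion" behind a test-only flag
    // so a local-mode end-to-end test can run both task attempts on one machine.
    object SpeculationHostCheck {
      def canScheduleSpeculative(
          originalHost: String,
          candidateHost: String,
          allowSameHost: Boolean): Boolean = {
        // Normally a speculative attempt must land on a different host than the
        // original; with the flag set, tests can co-locate both attempts.
        allowSameHost || originalHost != candidateHost
      }
    }
    ```

    The scheduler would consult this check when offering a speculative task, and the test suite would flip the flag on.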
    
    There are some tricky testing issues here and I haven't fully thought 
through them yet.
