Github user JoshRosen commented on the pull request:

    https://github.com/apache/spark/pull/6542#issuecomment-107242078
  
    In principle, we should probably be using a proper test runner like `nose`
to handle test discovery and execution.  The reason we didn't do this
initially is that we need some custom code in `__main__` to emulate a shared
SparkContext fixture for the doctests, since putting SparkContext setup and
teardown code into each doctest would be very messy and slow.
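    For context, the shared-fixture pattern looks roughly like the sketch
below (the `_test` helper name and `local[4]` master are illustrative): the
module's `__main__` block creates one SparkContext, injects it into the
doctest globals, and stops it after all of the module's doctests have run, so
no individual doctest pays the startup and teardown cost.

        def _test():
            import doctest
            from pyspark.context import SparkContext

            # Share a single SparkContext across every doctest in this module
            # instead of starting and stopping one per doctest.
            globs = globals().copy()
            globs['sc'] = SparkContext('local[4]', 'PythonTest')
            (failure_count, test_count) = doctest.testmod(
                globs=globs, optionflags=doctest.ELLIPSIS)
            globs['sc'].stop()
            if failure_count:
                exit(-1)

        if __name__ == "__main__":
            _test()

    A conventional test runner would need an equivalent hook (e.g. a
module- or session-level fixture) before it could take over doctest execution.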
    
    In the medium term, we're going to want to refactor `run-tests` anyway in
order to make it easier to run subsets of the tests and to test against
specific Python versions (see #4269, for example).

