GitHub user JoshRosen commented on the pull request:

    https://github.com/apache/spark/pull/3564#issuecomment-66717729
  
    I tried running `sbt/sbt core/test` with this PR but noticed some weird 
output interleaving from multiple test suites:
    
    ```
    [info] HashShuffleSuite:
    [info] ThreadingSuite:
    [info] ProactiveClosureSerializationSuite:
    [info] ShuffleNettySuite:
    [info] ExecutorAllocationManagerSuite:
    [info] FailureSuite:
    [info] FileSuite:
    [info] SparkContextSuite:
    [info] - accessing SparkContext form a different thread (9 seconds, 997 milliseconds)
    [info] - throws expected serialization exceptions on actions (158 milliseconds)
    [info] - mapPartitions transformations throw proactive serialization exceptions (29 milliseconds)
    [info] - map transformations throw proactive serialization exceptions (57 milliseconds)
    [info] - filter transformations throw proactive serialization exceptions (29 milliseconds)
    [info] - flatMap transformations throw proactive serialization exceptions (58 milliseconds)
    [info] - groupByKey without compression (10 seconds, 994 milliseconds)
    [info] - mapPartitionsWithIndex transformations throw proactive serialization exceptions (165 milliseconds)
    [info] - text files (9 seconds, 609 milliseconds)
    [info] KryoSerializerDistributedSuite:
    [info] - failure in a single-stage job (11 seconds, 604 milliseconds)
    [info] - groupByKey without compression (12 seconds, 363 milliseconds)
    [info] - accessing SparkContext form multiple threads (2 seconds, 19 milliseconds)
    [info] - verify min/max executors (16 seconds, 63 milliseconds)
    [info] - text files (compressed) (3 seconds, 731 milliseconds)
    [info] - failure in a two-stage job (3 seconds, 84 milliseconds)
    [info] - starting state (520 milliseconds)
    [info] - Only one SparkContext may be active at a time (14 seconds, 482 milliseconds)
    [info] - accessing multi-threaded SparkContext form multiple threads (2 seconds, 198 milliseconds)
    [info] - SequenceFiles (724 milliseconds)
    [info] - add executors (541 milliseconds)
    [info] - Can still construct a new SparkContext after failing to construct a previous one (583 milliseconds)
    [info] - parallel job execution (874 milliseconds)
    [info] - failure in a map stage (1 second, 388 milliseconds)
    [info] - SequenceFile (compressed) (772 milliseconds)
    [info] - add executors capped by num pending tasks (846 milliseconds)
    [info] - set local properties in different thread (822 milliseconds)
    [info] - failure because task results are not serializable (1 second, 392 milliseconds)
    [info] - SequenceFile with writable key (1 second, 143 milliseconds)
    [info] - Check for multiple SparkContexts can be disabled via undocumented debug option (2 seconds, 587 milliseconds)
    [info] - set and get local properties in parent-children thread (606 milliseconds)
    [info] - remove executors (1 second, 202 milliseconds)
    [info] - BytesWritable implicit conversion is correct (66 milliseconds)
    [info] - failure because task closure is not serializable (2 seconds, 67 milliseconds)
    [info] - interleaving add and remove (860 milliseconds)
    [info] - SequenceFile with writable value (1 second, 489 milliseconds)
    [info] - starting/canceling add timer (483 milliseconds)
    [info] - SequenceFile with writable key and value (667 milliseconds)
    [info] - starting/canceling remove timers (850 milliseconds)
    [info] - implicit conversions in reading SequenceFiles (1 second, 191 milliseconds)
    [...]
    ```
    
    You can also see this in Jenkins.  This is the sort of output-interleaving
    problem that I mentioned on the JIRA.  I wish there were a way to work
    around it, since it can make test failures really hard to debug.
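
    One possible workaround (just a sketch, not something this PR touches, and
    assuming the usual sbt 0.13-style test settings in `project/SparkBuild.scala`)
    would be to tweak the `core` project's test settings so suite output stops
    interleaving:

    ```scala
    // Sketch only, not part of this PR: run the suites in this project one at a
    // time so that their output is not interleaved (slower overall test run).
    parallelExecution in Test := false

    // Alternatively, keep the parallelism but have sbt buffer each test source
    // file's log output and flush it as one contiguous block when it completes
    // (exact behavior depends on the test framework's reporter).
    logBuffered in Test := true
    ```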

