Github user JoshRosen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/9263#discussion_r43189612
  
    --- Diff: python/pyspark/mllib/tests.py ---
    @@ -76,7 +76,8 @@
         pass
     
     ser = PickleSerializer()
    -sc = SparkContext('local[4]', "MLlib tests")
    +conf = SparkConf().set("spark.driver.allowMultipleContexts", "true")
    --- End diff ---
    
    Yeah, you shouldn't set this. The parallelism in `dev/run-tests` will 
actually launch separate JVMs, so that's not the cause of this problem. In 
general, you should _never_ set `spark.driver.allowMultipleContexts` (it was 
only added as an escape-hatch backwards-compatibility option for a feature that 
we never properly supported).
    
    There must be some other problem in the tests, likely due to test cleanup 
or SparkContext teardown not being executed properly.
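
    The teardown pattern being suggested can be sketched as follows. This is an illustrative sketch only, not the code from the PR: it uses a hypothetical `FakeSparkContext` stand-in (so the example is self-contained without a Spark installation) to mimic Spark's one-active-context rule; in a real test you would use `pyspark.SparkContext` the same way.

    ```python
    import unittest


    class FakeSparkContext:
        """Hypothetical stand-in for pyspark.SparkContext, used so this
        sketch runs without Spark; the setUp/tearDown pattern is the same
        with the real class."""
        _active = None

        def __init__(self, master, app_name):
            # Mimic Spark's rule: a second live context is an error unless
            # the first one was stopped (which is what
            # spark.driver.allowMultipleContexts papers over).
            if FakeSparkContext._active is not None:
                raise ValueError("Only one SparkContext may be active per JVM")
            FakeSparkContext._active = self
            self.stopped = False

        def stop(self):
            self.stopped = True
            FakeSparkContext._active = None


    class MLlibTestCase(unittest.TestCase):
        def setUp(self):
            # Create the context for each test...
            self.sc = FakeSparkContext("local[4]", "MLlib tests")

        def tearDown(self):
            # ...and always stop it, so the next test can create its own
            # context without needing allowMultipleContexts.
            self.sc.stop()

        def test_context_is_usable(self):
            self.assertFalse(self.sc.stopped)
    ```

    With teardown executed reliably like this, each test (or test module) gets a fresh context and the escape-hatch setting is unnecessary.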


