Github user marmbrus commented on the pull request:

    https://github.com/apache/spark/pull/3121#issuecomment-62032343
  
    @JoshRosen I don't think the situation is quite as dire as you suggest 
(every line of test code?).  We can add logic to `QueryTest` and the other base 
test classes that creates a `SQLContext` per suite with whatever `SparkContext` 
you want.  We can then turn the data objects into traits that are mixed into 
the test cases that need them.  As long as some SQLContext is in scope, and the 
required tables are added to that context during the constructor, I don't 
anticipate any major problems.
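
    For the `SQLContext` side, here is a minimal sketch of what I mean. All 
of the names (`SharedSQLContextSuite`, `registerTestTables`, `SimpleTestData`) 
are illustrative, not existing classes:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.scalatest.{BeforeAndAfterAll, FunSuite}

// Base class: owns one SQLContext per suite, built from whatever
// SparkContext configuration the suite wants.
abstract class SharedSQLContextSuite extends FunSuite with BeforeAndAfterAll {
  @transient protected var sqlContext: SQLContext = _

  override def beforeAll(): Unit = {
    super.beforeAll()
    val conf = new SparkConf().setMaster("local").setAppName(getClass.getSimpleName)
    sqlContext = new SQLContext(new SparkContext(conf))
    registerTestTables()  // hook for the data mix-ins below
  }

  override def afterAll(): Unit = {
    sqlContext.sparkContext.stop()
    super.afterAll()
  }

  /** Overridden by test-data traits to register their tables. */
  protected def registerTestTables(): Unit = {}
}

// Test data as a mix-in trait: any suite that mixes this in gets the
// table registered against its own SQLContext before tests run.
trait SimpleTestData extends SharedSQLContextSuite {
  override protected def registerTestTables(): Unit = {
    super.registerTestTables()
    val ctx = sqlContext        // stable identifier for the implicit import
    import ctx.createSchemaRDD  // implicit RDD[Product] => SchemaRDD
    ctx.sparkContext
      .parallelize((1 to 10).map(i => (i, s"str$i")))
      .registerTempTable("testData")
  }
}
```

    A suite would then just declare `class MyQuerySuite extends 
SharedSQLContextSuite with SimpleTestData` and query `testData` directly.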
    
    Hive is going to be another story.  The whole reason for this singleton 
context pattern is that we have problems initializing more than one HiveContext 
in a single JVM.  If you try to do that, all DDL operations fail with a 
mysterious `Database default does not exist` error.  We have never been able to 
figure out what sort of global state Hive relies on (though admittedly tracking 
it down has not been a very high priority, since a global context with a robust 
`.reset()` has worked pretty well so far).
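
    For reference, the singleton pattern looks roughly like the sketch below. 
`SingletonTestHive` is an illustrative name (the real thing is `TestHive`), and 
the body of `reset()` is only a simplified stand-in for the actual cleanup:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// One process-wide HiveContext: constructing a second one in the same
// JVM is what triggers the "Database default does not exist" failures.
object SingletonTestHive extends HiveContext(
    new SparkContext(new SparkConf().setMaster("local").setAppName("TestHive"))) {

  /** Best-effort cleanup so the next suite starts from a clean slate. */
  def reset(): Unit = {
    // Drop every table left behind by the previous suite. Assumes SHOW
    // TABLES returns the table name in column 0.
    sql("SHOW TABLES").collect().foreach { row =>
      sql(s"DROP TABLE IF EXISTS ${row.getString(0)}")
    }
    // Restore any Hive/SQL configuration the suite changed.
    sql("RESET")
  }
}
```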

