[ https://issues.apache.org/jira/browse/HIVE-18214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16310956#comment-16310956 ]

Peter Vary commented on HIVE-18214:
-----------------------------------

Hi [~stakiar],
Thanks for looking into this.
Quick question, since I am not entirely comfortable with how RemoteDriver works:
- Is it possible to run into this race condition in a normal use case? For 
example, changing Spark configuration values and issuing commands which 
reinitialize the Driver in quick succession?

If this is only a testing issue, then I would be more comfortable with a 
solution where we do not expose new methods solely for this purpose. If it is 
an issue which can occur in a production environment, we might want to 
encapsulate the handling inside the RemoteDriver object.

What do you think?
Peter

> Flaky test: TestSparkClient
> ---------------------------
>
>                 Key: HIVE-18214
>                 URL: https://issues.apache.org/jira/browse/HIVE-18214
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Sahil Takiar
>         Attachments: HIVE-18214.1.patch
>
>
> Looks like there is a race condition in {{TestSparkClient#runTest}}. The test 
> creates a {{RemoteDriver}} in memory, which creates a {{JavaSparkContext}}. A 
> new {{JavaSparkContext}} is created for each test that is run. There is a 
> race condition where the {{RemoteDriver}} isn't given enough time to shut 
> down, so when the next test starts, it creates another {{JavaSparkContext}}, 
> which causes an exception like 
> {{org.apache.spark.SparkException: Only one SparkContext may be running in 
> this JVM (see SPARK-2243)}}.
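The race described above can be illustrated without Spark at all. The sketch below is a minimal stand-alone model: {{SingletonContext}} and {{ToyDriver}} are hypothetical stand-ins (not Hive or Spark classes) for a one-per-JVM context and a driver that tears it down asynchronously. Having each "test" wait on a shutdown latch before creating the next context is one way to avoid the one-context-per-JVM conflict:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Stand-in for JavaSparkContext: at most one live instance per JVM.
class SingletonContext {
    private static final AtomicBoolean RUNNING = new AtomicBoolean(false);

    SingletonContext() {
        if (!RUNNING.compareAndSet(false, true)) {
            throw new IllegalStateException(
                "Only one SingletonContext may be running in this JVM");
        }
    }

    void stop() {
        RUNNING.set(false);
    }
}

// Stand-in for RemoteDriver: stops its context on a background thread and
// exposes a latch so callers can wait for the shutdown to actually finish.
class ToyDriver {
    private final SingletonContext ctx = new SingletonContext();
    private final CountDownLatch shutdownLatch = new CountDownLatch(1);

    void shutdown() {
        new Thread(() -> {
            ctx.stop();
            shutdownLatch.countDown();
        }).start();
    }

    boolean awaitShutdown(long timeoutMs) throws InterruptedException {
        return shutdownLatch.await(timeoutMs, TimeUnit.MILLISECONDS);
    }
}

public class Main {
    // Runs n back-to-back "tests"; each waits for the previous driver's
    // shutdown before the next iteration creates a new context.
    static boolean runSequentialTests(int n) throws InterruptedException {
        for (int i = 0; i < n; i++) {
            ToyDriver driver = new ToyDriver();
            driver.shutdown();
            // Without this wait, the next iteration races the async stop()
            // and can hit the "only one context" exception.
            if (!driver.awaitShutdown(5000)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runSequentialTests(3) ? "ok" : "timeout");
    }
}
```

Whether the equivalent wait belongs in the test harness or inside RemoteDriver itself is exactly the encapsulation question raised above.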



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
