GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/20441
[SPARK-23275] hive/tests have been failing when run locally on the laptop
(Mac) with OOM
## What changes were proposed in this pull request?
The Hive tests have been failing when run locally (macOS) after a recent
change on trunk. After the tests run for some time, they fail with an OOM:
`Error: unable to create new native thread`. I noticed the thread count climbs
to 2000+, after which these OOM errors begin. Most of the threads appear to
belong to the connection pool in the Hive metastore (`BoneCP-xxxxx-xxxx`). This
behaviour change started after the following change to `HiveClientImpl.reset()`:
``` scala
def reset(): Unit = withHiveState {
  try {
    // code
  } finally {
    runSqlHive("USE default")  // ===> this is causing the issue
  }
}
```
I am proposing to temporarily back out part of the fix made for SPARK-23000
to resolve this issue while we work out the exact reason for the sudden
increase in thread count.
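For reference, the runaway `BoneCP-*` thread growth described above can be observed with a small JVM snippet. This is a minimal sketch (in Java rather than Scala, and `countThreadsWithPrefix` is a hypothetical helper, not part of Spark or BoneCP); the `"BoneCP"` prefix matches the pool-thread naming seen in the failing runs:

```java
import java.util.Set;

public class ThreadCount {
    // Counts live JVM threads whose names start with the given prefix,
    // e.g. "BoneCP" for the metastore connection-pool threads.
    static long countThreadsWithPrefix(String prefix) {
        Set<Thread> threads = Thread.getAllStackTraces().keySet();
        return threads.stream()
                .filter(t -> t.getName().startsWith(prefix))
                .count();
    }

    public static void main(String[] args) {
        // In a plain JVM with no connection pool running, this prints 0;
        // in the failing test runs the count climbed past 2000.
        System.out.println(countThreadsWithPrefix("BoneCP"));
    }
}
```

Logging this count periodically during the test run is one way to confirm whether the back-out actually stops the leak.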
## How was this patch tested?
Ran hive/test multiple times on different machines.
Please review http://spark.apache.org/contributing.html before opening a
pull request.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/dilipbiswal/spark hive_tests
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/20441.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #20441
----
commit 983aa1839e6991b72f02639e0bd5355200da1d47
Author: Dilip Biswal <dbiswal@...>
Date: 2018-01-30T19:37:25Z
[SPARK-23275] hive/tests have been failing when run locally on the laptop
(Mac) with OOM
----
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]