[
https://issues.apache.org/jira/browse/HBASE-19201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16243432#comment-16243432
]
Yung-An He commented on HBASE-19201:
------------------------------------
Hi,
Could you post the code that you copied into the Scala project?
> BulkLoading in HBaseContext in hbase-spark does not close connection
> --------------------------------------------------------------------
>
> Key: HBASE-19201
> URL: https://issues.apache.org/jira/browse/HBASE-19201
> Project: HBase
> Issue Type: Bug
> Components: hbase
> Affects Versions: 1.1.12
> Environment: I was using the CDH 5.11.1 version, but I checked the
> newest branch and the problem persists
> Reporter: Lucas Resch
> Labels: newbie
> Original Estimate: 2h
> Remaining Estimate: 2h
>
> Within the hbase-spark module an HBaseContext exists that provides utility
> functions to do bulkLoading data in HBase. I tried using this function in a
> streaming context, but after a while Zookeeper denies further connections
> since the maximum of connections per client is exhausted.
> This issue seems to be within HBaseContext, since the functions bulkLoad and
> bulkLoadThinRows open a connection via the ConnectionFactory but never
> close that connection.
> I copied the needed code into a new Scala project, added a conn.close() at
> the end of the function, and the problem went away.
> It seems like no one else has had this problem before. I'm guessing that's
> because almost no one uses this function within a streaming context, and a
> one-time call to it with RDDs might never reach the upper limit on connections.
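For reference, the fix the reporter describes amounts to closing the connection in a finally block so it is released even when the load fails. A minimal self-contained Scala sketch of that pattern, with a stand-in Connection in place of the real HBase classes (the names ConnectionFactory/bulkLoad mirror the issue but are illustrative here, not the actual hbase-spark API):

```scala
// Stand-in for org.apache.hadoop.hbase.client.Connection so the sketch
// runs without HBase on the classpath; it only tracks whether close() ran.
class Connection {
  var closed = false
  def close(): Unit = closed = true
}

// Stand-in for the HBase ConnectionFactory used inside HBaseContext.
object ConnectionFactory {
  def createConnection(): Connection = new Connection
}

// Sketch of a bulkLoad that always releases its connection. The
// try/finally is the part missing from HBaseContext.bulkLoad and
// bulkLoadThinRows per this report.
def bulkLoad(doLoad: Connection => Unit): Connection = {
  val conn = ConnectionFactory.createConnection()
  try {
    doLoad(conn) // write the HFiles, etc.
  } finally {
    conn.close() // the conn.close() the reporter added
  }
  conn
}
```

With this shape, repeated invocations from a streaming job release their ZooKeeper/HBase connections instead of accumulating until the per-client limit is hit.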
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)