[
https://issues.apache.org/jira/browse/DRILL-4268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15139822#comment-15139822
]
Ian Maloney commented on DRILL-4268:
------------------------------------
I've upgraded to Drill 1.4 and have not seen this issue yet. If it does not
show up in the next few weeks, I think it has been fixed in Drill 1.4.
> Possible resource leak leading to SocketException: Too many open files
> ----------------------------------------------------------------------
>
> Key: DRILL-4268
> URL: https://issues.apache.org/jira/browse/DRILL-4268
> Project: Apache Drill
> Issue Type: Bug
> Affects Versions: 1.2.0
> Environment: RHEL 6 running against Hive storage type
> Reporter: Ian Maloney
>
> I have a Java app accessing Drill 1.2 via JDBC, which runs hundreds of counts on
> various tables. No concurrency is being used. The JDBC URL uses the format:
> jdbc:drill:drillbit=a-bits-hostname
> Hanifi suggested I check for open file descriptors using:
> lsof -a -p DRILL_PID | wc -l
> which I ran on the two nodes currently running Drill, both before and
> after restarting.
> Node from JDBC connection string (which had been previously restarted):
> Before: 396
> After: 396
> Other node:
> Before: 14
> After: 395
> The error, "Too many open files", persists after restarting the Drillbits.
> Opened as a result of this thread:
> http://mail-archives.apache.org/mod_mbox/drill-user/201601.mbox/browser
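For reference, here is a minimal sketch (not the reporter's actual code) of the kind of count loop described above, using try-with-resources so that statements and result sets are always closed on the client side. The table names are placeholders, and the Drill JDBC driver jar is assumed to be on the classpath; the JDBC URL format is the one quoted in the report.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.List;

    public class DrillCountRunner {
        // URL format from the report; hostname and table names are placeholders.
        private static final String URL = "jdbc:drill:drillbit=a-bits-hostname";

        public static void main(String[] args) throws Exception {
            List<String> tables = List.of("hive.db.table_a", "hive.db.table_b");
            // try-with-resources closes the result set, statement and connection
            // even when a query fails, so no client-side handles are leaked.
            try (Connection conn = DriverManager.getConnection(URL);
                 Statement stmt = conn.createStatement()) {
                for (String table : tables) {
                    try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM " + table)) {
                        if (rs.next()) {
                            System.out.println(table + ": " + rs.getLong(1));
                        }
                    }
                }
            }
        }
    }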
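The lsof command quoted above counts descriptors held by the Drillbit process on the server. As a complementary sketch (an illustration, not part of the original report), the client JVM can also log its own descriptor usage through the JDK's UnixOperatingSystemMXBean, which is available on Linux platforms such as the RHEL 6 environment described:

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdUsage {
        public static void main(String[] args) {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            // Only Unix-like platforms expose file-descriptor counts.
            if (os instanceof UnixOperatingSystemMXBean) {
                UnixOperatingSystemMXBean unixOs = (UnixOperatingSystemMXBean) os;
                System.out.println("Open FDs: " + unixOs.getOpenFileDescriptorCount()
                        + " of max " + unixOs.getMaxFileDescriptorCount());
            }
        }
    }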