[
https://issues.apache.org/jira/browse/FLUME-2306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13885664#comment-13885664
]
Hari Shreedharan commented on FLUME-2306:
-----------------------------------------
idleTimeout should, in fact, lose its effect once the file gets closed. The idea
of idleTimeout is to close a file when no data has been written to it for a
period of time (usually smaller than the rollInterval). The only thing that
idleTimeout does is close the file and remove the bucket writer reference.
Earlier, a rollInterval-based roll did not remove the reference, which caused a
silent leak until maxFiles was hit. We fixed that in FLUME-2265, so now the
rollInterval-based roll also removes the reference, which prevents that leak.
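The lifecycle described above can be sketched as follows. This is a minimal,
hypothetical model (class and method names like IdleAwareWriter and
hasPendingIdleCallback are illustrative, not Flume's actual API): each append
resets an idle timer on a scheduler pool, the idle callback closes the file,
and an explicit close with cancellation mirrors the close(boolean) fix quoted
below so that a pending callback cannot keep the process alive at shutdown.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the idleTimeout mechanism: not Flume's real
// BucketWriter, just an illustration of the scheduling pattern.
class IdleAwareWriter {
    private final ScheduledExecutorService timedRollerPool =
        Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> idleFuture;
    private volatile boolean open = true;

    // Each write resets the idle timer; after 5 idle seconds the
    // callback closes the file on its own.
    synchronized void append(String event) {
        if (idleFuture != null) {
            idleFuture.cancel(false);
        }
        idleFuture = timedRollerPool.schedule(
            this::closeQuietly, 5, TimeUnit.SECONDS);
    }

    private synchronized void closeQuietly() {
        open = false;
    }

    // Mirrors the proposed close(boolean): on sink shutdown, cancel the
    // pending idle callback so nothing fires after the writer is gone.
    synchronized void close(boolean cancelIdleCallback) {
        open = false;
        if (cancelIdleCallback && idleFuture != null && !idleFuture.isDone()) {
            idleFuture.cancel(false);
            idleFuture = null;
        }
        timedRollerPool.shutdownNow();
    }

    boolean isOpen() {
        return open;
    }

    synchronized boolean hasPendingIdleCallback() {
        return idleFuture != null && !idleFuture.isDone();
    }
}
```

With this shape, stopping the sink calls close(true), which both closes the
file and tears down the timer, so the JVM is free to exit.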
> onIdleCallback is not canceled when stop hdfs sink
> --------------------------------------------------
>
> Key: FLUME-2306
> URL: https://issues.apache.org/jira/browse/FLUME-2306
> Project: Flume
> Issue Type: Bug
> Components: Sinks+Sources
> Affects Versions: v1.4.0
> Reporter: chenshangan
>
> The hdfs sink caches 5000 open files by default, which costs quite a lot of
> memory in total when using an lzo CompressedStream. We should enable the
> idleTimeout feature to resolve this, but there seems to be a bug in the
> feature: when stopping Flume, HDFSWriter does not cancel the idle scheduler,
> which might prevent Flume from stopping. So I extended the current close()
> method in HDFSWriter as follows, and use it in HDFSEventSink when stopping
> the sink component:
> /**
>  * When stopping Flume, all schedulers should be canceled.
>  * @param cancelIdleCallback whether to cancel the pending idle callback
>  * @throws IOException
>  * @throws InterruptedException
>  */
> public void close(boolean cancelIdleCallback) throws IOException,
>     InterruptedException {
>   close();
>   if (cancelIdleCallback) {
>     if (idleFuture != null && !idleFuture.isDone()) {
>       idleFuture.cancel(false); // do not cancel myself if running!
>       idleFuture = null;
>     }
>   }
> }
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)