[ 
https://issues.apache.org/jira/browse/FLUME-1654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13488494#comment-13488494
 ] 

Denny Ye commented on FLUME-1654:
---------------------------------

This problem is caused by a FileNotFoundException from the NameNode. When the 
exception occurs (the close method also fails), there is no chance to set the 
'isOpen' status to false.

In my opinion, each sub-class of HDFSWriter should provide an 'exists' method to 
check the file's status on the NameNode. If the file has been deleted or moved, 
we can reopen a new target file to avoid events accumulating in the Channel.

I have modified the code and tested this approach; it works well.
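The recovery flow described above can be sketched in plain Java. This is only a 
model of the proposed pattern, not Flume's actual code: the class name 
ExistsCheckWriter is hypothetical, and java.nio.file stands in for Hadoop's 
FileSystem API (whose FileSystem.exists(Path) would play the role of the 
'exists' method on a real HDFSWriter sub-class).

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Minimal model of the proposed fix: before appending, ask the file system
// whether the target file still exists. If it was deleted or moved behind
// the writer's back, reset 'isOpen' and reopen a fresh target file instead
// of retrying a dead handle forever (which is what leaves events stuck in
// the Channel).
public class ExistsCheckWriter {
    private final Path target;
    private boolean isOpen;

    public ExistsCheckWriter(Path target) throws IOException {
        this.target = target;
        open();
    }

    private void open() throws IOException {
        // Create an empty target file and mark the writer open.
        Files.write(target, new byte[0], StandardOpenOption.CREATE);
        isOpen = true;
    }

    // The proposed 'exists' check: report whether the target is still there.
    public boolean exists() {
        return Files.exists(target);
    }

    public void append(byte[] data) throws IOException {
        if (!exists()) {
            // File vanished: clear the stale status and reopen a new file.
            isOpen = false;
            open();
        }
        Files.write(target, data, StandardOpenOption.APPEND);
    }

    public boolean isOpen() {
        return isOpen;
    }
}
```

With this check in place, deleting the destination file no longer wedges the 
writer: the next append notices the missing file and transparently reopens it.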
                
> Event always stay at Channel even if HDFS Sink's file has been removed
> ----------------------------------------------------------------------
>
>                 Key: FLUME-1654
>                 URL: https://issues.apache.org/jira/browse/FLUME-1654
>             Project: Flume
>          Issue Type: Bug
>          Components: Sinks+Sources
>    Affects Versions: v1.2.0, v1.3.0
>            Reporter: Denny Ye
>
> Environment of the data flow: one HDFS Sink is consuming events from a 
> Channel. The destination file of that HDFS Sink has been deleted by mistake. 
> The Sink retries the 'append' and fails, and it also cannot close the file. 
> The Sink fails indefinitely, and there is no other way to consume events from 
> the Channel (a single Channel maps to a single Sink).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
