[ https://issues.apache.org/jira/browse/FLUME-1654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797467#comment-13797467 ]
Suhas Satish commented on FLUME-1654:
-------------------------------------
I am hitting the same issue in a production environment.
Here's the stack trace:
10 Oct 2013 16:55:42,092 WARN [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.BucketWriter.append:430) - Caught IOException while closing file (maprfs:///flume_import/2013/10/16//logdata-2013-10-16-50-03.1381449008596.tmp). Exception follows.
java.io.IOException: 2049.112.5249612 /flume_import/2013/10/16/logdata-2013-10-16-50-03.1381449008596.tmp (Stale file handle)
WARN [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.HDFSEventSink.process:418) - HDFS IO error
java.io.IOException: 2049.112.5249612
A possible solution would be to wrap a nested try/catch block in
HDFSEventSink.java around
bucketWriter.append(event);
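
For illustration, here is a minimal sketch of that nested try/catch. This is
not a tested patch: the sfWriters map and lookupPath variable are assumed to
be in scope from the surrounding process() method, and BucketWriter is
assumed to throw a plain IOException from append()/close(); names and
signatures may differ between Flume versions.

    try {
      bucketWriter.append(event);
    } catch (IOException appendException) {
      // Assumed failure mode: the file under the writer is gone
      // ("Stale file handle"), so close and discard this writer
      // instead of retrying the dead handle forever.
      try {
        bucketWriter.close();
      } catch (IOException closeException) {
        // Close can fail for the same reason; discard the writer anyway.
      }
      sfWriters.remove(lookupPath); // hypothetical writer-cache eviction
      // Rethrow so the transaction rolls back and the event stays
      // in the Channel, to be retried against a freshly opened file.
      throw appendException;
    }

Rethrowing preserves at-least-once delivery: the transaction rolls back, the
event remains in the Channel, and the next process() call opens a new .tmp
file instead of appending to the stale handle.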
> Event always stay at Channel even if HDFS Sink's file has been removed
> ----------------------------------------------------------------------
>
> Key: FLUME-1654
> URL: https://issues.apache.org/jira/browse/FLUME-1654
> Project: Flume
> Issue Type: Bug
> Components: Sinks+Sources
> Affects Versions: v1.2.0, v1.3.0
> Reporter: Denny Ye
>
> Environment of the data flow: one HDFS Sink is consuming events from a
> Channel. The destination file of that HDFS Sink was deleted by mistake. The
> Sink retried 'append' and failed, and it also cannot close that file. The
> Sink fails indefinitely, and there is no other way to consume events from
> the Channel (a single Channel maps to a single Sink).