[
https://issues.apache.org/jira/browse/FLUME-1779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553355#comment-13553355
]
Connor Woodson edited comment on FLUME-1779 at 1/15/13 12:53 AM:
-----------------------------------------------------------------
The simple way to fix this, which would enable the sink to work with the
failover processor, is to replace the return statement with a thrown exception.
However, when the sink isn't used with a failover processor, backoff is a much
more desirable return value, since the events may eventually become writable.
A config setting such as 'retryBadConnection', defaulted to true, could run the
return statement (keeping backward-compatible behavior) and throw the exception
when set to false; is there a better way?
Also, I haven't fully tested what happens when an IOException occurs; does the
bucket writer need to be removed as well?
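A minimal sketch of the proposed behavior, under stated assumptions: Status,
EventDeliveryException, and the handler below are simplified stand-ins, not the
real Flume classes. With 'retryBadConnection' true, an IOException yields
BACKOFF as today; with it false, the exception propagates so a
FailoverSinkProcessor can move on to the next sink.

```java
import java.io.IOException;

public class RetryBadConnectionSketch {
    // Simplified stand-in for Flume's Sink.Status
    enum Status { READY, BACKOFF }

    // Simplified stand-in for Flume's EventDeliveryException
    static class EventDeliveryException extends Exception {
        EventDeliveryException(String msg, Throwable cause) { super(msg, cause); }
    }

    private final boolean retryBadConnection;

    RetryBadConnectionSketch(boolean retryBadConnection) {
        this.retryBadConnection = retryBadConnection;
    }

    // Mimics the tail of process(): an IOException from the BucketWriter
    // either yields BACKOFF (current behavior) or is rethrown so a
    // failover processor can try the next sink in its queue.
    Status handleIOException(IOException e) throws EventDeliveryException {
        if (retryBadConnection) {
            return Status.BACKOFF; // default: keep backing off and retrying
        }
        throw new EventDeliveryException("HDFS IO error", e); // trigger failover
    }
}
```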
> Backoff returned from HDFSEventSink after IOException
> -----------------------------------------------------
>
> Key: FLUME-1779
> URL: https://issues.apache.org/jira/browse/FLUME-1779
> Project: Flume
> Issue Type: Bug
> Reporter: Jaroslaw Grabowski
> Attachments: st
>
>
> Status.BACKOFF is returned from HdfsEventSink.process() in the case of an
> IOException. This behavior prevents FailoverSinkProcessor from pushing the
> event to the next sink in the queue.
> In my test case, the IOException is caused by a serious HDFS failure, for
> example all DataNodes in the cluster being dead. After such a failure,
> BucketWriter throws an IOException and becomes unavailable - it probably
> should be removed from the sfWriters map.
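On the sfWriters point, a hypothetical sketch of evicting a failed writer so
the next attempt builds a fresh one. BucketWriter here is a stand-in with a
'broken' flag for illustration, not the real Flume class; the keying by path
and the eviction-on-IOException are assumptions about the intended fix.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class SfWritersEviction {
    // Stand-in for Flume's BucketWriter; 'broken' simulates dead DataNodes.
    static class BucketWriter {
        boolean broken = false;
        void append(String event) throws IOException {
            if (broken) throw new IOException("all DataNodes dead");
        }
    }

    private final Map<String, BucketWriter> sfWriters = new HashMap<>();

    BucketWriter writerFor(String path) {
        return sfWriters.computeIfAbsent(path, p -> new BucketWriter());
    }

    void append(String path, String event) throws IOException {
        BucketWriter w = writerFor(path);
        try {
            w.append(event);
        } catch (IOException e) {
            sfWriters.remove(path); // drop the broken writer so it isn't reused
            throw e;
        }
    }

    int openWriters() { return sfWriters.size(); }
}
```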