I see many "HDFS IO Error" entries in the Flume log whenever the Hadoop process is restarted.
Flume never recovers, even after Hadoop comes back up successfully. The only
workaround we have found is to restart both Flume and the Flume client to bring the whole setup back.

So my question is: how does Flume handle this kind of failure, e.g. when Hadoop
is terminated unexpectedly? From what I can see, it does nothing but
report "HDFS IO Error".
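For reference, this is roughly the kind of HDFS sink configuration we are running. This is a minimal sketch, assuming a Flume 1.x agent; the agent/sink names and the HDFS path are placeholders, and the retry-related properties (`hdfs.callTimeout`, `hdfs.closeTries`, `hdfs.retryInterval`) may not exist in older releases:

```properties
# Hypothetical Flume 1.x HDFS sink config (agent/sink names and path are placeholders).
# The last three properties influence how the sink behaves when HDFS is unreachable.
agent1.sinks.hdfsSink.type = hdfs
agent1.sinks.hdfsSink.hdfs.path = hdfs://namenode:8020/flume/events
agent1.sinks.hdfsSink.hdfs.callTimeout = 30000
# 0 = keep retrying the file close until it succeeds (if supported by your version)
agent1.sinks.hdfsSink.hdfs.closeTries = 0
# seconds between consecutive close attempts
agent1.sinks.hdfsSink.hdfs.retryInterval = 180
```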

Is my understanding wrong?

Thanks
Daiqian
