-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6578/#review10299
-----------------------------------------------------------


I haven't tested the stated problem scenario, but here are some issues with this 
patch:

1. Because the timestamp is event-oriented, not local-time-oriented, you cannot 
simply close the previously timestamped file: with a fan-in flow from upstream 
agents, you are practically guaranteed to have periods where "old" and "new" 
events arrive interleaved. This patch would therefore cause a large number of 
new files to be created (thrashing).
2. The LRU will eventually close the file; if you want a guaranteed close, 
specifying a rollInterval should provide that guarantee.
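For what it's worth, the guarantee in point 2 is a matter of sink configuration. 
A minimal sketch (agent/sink names and the HDFS path are placeholders; the 
hdfs.* keys are the standard HDFS sink properties):

```
# Hypothetical agent "a1" and sink "k1"; only the hdfs.* keys matter here.
a1.sinks.k1.type = hdfs
# Date-bucketed path -- the scenario from FLUME-1350
a1.sinks.k1.hdfs.path = hdfs://namenode/flume/events/%Y-%m-%d
# Force a roll (and close) of each bucket file after 300 seconds,
# regardless of whether events for that bucket are still arriving
a1.sinks.k1.hdfs.rollInterval = 300
# Bound the number of open bucket writers; least-recently-used
# writers beyond this limit are closed (the LRU from point 2)
a1.sinks.k1.hdfs.maxOpenFiles = 500
```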

Are you sure the rollInterval is not respected in this case?

- Mike Percy


On Aug. 14, 2012, 6:39 p.m., Yongcheng Li wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/6578/
> -----------------------------------------------------------
> 
> (Updated Aug. 14, 2012, 6:39 p.m.)
> 
> 
> Review request for Flume.
> 
> 
> Description
> -------
> 
> This resolves FLUME-1350, which describes a problem in Flume where HDFS file 
> handles are not closed properly when date bucketing is used. The fix adds a 
> map between the sink's path and its real path. Whenever a new real path is 
> generated due to date bucketing, it closes the BucketWriter associated with 
> the existing real path and updates the mapping between the sink's path and 
> its real path.
> 
> 
> This addresses bug FLUME-1350.
>     https://issues.apache.org/jira/browse/Flume-1350
> 
> 
> Diffs
> -----
> 
>   flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs/HDFSEventSink.java fcb9642 
> 
> Diff: https://reviews.apache.org/r/6578/diff/
> 
> 
> Testing
> -------
> 
> The fix has been manually tested.
> 
> 
> Thanks,
> 
> Yongcheng Li
> 
>
