> On April 28, 2014, 6:34 a.m., Mike Percy wrote:
> > flume-ng-doc/sphinx/FlumeUserGuide.rst, line 1600
> > <https://reviews.apache.org/r/20698/diff/2/?file=568296#file568296line1600>
> >
> >     AFAICT, "Unlimited" is not actually a valid value to have in a config 
> > file, so I think people may get confused when trying to specify it. How 
> > about using 0 as the default? Of course, we'd need to indicate in the docs 
> > that 0 means it will retry forever. I'd also humbly suggest a rename of 
> > this property to "maxCloseAttempts" if we do the above, since it should be 
> > intuitive that 0 would not make sense for max # of close attempts.
> >     
> >     I do agree that conceptually, unlimited is the right default.
> >     
> >     Minor nit: end-of-line whitespace on this line

The reason I used hdfs.closeTries is that the parameter existed previously (it 
was parsed in the AbstractHDFSWriter class). Since that parameter did the same 
thing, and its old handling has been removed in this patch, I want to make sure 
existing configurations that use the old parameter keep working as they used to.
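A minimal sketch of what honoring the old key while adopting Mike's 0-means-unlimited suggestion could look like (hypothetical code, not the actual patch; java.util.Properties stands in for Flume's Context, and the mapping of non-positive values to unlimited is illustrative):

```java
import java.util.Properties;

// Hypothetical sketch: keep parsing the pre-existing hdfs.closeTries key,
// with a default of 0 meaning "retry forever".
class CloseTriesConfigSketch {
    static int parseCloseTries(Properties conf) {
        int tries = Integer.parseInt(
            conf.getProperty("hdfs.closeTries", "0").trim());
        // 0 (or any non-positive value) is treated as unlimited retries.
        return tries <= 0 ? Integer.MAX_VALUE : tries;
    }
}
```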


> On April 28, 2014, 6:34 a.m., Mike Percy wrote:
> > flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs/BucketWriter.java,
> >  line 636
> > <https://reviews.apache.org/r/20698/diff/2/?file=568298#file568298line636>
> >
> >     Worth a comment that the BucketWriter may be kept alive, and that this 
> > method may be called by the scheduledClose Callable on the timedRollerPool 
> > even after the BucketWriter is closed.
> >     
> >     However, since there is an implicit ref to the BucketWriter from the 
> > Callable, I don't see the point of copying in all these args when we could 
> > get them from the BucketWriter object.

The reason is that when the BucketWriter#close method is called due to count- or 
size-based rotation, we reuse the same bucket writer (we close and then 
immediately reopen), which means the bucketPath and targetPath fields of the 
BucketWriter instance change to the in-progress path and final path of the new 
file. So if an old file failed to close/rename, we need to make sure we can 
still rename it later, which is why those values are copied into the method's 
arguments rather than read from the instance. I will note that in a comment.
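A minimal sketch of the point above (hypothetical names mirroring BucketWriter's fields, not the actual Flume code): copying the paths into locals before handing them to the scheduled callable pins them to the file being closed, even after the writer's fields move on to the next file.

```java
import java.util.function.Supplier;

// Hypothetical sketch: the writer instance is reused across rotations,
// so a deferred close/rename must capture the paths it needs up front.
class BucketWriterSketch {
    String bucketPath = "/flume/events/file1.tmp"; // in-progress file
    String targetPath = "/flume/events/file1";     // final name after rename

    // Capture the paths of the file being closed NOW, so a retry that
    // fires after rotation still renames the old file, not the new one.
    Supplier<String> scheduledClose() {
        final String oldBucketPath = bucketPath;
        final String oldTargetPath = targetPath;
        return () -> "rename " + oldBucketPath + " -> " + oldTargetPath;
    }

    void reopenForNextFile() {
        // Rotation overwrites the instance fields with the new file's paths.
        bucketPath = "/flume/events/file2.tmp";
        targetPath = "/flume/events/file2";
    }
}
```

If the callable read the instance fields instead of the captured locals, a retry firing after rotation would try to rename the wrong (still-open) file.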


- Hari


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/20698/#review41576
-----------------------------------------------------------


On April 28, 2014, 9:36 p.m., Hari Shreedharan wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/20698/
> -----------------------------------------------------------
> 
> (Updated April 28, 2014, 9:36 p.m.)
> 
> 
> Review request for Flume.
> 
> 
> Bugs: FLUME-2357
>     https://issues.apache.org/jira/browse/FLUME-2357
> 
> 
> Repository: flume-git
> 
> 
> Description
> -------
> 
> Much of the size of the patch is due to a couple of file renames. Otherwise 
> the patch itself is pretty simple. In the BucketWriter, if a close fails, we 
> simply reschedule the close to happen sometime later, until it finally 
> succeeds or until we hit a maximum count. Added a test case too. This depends 
> on the presence of the isFileClosed method in the HDFS client API. If the 
> method is absent, no reattempts are made.
> 
> 
> Diffs
> -----
> 
>   flume-ng-doc/sphinx/FlumeUserGuide.rst 7b918ed 
>   
> flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs/AbstractHDFSWriter.java
>  da0466d 
>   
> flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs/BucketWriter.java
>  e82d13d 
>   
> flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs/HDFSCompressedDataStream.java
>  5518547 
>   
> flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs/HDFSDataStream.java
>  e20d1ee 
>   
> flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs/HDFSEventSink.java
>  4ea78c1 
>   
> flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs/HDFSSequenceFile.java
>  5fe9f1b 
>   
> flume-ng-sinks/flume-hdfs-sink/src/test/java/org/apache/flume/sink/hdfs/MockDataStream.java
>  PRE-CREATION 
>   
> flume-ng-sinks/flume-hdfs-sink/src/test/java/org/apache/flume/sink/hdfs/MockFileSystem.java
>  PRE-CREATION 
>   
> flume-ng-sinks/flume-hdfs-sink/src/test/java/org/apache/flume/sink/hdfs/MockFileSystemCloseRetryWrapper.java
>  b5d89e6 
>   
> flume-ng-sinks/flume-hdfs-sink/src/test/java/org/apache/flume/sink/hdfs/MockFsDataOutputStream.java
>  PRE-CREATION 
>   
> flume-ng-sinks/flume-hdfs-sink/src/test/java/org/apache/flume/sink/hdfs/MockFsDataOutputStreamCloseRetryWrapper.java
>  1d8c140 
>   
> flume-ng-sinks/flume-hdfs-sink/src/test/java/org/apache/flume/sink/hdfs/TestBucketWriter.java
>  b7cc586 
>   
> flume-ng-sinks/flume-hdfs-sink/src/test/java/org/apache/flume/sink/hdfs/TestHDFSEventSink.java
>  87918d1 
>   
> flume-ng-sinks/flume-hdfs-sink/src/test/java/org/apache/flume/sink/hdfs/TestUseRawLocalFileSystem.java
>  4476530 
>   pom.xml 2aa0ad1 
> 
> Diff: https://reviews.apache.org/r/20698/diff/
> 
> 
> Testing
> -------
> 
> Added new unit test. All current tests pass.
> 
> 
> Thanks,
> 
> Hari Shreedharan
> 
>
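
The reschedule-until-success-or-max-count behavior described in the review request above can be sketched as follows (hypothetical code, not the actual BucketWriter; the 5-try maximum, the short backoff, and the stubbed tryClose are illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: retry a failing close until it succeeds or a
// maximum attempt count is reached, by rescheduling on a shared pool.
class CloseRetryLoop {
    private final ScheduledExecutorService pool =
        Executors.newSingleThreadScheduledExecutor();
    private final int maxCloseTries = 5;               // illustrative limit
    private final AtomicInteger attempts = new AtomicInteger();
    private final CountDownLatch done = new CountDownLatch(1);

    // Stands in for the real close+rename against HDFS; here it is stubbed
    // to fail twice and then succeed on the third attempt.
    private boolean tryClose() {
        return attempts.incrementAndGet() >= 3;
    }

    void scheduleClose() {
        pool.schedule(() -> {
            if (tryClose() || attempts.get() >= maxCloseTries) {
                done.countDown();   // success, or gave up at the limit
                pool.shutdown();
            } else {
                scheduleClose();    // failed: reschedule and try again later
            }
        }, 10, TimeUnit.MILLISECONDS);
    }

    // Blocks until the loop finishes; returns how many attempts were made.
    int runToCompletion() {
        scheduleClose();
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return attempts.get();
    }
}
```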
