steveloughran commented on code in PR #5477:
URL: https://github.com/apache/hadoop/pull/5477#discussion_r1143256689
##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java:
##########
@@ -1557,6 +1560,33 @@ public void testListFiles() throws IOException {
}
}
+ @Test
+ public void testListFilesRecursive() throws IOException {
+ Configuration conf = getTestConfiguration();
+
+ try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build()) {
+ DistributedFileSystem fs = cluster.getFileSystem();
+
+ // Create some directories and files.
+ Path dir = new Path("/dir");
+ Path subDir1 = fs.makeQualified(new Path(dir, "subDir1"));
+ Path subDir2 = fs.makeQualified(new Path(dir, "subDir2"));
+
+ fs.mkdirs(subDir1);
Review Comment:
you don't need this...create does it for you
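A minimal sketch of the point, assuming the default FileSystem.create(Path) overload, which creates any missing parent directories on its own:

```java
// create() builds the parent chain itself, so the explicit mkdirs()
// calls are redundant once each subdirectory receives a file
fs.create(new Path(subDir1, "foo3")).close();  // also creates /dir/subDir1
fs.create(new Path(subDir2, "foo4")).close();  // also creates /dir/subDir2
```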
##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java:
##########
@@ -1557,6 +1560,33 @@ public void testListFiles() throws IOException {
}
}
+ @Test
+ public void testListFilesRecursive() throws IOException {
+ Configuration conf = getTestConfiguration();
+
+ try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build()) {
+ DistributedFileSystem fs = cluster.getFileSystem();
+
+ // Create some directories and files.
+ Path dir = new Path("/dir");
+ Path subDir1 = fs.makeQualified(new Path(dir, "subDir1"));
+ Path subDir2 = fs.makeQualified(new Path(dir, "subDir2"));
+
+ fs.mkdirs(subDir1);
+ fs.mkdirs(subDir2);
+ fs.create(new Path(dir, "foo1")).close();
+ fs.create(new Path(dir, "foo2")).close();
+ fs.create(new Path(subDir1, "foo3")).close();
+ fs.create(new Path(subDir2, "foo4")).close();
+
+ // Mock the filesystem, and throw FNF when listing is triggered for the subdirectory.
+ FileSystem mockFs = spy(fs);
+ Mockito.doThrow(new FileNotFoundException("")).when(mockFs).listLocatedStatus(eq(subDir1));
+ List<LocatedFileStatus> str = RemoteIterators.toList(mockFs.listFiles(dir, true));
+ Assert.assertEquals(str.toString(), 3, str.size());
Review Comment:
use AssertJ's Assertions.assertThat(str).hasSize(3) to get the full list dump on a mismatch
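For example, a sketch using AssertJ's Assertions entry point (assumes an org.assertj.core.api import alongside the existing test imports):

```java
import static org.assertj.core.api.Assertions.assertThat;

// on a size mismatch, AssertJ includes the actual list contents in the
// failure message, so the manual str.toString() message can go away
assertThat(str).hasSize(3);
```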
##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:
##########
@@ -2413,8 +2413,13 @@ private void handleFileStat(LocatedFileStatus stat)
throws IOException {
if (stat.isFile()) { // file
curFile = stat;
} else if (recursive) { // directory
- itors.push(curItor);
- curItor = listLocatedStatus(stat.getPath());
+ try {
+ RemoteIterator<LocatedFileStatus> newDirItor = listLocatedStatus(stat.getPath());
+ itors.push(curItor);
+ curItor = newDirItor;
+ } catch (FileNotFoundException ignored) {
+ LOGGER.debug("Directory {} deleted while attempting to recusive listing", stat.getPath());
Review Comment:
nit: spelling of recursive
##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:
##########
@@ -2413,8 +2413,13 @@ private void handleFileStat(LocatedFileStatus stat)
throws IOException {
if (stat.isFile()) { // file
curFile = stat;
} else if (recursive) { // directory
- itors.push(curItor);
- curItor = listLocatedStatus(stat.getPath());
+ try {
+ RemoteIterator<LocatedFileStatus> newDirItor = listLocatedStatus(stat.getPath());
Review Comment:
do you need to handle the condition where the dir has been deleted and
replaced with a file? That is the other concurrency failure, isn't it?
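A hedged test-style sketch of that race, reusing fs, dir and subDir1 from the new test above (illustrative only, not the PR's code); triggering it deterministically mid-listing would need a mock like the one in the test:

```java
// replace the directory with a plain file of the same name before
// listing; listLocatedStatus() on a file returns that file's status
// on HDFS, so listFiles(dir, true) should surface subDir1 as a file
// rather than throw FileNotFoundException
fs.delete(subDir1, true);
fs.create(subDir1).close();   // subDir1 is now a regular file
List<LocatedFileStatus> files =
    RemoteIterators.toList(fs.listFiles(dir, true));
// a test could then assert that files contains subDir1 itself
```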