sunchao commented on a change in pull request #29959:
URL: https://github.com/apache/spark/pull/29959#discussion_r503493043
##########
File path: core/src/main/scala/org/apache/spark/util/HadoopFSUtils.scala
##########
@@ -207,18 +166,14 @@ private[spark] object HadoopFSUtils extends Logging {
    // Note that statuses only include FileStatus for the files and dirs directly under path,
    // and does not include anything else recursively.
    val statuses: Array[FileStatus] = try {
-      fs match {
-        // DistributedFileSystem overrides listLocatedStatus to make 1 single call to namenode
-        // to retrieve the file status with the file block location. The reason to still fallback
-        // to listStatus is because the default implementation would potentially throw a
-        // FileNotFoundException which is better handled by doing the lookups manually below.
-        case (_: DistributedFileSystem | _: ViewFileSystem) if !ignoreLocality =>
-          val remoteIter = fs.listLocatedStatus(path)
-          new Iterator[LocatedFileStatus]() {
-            def next(): LocatedFileStatus = remoteIter.next
-            def hasNext(): Boolean = remoteIter.hasNext
-          }.toArray
-        case _ => fs.listStatus(path)
+      if (ignoreLocality) {
+        fs.listStatus(path)
+      } else {
+        val remoteIter = fs.listLocatedStatus(path)
Review comment:
Thanks @steveloughran, yes, I also think it's better to rely on the FileSystem-specific `listLocatedStatus` implementation rather than keeping the logic here. However, that change seems to break a few assumptions in the test cases, so I'll isolate it into a separate PR.
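
For context, here is a minimal standalone sketch of the simplified listing path this hunk moves toward. The object and method names (`ListingSketch`, `listChildStatuses`) are hypothetical, and the `FileNotFoundException` fallback is an assumption standing in for however the surrounding code handles a directory that disappears between discovery and listing:

```scala
import java.io.FileNotFoundException

import org.apache.hadoop.fs.{FileStatus, FileSystem, Path}

object ListingSketch {
  // Lists the immediate children of `path`. Unless locality is ignored, it
  // relies on the FileSystem-specific listLocatedStatus implementation
  // (e.g. DistributedFileSystem fetches statuses and block locations in a
  // single namenode call) instead of matching on filesystem types here.
  def listChildStatuses(
      fs: FileSystem,
      path: Path,
      ignoreLocality: Boolean): Array[FileStatus] = {
    try {
      if (ignoreLocality) {
        // Plain listing: no block locations are requested.
        fs.listStatus(path)
      } else {
        // Drain the RemoteIterator[LocatedFileStatus] into an array.
        val remoteIter = fs.listLocatedStatus(path)
        val buf = scala.collection.mutable.ArrayBuffer.empty[FileStatus]
        while (remoteIter.hasNext) {
          buf += remoteIter.next()
        }
        buf.toArray
      }
    } catch {
      case _: FileNotFoundException =>
        // Assumed fallback: treat a directory deleted mid-listing as empty.
        Array.empty[FileStatus]
    }
  }
}
```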