steveloughran commented on a change in pull request #30019:
URL: https://github.com/apache/spark/pull/30019#discussion_r504525199
##########
File path: core/src/main/scala/org/apache/spark/util/HadoopFSUtils.scala
##########
@@ -207,18 +207,14 @@ private[spark] object HadoopFSUtils extends Logging {
// Note that statuses only include FileStatus for the files and dirs directly under path,
// and does not include anything else recursively.
val statuses: Array[FileStatus] = try {
-      fs match {
-        // DistributedFileSystem overrides listLocatedStatus to make 1 single call to namenode
-        // to retrieve the file status with the file block location. The reason to still fallback
-        // to listStatus is because the default implementation would potentially throw a
-        // FileNotFoundException which is better handled by doing the lookups manually below.
-        case (_: DistributedFileSystem | _: ViewFileSystem) if !ignoreLocality =>
-          val remoteIter = fs.listLocatedStatus(path)
-          new Iterator[LocatedFileStatus]() {
-            def next(): LocatedFileStatus = remoteIter.next
-            def hasNext(): Boolean = remoteIter.hasNext
-          }.toArray
-        case _ => fs.listStatus(path)
+      if (ignoreLocality) {
+        fs.listStatus(path)
+      } else {
+        val remoteIter = fs.listLocatedStatus(path)
+        new Iterator[LocatedFileStatus]() {
Review comment:
Might be good to pull this out into something reusable for any `RemoteIterator[T]`, as the pattern gets used in a number of API calls (all because of Java's checked exceptions...)
##########
File path: core/src/main/scala/org/apache/spark/util/HadoopFSUtils.scala
##########
@@ -207,18 +207,14 @@ private[spark] object HadoopFSUtils extends Logging {
// Note that statuses only include FileStatus for the files and dirs directly under path,
// and does not include anything else recursively.
val statuses: Array[FileStatus] = try {
-      fs match {
-        // DistributedFileSystem overrides listLocatedStatus to make 1 single call to namenode
-        // to retrieve the file status with the file block location. The reason to still fallback
-        // to listStatus is because the default implementation would potentially throw a
-        // FileNotFoundException which is better handled by doing the lookups manually below.
-        case (_: DistributedFileSystem | _: ViewFileSystem) if !ignoreLocality =>
-          val remoteIter = fs.listLocatedStatus(path)
-          new Iterator[LocatedFileStatus]() {
-            def next(): LocatedFileStatus = remoteIter.next
-            def hasNext(): Boolean = remoteIter.hasNext
-          }.toArray
-        case _ => fs.listStatus(path)
+      if (ignoreLocality) {
+        fs.listStatus(path)
Review comment:
Switch to `listStatusIterator(path)` and, again, wrap the remote iterator. This will give you paged downloads on HDFS and WebHDFS, async page prefetch on the latest S3A builds, and, at worst elsewhere, exactly the same performance as `listStatus`.
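A sketch of what that could look like, reusing the hypothetical `RemoteIterators.toScala` helper from the previous comment; `fs`, `path`, and `ignoreLocality` are the names already in scope in the PR:
```scala
import org.apache.hadoop.fs.FileStatus

// Sketch only: both branches now consume a RemoteIterator, so paged listing
// (HDFS, WebHDFS) and page prefetch (recent S3A) apply to either path, while
// filesystems without an optimized iterator behave exactly like listStatus.
val statuses: Array[FileStatus] =
  if (ignoreLocality) {
    RemoteIterators.toScala(fs.listStatusIterator(path)).toArray
  } else {
    RemoteIterators.toScala(fs.listLocatedStatus(path)).toArray[FileStatus]
  }
```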
##########
File path: core/src/main/scala/org/apache/spark/util/HadoopFSUtils.scala
##########
@@ -207,18 +207,14 @@ private[spark] object HadoopFSUtils extends Logging {
// Note that statuses only include FileStatus for the files and dirs directly under path,
// and does not include anything else recursively.
val statuses: Array[FileStatus] = try {
-      fs match {
-        // DistributedFileSystem overrides listLocatedStatus to make 1 single call to namenode
-        // to retrieve the file status with the file block location. The reason to still fallback
-        // to listStatus is because the default implementation would potentially throw a
-        // FileNotFoundException which is better handled by doing the lookups manually below.
-        case (_: DistributedFileSystem | _: ViewFileSystem) if !ignoreLocality =>
-          val remoteIter = fs.listLocatedStatus(path)
-          new Iterator[LocatedFileStatus]() {
-            def next(): LocatedFileStatus = remoteIter.next
-            def hasNext(): Boolean = remoteIter.hasNext
-          }.toArray
-        case _ => fs.listStatus(path)
+      if (ignoreLocality) {
+        fs.listStatus(path)
+      } else {
+        val remoteIter = fs.listLocatedStatus(path)
+        new Iterator[LocatedFileStatus]() {
+          def next(): LocatedFileStatus = remoteIter.next
+          def hasNext(): Boolean = remoteIter.hasNext
+        }.toArray
Review comment:
The more work you can do incrementally per entry in the remote iterator, the more of the latency of talking to the object stores can be hidden. See HADOOP-17074 and HADOOP-17023 for details; one of those PRs shows some numbers. If the Spark API could return an iterator/yield and the processing consumed it incrementally, a lot of that listing cost could be absorbed entirely.
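For illustration, a hedged sketch of that incremental style, again assuming the hypothetical `RemoteIterators.toScala` helper from above; here the per-entry filter runs as listing pages arrive, instead of after a full `toArray`:
```scala
import org.apache.hadoop.fs.{FileSystem, LocatedFileStatus, Path}

// Sketch only: returning a lazy Iterator lets per-entry work (filtering,
// scheduling recursion into subdirectories) overlap with the remote listing
// calls, rather than paying the full listing latency before any work starts.
def locatedFiles(fs: FileSystem, path: Path): Iterator[LocatedFileStatus] =
  RemoteIterators.toScala(fs.listLocatedStatus(path))
    .filter(_.isFile)
```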