Github user yhuai commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13463#discussion_r67955807
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/ListingFileCatalog.scala ---
    @@ -83,40 +83,10 @@ class ListingFileCatalog(
           val statuses: Seq[FileStatus] = paths.flatMap { path =>
             val fs = path.getFileSystem(hadoopConf)
             logInfo(s"Listing $path on driver")
    -
    -        val statuses = {
    -          val stats = Try(fs.listStatus(path)).getOrElse(Array.empty[FileStatus])
    -          if (pathFilter != null) stats.filter(f => pathFilter.accept(f.getPath)) else stats
    -        }
    -
    -        statuses.map {
    -          case f: LocatedFileStatus => f
    -
    -          // NOTE:
    -          //
    -          // - Although S3/S3A/S3N file systems can be quite slow for remote file metadata
    -          //   operations, calling `getFileBlockLocations` does no harm here since these file system
    -          //   implementations don't actually issue RPC for this method.
    -          //
    -          // - Here we are calling `getFileBlockLocations` in a sequential manner, but it should not
    -          //   be a big deal since we always use `listLeafFilesInParallel` when the number of paths
    -          //   exceeds threshold.
    -          case f =>
    -            HadoopFsRelation.createLocatedFileStatus(f, fs.getFileBlockLocations(f, 0, f.getLen))
    -        }
    -      }.filterNot { status =>
    -        val name = status.getPath.getName
    -        HadoopFsRelation.shouldFilterOut(name)
    -      }
    -
    -      val (dirs, files) = statuses.partition(_.isDirectory)
    -
    -      // It uses [[LinkedHashSet]] since the order of files can affect the results. (SPARK-11500)
    -      if (dirs.isEmpty) {
    -        mutable.LinkedHashSet(files: _*)
    -      } else {
    -        mutable.LinkedHashSet(files: _*) ++ listLeafFiles(dirs.map(_.getPath))
    +        Try(HadoopFsRelation.listLeafFiles(fs, fs.getFileStatus(path), pathFilter)).
    --- End diff --
    
    `HadoopFsRelation.listLeafFiles` is the variant that does the file listing work on the driver side.
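    
    For reference, here is a minimal, hypothetical sketch of what a driver-side
    `listLeafFiles(fs, status, pathFilter)` helper could look like; the object name
    `DriverSideListingSketch` and the inlined `shouldFilterOut` rule are stand-ins and
    not the actual Spark implementation. It mirrors the logic removed from
    `ListingFileCatalog` above: recurse into directories, apply the `PathFilter`,
    drop hidden/metadata files, and attach block locations to each leaf file.
    
    ```scala
    import org.apache.hadoop.fs.{FileStatus, FileSystem, LocatedFileStatus, PathFilter}
    
    object DriverSideListingSketch {
    
      // Hypothetical stand-in for HadoopFsRelation.shouldFilterOut: skip hidden
      // and metadata files such as _SUCCESS or .part-00000.crc.
      private def shouldFilterOut(name: String): Boolean =
        name.startsWith("_") || name.startsWith(".")
    
      // Lists all leaf files under `status`, entirely on the driver.
      def listLeafFiles(fs: FileSystem, status: FileStatus, filter: PathFilter): Seq[FileStatus] = {
        if (status.isDirectory) {
          // Recurse into sub-directories sequentially on the driver.
          fs.listStatus(status.getPath).toSeq.flatMap(child => listLeafFiles(fs, child, filter))
        } else {
          val path = status.getPath
          val keep = !shouldFilterOut(path.getName) && (filter == null || filter.accept(path))
          if (!keep) {
            Nil
          } else {
            status match {
              // Already carries block locations, nothing more to fetch.
              case located: LocatedFileStatus => Seq(located)
              // Attach block locations; S3/S3A/S3N do not issue an RPC for this call.
              case plain =>
                val locations = fs.getFileBlockLocations(plain, 0, plain.getLen)
                Seq(new LocatedFileStatus(plain, locations))
            }
          }
        }
      }
    }
    ```
    
    The caller in this diff wraps the call in `Try(...)`, presumably falling back to an
    empty result when a path no longer exists, much like the old
    `Try(fs.listStatus(path)).getOrElse(Array.empty[FileStatus])` did.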


