GitHub user marmbrus commented on a diff in the pull request:

    https://github.com/apache/spark/pull/12879#discussion_r61967643
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/fileSourceInterfaces.scala ---
    @@ -365,11 +390,78 @@ class HDFSFileCatalog(
         }
       }
     
    -  def allFiles(): Seq[FileStatus] = leafFiles.values.toSeq
    +  /**
    +   * Contains a set of paths that are considered the base directories of the input datasets.
    +   * The partition discovery logic will stop when it reaches any base path. By default, the
    +   * paths of the dataset provided by users are the base paths. For example, if a user calls
    +   * `sqlContext.read.parquet("/path/something=true/")`, the base path will be
    +   * `/path/something=true/`, and the returned DataFrame will not contain a `something`
    +   * column. If users want to override the base path, they can set `basePath` in the options
    +   * to pass the new base path to the data source. In the above example, if the user-provided
    +   * base path is `/path/`, the returned DataFrame will have a `something` column.
    +   */
    +  private def basePaths: Set[Path] = {
    +    val userDefinedBasePath =
    +      parameters.get("basePath").map(basePath => Set(new Path(basePath)))
    +    userDefinedBasePath.getOrElse {
    +      // If the user does not provide basePath, we will just use paths.
    +      paths.toSet
    +    }.map { hdfsPath =>
    +      // Make the path qualified (consistent with listLeafFiles and listLeafFilesInParallel).
    +      val fs = hdfsPath.getFileSystem(hadoopConf)
    +      hdfsPath.makeQualified(fs.getUri, fs.getWorkingDirectory)
    +    }
    +  }
    +}
    +
    +
    +/**
    + * A file catalog that caches metadata gathered by scanning all the files present in `paths`
    + * recursively.
    + *
    + * @param parameters a set of options to control discovery
    + * @param paths a list of paths to scan
    + * @param partitionSchema an optional partition schema that will be used to provide types for
    + *                        the discovered partitions
    + */
    +class HDFSFileCatalog(
    --- End diff --
    
    existing: can you move the implementations into their own files (each in a separate file)? They aren't really interfaces.
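
    For reference, a minimal sketch of how the `basePath` option documented above is used (the paths are illustrative):

        // Without basePath: partition discovery stops at the user-supplied
        // path, so `something` does not appear as a partition column.
        val df1 = sqlContext.read.parquet("/path/something=true/")

        // With an explicit basePath of /path/: discovery treats /path/ as the
        // root, so the returned DataFrame gains a `something` partition column.
        val df2 = sqlContext.read
          .option("basePath", "/path/")
          .parquet("/path/something=true/")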

