Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12828#discussion_r61969371
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/fileSourceInterfaces.scala ---
@@ -423,23 +423,34 @@ class HDFSFileCatalog(
   /**
    * Contains a set of paths that are considered as the base dirs of the input datasets.
    * The partitioning discovery logic will make sure it will stop when it reaches any
-   * base path. By default, the paths of the dataset provided by users will be base paths.
-   * For example, if a user uses `sqlContext.read.parquet("/path/something=true/")`, the base path
-   * will be `/path/something=true/`, and the returned DataFrame will not contain a column of
-   * `something`. If users want to override the basePath. They can set `basePath` in the options
-   * to pass the new base path to the data source.
-   * For the above example, if the user-provided base path is `/path/`, the returned
+   * base path.
+   *
+   * By default, the paths of the dataset provided by users will be base paths.
+   * Below are three typical examples:
+   * Case 1) `sqlContext.read.parquet("/path/something=true/")`: the base path will be
+   * `/path/something=true/`, and the returned DataFrame will not contain a column of `something`.
+   * Case 2) `sqlContext.read.parquet("/path/something=true/a.parquet")`: the base path will
+   * still be `/path/something=true/`, and the returned DataFrame will also not contain a column
+   * of `something`.
+   * Case 3) `sqlContext.read.parquet("/path/")`: the base path will be `/path/`, and the returned
    * DataFrame will have the column of `something`.
+   *
+   * Users can also override the basePath by setting `basePath` in the options to pass the new
+   * base path to the data source. For example, with
+   * `sqlContext.read.option("basePath", "/path/").parquet("/path/something=true/")`,
+   * the returned DataFrame will have the column of `something`.
*/
   private def basePaths: Set[Path] = {
-    val userDefinedBasePath = parameters.get("basePath").map(basePath => Set(new Path(basePath)))
-    userDefinedBasePath.getOrElse {
-      // If the user does not provide basePath, we will just use paths.
-      paths.toSet
-    }.map { hdfsPath =>
-      // Make the path qualified (consistent with listLeafFiles and listLeafFilesInParallel).
-      val fs = hdfsPath.getFileSystem(hadoopConf)
-      hdfsPath.makeQualified(fs.getUri, fs.getWorkingDirectory)
+    parameters.get("basePath").map(new Path(_)) match {
+      case Some(userDefinedBasePath) =>
+        val fs = userDefinedBasePath.getFileSystem(hadoopConf)
+        if (!fs.isDirectory(userDefinedBasePath)) {
+          throw new IllegalArgumentException("Option 'basePath' must be a directory")
+        }
+        Set(userDefinedBasePath.makeQualified(fs.getUri, fs.getWorkingDirectory))
+
+      case None =>
+        paths.map { path => if (leafFiles.contains(path)) path.getParent else path }.toSet
--- End diff ---
Do we need to make this `path` qualified?
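
For context on why qualification matters here: in the `Some` branch the user-defined base path is run through `makeQualified`, while in the `None` branch the raw `path` (or its parent) is returned as-is, so an unqualified input could fail to compare equal to the qualified paths produced elsewhere (e.g. by `listLeafFiles`). The sketch below is only a rough analogy using `java.nio.file` rather than Hadoop's actual `Path.makeQualified` API; `qualify`, the paths, and the working directory are all hypothetical:

```scala
import java.nio.file.Paths

object QualifyDemo {
  // Rough analogy to Hadoop's Path.makeQualified(uri, workingDir):
  // a relative (unqualified) path is resolved against a working
  // directory and normalized, so two spellings of the same location
  // compare equal afterwards.
  def qualify(raw: String, workingDir: String): String =
    Paths.get(workingDir).resolve(raw).normalize.toString

  def main(args: Array[String]): Unit = {
    val wd = "/user/hive"
    // Two spellings of the same location collapse to one canonical form.
    println(qualify("warehouse/t1", wd))          // /user/hive/warehouse/t1
    println(qualify("../hive/warehouse/t1", wd))  // /user/hive/warehouse/t1

    // Without qualification, a Set would treat them as distinct entries,
    // which is exactly the kind of mismatch the question is about.
    val unqualified = Set("warehouse/t1", "../hive/warehouse/t1")
    val qualified   = unqualified.map(qualify(_, wd))
    println(unqualified.size) // 2
    println(qualified.size)   // 1
  }
}
```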
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]