GitHub user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5708#discussion_r32718414
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -926,7 +926,9 @@ class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationClient
// The call to new NewHadoopJob automatically adds security credentials to conf,
// so we don't need to explicitly add them ourselves
val job = new NewHadoopJob(conf)
- NewFileInputFormat.addInputPath(job, new Path(path))
+ // Use addInputPaths so that newAPIHadoopFile aligns with hadoopFile in taking
+ // comma separated files as input. (see SPARK-7155)
+ NewFileInputFormat.addInputPaths(job, path)
--- End diff ---
I don't think that was the intent of the API -- you specify the paths as an
argument, and it could be surprising to also include something that happens to
be in the existing config already.
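
For context, here is a minimal, self-contained sketch of the behavioral
difference under discussion, assuming Hadoop 2.x client libraries on the
classpath (the object name and sample paths are hypothetical, for
illustration only):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.mapreduce.lib.input.{FileInputFormat => NewFileInputFormat}

object AddInputPathsDemo {
  def main(args: Array[String]): Unit = {
    // Hypothetical comma-separated input string, as a user might pass
    // to newAPIHadoopFile.
    val path = "/data/a.txt,/data/b.txt"

    // addInputPath treats the whole string as a single Path: the comma
    // is escaped, so both names collapse into one (nonexistent) input.
    val jobSingle = Job.getInstance(new Configuration())
    NewFileInputFormat.addInputPath(jobSingle, new Path(path))
    println(NewFileInputFormat.getInputPaths(jobSingle).length)  // prints 1

    // addInputPaths splits on commas first and registers each file
    // separately, which is the behavior the patch gives newAPIHadoopFile.
    val jobMulti = Job.getInstance(new Configuration())
    NewFileInputFormat.addInputPaths(jobMulti, path)
    println(NewFileInputFormat.getInputPaths(jobMulti).length)  // prints 2
  }
}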