Github user EugenCepoi commented on a diff in the pull request:
https://github.com/apache/spark/pull/5708#discussion_r32720782
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -926,7 +926,9 @@ class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationClient
// The call to new NewHadoopJob automatically adds security credentials to conf,
// so we don't need to explicitly add them ourselves
val job = new NewHadoopJob(conf)
- NewFileInputFormat.addInputPath(job, new Path(path))
+ // Use addInputPaths so that newAPIHadoopFile aligns with hadoopFile in taking
+ // comma separated files as input. (see SPARK-7155)
+ NewFileInputFormat.addInputPaths(job, path)
--- End diff ---
The reason to use addInputPaths would be to preserve compatibility. I was
lucky enough to have unit tests that caught this change, but others might
encounter it in production.
But since this has already been released, I guess we can stick with
`setInputPaths`.
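
For reference, a minimal standalone sketch of how the three `FileInputFormat`
methods under discussion differ. The paths are hypothetical placeholders, and
only a Hadoop client dependency is assumed, nothing Spark-specific:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.mapreduce.lib.input.{FileInputFormat => NewFileInputFormat}

object InputPathsSketch {
  def main(args: Array[String]): Unit = {
    val job = Job.getInstance(new Configuration())

    // addInputPath takes a single Path, so a comma-separated string like
    // "a.txt,b.txt" would be treated as one literal (and likely invalid) path.
    NewFileInputFormat.addInputPath(job, new Path("/data/a.txt"))

    // addInputPaths splits its String argument on commas and appends each
    // path, matching the comma-separated behavior of hadoopFile (SPARK-7155).
    NewFileInputFormat.addInputPaths(job, "/data/b.txt,/data/c.txt")

    // setInputPaths also splits on commas, but replaces any previously
    // registered paths instead of appending to them.
    // NewFileInputFormat.setInputPaths(job, "/data/a.txt,/data/b.txt")

    // Prints the registered inputs: /data/a.txt, /data/b.txt, /data/c.txt
    println(NewFileInputFormat.getInputPaths(job).mkString(", "))
  }
}
```

So the compatibility question is append-versus-replace semantics: both
`addInputPaths` and `setInputPaths` handle comma-separated input, but only
`addInputPaths` preserves paths that were added earlier.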