Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/13738
In Spark 2.0, this issue cannot happen when we use the DataFrame load
method because, as you mentioned, all of the file-based data sources do an
`hdfsPath.getFileSystem`.
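For context, here is a minimal sketch (not Spark code, just the plain Hadoop
API) of what that path-based `FileSystem` resolution looks like; the object
name is made up and the path comes from the command line:
```
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object FileSystemResolutionSketch {
  def main(args: Array[String]): Unit = {
    val hadoopConf = new Configuration()
    // Path of interest, e.g. "hdfs://namenode:8020/out" or "s3a://bucket/out".
    val outputPath = new Path(args(0))
    // The FileSystem implementation is chosen from the path's scheme (and authority),
    // not from fs.defaultFS, so operations go against the filesystem the path names.
    val fs: FileSystem = outputPath.getFileSystem(hadoopConf)
    println(fs.makeQualified(outputPath))
  }
}
```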
I noticed there is at least one corner case where this issue can still
happen: using Spark 1.6 with `spark-csv`
(https://github.com/databricks/spark-csv).
I was able to reproduce it with the following code on Spark 1.6.1 and
spark-csv 1.4.0.
```
import org.apache.spark._
import org.apache.spark.sql._

object ReproduceApp2 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val input = args(0)
    val output = args(1)

    sqlContext
      .read.format("csv")
      .option("header", "true")
      .load(input)
      .write.format("json")
      .mode(SaveMode.Overwrite)
      .save(output)
  }
}
```
This is because spark-csv's `DefaultSource` does not implement
`HadoopFsRelationProvider`.
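For readers not familiar with the Spark 1.6 data source API: `HadoopFsRelationProvider`
receives the paths as explicit arguments, whereas `RelationProvider` (which, as far as I
can tell, spark-csv implements instead) only gets them through the options map, so the
path handling is entirely up to the data source. Below is a rough, from-memory sketch of
the two `createRelation` shapes in `org.apache.spark.sql.sources`; the trait names are
renamed and return types simplified to `AnyRef`, so treat it as an approximation rather
than the actual API:
```
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.types.StructType

// Rough shape of RelationProvider: the path only arrives as parameters("path"),
// so resolving it against the right filesystem is left to the data source itself.
trait RelationProviderSketch {
  def createRelation(
      sqlContext: SQLContext,
      parameters: Map[String, String]): AnyRef
}

// Rough shape of HadoopFsRelationProvider: paths are first-class arguments,
// which is where the per-path FileSystem resolution can hook in.
trait HadoopFsRelationProviderSketch {
  def createRelation(
      sqlContext: SQLContext,
      paths: Array[String],
      dataSchema: Option[StructType],
      partitionColumns: Option[StructType],
      parameters: Map[String, String]): AnyRef
}
```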