Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/19497
To be clear, I meant `saveAsNewAPIHadoopFile` compared to `saveAsHadoopFile`.
```
saveAsNewAPIHadoopFile[...]("") // succeeds
```
```
saveAsHadoopFile[...]("") // fails
java.lang.IllegalArgumentException: Can not create a Path from an empty string
  at org.apache.hadoop.fs.Path.checkPathArg(Path.java:127)
  at org.apache.hadoop.fs.Path.<init>(Path.java:135)
  at org.apache.spark.internal.io.SparkHadoopWriterUtils$.createPathFromString(SparkHadoopWriterUtils.scala:54)
```
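For reference, here is roughly what I ran to compare the two. It is a self-contained sketch; the local master, app name and the `TextOutputFormat` choices are just for illustration, not taken from this PR:
```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.hadoop.mapred.{TextOutputFormat => OldTextOutputFormat}
import org.apache.hadoop.mapreduce.lib.output.{TextOutputFormat => NewTextOutputFormat}

val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("empty-path-repro"))
val rdd = sc.parallelize(Seq((1, "a"), (2, "b")))

// Old (mapred) API: throws IllegalArgumentException on the driver before any task runs.
rdd.saveAsHadoopFile[OldTextOutputFormat[Int, String]]("")

// New (mapreduce) API: the empty output path is not validated up front.
rdd.saveAsNewAPIHadoopFile[NewTextOutputFormat[Int, String]]("")
```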
I wanted to discuss this point. `saveAsHadoopFile` fails on the Spark side because it validates the output path before submitting the job, so I suspect `saveAsNewAPIHadoopFile` should validate the path and fail fast in the same way.
https://github.com/apache/spark/blob/3f958a99921d149fb9fdf7ba7e78957afdad1405/core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala#L1004-L1008
https://github.com/apache/spark/blob/3f958a99921d149fb9fdf7ba7e78957afdad1405/core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala#L983-L987
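Concretely, the kind of up-front check I have in mind is just building the `Path` eagerly on the driver, the way the old-API write path does via `SparkHadoopWriterUtils.createPathFromString`. This is a sketch only; the helper name below is mine, not Spark's:
```scala
import org.apache.hadoop.fs.Path

// Sketch only: new Path("") throws
// java.lang.IllegalArgumentException: Can not create a Path from an empty string,
// which reproduces saveAsHadoopFile's fail-fast behaviour on the driver.
def validateOutputPath(path: String): Path = new Path(path)
```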