Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/19497
`saveAsNewAPIHadoopFile` simply delegates to `saveAsNewAPIHadoopDataset`
(with some options set), right? The behavior would be similar?
Do you mean `saveAsHadoopDataset` instead?
I did not change the behavior there, since the exception was raised
from within Hadoop code rather than from our code (when we pass invalid values),
and it preserves the behavior of the earlier code.
I was focused more on the regression that was introduced.
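For context, here is a minimal sketch of the delegation in question, paraphrased from Spark's `PairRDDFunctions`. The method and Hadoop API names are real, but the body and the standalone `hadoopConf` parameter are simplified assumptions, not the exact upstream implementation:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapreduce.{Job, OutputFormat}
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat

object SaveDelegationSketch {
  // Stub standing in for the real method on PairRDDFunctions,
  // where the actual write (and validation) logic lives.
  def saveAsNewAPIHadoopDataset(conf: Configuration): Unit = ???

  // Sketch: the file variant mostly just configures a Hadoop Job
  // and hands off to the dataset variant.
  def saveAsNewAPIHadoopFile(
      hadoopConf: Configuration, // normally taken from the RDD's context
      path: String,
      keyClass: Class[_],
      valueClass: Class[_],
      outputFormatClass: Class[_ <: OutputFormat[_, _]]): Unit = {
    val job = Job.getInstance(hadoopConf)
    job.setOutputKeyClass(keyClass)
    job.setOutputValueClass(valueClass)
    job.setOutputFormatClass(outputFormatClass)
    FileOutputFormat.setOutputPath(job, new Path(path))
    // Because everything funnels through here, exceptions raised from
    // within Hadoop code (e.g. for invalid values) surface the same
    // way for both entry points.
    saveAsNewAPIHadoopDataset(job.getConfiguration)
  }
}
```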