Github user sujith71955 commented on the issue:
https://github.com/apache/spark/pull/22572
@srowen @cloud-fan
I was testing the SparkHadoopWriter flow with the steps below, and I could
see the job id printed properly in the log. Is it fine to update this flow
with description.uuid as well? Attaching a snapshot of the logs from the
SparkHadoopWriter flow:
val rdd = spark.sparkContext.newAPIHadoopFile(
  "D:/data/x.csv",
  classOf[org.apache.hadoop.mapreduce.lib.input.NLineInputFormat],
  classOf[org.apache.hadoop.io.LongWritable],
  classOf[org.apache.hadoop.io.Text])
val hconf = spark.sparkContext.hadoopConfiguration
hconf.set("mapreduce.output.fileoutputformat.outputdir", "D:/data/test")
rdd.saveAsNewAPIHadoopDataset(hconf)
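
For reference, a minimal self-contained variant of the same test that could be
pasted into spark-shell is sketched below. The paths are the placeholder paths
from the snippet above, and it swaps the manual outputdir setting for
saveAsNewAPIHadoopFile, which sets the output directory from its path argument
and then delegates to saveAsNewAPIHadoopDataset, so it should exercise the same
SparkHadoopWriter path and log the job id the same way:

import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat

// Read the sample file as (byte offset, line) pairs via the new Hadoop API.
val rdd = spark.sparkContext.newAPIHadoopFile(
  "D:/data/x.csv",
  classOf[NLineInputFormat],
  classOf[LongWritable],
  classOf[Text])

// saveAsNewAPIHadoopFile configures mapreduce.output.fileoutputformat.outputdir
// from the path argument and calls saveAsNewAPIHadoopDataset internally.
rdd.saveAsNewAPIHadoopFile[TextOutputFormat[LongWritable, Text]]("D:/data/test")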
