Github user sujith71955 commented on the issue:

    https://github.com/apache/spark/pull/22572
  
    @srowen @cloud-fan 
    I was testing the SparkHadoopWriter flow with the steps below, and I could see the job id printed properly in the log. Is it fine to update this flow with description.uuid as well? Attaching a snapshot of the logs from the SparkHadoopWriter flow.
    val rdd = spark.sparkContext.newAPIHadoopFile(
      "D:/data/x.csv",
      classOf[org.apache.hadoop.mapreduce.lib.input.NLineInputFormat],
      classOf[org.apache.hadoop.io.LongWritable],
      classOf[org.apache.hadoop.io.Text])

    val hconf = spark.sparkContext.hadoopConfiguration
    hconf.set("mapreduce.output.fileoutputformat.outputdir", "D:/data/test")

    rdd.saveAsNewAPIHadoopDataset(hconf)
    
    
![sparkhadoopwriter](https://user-images.githubusercontent.com/12999161/46429141-59f94c00-c763-11e8-8991-fd154b8dba07.png)
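    For reference, a more self-contained variant of the same steps (a minimal sketch only; the input path, output directory, and the TextOutputFormat/key/value class choices are my own assumptions, since saveAsNewAPIHadoopDataset expects the output format and key/value classes to be present in the Configuration it is given):

    import org.apache.hadoop.io.{LongWritable, Text}
    import org.apache.hadoop.mapreduce.Job
    import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat

    // Same read as in the steps above (path is an assumed local example).
    val rdd = spark.sparkContext.newAPIHadoopFile(
      "D:/data/x.csv",
      classOf[NLineInputFormat],
      classOf[LongWritable],
      classOf[Text])

    // Configure the output through a Hadoop Job so the output format and
    // key/value classes end up in the Configuration read by the writer.
    val job = Job.getInstance(spark.sparkContext.hadoopConfiguration)
    job.setOutputFormatClass(classOf[TextOutputFormat[LongWritable, Text]])
    job.setOutputKeyClass(classOf[LongWritable])
    job.setOutputValueClass(classOf[Text])
    job.getConfiguration.set("mapreduce.output.fileoutputformat.outputdir", "D:/data/test")

    // Goes through the SparkHadoopWriter code path; the job id shows up in
    // the logs as in the screenshot above.
    rdd.saveAsNewAPIHadoopDataset(job.getConfiguration)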
    
    


