You can partition explicitly by appending "/<col_name>=<partition value>" to
the end of the path you are writing to, and then use overwrite mode.
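A minimal sketch of that approach, assuming an existing DataFrame `df` and a placeholder bucket, table, and partition column (all names here are illustrative, not from the original post):

```scala
import org.apache.spark.sql.SaveMode

// Encode the partition column in the output path yourself; Overwrite mode
// then replaces only this partition's directory, not the whole table.
df.filter(df("date") === "2016-10-01")
  .drop("date") // the value is already encoded in the path
  .write
  .mode(SaveMode.Overwrite)
  .parquet("s3a://my-bucket/my-table/date=2016-10-01")
```

Readers of the table can still see `date` as a partition column, since Spark infers it from the `date=...` directory name.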

BTW, in Spark 2.0 you just need to set:

sc.hadoopConfiguration.set("mapreduce.fileoutputcommitter.algorithm.version", "2")

and use s3a:// paths,

and then you can work with the regular output committer (DirectParquetOutputCommitter
is in fact no longer available in Spark 2.0),

so if you are planning on upgrading, this could be another motivation.
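Putting the Spark 2.0 pieces together, a hedged sketch (the bucket, table, and DataFrame `df` are placeholders I am assuming, not from the original thread):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("s3a-write").getOrCreate()

// Commit algorithm version 2 moves task output into place as each task
// commits, avoiding version 1's slow serial rename pass, which is
// particularly costly on S3 where "rename" is a copy.
spark.sparkContext.hadoopConfiguration
  .set("mapreduce.fileoutputcommitter.algorithm.version", "2")

// Regular output committer with an s3a:// path; no
// DirectParquetOutputCommitter involved.
df.write.partitionBy("date").parquet("s3a://my-bucket/my-table")
```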



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/S3-DirectParquetOutputCommitter-PartitionBy-SaveMode-Append-tp26398p27810.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
