Hi,

I am using Spark on EMR, and I was hoping to use their optimised committer
(the EMRFS S3-optimized committer), but it looks like it is not used when
"spark.sql.sources.partitionOverwriteMode" is set to "dynamic".
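
For reference, here is roughly how the job is set up (a simplified sketch;
the bucket, paths, and partition column below are placeholders):

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("dynamic-partition-overwrite")
    # With this set to "dynamic", the EMRFS S3-optimized committer
    # appears to be skipped and Spark falls back to a committer
    # that relies on renames in S3.
    .config("spark.sql.sources.partitionOverwriteMode", "dynamic")
    .getOrCreate()
)

df = spark.read.parquet("s3://my-bucket/input/")  # placeholder path

(
    df.write
    .mode("overwrite")
    .partitionBy("dt")  # placeholder partition column
    .parquet("s3://my-bucket/output/")  # placeholder path
)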

What are the best practices in this case?
The rename phase in S3 is very slow and is the bottleneck in my job.

Thanks,



