partitionBy is a suggestion, not a hard requirement. If the value you supply
is bigger than what Spark calculates (based on the input you stated), your
value will be used.
repartition, by contrast, is a forced-shuffle operation, but it gives you
exactly the requested number of partitions.
You might have noticed that repartition introduces some delay, due to the
shuffle.
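
For illustration, here is a minimal sketch (Scala, using the SparkSession
API; the column names and output path are my own assumptions, not from the
original thread) of combining repartition with partitionBy to bound the
number of parquet part-files written under each partition directory:

  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder()
    .appName("parquet-partition-sketch")  // hypothetical app name
    .getOrCreate()
  import spark.implicits._

  // Hypothetical toy DataFrame; column names are assumptions.
  val df = Seq(
    ("2015-12-01", 1),
    ("2015-12-01", 2),
    ("2015-12-02", 3)
  ).toDF("date", "value")

  // repartition(4) forces a shuffle but yields exactly 4 partitions,
  // so each date=... directory ends up with at most 4 part-files.
  df.repartition(4)
    .write
    .partitionBy("date")
    .parquet("/tmp/parquet-out")  // hypothetical output path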

..Manas



