Hello, here is another link that I hope will help you:

https://stackoverflow.com/questions/33831561/pyspark-repartition-vs-partitionby

In particular, if you are facing possible data skew, or some partitioning
parameters can only be determined at runtime, this article on adaptive
execution may also help:

https://software.intel.com/en-us/articles/spark-sql-adaptive-execution-at-100-tb
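The article describes the adaptive execution work that was later upstreamed as Adaptive Query Execution (AQE) in Spark 3.x. If you are on a recent Spark, the relevant settings are plain configuration flags (Spark 3.x names shown; the skew-join and partition-coalescing flags are my assumption about what maps to the article's features):

```
spark.sql.adaptive.enabled                     true
spark.sql.adaptive.coalescePartitions.enabled  true
spark.sql.adaptive.skewJoin.enabled            true
```

With these enabled, Spark picks shuffle partition counts and splits skewed join partitions at runtime instead of relying on a fixed spark.sql.shuffle.partitions value.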



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
