Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/20414
@felixcheung You are right that I didn't make it clear: there would
still be many shuffle blocks, and if a read task is retried it would be
slower than using `repartition(1)` directly.
I now lean toward fixing the issue via the latter approach (fixing the
shuffle fetch order), since that should also cover the general case.
---