Github user MJFND commented on the issue:
https://github.com/apache/spark/pull/14658
If "Remote Shuffle Blocks cannot be more than 2 GB" then setting up
spark.sql.shuffle.partitions=value, where value should be such that it has 2gb
per executor, like for 200GB of data, we can have 100 partitions for shuffle,
does that make sense? --- --------------------------------------------------------------------- To unsubscribe, e-mail: [email protected] For additional commands, e-mail: [email protected]
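
For illustration, a minimal sketch of that sizing in Scala. The app name,
input/output paths, and grouping column are placeholders, not anything from
this thread; only spark.sql.shuffle.partitions itself is a real Spark config:

    import org.apache.spark.sql.SparkSession

    // Assumption: ~200 GB of data will be shuffled, and we want each shuffle
    // partition comfortably below the 2 GB remote-block limit.
    val spark = SparkSession.builder()
      .appName("ShufflePartitionSizing") // hypothetical app name
      // 200 GB / 200 partitions ~= 1 GB per partition, well under 2 GB
      .config("spark.sql.shuffle.partitions", "200")
      .getOrCreate()

    val df = spark.read.parquet("/path/to/input") // placeholder path
    // groupBy triggers a shuffle; each resulting shuffle partition
    // should then hold roughly 1 GB of data under the assumption above
    df.groupBy("someKey").count().write.parquet("/path/to/output")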
