karuppayya opened a new pull request, #53028: URL: https://github.com/apache/spark/pull/53028
### What changes were proposed in this pull request?
This change ([design doc](https://docs.google.com/document/d/1tuWyXAaIBR0oVD5KZwYvz7JLyn6jB55_35xeslUEu7s/edit?usp=sharing)) adds support for using remote storage for shuffle data. The primary goal is to improve the elasticity and resilience of Spark workloads, which opens up substantial cost-optimization opportunities. This is a PoC to elicit feedback from the community.

### Why are the changes needed?
This change decouples storage from compute, thereby helping to minimize shuffle failures and enabling better scaling of the cluster.

### Does this PR introduce any user-facing change?
This change adds three configs to enable the feature (see the configuration sketch at the end of this description):
- `spark.shuffle.remote.storage.path=<remote storage path>`: remote storage location for shuffle data
- `spark.sql.shuffle.consolidation.enabled=true|false`: determines whether the feature is used
- `spark.shuffle.sort.io.plugin.class=org.apache.spark.shuffle.sort.remote.HybridShuffleDataIO`: shuffle plugin to use when the feature is enabled (this currently has to be configured explicitly; switching to it automatically when the feature is enabled is TBD)

### How was this patch tested?
Manual testing. Unit tests to be added; trying to get feedback from the community before writing elaborate tests.

### Was this patch authored or co-authored using generative AI tooling?
No
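A minimal sketch of how these configs might be set, assuming the feature is wired up as described above. Only the config keys and the `org.apache.spark.shuffle.sort.remote.HybridShuffleDataIO` class name come from this PR; the application name, the `s3a://` bucket path, and the sample job are hypothetical placeholders.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object RemoteShuffleExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("remote-shuffle-poc") // hypothetical app name
      // Remote storage location for shuffle data (placeholder path).
      .config("spark.shuffle.remote.storage.path", "s3a://my-bucket/spark-shuffle")
      // Enable the shuffle consolidation / remote shuffle feature.
      .config("spark.sql.shuffle.consolidation.enabled", "true")
      // Shuffle IO plugin to use when the feature is enabled, per the PR description.
      .config("spark.shuffle.sort.io.plugin.class",
        "org.apache.spark.shuffle.sort.remote.HybridShuffleDataIO")
      .getOrCreate()

    // Any shuffle-producing job exercises the plugin, e.g. a groupBy + count.
    spark.range(0, 1000000).groupBy(col("id") % 10).count().show()

    spark.stop()
  }
}
```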
