Kontinuation opened a new issue, #1607: URL: https://github.com/apache/datafusion-comet/issues/1607
### What is the problem the feature request solves?

Comet native operators always write spill files into the default tmp directory, which is not always the desired behavior. There are cases where we want to write spill files into the Spark local directories instead:

1. The default tmp directory is too small to hold spill files, and a dedicated volume is configured for Spark to write temporary files and shuffle data/index files.
2. Multiple directories are configured as Spark local directories. These directories could be backed by multiple physical devices to provide higher aggregate disk read/write throughput. Spark spreads spill files and shuffle data/index files evenly across these local directories by default; Comet should do the same when writing spill files.

All of the above cases are quite common when running Spark jobs on Kubernetes, where users usually specify persistent volume claims prefixed with `spark-local-dir-` to use allocated volumes as Spark local directories. See https://spark.apache.org/docs/latest/running-on-kubernetes.html#local-storage

### Describe the potential solution

Use a `DiskManagerConfig::NewSpecified(spark_local_dirs)` config instead of `DiskManagerConfig::NewOs` when building the DataFusion runtime env, so that spill files created by the disk manager are placed in the Spark local directories.

### Additional context

_No response_
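The proposed solution could be sketched roughly as follows. This is a hypothetical illustration, not Comet's actual code: it assumes the `DiskManagerConfig` / `RuntimeConfig` APIs from the DataFusion `execution` module, and the `spark_local_dirs` parameter stands in for however Comet would obtain the configured Spark local directories from the JVM side.

```rust
// Sketch only: assumes DataFusion's RuntimeConfig/DiskManagerConfig APIs;
// `build_runtime_env` and `spark_local_dirs` are illustrative names.
use std::path::PathBuf;
use std::sync::Arc;

use datafusion::error::Result;
use datafusion::execution::disk_manager::DiskManagerConfig;
use datafusion::execution::runtime_env::{RuntimeConfig, RuntimeEnv};

fn build_runtime_env(spark_local_dirs: Vec<PathBuf>) -> Result<Arc<RuntimeEnv>> {
    let disk_manager = if spark_local_dirs.is_empty() {
        // Fall back to the current behavior: spill into the OS tmp directory.
        DiskManagerConfig::NewOs
    } else {
        // Place spill files in the configured Spark local directories; the
        // disk manager picks among the given paths when creating spill files.
        DiskManagerConfig::new_specified(spark_local_dirs)
    };
    let config = RuntimeConfig::new().with_disk_manager(disk_manager);
    Ok(Arc::new(RuntimeEnv::new(config)?))
}
```

Keeping the `NewOs` fallback preserves today's behavior when no local directories are configured, which matches how Spark itself falls back to `java.io.tmpdir` when `spark.local.dir` is unset.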