tiger3q opened a new issue, #9295: URL: https://github.com/apache/incubator-gluten/issues/9295
### Backend

VL (Velox)

### Bug description

Dear Gluten Community,

We are currently testing Gluten (Velox) with a 3 TB TPC-DS dataset but have observed only limited performance gains compared to native Spark.

Dataset: generated using the scripts from https://github.com/hortonworks/hive-testbench, in ORC format, at a 3 TB scale.

Cluster: a 6-node x86 cluster (Intel 5218R).

Execution results: we tested both the community precompiled package (version 1.3.0) and our self-compiled package (version 1.4.0), but the performance improvements were limited. After excluding query q72, the overall improvement (excluding SQL execution time related to resource scheduling) was only around 20%. Additionally, some queries (such as q2, q76, q90, q91, q95, q96) ran over 30% slower than native Spark.

We would like to ask whether there are any recommended configurations for running the TPC-DS 3 TB dataset. Below is the configuration we used (referenced from https://github.com/apache/incubator-gluten/blob/main/tools/workload/benchmark_velox/native_sql_initialize.ipynb):

```
--driver-memory 20g
--driver-cores 4
--num-executors 24
--executor-cores 12
--executor-memory 5g
--conf spark.memory.offHeap.enabled=true
--conf spark.memory.offHeap.size=35g
--master yarn
--conf spark.task.cpus=1
--conf spark.locality.wait=0
--conf spark.network.timeout=600
--conf spark.serializer=org.apache.spark.serializer.KryoSerializer
--conf spark.sql.adaptive.enabled=true
--conf spark.sql.adaptive.join.enabled=true
--conf spark.sql.adaptive.skewedJoin.enable=true
--conf spark.sql.broadcastTimeout=600
--conf spark.executor.extraJavaOptions='-XX:+UseG1GC'
--conf spark.sql.codegen.wholeStage=true
--conf spark.sql.adaptive.coalescePartitions.minPartitionNum=200
--conf spark.sql.execution.filterMerge.enable=true
--conf spark.executorEnv.MALLOC_CONF=tcache:false
--conf spark.plugins=org.apache.gluten.GlutenPlugin
--conf spark.shuffle.manager=org.apache.spark.shuffle.sort.ColumnarShuffleManager
--conf spark.gluten.sql.columnar.backend.lib=velox
--conf spark.gluten.sql.columnar.forceShuffledHashJoin=true
--conf spark.gluten.sql.columnar.force.hashagg=false
--conf spark.gluten.sql.enable.native.validation=false
--conf spark.executorEnv.LD_LIBRARY_PATH=/opt/velox-gluten/thirdparty/:$LD_LIBRARY_PATH
--conf spark.driverEnv.LD_LIBRARY_PATH=/opt/velox-gluten/thirdparty/:$LD_LIBRARY_PATH
--conf spark.driver.extraLibraryPath='-Djava.library.path=$HADOOP_HOME/lib/native'
--conf spark.executor.extraLibraryPath='-Djava.library.path=$HADOOP_HOME/lib/native'
--conf spark.kryoserializer.buffer.max=2000m
--conf spark.sql.files.maxPartitionBytes=4g
--conf spark.gluten.sql.columnar.coalesce.batches=true
--conf spark.sql.optimizer.runtime.bloomFilter.applicationSideScanSizeThreshold=0
--conf spark.sql.optimizer.runtime.bloomFilter.enabled=true
--conf spark.gluten.sql.columnar.joinOptimizationLevel=18
--conf spark.gluten.sql.columnar.physicalJoinOptimizeEnable=true
--conf spark.gluten.sql.columnar.physicalJoinOptimizationLevel=18
--conf spark.gluten.sql.columnar.logicalJoinOptimizeEnable=true
--conf spark.gluten.sql.columnar.maxBatchSize=4096
--conf spark.sql.autoBroadcastJoinThreshold=10m
--conf spark.sql.optimizer.dynamicPartitionPruning.enabled=true
--conf spark.cleaner.periodicGC.interval=10s
--conf spark.driver.maxResultSize=10G
```

Thank you very much for your time and support!

### Spark version

None

### Spark configurations

_No response_

### System information

_No response_

### Relevant logs

```bash
```

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
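As a side note, the per-node resource footprint implied by the flags in the report can be sanity-checked with a quick sketch. This assumes the 24 executors are spread evenly over the 6 nodes (YARN may place them unevenly) and ignores `spark.executor.memoryOverhead`; the variable names are illustrative, not part of the reported setup.

```python
# Per-node footprint implied by the reported spark-submit flags
# (assumption: 24 executors placed evenly across 6 nodes).
nodes = 6
num_executors = 24
executor_cores = 12          # --executor-cores
executor_heap_gb = 5         # --executor-memory 5g (on-heap)
offheap_gb = 35              # spark.memory.offHeap.size, used by Velox

executors_per_node = num_executors // nodes               # 4 executors/node
cores_per_node = executors_per_node * executor_cores      # 48 task slots/node
mem_per_executor_gb = executor_heap_gb + offheap_gb       # 40 GB (+ overhead)
mem_per_node_gb = executors_per_node * mem_per_executor_gb

print(f"{executors_per_node} executors/node, "
      f"{cores_per_node} cores/node, "
      f"{mem_per_executor_gb} GB/executor, "
      f"{mem_per_node_gb} GB/node")
```

With these numbers, each node must provide roughly 160 GB for Spark memory (plus per-executor overhead) and 48 task slots, which is worth checking against the YARN container limits on the 5218R nodes.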
