flyfoxCI opened a new issue, #9266:
URL: https://github.com/apache/incubator-gluten/issues/9266
### Backend
VL (Velox)
### Bug description
Running TPC-DS queries q1-q100 continuously in one session, the job throws a fatal error and terminates when it reaches q3.
The error log is below; it looks like a malloc() error.
Environment: Gluten 1.3.0 (Velox backend), Spark 3.5.2, JDK 17.0.6+9.
```
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGBUS (0x7) at pc=0x0000000000000001, pid=968528, tid=1012570
#
# JRE version: OpenJDK Runtime Environment (Red_Hat-17.0.6.0.9-0.3.ea.el8)
(17.0.6+9) (build 17.0.6-ea+9-LTS)
# Java VM: OpenJDK 64-Bit Server VM (Red_Hat-17.0.6.0.9-0.3.ea.el8)
(17.0.6-ea+9-LTS, mixed mode, tiered, compressed oops, compressed class ptrs,
g1 gc, linux-aarch64)
# Problematic frame:
# C 0x0000000000000001
#
# Core dump will be written. Default location: Core dumps may be processed
with "/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e" (or dumping to
/opt/modules/tpcds/core.968528)
#
# An error report file with more information is saved as:
# /opt/modules/tpcds/hs_err_pid968528.log
malloc(): invalid size (unsorted)
./runtpcds_gluten.sh: line 68: 968528 Aborted (core dumped)
${SPARK_HOME}/bin/spark-submit --name "tpcds" \
  --class com.databricks.spark.sql.perf.tpcds.RunTPCDS \
  --master yarn --deploy-mode client \
  --num-executors 25 --executor-cores 5 --queue queue1 \
  --conf spark.driver.memory=4g \
  --conf spark.executor.memory=2g \
  --conf spark.memory.offHeap.enabled=true \
  --conf spark.memory.offHeap.size=7g \
  --conf spark.sql.warehouse.dir=/user/root/warehouse \
  --conf spark.sql.catalogImplementation=hive \
  --conf spark.sql.shuffle.partitions=2000 \
  --conf spark.network.timeout=1200s \
  --conf spark.speculation=true \
  --conf spark.speculation.interval=1000 \
  --conf spark.speculation.quantile=0.75 \
  --conf spark.speculation.multiplier=1.5 \
  --conf spark.memory.fraction=0.75 \
  --conf spark.storage.storageFraction=0.4 \
  --conf spark.sql.adaptive.enabled=true \
  --conf spark.dynamicAllocation.enabled=false \
  --conf spark.gluten.enabled=true \
  --conf spark.gluten.loadLibFromJar=true \
  --conf spark.plugins=org.apache.gluten.GlutenPlugin \
  --conf spark.shuffle.manager=org.apache.spark.shuffle.sort.ColumnarShuffleManager \
  --conf spark.gluten.sql.columnar.backend.lib=velox \
  --conf spark.gluten.sql.columnar.backend.velox.BHJOptimizeEnabled=true \
  --conf spark.driver.extraClassPath=/opt/modules/gluten-1.3.0/jar/* \
  --conf spark.executor.extraClassPath=/opt/modules/gluten-1.3.0/jar/* \
  --conf spark.driver.extraJavaOptions="-Dio.netty.tryReflectionSetAccessible=true" \
  --conf spark.executor.extraJavaOptions="-Dio.netty.tryReflectionSetAccessible=true" \
  --conf spark.io.compression.codec=lz4 \
  --conf spark.sql.adaptive.localShuffleReader.enabled=true \
  --conf spark.sql.adaptive.advisoryPartitionSizeInBytes=133M \
  --conf spark.gluten.sql.columnar.backend.velox.IOThreads=700 \
  --conf spark.sql.cbo.enabled=true \
  --conf spark.sql.cbo.joinReorder.enabled=true \
  --conf spark.sql.cbo.planStats.enabled=true \
  --conf spark.sql.cbo.starSchemaDetection=true \
  --conf spark.sql.cbo.joinReorder.card.weight=0.6 \
  --conf spark.sql.optimizer.runtime.bloomFilter.enabled=true \
  --conf spark.sql.optimizer.runtimeFilter.semiJoinReduction.enabled=true \
  --jars /opt/modules/gluten-1.3.0/jar/gluten-velox-bundle-spark3.5_2.12-centos_8_aarch_64-1.3.0.jar,/opt/modules/gluten-1.3.0/jar/gluten-thirdparty-lib-centos-8-aarch64.jar \
  /opt/modules/spark-sql-perf/spark-sql-perf-assembly-0.5.1-SNAPSHOT.jar \
  --scaleFactor 1000 --location /home/tpcds-performance-data --format parquet \
  --dbPrefix tpcds1000g_ -i 1 -q $queries
```
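Since the crash is in native code (the problematic frame is a `C` frame at the invalid address `0x0000000000000001`), the hs_err file and the core dump named in the log are the most useful artifacts to attach. A minimal sketch for pulling the relevant parts, assuming `gdb` is installed (the paths are copied verbatim from the log above):

```shell
# Hedged sketch: inspecting the crash artifacts named in the log above.
# Assumes gdb is available; file paths are taken from the report.
HS_ERR=/opt/modules/tpcds/hs_err_pid968528.log
CORE=/opt/modules/tpcds/core.968528

# The JVM crash report carries the failing frame, siginfo, and loaded libraries:
if [ -f "$HS_ERR" ]; then
  grep -E "Problematic frame|siginfo|libvelox|libgluten" "$HS_ERR"
fi

# The core dump yields the full native backtrace behind the bogus pc value:
if [ -f "$CORE" ]; then
  gdb -batch -ex bt "$(command -v java)" "$CORE"
fi
```

Attaching the matching hs_err excerpt and the `bt` output would make it possible to see which native library the jump to `0x1` came from.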
### Spark version
3.5.2
### Spark configurations
_No response_
### System information
_No response_
### Relevant logs
```bash
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]