KAlbert2333 opened a new issue, #10441:
URL: https://github.com/apache/incubator-gluten/issues/10441
### Backend
VL (Velox)
### Bug description
When running TPC-DS against an ORC dataset whose columns are typed `varchar(n)`, Gluten falls back to vanilla Spark for the ORC scans. GlutenFallbackReporter logs:
```
25/08/14 08:02:29 WARN GlutenFallbackReporter: Validation failed for plan:
Scan orc
spark_catalog.tpcds_bin_partitioned_varchar_orc_1000.date_dim[QueryId=1], due
to: Found unsupported data type in OrcReadFormat: Some(varchar(16))(force
fallback), Some(varchar(9))(force fallback), Some(varchar(6))(force fallback),
Some(varchar(1))(force fallback), Some(varchar(1))(force fallback),
Some(varchar(1))(force fallback), Some(varchar(1))(force fallback),
Some(varchar(1))(force fallback), Some(varchar(1))(force fallback),
Some(varchar(1))(force fallback), Some(varchar(1))(force fallback)..
25/08/14 08:02:29 WARN GlutenFallbackReporter: Validation failed for plan:
ColumnarToRow[QueryId=1], due to: Found unsupported data type in OrcReadFormat:
Some(varchar(16))(force fallback), Some(varchar(9))(force fallback),
Some(varchar(6))(force fallback), Some(varchar(1))(force fallback),
Some(varchar(1))(force fallback), Some(varchar(1))(force fallback),
Some(varchar(1))(force fallback), Some(varchar(1))(force fallback),
Some(varchar(1))(force fallback), Some(varchar(1))(force fallback),
Some(varchar(1))(force fallback)..
```
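For reference, a minimal reproduction sketch (hypothetical table name; the column widths `varchar(16)`/`varchar(9)` are taken from the log above). Any ORC table with bounded `VARCHAR(n)` columns should trigger the same `OrcReadFormat` fallback under this setup:

```sql
-- Hypothetical repro table; run via spark-sql with the Gluten plugin enabled
CREATE TABLE varchar_orc_repro (c1 VARCHAR(16), c2 VARCHAR(9)) STORED AS ORC;
INSERT INTO varchar_orc_repro VALUES ('abc', 'def');
-- The scan below is expected to log "Found unsupported data type in
-- OrcReadFormat: Some(varchar(16))(force fallback), ..." and fall back.
SELECT * FROM varchar_orc_repro;
```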
### Gluten version
Gluten-1.3
### Spark version
Spark-3.4.x
### Spark configurations
```bash
${SPARK_HOME}/bin/spark-sql \
  --master yarn \
  --driver-memory 20g \
  --driver-cores 1 \
  --executor-memory 16g \
  --executor-cores 8 \
  --num-executors 19 \
  --database tpcds_bin_partitioned_varchar_orc_1000 \
  --conf spark.sql.shuffle.partitions=320 \
  --conf spark.default.parallelism=320 \
  --conf spark.sql.adaptive.enabled=false \
  --conf spark.plugins=org.apache.gluten.GlutenPlugin \
  --conf spark.memory.offHeap.enabled=true \
  --conf spark.memory.offHeap.size=10G \
  --conf spark.shuffle.manager=org.apache.spark.shuffle.sort.ColumnarShuffleManager \
  --conf spark.driver.extraClassPath=/home/velox-test/incubator-gluten/package/target/gluten-velox-bundle-spark3.4_2.12-debian_11_aarch_64-1.3.0.jar \
  --conf spark.executor.extraClassPath=/home/velox-test/incubator-gluten/package/target/gluten-velox-bundle-spark3.4_2.12-debian_11_aarch_64-1.3.0.jar \
  --conf spark.executorEnv.LIBHDFS3_CONF="/home/velox-test/hadoop-3.3.1/etc/hadoop/hdfs-client.xml" \
  --conf spark.driver.extraJavaOptions="
    -XX:+UseG1GC
    -XX:MaxGCPauseMillis=200
    -XX:ConcGCThreads=10
    -XX:ParallelGCThreads=20
    -Xlog:gc*=info:file=${LOG_DIR}/driver_gc.log:time,pid,tags
  " \
  --conf spark.sql.warehouse.dir=hdfs:///user/hive/warehouse \
  -f ../tpcds/spark-sql/all.sql \
  2>&1 | tee ${LOG_DIR}/tpcds.log
```
### System information
_No response_
### Relevant logs
_No response_
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
For additional commands, e-mail: [email protected]