loudongfeng opened a new issue, #6935:
URL: https://github.com/apache/incubator-gluten/issues/6935

   ### Backend
   
   CH (ClickHouse)
   
   ### Bug description
   
   Grace hash join is not enabled by default. After enabling it in spark-sql via
   ``` sql
   set spark.gluten.sql.columnar.backend.ch.runtime_settings.join_algorithm=grace_hash;
   ```
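   The same setting can also be passed when the session is launched, using Spark's standard `--conf` mechanism (equivalent to the `set` statement above; the `spark-sql` launcher path depends on your deployment):
   ``` shell
   # Launch-time equivalent of the SET statement above (assumes a standard
   # spark-sql launcher on PATH; --conf is the usual Spark config flag).
   spark-sql --conf spark.gluten.sql.columnar.backend.ch.runtime_settings.join_algorithm=grace_hash
   ```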
   the following query fails:
   
   ``` sql
   CREATE TABLE my_customer (
     c_customer_sk INT)
   USING orc;
   CREATE TABLE my_store_sales (
     ss_sold_date_sk INT,
     ss_customer_sk INT)
   USING orc;
   CREATE TABLE my_date_dim (
     d_date_sk INT,
     d_year INT,
     d_qoy INT)
   USING orc;
   insert into my_customer values (1), (2), (3), (4);
   insert into my_store_sales values (1, 1), (2, 2), (3, 3), (4, 4);
   insert into my_date_dim values (1, 2002, 1), (2, 2002, 2);
   set spark.sql.autoBroadcastJoinThreshold=-1;
   SELECT
     count(*) cnt1
   FROM
     my_customer c
   WHERE
       exists(SELECT *
              FROM my_store_sales, my_date_dim
              WHERE c.c_customer_sk = ss_customer_sk AND
                ss_sold_date_sk = d_date_sk AND
                d_year = 2002 AND
                d_qoy < 4)
   LIMIT 100;
   ```
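   For reference, with the sample rows above the query should return `cnt1 = 2` (only customers 1 and 2 have a matching sale whose date row has `d_year = 2002` and `d_qoy < 4`). A minimal Python sketch replaying the `EXISTS` semijoin semantics over the same data, as a correctness reference for the repro (illustration only, not Gluten code):
   ``` python
   # Replay the EXISTS semijoin from the repro query over the inserted rows.
   customers = [1, 2, 3, 4]                        # my_customer.c_customer_sk
   store_sales = [(1, 1), (2, 2), (3, 3), (4, 4)]  # (ss_sold_date_sk, ss_customer_sk)
   date_dim = [(1, 2002, 1), (2, 2002, 2)]         # (d_date_sk, d_year, d_qoy)

   def exists_for(c):
       # EXISTS (... WHERE c = ss_customer_sk AND ss_sold_date_sk = d_date_sk
       #             AND d_year = 2002 AND d_qoy < 4)
       return any(
           ss_customer == c and ss_date == d_date and d_year == 2002 and d_qoy < 4
           for ss_date, ss_customer in store_sales
           for d_date, d_year, d_qoy in date_dim
       )

   cnt1 = sum(1 for c in customers if exists_for(c))
   print(cnt1)  # 2
   ```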
   
   Error log:
   
   ```
   org.apache.gluten.exception.GlutenException: org.apache.gluten.exception.GlutenException: Not found column CAST(col_0,Nullable(I_1) in block. There are only columns: right_2.col_0: While executing FillingRightJoinSide
   0. Poco::Exception::Exception(String const&, int) @ 0x0000000011febf39
   1. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000b1cf059
   2. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x0000000006358d8c
   3. DB::Exception::Exception<String const&, String>(int, FormatStringHelperImpl<std::type_identity<String const&>::type, std::type_identity<String>::type>, String const&, String&&) @ 0x00000000063c6c6b
   4. DB::Block::getByName(String const&, bool) const @ 0x000000000db899f1
   5. DB::JoinCommon::scatterBlockByHash(std::vector<String, std::allocator<String>> const&, DB::Block const&, unsigned long) @ 0x000000000e7d5bdc
   6. DB::GraceHashJoin::addBlockToJoinImpl(DB::Block) @ 0x000000000e61349c
   7. DB::GraceHashJoin::addBlockToJoin(DB::Block const&, bool) @ 0x000000000e61312d
   8. DB::FillingRightJoinSideTransform::work() @ 0x000000000fb883a6
   9. DB::ExecutionThreadContext::executeTask() @ 0x000000000fa8c922
   10. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x000000000fa82f90
   11. DB::PipelineExecutor::executeStep(std::atomic<bool>*) @ 0x000000000fa82b69
   12. DB::PullingPipelineExecutor::pull(DB::Chunk&) @ 0x000000000fa92664
   13. DB::PullingPipelineExecutor::pull(DB::Block&) @ 0x000000000fa927b9
   14. local_engine::LocalExecutor::hasNext() @ 0x000000000b561e49
   15. Java_org_apache_gluten_vectorized_BatchIterator_nativeHasNext @ 0x00000000063402d7
   ```
   
   
   ### Spark version
   
   Spark-3.3.x
   
   ### Spark configurations
   
   _No response_
   
   ### System information
   
   _No response_
   
   ### Relevant logs
   
   _No response_

