lgbo-ustc opened a new issue, #5497:
URL: https://github.com/apache/incubator-gluten/issues/5497

   ### Backend
   
   CH (ClickHouse)
   
   ### Bug description
   
   Run the following SQL:
   ```sql
   select t1.n_nationkey as x, t1.n_regionkey, t2.n_nationkey as y
   from tpch_pq.nation as t1
   left join default.nation as t2
     on t1.n_nationkey = t2.n_nationkey
   where t2.n_nationkey > 1
   order by x;
   ```
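
   In vanilla Spark 3.3 this rewrite is gated by `spark.sql.optimizer.runtime.bloomFilter.enabled`. A minimal spark-shell reproduction sketch, assuming a Gluten (CH backend) enabled session named `spark` and that both `nation` tables already exist (session setup elided):
   
   ```scala
   // Reproduction sketch (assumptions: a Gluten-enabled SparkSession `spark`,
   // and tpch_pq.nation / default.nation already populated).
   spark.conf.set("spark.sql.optimizer.runtime.bloomFilter.enabled", "true")

   val df = spark.sql(
     """select t1.n_nationkey as x, t1.n_regionkey, t2.n_nationkey as y
       |from tpch_pq.nation as t1
       |left join default.nation as t2
       |  on t1.n_nationkey = t2.n_nationkey
       |where t2.n_nationkey > 1
       |order by x""".stripMargin)

   df.explain()  // prints the physical plan below
   ```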
   
   This yields the following physical plan:
   ```
   == Physical Plan ==
   CHNativeColumnarToRow
   +- ^(61) SortExecTransformer [x#251L ASC NULLS FIRST], true, 0
      +- ^(61) InputIteratorTransformer[x#251L, n_regionkey#4L, y#252L]
         +- ^(61) InputAdapter
            +- ^(61) ColumnarExchange rangepartitioning(x#251L ASC NULLS FIRST, 5), ENSURE_REQUIREMENTS, [plan_id=2232], [id=#2232], [OUTPUT] List(x:LongType, n_regionkey:LongType, y:LongType), [OUTPUT] List(x:LongType, n_regionkey:LongType, y:LongType)
               +- ^(60) ProjectExecTransformer [n_nationkey#2L AS x#251L, n_regionkey#4L, n_nationkey#172L AS y#252L]
                  +- ^(60) CHShuffledHashJoinExecTransformer [n_nationkey#2L], [n_nationkey#172L], Inner, BuildRight
                     :- ^(60) InputIteratorTransformer[n_nationkey#2L, n_regionkey#4L]
                     :  +- ^(60) InputAdapter
                     :     +- ^(60) ColumnarExchange hashpartitioning(n_nationkey#2L, 5), ENSURE_REQUIREMENTS, [plan_id=2222], [id=#2222], [OUTPUT] List(n_nationkey:LongType, n_regionkey:LongType), [OUTPUT] List(n_nationkey:LongType, n_regionkey:LongType)
                     :        +- ^(58) FilterExecTransformer (((n_nationkey#2L > 1) AND isnotnull(n_nationkey#2L)) AND might_contain(Subquery scalar-subquery#258, [id=#2138], xxhash64(n_nationkey#2L, 42)))
                     :           :  +- Subquery scalar-subquery#258, [id=#2138]
                     :           :     +- CHNativeColumnarToRow
                     :           :        +- ^(57) HashAggregateTransformer(keys=[], functions=[bloom_filter_agg(xxhash64(n_nationkey#172L, 42), 1000000, 8388608, 0, 0)], output=[bloomFilter#257])
                     :           :           +- ^(57) InputIteratorTransformer[buf#260]
                     :           :              +- ^(57) InputAdapter
                     :           :                 +- ^(57) ColumnarExchange SinglePartition, ENSURE_REQUIREMENTS, [plan_id=2132], [id=#2132], [OUTPUT] List(buf:BinaryType), [OUTPUT] List(buf:BinaryType)
                     :           :                    +- ^(56) HashAggregateTransformer(keys=[], functions=[partial_bloom_filter_agg(_pre_6#261L, 1000000, 8388608, 0, 0)], output=[buf#260])
                     :           :                       +- ^(56) ProjectExecTransformer [n_nationkey#172L, xxhash64(n_nationkey#172L, 42) AS _pre_6#261L]
                     :           :                          +- ^(56) FilterExecTransformer (isnotnull(n_nationkey#172L) AND (n_nationkey#172L > 1))
                     :           :                             +- ^(56) NativeFileScan parquet default.nation[n_nationkey#172L] Batched: true, DataFilters: [isnotnull(n_nationkey#172L), (n_nationkey#172L > 1)], Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/data3/liangjiabiao/docker/local_gluten/spark-3.3.2-bin-hadoop3/s..., PartitionFilters: [], PushedFilters: [IsNotNull(n_nationkey), GreaterThan(n_nationkey,1)], ReadSchema: struct<n_nationkey:bigint>
                     :           +- ^(58) NativeFileScan parquet tpch_pq.nation[n_nationkey#2L,n_regionkey#4L] Batched: true, DataFilters: [(n_nationkey#2L > 1), isnotnull(n_nationkey#2L)], Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/home/liangjiabiao/workspace/docker/local_gluten/tpch_pq_data/nat..., PartitionFilters: [], PushedFilters: [GreaterThan(n_nationkey,1), IsNotNull(n_nationkey)], ReadSchema: struct<n_nationkey:bigint,n_regionkey:bigint>
                     +- ^(60) InputIteratorTransformer[n_nationkey#172L]
                        +- ^(60) InputAdapter
                           +- ^(60) ColumnarExchange hashpartitioning(n_nationkey#172L, 5), ENSURE_REQUIREMENTS, [plan_id=2226], [id=#2226], [OUTPUT] List(n_nationkey:LongType), [OUTPUT] List(n_nationkey:LongType)
                              +- ^(59) FilterExecTransformer (isnotnull(n_nationkey#172L) AND (n_nationkey#172L > 1))
                                 +- ^(59) NativeFileScan parquet default.nation[n_nationkey#172L] Batched: true, DataFilters: [isnotnull(n_nationkey#172L), (n_nationkey#172L > 1)], Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/data3/liangjiabiao/docker/local_gluten/spark-3.3.2-bin-hadoop3/s..., PartitionFilters: [], PushedFilters: [IsNotNull(n_nationkey), GreaterThan(n_nationkey,1)], ReadSchema: struct<n_nationkey:bigint>
   ```
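
   The `bloom_filter_agg` / `might_contain` pair is Spark 3.3's runtime bloom-filter join rewrite: a scalar subquery aggregates the build-side join keys into a bloom filter, and the probe-side filter uses it to prune rows before the shuffle. The `1000000, 8388608` arguments appear to match the stock defaults for `spark.sql.optimizer.runtime.bloomFilter.expectedNumItems` and `...numBits`. (Note the `left join` is planned as `Inner` because `where t2.n_nationkey > 1` rejects the NULL rows an outer join would preserve.) The sketch below illustrates only the filter's semantics via Spark's public `org.apache.spark.util.sketch.BloomFilter`, with the same parameters; the actual plan inserts `xxhash64(key, 42)` hash values rather than raw keys:
   
   ```scala
   import org.apache.spark.util.sketch.BloomFilter

   // Illustration of the runtime filter's semantics, not Gluten's implementation.
   // Same parameters as bloom_filter_agg above: 1,000,000 expected items, 8,388,608 bits.
   val bf = BloomFilter.create(1000000L, 8388608L)

   // Build side: keys from default.nation that pass n_nationkey > 1.
   Seq(2L, 3L, 4L).foreach(bf.putLong)

   // Probe side: might_contain keeps possible matches and prunes certain misses.
   assert(bf.mightContainLong(3L))   // definitely kept (no false negatives)
   println(bf.mightContainLong(0L))  // almost certainly false -> row pruned
   ```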
   
   The fallback stages are the following; note the `CHNativeColumnarToRow` inserted above the final `bloom_filter_agg` aggregation inside the subquery:
   ```
                     :        +- ^(58) FilterExecTransformer (((n_nationkey#2L > 1) AND isnotnull(n_nationkey#2L)) AND might_contain(Subquery scalar-subquery#258, [id=#2138], xxhash64(n_nationkey#2L, 42)))
                     :           :  +- Subquery scalar-subquery#258, [id=#2138]
                     :           :     +- CHNativeColumnarToRow
                     :           :        +- ^(57) HashAggregateTransformer(keys=[], functions=[bloom_filter_agg(xxhash64(n_nationkey#172L, 42), 1000000, 8388608, 0, 0)], output=[bloomFilter#257])
                     :           :           +- ^(57) InputIteratorTransformer[buf#260]
   ```
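
   Reading the excerpt, the break out of native execution appears to be the `CHNativeColumnarToRow` between the scalar subquery and the final `bloom_filter_agg` aggregation: a scalar subquery's result is collected as rows on the driver before `might_contain` can consume it, so that edge cannot stay columnar. Until the fallback is resolved, one possible mitigation (untested here) is to disable the rewrite so the subquery is never injected:
   
   ```scala
   // Possible mitigation sketch (untested): turn off Spark's runtime
   // bloom-filter join rewrite so the might_contain subquery is not injected.
   spark.conf.set("spark.sql.optimizer.runtime.bloomFilter.enabled", "false")
   ```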
   
   ### Spark version
   
   None given explicitly; the scan paths in the plan reference spark-3.3.2-bin-hadoop3.
   
   ### Spark configurations
   
   _No response_
   
   ### System information
   
   _No response_
   
   ### Relevant logs
   
   _No response_

