liuneng1994 commented on PR #6517:
URL: 
https://github.com/apache/incubator-gluten/pull/6517#issuecomment-2328380724

   I encountered the same problem. In Delta, `input_file_name` and 
`monotonically_increasing_id` are used at the same time. 
`monotonically_increasing_id` is a stateful function, which is not easy to 
support natively. The existing logic loses the fallback tag on the child of 
`input_file_name`, resulting in an incorrect fallback.
   
   Example plan:
   ```
   BroadcastHashJoin [l_orderkey#363L], [l_orderkey#906L], Inner, BuildRight, 
false
   :- Project [l_orderkey#363L]
   :  +- Filter isnotnull(l_orderkey#363L)
   :     +- Filter UDF()
   :        +- Scan ExistingRDD 
mergeMaterializedSource[l_orderkey#363L,l_partkey#364L,l_suppkey#365L,l_linenumber#366L,l_quantity#367,l_extendedprice#368,l_discount#369,l_tax#370,l_returnflag#903,l_linestatus#372,l_shipdate#373,l_commitdate#374,l_receiptdate#375,l_shipinstruct#376,l_shipmode#377,l_comment#378]
   +- BroadcastExchange HashedRelationBroadcastMode(List(input[0, bigint, 
false]),false), [plan_id=1263]
      +- Filter isnotnull(l_orderkey#906L)
         +- Project [l_orderkey#906L, _row_id_#1183L, input_file_name() AS 
_file_name_#1201]
            +- Project [l_orderkey#906L, monotonically_increasing_id() AS 
_row_id_#1183L]
               +- FileScan mergetree [l_orderkey#906L] Batched: true, 
DataFilters: [], Format: MergeTree, Location: TahoeBatchFileIndex(1 
paths)[file:/home/admin1/github/gazelle-jni/backends-clickhouse/target/scal..., 
PartitionFilters: [], PushedFilters: [], ReadSchema: struct<l_orderkey:bigint>
   
   ```
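To make the tag-loss issue concrete, here is a minimal sketch of fallback-tag propagation over a plan tree. This is not Gluten's actual code; the class and function names are hypothetical, and the "buggy" pass only models the reported behavior: when the `input_file_name` subtree is (re)tagged, a fallback reason already set on its child (here, the `monotonically_increasing_id` project) is overwritten and the child is wrongly treated as offloadable.

```python
# Illustrative model only -- NOT Gluten's real fallback-tagging logic.

class PlanNode:
    """A toy plan node carrying an optional fallback reason."""
    def __init__(self, name, children=(), fallback_reason=None):
        self.name = name
        self.children = list(children)
        # A non-None reason means this node must fall back to vanilla Spark.
        self.fallback_reason = fallback_reason

def tag_buggy(node):
    """Models the existing behavior: retagging the input_file_name
    subtree clobbers any fallback tag already set on its children."""
    if node.name == "input_file_name":
        for child in node.children:
            child.fallback_reason = None  # BUG: drops the child's tag
    for child in node.children:
        tag_buggy(child)

def tag_preserving(node):
    """Models a fix: never overwrite an existing fallback tag."""
    if node.name == "input_file_name":
        for child in node.children:
            if child.fallback_reason is None:
                child.fallback_reason = "child of input_file_name"
    for child in node.children:
        tag_preserving(child)

# The shape from the plan above: input_file_name over a stateful child.
mid = PlanNode("monotonically_increasing_id",
               fallback_reason="stateful, no native support")
ifn = PlanNode("input_file_name", children=[mid])
tag_buggy(ifn)
print(mid.fallback_reason)  # None: the child's fallback tag was lost
```

With `tag_preserving`, the child keeps its original reason and correctly falls back, which is the behavior this comment argues the existing logic should have.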


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

