KevinyhZou commented on issue #7220:
URL: https://github.com/apache/incubator-gluten/issues/7220#issuecomment-2348000056
```
CHNativeColumnarToRow
+- ^(2) ProjectExecTransformer [2024-08-26 AS day#0, id#12L, name#13]
+- ^(2) HashAggregateTransformer(keys=[id#12L, name#13,
spark_grouping_id#11L], functions=[], isStreamingAgg=false)
+- ^(2) InputIteratorTransformer[id#12L, name#13,
spark_grouping_id#11L]
+- ColumnarExchange hashpartitioning(id#12L, name#13,
spark_grouping_id#11L, 1), ENSURE_REQUIREMENTS, [plan_id=112],
[shuffle_writer_type=hash], [OUTPUT] List(id:LongType, name:StringType,
spark_grouping_id:LongType)
+- ^(1) HashAggregateTransformer(keys=[id#12L, name#13,
spark_grouping_id#11L], functions=[], isStreamingAgg=false)
+- ^(1) FilterExecTransformer (isnotnull(name#13) AND
(name#13 = a124))
+- ^(1) ExpandExecTransformer [[id#6L, null, 1], [id#6L,
name#7, 0]], [id#12L, name#13, spark_grouping_id#11L]
+- ^(1) ProjectExecTransformer [id#6L, name#7]
+- ^(1) NativeFileScan parquet
default.test_tbl2[id#6L,name#7,day#8] Batched: true, DataFilters: [], Format:
Parquet, Location: CatalogFileIndex(1
paths)[hdfs://testcluster/user/hive/warehouse/test_tbl2], PartitionFilters: [],
PushedFilters: [], ReadSchema: struct<id:bigint,name:string>
```
Expand must be applied per projection group -- [id#6L, null, 1] and [id#6L, name#7, 0] -- to produce the output columns [id#12L, name#13, spark_grouping_id#11L]. Following the `ExpandTransform` logic, expand is first applied to [id#6L, null, 1]. Once that data flows downstream, it is filtered by `where name = 'a124'` and produces zero rows, which makes ISource wrongly conclude that the input has ended.

As a result, the expand terminates early: the remaining projection group is never expanded, and the query returns no results.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]