mbutrovich commented on issue #3607: URL: https://github.com/apache/datafusion-comet/issues/3607#issuecomment-3992086481
Dug into this a bit with Claude, and I'm pleased with the investigation: the zero-column guard in `adapt_batch_with_expressions` (lines 358-365) is necessary and correct.

**Root cause**: When Spark plans a query that only needs row counts (e.g., a scalar subquery like `SELECT (SELECT max(id) FROM t) ...`, or the internal sub-plans of MOR delete/update/merge operations), the `IcebergScanExec` receives a target schema with zero fields. iceberg-rust returns batches with zero columns but non-zero row counts. `RecordBatch::try_new` fails in this case because, with zero columns, it cannot infer the row count; it needs `try_new_with_options` with an explicit `with_row_count`.

**Evidence**: Instrumenting the failing `TestViews.createViewWithSubqueryExpressionInQueryThatIsRewritten` test confirmed:

```
batch rows=1 cols=0 file_schema_fields=[] target_schema_fields=[]
batch rows=2 cols=0 file_schema_fields=[] target_schema_fields=[]
```

Both the file schema and the target schema have zero fields; the batches carry only row counts, no column data.

**Why this appeared in the DF 52 migration**: The old DF 51 `SchemaMapper::map_batch` always used `RecordBatch::try_new_with_options` with an explicit row count (with the comment "Necessary to handle empty batches"). The DF 52 migration to `PhysicalExprAdapter` moved from batch-level mapping to expression-level rewriting, which lost that implicit handling. The guard re-adds it.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at: [email protected]
