rspears74 opened a new issue, #8975: URL: https://github.com/apache/arrow-datafusion/issues/8975
### Describe the bug

I have defined a custom UDAF to calculate the [t-digest](https://github.com/tdunning/t-digest) for columns and output a `List(Struct)` column. This runs successfully in DataFusion 33. When I update to DataFusion 34, I get the following error:

```
Error: External error: Arrow error: Invalid argument error: column types must match schema types, expected List(Field { name: "item", data_type: Float64, nullable: true, dict_id: 0, dict_is_ordered: false, metadata: {} }) but found List(Field { name: "item", data_type: Float64, nullable: false, dict_id: 0, dict_is_ordered: false, metadata: {} }) at column index 7
```

This is definitely because the `state_type` arg I pass to `create_udaf(..)` differs from the way I serialize the values in my `Accumulator` implementation's `state` method. I'm not sure why DataFusion 33 doesn't care about this but DataFusion 34 does. When I bring my state schema into parity between these two locations, with either `nullable: true` or `nullable: false`, I get the following error:

```
Error: Internal error: Empty iterator passed to ScalarValue::iter_to_array. This was likely caused by a bug in DataFusion's code and we would welcome that you file an bug report in our issue tracker
```

### To Reproduce

My use case is relatively complex, but this can potentially be reproduced by defining a UDAF that aggregates a column (potentially one with some null values?) into a `List(Struct)` column.

### Expected behavior

Successful completion of the aggregation.

### Additional context

I get the same behavior using DataFusion built from the current main branch.
