neilconway commented on code in PR #21154:
URL: https://github.com/apache/datafusion/pull/21154#discussion_r2990698697


##########
datafusion/functions-aggregate/src/string_agg.rs:
##########
@@ -315,10 +323,134 @@ fn filter_index<T: Clone>(values: &[T], index: usize) -> Vec<T> {
         .collect::<Vec<_>>()
 }
 
-/// StringAgg accumulator for the simple case (no order or distinct specified)
-/// This accumulator is more efficient than `StringAggAccumulator`
-/// because it accumulates the string directly,
-/// whereas `StringAggAccumulator` uses `ArrayAggAccumulator`.
+/// GroupsAccumulator for `string_agg` without DISTINCT or ORDER BY.
+#[derive(Debug)]
+struct StringAggGroupsAccumulator {
+    /// The delimiter placed between concatenated values.
+    delimiter: String,
+    /// Accumulated string per group. `None` means no values have been seen
+    /// (the group's output will be NULL).
+    values: Vec<Option<String>>,

Review Comment:
   Thanks for the suggestion! This could work, although it makes the partial-emit / space-reclamation logic considerably more complicated.
   
   If we're going to take on more complexity, we could go further and avoid copying the input strings during `update_batch` entirely: bump the Arc refcount on the input batch and keep `<group_id, batch_id, row_id>` triples, then assemble the actual results in `evaluate()` (similar to what #20504 does for `array_agg`). That would be quite a bit more complicated than this PR, but it could be worth it to reduce the amount of data being copied. I opened #21156 for this idea.
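   To make the idea concrete, here is a rough standalone sketch of the deferred-assembly approach. This is not DataFusion's `GroupsAccumulator` API: `DeferredStringAgg` and its methods are illustrative names, and a plain `Arc<Vec<String>>` stands in for an Arrow `StringArray` batch. `update_batch` only records `(group_id, batch_id, row_id)` triples and retains the batch; concatenation happens once, in `evaluate`:
   
   ```rust
   use std::sync::Arc;
   
   /// Hypothetical accumulator that defers string concatenation to emit time.
   struct DeferredStringAgg {
       delimiter: String,
       /// Retained input batches; bumping the Arc refcount avoids copying rows.
       batches: Vec<Arc<Vec<String>>>,
       /// One (group_id, batch_id, row_id) triple per accumulated value.
       triples: Vec<(usize, usize, usize)>,
       num_groups: usize,
   }
   
   impl DeferredStringAgg {
       fn new(delimiter: &str) -> Self {
           Self {
               delimiter: delimiter.to_string(),
               batches: Vec::new(),
               triples: Vec::new(),
               num_groups: 0,
           }
       }
   
       /// Record which rows belong to which groups; no string data is copied.
       fn update_batch(&mut self, batch: Arc<Vec<String>>, group_ids: &[usize]) {
           let batch_id = self.batches.len();
           for (row_id, &group_id) in group_ids.iter().enumerate() {
               self.num_groups = self.num_groups.max(group_id + 1);
               self.triples.push((group_id, batch_id, row_id));
           }
           self.batches.push(batch);
       }
   
       /// Assemble the per-group strings only once, at emit time.
       /// Groups that never saw a value stay `None` (NULL output).
       fn evaluate(&self) -> Vec<Option<String>> {
           let mut out: Vec<Option<String>> = vec![None; self.num_groups];
           for &(group_id, batch_id, row_id) in &self.triples {
               let value = &self.batches[batch_id][row_id];
               match &mut out[group_id] {
                   Some(s) => {
                       s.push_str(&self.delimiter);
                       s.push_str(value);
                   }
                   slot => *slot = Some(value.clone()),
               }
           }
           out
       }
   }
   ```
   
   The trade-off is that the retained batches pin their full input memory until `evaluate()` runs, which is exactly why the partial-emit / space-reclamation story needs more thought.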



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

