ethan-tyler commented on code in PR #19252:
URL: https://github.com/apache/datafusion/pull/19252#discussion_r2615410711


##########
datafusion/physical-plan/src/aggregates/no_grouping.rs:
##########
@@ -249,12 +250,30 @@ fn scalar_cmp_null_short_circuit(
     }
 }
 
+/// Prepend the grouping ID column to the output columns if present.
+///
+/// For GROUPING SETS with no GROUP BY expressions, the schema includes a `__grouping_id`
+/// column that must be present in the output. This function inserts it at the beginning
+/// of the columns array to maintain schema alignment.
+fn prepend_grouping_id_column(
+    mut columns: Vec<Arc<dyn arrow::array::Array>>,
+    grouping_id: Option<&ScalarValue>,
+) -> Result<Vec<Arc<dyn arrow::array::Array>>> {
+    if let Some(id) = grouping_id {
+        let num_rows = columns.first().map(|array| array.len()).unwrap_or(1);
+        let grouping_ids = id.to_array_of_size(num_rows)?;
+        columns.insert(0, grouping_ids);
+    }
+    Ok(columns)
+}
+
 impl AggregateStream {
     /// Create a new AggregateStream
     pub fn new(
         agg: &AggregateExec,
         context: &Arc<TaskContext>,
         partition: usize,
+        grouping_id: Option<ScalarValue>,

Review Comment:
   The parameter is gone from the signature, but `prepend_grouping_id_column` is
   still being called with a hardcoded `None`, which makes the whole `.and_then()`
   block a no-op.
   
   Not blocking, but we could remove the function and that call entirely, since
   `GROUPING SETS(())` goes through the grouped path anyway. This can be a
   follow-up cleanup if you'd rather not touch it now.
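   
   To make the no-op concrete, here's a minimal, self-contained sketch of the
   helper's control flow. It uses placeholder types (`Vec<String>` / `Option<i64>`)
   rather than the real arrow array and `ScalarValue` types, and is not the actual
   call site in this PR:
   
   ```rust
   // Hypothetical, simplified stand-in for prepend_grouping_id_column, just to
   // show the control flow; placeholder types, not the real arrow/ScalarValue ones.
   fn prepend_grouping_id_column(mut columns: Vec<String>, grouping_id: Option<i64>) -> Vec<String> {
       if let Some(id) = grouping_id {
           // In the real code this materializes a `__grouping_id` array and
           // inserts it at index 0 to keep the schema aligned.
           columns.insert(0, format!("__grouping_id = {id}"));
       }
       columns
   }
   
   fn main() {
       let columns = vec!["sum(x)".to_string()];
       // The call site in question hardcodes `None`, so the branch above never
       // runs and the columns come back untouched: a no-op.
       assert_eq!(prepend_grouping_id_column(columns, None), vec!["sum(x)"]);
   }
   ```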


