gaogaotiantian commented on code in PR #54285:
URL: https://github.com/apache/spark/pull/54285#discussion_r2801212587


##########
python/pyspark/pandas/groupby.py:
##########
@@ -2248,14 +2285,15 @@ def pandas_filter(pdf: pd.DataFrame) -> pd.DataFrame:
     @staticmethod
     def _prepare_group_map_apply(
         psdf: DataFrame, groupkeys: List[Series], agg_columns: List[Series]
-    ) -> Tuple[DataFrame, List[Label], List[str]]:
+    ) -> Tuple[DataFrame, List[Label], List[str], List[str]]:
         groupkey_labels: List[Label] = [
             verify_temp_column_name(psdf, "__groupkey_{}__".format(i))
             for i in range(len(groupkeys))
         ]
         psdf = psdf[[s.rename(label) for s, label in zip(groupkeys, groupkey_labels)] + agg_columns]
         groupkey_names = [label if len(label) > 1 else label[0] for label in groupkey_labels]
-        return DataFrame(psdf._internal.resolved_copy), groupkey_labels, groupkey_names  # type: ignore[return-value]
+        groupkey_psser_names = [psser.name for psser in groupkeys]

Review Comment:
   I did not fully understand why we need to add `groupkey_psser_names` here as a return value. It seems like it is only used in one place; everywhere else it's just thrown away, right? Why can't we just compute it once where it is needed?
   
   Also, even at that call site, it seems like the old way of iterating through `self._groupkeys` should be equivalent? I might've missed something.
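   
   For illustration, a minimal sketch of the alternative being suggested: derive the group-key names on demand from the group keys at the call site, rather than threading them through `_prepare_group_map_apply`'s return value. The `FakeSeries` class and the sample data below are hypothetical stand-ins, not the real pandas-on-Spark objects.
   
   ```python
   # Hypothetical sketch, not the actual pyspark code: the group-key Series
   # names can be derived on demand wherever they are needed, instead of
   # being returned from the helper and discarded by most callers.
   
   class FakeSeries:
       """Stand-in for a pandas-on-Spark Series; only `name` matters here."""
   
       def __init__(self, name):
           self.name = name
   
   
   groupkeys = [FakeSeries("a"), FakeSeries(("b", "c"))]
   
   # Equivalent to the `groupkey_psser_names` list built inside the helper,
   # but computed once at the single call site that actually uses it:
   groupkey_psser_names = [psser.name for psser in groupkeys]
   print(groupkey_psser_names)  # ['a', ('b', 'c')]
   ```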



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

