Kimahriman commented on code in PR #48038:
URL: https://github.com/apache/spark/pull/48038#discussion_r1774927552
##########
python/pyspark/worker.py:
##########
@@ -333,17 +336,38 @@ def wrap_cogrouped_map_arrow_udf(f, return_type, argspec, runner_conf):
         (col.name, to_arrow_type(col.dataType)) for col in return_type.fields
     ]
-    def wrapped(left_key_table, left_value_table, right_key_table, right_value_table):
-        if len(argspec.args) == 2:
-            result = f(left_value_table, right_value_table)
-        elif len(argspec.args) == 3:
-            key_table = left_key_table if left_key_table.num_rows > 0 else right_key_table
-            key = tuple(c[0] for c in key_table.columns)
-            result = f(key, left_value_table, right_value_table)
-
-        verify_arrow_result(result, _assign_cols_by_name, expected_cols_and_types)
+    def wrapped(left_key_batch, left_value_batches, right_key_batch, right_value_batches):
+        if is_generator:
+            if len(argspec.args) == 2:
+                result = f(left_value_batches, right_value_batches)
+            elif len(argspec.args) == 3:
+                key_batch = left_key_batch if left_key_batch.num_rows > 0 else right_key_batch
+                key = tuple(c[0] for c in key_batch.columns)
+                result = f(key, left_value_batches, right_value_batches)
+
+            def verify_element(batch):
+                verify_arrow_batch(batch, _assign_cols_by_name, expected_cols_and_types)
+                return batch
+
+            yield from map(verify_element, result)
+            # Make sure both iterators are fully consumed
Review Comment:
It would be great if we could, but I don't really think it's possible. The iterators just come from a stream of bytes over a channel, so there's no way to tell the upstream that we're done with this single Arrow stream. If someone comes up with a way in the future, it can always be improved later.
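
For illustration (not part of the original comment), here's a minimal sketch of why the remaining batches have to be read: the two cogroup sides arrive as Arrow IPC streams serialized back-to-back over one channel, and the reader can only reach the next stream's bytes by consuming the current stream through its end-of-stream marker. Everything below is hypothetical (toy schema, a `BytesIO` standing in for the worker's socket), not Spark's actual `worker.py` code:

```python
import io
import pyarrow as pa

# Hypothetical stand-in for the worker's byte channel: two Arrow IPC
# streams written back-to-back into a single buffer.
schema = pa.schema([("x", pa.int64())])
buf = io.BytesIO()
for _ in range(2):
    with pa.ipc.new_stream(buf, schema) as writer:
        writer.write_batch(pa.record_batch([pa.array([1, 2, 3])], schema=schema))
buf.seek(0)

# The second stream's schema message sits right after the first stream's
# end-of-stream marker, so the first reader must be drained even if the
# UDF never asked for those batches.
first = pa.ipc.open_stream(buf)
for _ in first:  # fully consume: every batch plus the EOS marker
    pass
second = pa.ipc.open_stream(buf)
print(second.read_all().num_rows)  # 3
```

On a live socket there is no seek or "skip stream" operation, which is why the wrapper drains whatever the UDF left unread instead of trying to signal upstream.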