allisonwang-db commented on code in PR #42520:
URL: https://github.com/apache/spark/pull/42520#discussion_r1296461894
##########
python/pyspark/worker.py:
##########
@@ -628,9 +628,36 @@ def verify_result(result):
             )
         return result

-    return lambda *a, **kw: map(
-        lambda res: (res, arrow_return_type), map(verify_result, f(*a, **kw))
-    )
+    def evaluate(*args: pd.Series, **kwargs: pd.Series):
+        try:
+            if len(args) == 0 and len(kwargs) == 0:
+                yield verify_result(pd.DataFrame(f())), arrow_return_type
Review Comment:
I see we have a try/except around the entire `evaluate` function. Could we
wrap only the invocation of `f()`? This would separate errors raised by the
user's code from errors thrown by the worker code.
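
For illustration, a rough sketch of the narrower wrapping (this reuses `f`,
`verify_result`, and `arrow_return_type` from the diff above; the
`RuntimeError` wrapping is a placeholder, not the worker's actual error API):

```python
import pandas as pd

def evaluate(*args: pd.Series, **kwargs: pd.Series):
    if len(args) == 0 and len(kwargs) == 0:
        # Guard only the user's function call, so failures here are
        # clearly attributable to user code.
        try:
            result = f()
        except Exception as e:
            # Placeholder wrapping; real code would raise the worker's
            # dedicated user-code error type.
            raise RuntimeError(f"user-defined table function failed: {e}") from e
        # Result verification and DataFrame construction stay outside
        # the try block, so their failures surface as worker errors.
        yield verify_result(pd.DataFrame(result)), arrow_return_type
```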
##########
python/pyspark/worker.py:
##########
@@ -628,9 +628,36 @@ def verify_result(result):
             )
         return result

-    return lambda *a, **kw: map(
-        lambda res: (res, arrow_return_type), map(verify_result, f(*a, **kw))
-    )
+    def evaluate(*args: pd.Series, **kwargs: pd.Series):
+        try:
+            if len(args) == 0 and len(kwargs) == 0:
+                yield verify_result(pd.DataFrame(f())), arrow_return_type
+            else:
+                # Create tuples from the input pandas Series, each tuple
+                # represents a row across all Series.
+                keys = list(kwargs.keys())
+                len_args = len(args)
+                row_tuples = zip(*args, *[kwargs[key] for key in keys])
+                for row in row_tuples:
+                    res = f(
+                        *row[:len_args],
+                        **{key: row[len_args + i] for i, key in enumerate(keys)},
+                    )
Review Comment:
Ditto: wrap only the `f(...)` invocation here as well.
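
For the row-at-a-time branch, the same narrowing could look roughly like this
(again reusing names from the diff; the error wrapping is illustrative):

```python
for row in row_tuples:
    # Guard only the per-row invocation of the user's function.
    try:
        res = f(
            *row[:len_args],
            **{key: row[len_args + i] for i, key in enumerate(keys)},
        )
    except Exception as e:
        # Placeholder wrapping, as in the sketch above.
        raise RuntimeError(f"user-defined table function failed: {e}") from e
    # Verification of `res` happens after the try block, so those
    # failures remain attributable to the worker code.
```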
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]