amaliujia commented on code in PR #38393:
URL: https://github.com/apache/spark/pull/38393#discussion_r1006436736
##########
python/pyspark/sql/connect/dataframe.py:
##########
@@ -218,8 +218,13 @@ def head(self, n: int) -> Optional["pandas.DataFrame"]:
self.limit(n)
return self.toPandas()
- # TODO(martin.grund) fix mypu
- def join(self, other: "DataFrame", on: Any, how: Optional[str] = None) -> "DataFrame":
+ # TODO: extend `on` to also be type List[ColumnRef].
Review Comment:
Join condition could be something like `[df.name == df2.name, df.age == df2.age]`,
which would be mapped to List[ColumnRef].
The current expression system in Connect is imperfect in that we cannot convert
the expression above into a single expression. There are mixed ColumnRef,
String, Expression, List, etc., and we need to figure out a way to unify all of
them into ultimately one expression. If that is too hard, then at the very least
we need to change the proto to accept a list of Expressions.
I will follow up on expression support in Connect.
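To illustrate the unification being discussed, here is a minimal, hypothetical sketch (not Connect's actual API; the `Expression`, `ColumnRef`, and `BinaryExpression` classes and the `normalize_join_on` helper are illustrative stand-ins) of how a mixed `on` argument of strings, column references, expressions, or a list of those could be folded into one expression tree by AND-chaining list elements:

```python
from typing import List, Optional, Union


class Expression:
    """Base class of a toy unified expression tree (illustrative only)."""


class ColumnRef(Expression):
    def __init__(self, name: str) -> None:
        self.name = name

    # `df.name == df2.name` builds an expression node instead of a bool,
    # mirroring how PySpark Columns overload comparison operators.
    def __eq__(self, other: object) -> "BinaryExpression":  # type: ignore[override]
        return BinaryExpression("==", self, other)

    def __repr__(self) -> str:
        return self.name


class BinaryExpression(Expression):
    def __init__(self, op: str, left: object, right: object) -> None:
        self.op, self.left, self.right = op, left, right

    def __repr__(self) -> str:
        return f"({self.left!r} {self.op} {self.right!r})"


def normalize_join_on(
    on: Optional[Union[str, Expression, List[Union[str, Expression]]]],
) -> Optional[Expression]:
    """Fold the mixed `on` forms into a single Expression (or None)."""
    if on is None:
        return None
    if isinstance(on, str):
        # Simplification: treat a bare column name as a column reference.
        return ColumnRef(on)
    if isinstance(on, Expression):
        return on
    # A list of names and/or expressions is folded into one AND chain.
    exprs = [ColumnRef(e) if isinstance(e, str) else e for e in on]
    result: Expression = exprs[0]
    for expr in exprs[1:]:
        result = BinaryExpression("AND", result, expr)
    return result
```

With this shape, a `List[ColumnRef]`-style condition such as `[ColumnRef("name") == ColumnRef("name2"), ColumnRef("age") == ColumnRef("age2")]` normalizes to a single tree, `((name == name2) AND (age == age2))`, which is the kind of "one expression" (or, alternatively, a repeated Expression field in the proto) the comment is asking for.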
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]