grundprinzip commented on code in PR #38793:
URL: https://github.com/apache/spark/pull/38793#discussion_r1032061331
##########
connector/connect/src/main/protobuf/spark/connect/relations.proto:
##########
@@ -457,3 +458,16 @@ message RenameColumnsByNameToNameMap {
// duplicated B are not allowed.
map<string, string> rename_columns_map = 2;
}
+
+// Adding columns or replacing the existing columns that have the same names.
+message WithColumns {
+ // (Required) The input relation.
+ Relation input = 1;
+
+ // (Required)
+ //
+ // Given a column name, apply corresponding expression on the column. If column
+ // name exists in the input relation, then replacing the column. If column name
+ // does not exist in the input relation, then adding the column.
+ map<string, Expression> cols_map = 2;
Review Comment:
+1 on using a repeated tuple here. The reason is that the behavior should be
defined by the API, not by the language implementation. The only reason we have
consistent behavior in Spark today is that Python converts to Scala before even
calling the method, so there is effectively only one client implementation.
My suggestion would be to change this to:
```
repeated Expression.Alias col_map
```
An Alias holds a reference to an arbitrary expression together with a name,
which is exactly what we want.
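For illustration, a minimal sketch of what the revised message could look like
under this suggestion (the field number and the exact shape of
`Expression.Alias` are assumptions based on this thread, not the final
protocol):

```
// Hypothetical revision of WithColumns using the suggested repeated field.
// Unlike a proto map, a repeated field preserves insertion order, so the
// replace-or-append semantics are defined once by the server rather than
// by each client language's map implementation.
message WithColumns {
  // (Required) The input relation.
  Relation input = 1;

  // (Required) Each Alias pairs an arbitrary expression with a column name:
  // if the name exists in the input relation, that column is replaced;
  // otherwise a new column is appended.
  repeated Expression.Alias col_map = 2;
}
```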
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]