hvanhovell commented on code in PR #38793:
URL: https://github.com/apache/spark/pull/38793#discussion_r1031929419
##########
connector/connect/src/main/protobuf/spark/connect/relations.proto:
##########
@@ -457,3 +458,16 @@ message RenameColumnsByNameToNameMap {
// duplicated B are not allowed.
map<string, string> rename_columns_map = 2;
}
+
+// Adding columns or replacing the existing columns that has the same names.
+message WithColumns {
+ // (Required) The input relation.
+ Relation input = 1;
+
+ // (Required)
+ //
+  // Given a column name, apply corresponding expression on the column. If column
+  // name exists in the input relation, then replacing the column. if column name
+  // does not exist in the input relation, then adding the column.
+ map<string, Expression> cols_map = 2;
Review Comment:
There is nothing we can do about the current APIs. I just think there is a
very real chance that we will end up with a version of withColumns that takes
an ordered collection of columns. A downside of the current APIs is that when
you change platform versions you might end up with a different ordering.
One of the weirder things here is that the order you see on the client side
(by iterating over the map) does not have to be the same order used by the
server, because the map implementations on the two ends are different (e.g.
they use different hashes). This will be confusing to end users. I think that
alone is a very strong reason to change this to a list of name-expression
pairs.
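
A minimal sketch of the alternative suggested above: replacing the proto map
with a repeated list of name-expression pairs, which preserves a single
well-defined order on both client and server. The message and field names
below (`NameExpressionPair`, `cols`) are illustrative assumptions, not names
taken from the PR.

```proto
// Sketch only: message and field names are hypothetical.
message WithColumns {
  // (Required) The input relation.
  Relation input = 1;

  // (Required) Name-expression pairs. Unlike a proto map, a repeated field
  // has a guaranteed, serialized order, so client and server agree on it.
  repeated NameExpressionPair cols = 2;
}

message NameExpressionPair {
  // Column name: replaced if it exists in the input, added otherwise.
  string name = 1;
  // Expression to compute the column.
  Expression expr = 2;
}
```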
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]