cloud-fan commented on code in PR #38192:
URL: https://github.com/apache/spark/pull/38192#discussion_r997102043
##########
connector/connect/src/main/protobuf/spark/connect/commands.proto:
##########
@@ -62,3 +65,39 @@ message CreateScalarFunction {
FUNCTION_LANGUAGE_SCALA = 3;
}
}
+
+// As writes are not directly handled during analysis and planning, they are
+// modeled as commands.
+message WriteOperation {
+ // The output of the `input` relation will be persisted according to the
+ // options.
+ Relation input = 1;
+ // Format value according to the Spark documentation. Examples are: text,
+ // parquet, delta.
+ string source = 2;
+ // The destination of the write operation must be either a path or a table.
Review Comment:
In the DF API, people can do `df.write.format("jdbc").option("table",
...).save()`, so the destination is neither a path nor a table. I think an
optional table name is sufficient. If the table name is not given, the
destination will be figured out from the write options (a path is just one
write option).
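The distinction cloud-fan describes can be illustrated with the existing DataFrame writer API. This is a sketch, not code from the PR: `df` is an assumed DataFrame, and the JDBC connection values are placeholder examples (the JDBC source's real option names are `url` and `dbtable` rather than `table`):

```scala
// 1. Destination given as an explicit path (path is itself a write option):
df.write.format("parquet").save("/tmp/out")

// 2. Destination given as a table name in the catalog:
df.write.format("parquet").saveAsTable("my_table")

// 3. Neither path nor table name passed to save(): the destination is
//    resolved entirely from source-specific write options, e.g. JDBC:
df.write
  .format("jdbc")
  .option("url", "jdbc:postgresql://localhost/db") // placeholder URL
  .option("dbtable", "my_table")
  .save()
```

Case 3 is why a required path-or-table destination field would be too restrictive; an optional table name plus a generic options map covers all three.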
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]