HyukjinKwon commented on code in PR #38270:
URL: https://github.com/apache/spark/pull/38270#discussion_r996537113
##########
connector/connect/src/main/scala/org/apache/spark/sql/connect/dsl/package.scala:
##########
@@ -64,6 +64,36 @@ package object dsl {
.build()
}
+  /**
+   * Create an unresolved function from name parts.
+   *
+   * @param nameParts the multi-part name of the function to resolve.
+   * @param args the argument expressions passed to the function.
+   * @return Expression wrapping the unresolved function.
+   */
+  def fun(nameParts: Seq[String], args: Seq[proto.Expression]): proto.Expression = {
Review Comment:
For a bit more context: we use `snake_case` to match SQL function names (see `Column` or `functions.scala`). The two naming conventions are already mixed in our existing SQL DSL (see also `org.apache.spark.sql.catalyst.package`), so we should probably pick one and stick to it.
In the past, we followed `camelCase` in the DSL as well as in `Column` and `functions.scala`. We later renamed everything in `Column` and `functions.scala` to `snake_case` for SQL compatibility, so the newer DSL added in `org.apache.spark.sql.catalyst.package` follows `snake_case` too.
Therefore I lean toward using `snake_case` in this DSL case as well, but I don't object if others (or you) feel `camelCase` is better here.
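To make the trade-off concrete, here is a minimal, self-contained sketch (not the actual Spark Connect DSL; `Expr` is a hypothetical stand-in for `proto.Expression`) of a `fun`-style helper plus a named wrapper. With `snake_case`, the DSL method name is the SQL function name verbatim, so no name translation is needed:

```scala
object DslSketch {
  // Hypothetical stand-in for proto.Expression / an unresolved function node.
  final case class Expr(nameParts: Seq[String], args: Seq[Expr])

  // Sketch of the helper under review: wraps name parts and arguments
  // into an unresolved-function expression.
  def fun(nameParts: Seq[String], args: Seq[Expr] = Nil): Expr =
    Expr(nameParts, args)

  // snake_case wrapper: the Scala identifier matches the SQL function
  // name exactly, mirroring `functions.scala` after the rename.
  def array_contains(args: Expr*): Expr = fun(Seq("array_contains"), args)

  // camelCase wrapper: readable Scala, but the SQL name must still be
  // spelled out separately in the body.
  def arrayContains(args: Expr*): Expr = fun(Seq("array_contains"), args)
}
```

Either wrapper builds the same expression; the question is only which spelling the DSL surface commits to.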
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]