deshanxiao commented on code in PR #37336:
URL: https://github.com/apache/spark/pull/37336#discussion_r934075866
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/util/SchemaUtils.scala:
##########
@@ -297,4 +296,117 @@ private[spark] object SchemaUtils {
.replaceAll("\u000B", "\\\\v")
.replaceAll("\u0007", "\\\\a")
}
+
+ /**
+ * Check whether the given schema contains a column of the required data type.
Review Comment:
> I don't have a strong preference, but it seems a bit weird to define
methods in SQL but only call them in MLlib.
Yes, but it seems even weirder to have a separate SchemaUtils in MLlib too. Do
you have any good advice?
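
For reference, the kind of helper being discussed might look like the sketch
below. This is a hypothetical, self-contained illustration: the simplified
`StructType`/`StructField`/`DataType` definitions stand in for Spark's real
catalyst types, and the method name `hasColumnOfType` is illustrative, not
taken from the PR.

```scala
// Hypothetical sketch of a schema-check utility. The simplified types
// below stand in for Spark's StructType / StructField / DataType.
sealed trait DataType
case object IntegerType extends DataType
case object DoubleType extends DataType
case class StructField(name: String, dataType: DataType)
case class StructType(fields: Seq[StructField])

object SchemaCheckSketch {
  // Returns true if the schema contains at least one column whose
  // data type matches the required one.
  def hasColumnOfType(schema: StructType, required: DataType): Boolean =
    schema.fields.exists(_.dataType == required)
}
```

Placing such a helper in `sql/catalyst` makes it available to both SQL and
MLlib without duplicating a second `SchemaUtils` object in the MLlib module.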
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]