Github user tarekauel commented on a diff in the pull request:
https://github.com/apache/spark/pull/7208#discussion_r34865919
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringOperations.scala ---
@@ -593,6 +593,33 @@ case class Levenshtein(left: Expression, right: Expression) extends BinaryExpres
 }
 /**
+ * Returns string, with the first letter of each word in uppercase,
+ * all other letters in lowercase. Words are delimited by whitespace.
+ */
+case class InitCap(child: Expression) extends UnaryExpression with ExpectsInputTypes {
+  override def dataType: DataType = StringType
+
+  override def inputTypes: Seq[DataType] = Seq(StringType)
+
+  override def nullSafeEval(string: Any): Any = {
+    if (string.asInstanceOf[UTF8String].getBytes.length == 0) {
+      return string
+    }
+    else {
+      val sb = new StringBuffer()
+      sb.append(string)
+      sb.setCharAt(0, sb.charAt(0).toUpper)
+      for (i <- 1 until sb.length) {
+        if (sb.charAt(i - 1).equals(' ')) {
+          sb.setCharAt(i, sb.charAt(i).toUpper)
+        }
+      }
+      UTF8String.fromString(sb.toString)
--- End diff --
I think we should consider implementing all of this on the bytes directly. The
conversion to `Char` isn't safe: I'm not sure what happens when a character
doesn't fit into a single `Char` (code points outside the Basic Multilingual
Plane are split across two `Char`s as a surrogate pair). Under the assumption
that a lowercase character and its uppercase form always have the same number
of bytes, we could easily work on the `Array[Byte]` directly. Even though this
isn't guaranteed by Unicode, it seems to hold in practice (maybe we could
propose it to Unicode). But we can do this in a follow-up PR.
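
For illustration, here is a minimal standalone sketch (not part of this PR) of
the surrogate-pair problem, using DESERET SMALL LETTER LONG I (U+10428), a code
point outside the BMP:

```scala
// Minimal sketch: per-Char uppercasing vs. code-point-aware uppercasing.
object SurrogateDemo {
  def main(args: Array[String]): Unit = {
    // U+10428 is stored as the surrogate pair \uD801\uDC28:
    // two Chars encoding one code point.
    val s = "\uD801\uDC28"
    println(s.length)                       // 2 Chars
    println(s.codePointCount(0, s.length))  // but only 1 code point

    // Uppercasing the first Char alone touches a lone high surrogate;
    // surrogates have no case mapping, so nothing changes.
    println(s.charAt(0).toUpper == s.charAt(0))  // true

    // Code-point-aware conversion yields U+10400, the capital letter.
    println(Integer.toHexString(Character.toUpperCase(s.codePointAt(0))))  // 10400
  }
}
```

And a rough sketch of the byte-level direction I mean (`initCapAsciiBytes` is
just a hypothetical name, and it handles only the ASCII range; a full version
would need the Unicode case tables applied to the UTF-8 bytes). Like the
current implementation, it only uppercases word-initial letters:

```scala
// Hypothetical helper: uppercase the first letter of each space-delimited
// word directly on the UTF-8 bytes, handling only the ASCII range.
// Continuation bytes of multi-byte characters are negative as signed
// Bytes, so they can never match 'a'..'z' and are left untouched.
def initCapAsciiBytes(bytes: Array[Byte]): Array[Byte] = {
  val out = bytes.clone()
  var capitalizeNext = true
  var i = 0
  while (i < out.length) {
    val b = out(i)
    if (b == ' '.toByte) {
      capitalizeNext = true
    } else {
      if (capitalizeNext && b >= 'a'.toByte && b <= 'z'.toByte) {
        out(i) = (b - 32).toByte  // 'a' - 'A' == 32 in ASCII
      }
      capitalizeNext = false
    }
    i += 1
  }
  out
}
```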