dbatomic commented on code in PR #45816:
URL: https://github.com/apache/spark/pull/45816#discussion_r1549967291


##########
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java:
##########
@@ -447,6 +447,37 @@ private UTF8String toUpperCaseSlow() {
     return fromString(toString().toUpperCase());
   }
 
+  /**
+   * Optimized lowercase comparison for UTF8_BINARY_LCASE collation
+   */
+  public int compareLowercase(UTF8String other) {
+    int curr;
+    for (curr = 0; curr < numBytes && curr < other.numBytes; curr++) {
+      byte left = getByte(curr);
+      byte right = other.getByte(curr);
+      if (numBytesForFirstByte(left) != 1 || numBytesForFirstByte(right) != 1) {
+        return compareLowercaseSuffixSlow(other, curr);
+      }
+      int lowerLeft = Character.toLowerCase(left);
+      int lowerRight = Character.toLowerCase(right);
+      if (lowerLeft > 127 || lowerRight > 127) {

Review Comment:
   I see that you are not introducing anything new here, and that the same `numBytesForFirstByte(b) != 1` / `codePoint > 127` checks are already used in `toUpperCase`. But I don't really understand this logic.
   Why can't we handle multibyte code points here? `Character.toLowerCase` accepts an int specifying a code point, which we can decode from the UTF-8 bytes. What is the reason we can't use this for any code point?
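   To make the question concrete, something like the sketch below is what I have in mind. It is only a rough illustration on plain `String` with `Character.toLowerCase(int)`; the `compareLowercase` signature and the length tie-break here are my own assumptions, not the PR's implementation:
   ```java
   // Rough sketch only: compare two strings lowercase, one code point at a time,
   // so multibyte code points go through Character.toLowerCase(int) as well.
   public class LowercaseCompareSketch {
     static int compareLowercase(String left, String right) {
       int i = 0, j = 0;
       while (i < left.length() && j < right.length()) {
         int cpLeft = left.codePointAt(i);
         int cpRight = right.codePointAt(j);
         int lowerLeft = Character.toLowerCase(cpLeft);
         int lowerRight = Character.toLowerCase(cpRight);
         if (lowerLeft != lowerRight) {
           return Integer.compare(lowerLeft, lowerRight);
         }
         i += Character.charCount(cpLeft);
         j += Character.charCount(cpRight);
       }
       // The string with code points remaining compares greater.
       return Integer.compare(left.length() - i, right.length() - j);
     }

     public static void main(String[] args) {
       System.out.println(compareLowercase("ĆAO", "ćao") == 0); // true
       System.out.println(compareLowercase("abc", "ABD") < 0);  // true
     }
   }
   ```
   If the fallback for non-ASCII code points is there for performance (keeping the ASCII fast path cheap) or for a correctness reason, it would be good to have a comment explaining that.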



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
