sarutak commented on a change in pull request #33287:
URL: https://github.com/apache/spark/pull/33287#discussion_r667512892



##########
File path: 
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java
##########
@@ -574,14 +574,14 @@ public UTF8String trim() {
   public UTF8String trimAll() {
     int s = 0;
     // skip all of the whitespaces (<=0x20) in the left side
-    while (s < this.numBytes && Character.isWhitespace(getByte(s))) s++;
+    while (s < this.numBytes && getByte(s) <= 0x20) s++;

Review comment:
       > Looking at #29375 , it seems like the change was at least partly on 
purpose to catch 'whitespace' that isn't ASCII 32 or less
   
   I think the purpose of that change was to handle code points that are >= 0x80 (non-ASCII).
   For example, `あ` is `E3 81 82` in hex in `UTF-8`.
   Before #29375 the code only checked `<= 0x20`, but `getByte` returns `-127` for the byte `0x81`, so that check mistakenly treats such a byte as whitespace.
   I think this is the problem #29375 originally aimed to resolve.
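   
   Just to illustrate (not part of the patch), here is how a continuation byte of `あ` behaves as a signed Java `byte`:
   
   ```java
   // Illustrative only: 0x81 is the second UTF-8 byte of 'あ' (E3 81 82).
   // As a signed Java byte it is -127, so it passes a plain "<= 0x20" check.
   byte b = (byte) 0x81;
   System.out.println(b);          // -127
   System.out.println(b <= 0x20);  // true: wrongly treated as whitespace
   ```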
   
   But it should have checked whether the byte value is in the range `0` to `0x20` to avoid breaking compatibility.
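   
   Something like the following is what I mean (just a sketch, not the actual change):
   
   ```java
   // Sketch: keep the original "<= 0x20" semantics but skip negative bytes,
   // which belong to multi-byte UTF-8 sequences and must not be trimmed.
   while (s < this.numBytes && getByte(s) >= 0 && getByte(s) <= 0x20) s++;
   ```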




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


