maropu commented on a change in pull request #29259:
URL: https://github.com/apache/spark/pull/29259#discussion_r461302879



##########
File path: sql/core/src/test/java/test/org/apache/spark/sql/execution/sort/RecordBinaryComparatorSuite.java
##########
@@ -261,40 +263,58 @@ public void testBinaryComparatorForNullColumns() throws Exception {
   public void testBinaryComparatorWhenSubtractionIsDivisibleByMaxIntValue() throws Exception {
     int numFields = 1;
 
+    long row1Data = 11L;
+    long row2Data = 11L + Integer.MAX_VALUE;
+
+    // BinaryComparator compares longs in big-endian byte order.
+    if (ByteOrder.nativeOrder().equals(ByteOrder.LITTLE_ENDIAN)) {
+      row1Data = Long.reverseBytes(row1Data);
+      row2Data = Long.reverseBytes(row2Data);
+    }
+
     UnsafeRow row1 = new UnsafeRow(numFields);
     byte[] data1 = new byte[100];
     row1.pointTo(data1, computeSizeInBytes(numFields * 8));
-    row1.setLong(0, 11);
+    row1.setLong(0, row1Data);
 
     UnsafeRow row2 = new UnsafeRow(numFields);
     byte[] data2 = new byte[100];
     row2.pointTo(data2, computeSizeInBytes(numFields * 8));
-    row2.setLong(0, 11L + Integer.MAX_VALUE);
+    row2.setLong(0, row2Data);
 
     insertRow(row1);
     insertRow(row2);
 
-    Assert.assertTrue(compare(0, 1) > 0);
+    Assert.assertTrue(compare(0, 1) < 0);
   }
 
   @Test
   public void testBinaryComparatorWhenSubtractionCanOverflowLongValue() throws Exception {
     int numFields = 1;
 
+    long row1Data = Long.MIN_VALUE;
+    long row2Data = 1;
+
+    // BinaryComparator compares longs in big-endian byte order.
+    if (ByteOrder.nativeOrder().equals(ByteOrder.LITTLE_ENDIAN)) {

Review comment:
    > For example, one of the tests does a comparison between Long.MIN_VALUE and 1 in order to trigger an overflow condition that existed in the past (i.e. Long.MIN_VALUE - 1). These constants correspond to the values 0x80..00 and 0x00..01. However on a little-endian machine the bytes in these values are now swapped before they are compared. This means that we will now be comparing 0x00..80 with 0x01..00. 0x00..80 - 0x01..00 does not overflow therefore missing the original purpose of the test.
   
   I'm a bit confused by the PR description; I checked the original PR that added this test case, and it seems like the overflow referenced in the test title comes from the old code: https://github.com/apache/spark/pull/22101/files#diff-4ec35a60ad6a3f3f60f4d5ce91f59933L61-L63
   To keep the original intention, why do you think we need to update the existing test for the little-endian case?
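
   For reference, here is a minimal standalone sketch of the overflow argument quoted above (not part of this PR; the class name `OverflowSketch` is made up). Under the assumption described in the PR description, the old subtraction-style comparison overflows for the raw operands `Long.MIN_VALUE` and `1`, but not once the operands are byte-swapped with `Long.reverseBytes` on a little-endian machine, so the swapped inputs would no longer reach that overflow condition.

   ```java
   import java.nio.ByteOrder;

   public class OverflowSketch {
     public static void main(String[] args) {
       long row1 = Long.MIN_VALUE; // 0x80...00
       long row2 = 1L;             // 0x00...01

       // Subtraction-style comparison: Long.MIN_VALUE - 1 overflows and wraps
       // around to Long.MAX_VALUE, so the sign of the difference is +1 even
       // though row1 < row2. This is the overflow the test title refers to.
       System.out.println("raw operands:     sign = " + Long.signum(row1 - row2));

       if (ByteOrder.nativeOrder().equals(ByteOrder.LITTLE_ENDIAN)) {
         // With the bytes swapped, the operands become 0x00...80 (i.e. 128)
         // and 0x01...00; their difference is small and negative, so no
         // overflow occurs for the swapped values.
         long swapped1 = Long.reverseBytes(row1);
         long swapped2 = Long.reverseBytes(row2);
         System.out.println("swapped operands: sign = " + Long.signum(swapped1 - swapped2));
       }
     }
   }
   ```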



