srowen commented on a change in pull request #29259:
URL: https://github.com/apache/spark/pull/29259#discussion_r461119322
##########
File path:
sql/core/src/test/java/test/org/apache/spark/sql/execution/sort/RecordBinaryComparatorSuite.java
##########
@@ -261,40 +263,58 @@ public void testBinaryComparatorForNullColumns() throws Exception {
public void testBinaryComparatorWhenSubtractionIsDivisibleByMaxIntValue() throws Exception {
int numFields = 1;
+ long row1Data = 11L;
+ long row2Data = 11L + Integer.MAX_VALUE;
+
+ // BinaryComparator compares longs in big-endian byte order.
Review comment:
I don't even think that's quite true. The comparison isn't endian at all,
since it proceeds byte by byte. But the point here is to write the bytes in a
certain order for the test, for sure.
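To illustrate what I mean, here's a minimal sketch of a byte-by-byte unsigned
comparison (hypothetical demo code, not the actual RecordBinaryComparator,
which is more optimized): since no multi-byte word is ever assembled, the
platform's byte order never enters into the result.

```java
// Hypothetical sketch of a byte-wise comparison; not Spark code.
class ByteByByteComparison {
  static int compare(byte[] a, byte[] b) {
    int len = Math.min(a.length, b.length);
    for (int i = 0; i < len; i++) {
      // Mask to compare each byte as unsigned (0..255). No long or int
      // is ever read from the arrays, so endianness is irrelevant here.
      int cmp = Integer.compare(a[i] & 0xFF, b[i] & 0xFF);
      if (cmp != 0) {
        return cmp;
      }
    }
    return Integer.compare(a.length, b.length);
  }
}
```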
##########
File path:
sql/core/src/test/java/test/org/apache/spark/sql/execution/sort/RecordBinaryComparatorSuite.java
##########
@@ -261,40 +263,58 @@ public void testBinaryComparatorForNullColumns() throws Exception {
public void testBinaryComparatorWhenSubtractionIsDivisibleByMaxIntValue() throws Exception {
int numFields = 1;
+ long row1Data = 11L;
+ long row2Data = 11L + Integer.MAX_VALUE;
+
+ // BinaryComparator compares longs in big-endian byte order.
Review comment:
I don't think overflow was the issue per se; signed vs. unsigned byte
comparison was, for sure, in the original issue. But that's not so much at
play here in this test case.
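For anyone following along, the signed-vs-unsigned pitfall is easy to
reproduce in isolation (hypothetical demo, not from this PR): Java bytes are
signed, so a byte of 0x80 compares as -128 unless it is masked to its
unsigned value first.

```java
public class SignedVsUnsignedDemo {
  public static void main(String[] args) {
    byte high = (byte) 0x80; // -128 as a signed byte; 128 unsigned
    byte low = (byte) 0x01;  // 1 either way

    // Signed comparison: -128 < 1, so this prints a negative number.
    System.out.println(Byte.compare(high, low));

    // Unsigned comparison: 128 > 1, so this prints a positive number.
    System.out.println(Integer.compare(high & 0xFF, low & 0xFF));
  }
}
```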
##########
File path:
sql/core/src/test/java/test/org/apache/spark/sql/execution/sort/RecordBinaryComparatorSuite.java
##########
@@ -261,40 +263,58 @@ public void testBinaryComparatorForNullColumns() throws Exception {
public void testBinaryComparatorWhenSubtractionIsDivisibleByMaxIntValue() throws Exception {
int numFields = 1;
+ long row1Data = 11L;
+ long row2Data = 11L + Integer.MAX_VALUE;
+
+ // BinaryComparator compares longs in big-endian byte order.
+ if (ByteOrder.nativeOrder().equals(ByteOrder.LITTLE_ENDIAN)) {
+ row1Data = Long.reverseBytes(row1Data);
+ row2Data = Long.reverseBytes(row2Data);
+ }
+
UnsafeRow row1 = new UnsafeRow(numFields);
byte[] data1 = new byte[100];
row1.pointTo(data1, computeSizeInBytes(numFields * 8));
- row1.setLong(0, 11);
+ row1.setLong(0, row1Data);
UnsafeRow row2 = new UnsafeRow(numFields);
byte[] data2 = new byte[100];
row2.pointTo(data2, computeSizeInBytes(numFields * 8));
- row2.setLong(0, 11L + Integer.MAX_VALUE);
+ row2.setLong(0, row2Data);
insertRow(row1);
insertRow(row2);
- Assert.assertTrue(compare(0, 1) > 0);
+ Assert.assertTrue(compare(0, 1) < 0);
}
@Test
public void testBinaryComparatorWhenSubtractionCanOverflowLongValue() throws Exception {
int numFields = 1;
+ long row1Data = Long.MIN_VALUE;
+ long row2Data = 1;
+
+ // BinaryComparator compares longs in big-endian byte order.
+ if (ByteOrder.nativeOrder().equals(ByteOrder.LITTLE_ENDIAN)) {
Review comment:
If you mean this is kind of a different issue -- yes, it should be a new
JIRA. I'd summarize it as: the bytes that this test sets up and asserts
about are different on big-endian platforms, so it creates the wrong test
there.
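To make that concrete, here's a hypothetical demo (not part of the patch) of
why the bytes differ by platform, assuming UnsafeRow.setLong stores the value
in native byte order -- which is what the Long.reverseBytes pre-swap in this
patch compensates for on little-endian machines:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

public class NativeOrderDemo {
  public static void main(String[] args) {
    long v = 11L + Integer.MAX_VALUE; // 0x000000008000000AL

    byte[] littleEndian =
        ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN).putLong(v).array();
    byte[] bigEndian =
        ByteBuffer.allocate(8).order(ByteOrder.BIG_ENDIAN).putLong(v).array();

    // LE layout: [10, 0, 0, -128, 0, 0, 0, 0]
    // BE layout: [0, 0, 0, 0, -128, 0, 0, 10]
    // A byte-wise comparator walks different sequences on the two
    // platforms, so assertions about its result are platform-dependent
    // unless the test swaps bytes first.
    System.out.println("LE: " + Arrays.toString(littleEndian));
    System.out.println("BE: " + Arrays.toString(bigEndian));
  }
}
```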
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]