[GitHub] [spark] JoshRosen commented on a change in pull request #27246: [SPARK-30536][CORE][SQL] Sort-merge join operator spilling performance improvements

2020-01-22 Thread GitBox
JoshRosen commented on a change in pull request #27246: [SPARK-30536][CORE][SQL] Sort-merge join operator spilling performance improvements
URL: https://github.com/apache/spark/pull/27246#discussion_r369918625
 
 

 ##
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/ExternalAppendOnlyUnsafeRowArray.scala
 ##
 @@ -106,6 +107,8 @@ private[sql] class ExternalAppendOnlyUnsafeRowArray(
   def add(unsafeRow: UnsafeRow): Unit = {
     if (numRows < numRowsInMemoryBufferThreshold) {
       inMemoryBuffer += unsafeRow.copy()
+      numRows += 1
 
 Review comment:
   I think we need to increment `numRows` and `modificationsCount` in both branches, not just this one: if we only update them in this branch then `length()` will return the wrong result.
   
   If you want to track how many rows are in the in-memory buffer, I'd define a separate `numRowBufferedInMemory` variable; that will also make the code in `MergerIterator` easier to understand.
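   
   A minimal standalone sketch of the bookkeeping suggested above, not the actual PR code: the simplified `String` rows and the spill stand-in are assumptions made only to keep the example self-contained.
   
   import scala.collection.mutable.ArrayBuffer
   
   class RowArraySketch(inMemoryThreshold: Int) {
     private val inMemoryBuffer = ArrayBuffer.empty[String]
     // Stand-in for the UnsafeExternalSorter-backed spill storage in the real class.
     private val spilled = ArrayBuffer.empty[String]
     private var numRows = 0                 // total rows added (in-memory + spilled)
     private var numRowBufferedInMemory = 0  // counter name suggested in the review
     private var modificationsCount = 0      // bumped on every mutation to invalidate open iterators
   
     def add(row: String): Unit = {
       if (numRows < inMemoryThreshold) {
         inMemoryBuffer += row
         numRowBufferedInMemory += 1
       } else {
         spilled += row
       }
       // Incremented in both branches so length() always reflects every added row.
       numRows += 1
       modificationsCount += 1
     }
   
     def length: Int = numRows
   }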
   
   





[GitHub] [spark] JoshRosen commented on a change in pull request #27246: [SPARK-30536][CORE][SQL] Sort-merge join operator spilling performance improvements

2020-01-22 Thread GitBox
JoshRosen commented on a change in pull request #27246: [SPARK-30536][CORE][SQL] Sort-merge join operator spilling performance improvements
URL: https://github.com/apache/spark/pull/27246#discussion_r369918361
 
 

 ##
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/ExternalAppendOnlyUnsafeRowArray.scala
 ##
 @@ -204,22 +195,44 @@ private[sql] class ExternalAppendOnlyUnsafeRowArray(
     }
   }
 
-  private[this] class SpillableArrayIterator(
+  private[this] class MergerIterator(
       iterator: UnsafeSorterIterator,
-      numFieldPerRow: Int)
+      numFieldPerRow: Int,
+      startIndex: Int)
     extends ExternalAppendOnlyUnsafeRowArrayIterator {
 
-    private val currentRow = new UnsafeRow(numFieldPerRow)
+    private var currentIndex = startIndex
 
-    override def hasNext(): Boolean = !isModified() && iterator.hasNext
+    private val currentRow = {
+      if (startIndex < numRows) {
+        inMemoryBuffer(currentIndex)
+      } else {
+        new UnsafeRow(numFieldPerRow)
+      }
+    }
+
+    override def hasNext(): Boolean = {
+      if (currentIndex < numRows) {
+        !isModified()
+      } else {
+        !isModified() && iterator.hasNext
+      }
+    }
 
     override def next(): UnsafeRow = {
       throwExceptionIfModified()
-      iterator.loadNext()
-      currentRow.pointTo(iterator.getBaseObject, iterator.getBaseOffset, iterator.getRecordLength)
-      currentRow
+      if (currentIndex < numRows) {
+        val result = inMemoryBuffer(currentIndex)
+        currentIndex += 1
+        result
+      } else {
+        iterator.loadNext()
+        currentRow.pointTo(iterator.getBaseObject, iterator.getBaseOffset, iterator.getRecordLength)
 
 Review comment:
   If `startIndex < numRows` then `currentRow` will be an `UnsafeRow` stored in `inMemoryBuffer`, so once we roll past the `numRows` boundary we'd start mutating that `UnsafeRow`'s state via `pointTo`, which would corrupt the buffered data if we iterated over this array more than once.
   
   As a result, I think it's safer to keep `currentRow` defined as it was before (a fresh `UnsafeRow`), but rename it to, say, `currentSorterRow` to make it clear that it's only used for pointing at spilled records.
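   
   A rough self-contained sketch of the iterator shape this suggests; it is deliberately simplified and not the PR's inner class: the modification-count checks are omitted and the buffered rows and sorter iterator are passed in explicitly.
   
   import org.apache.spark.sql.catalyst.expressions.UnsafeRow
   import org.apache.spark.util.collection.unsafe.sort.UnsafeSorterIterator
   
   class MergerIteratorSketch(
       inMemoryBuffer: IndexedSeq[UnsafeRow],
       sorterIterator: UnsafeSorterIterator,
       numFieldsPerRow: Int,
       startIndex: Int) extends Iterator[UnsafeRow] {
   
     private var currentIndex = startIndex
   
     // Reused holder for spilled records only; it never aliases a row held in
     // inMemoryBuffer, so crossing the in-memory/spilled boundary cannot corrupt buffered rows.
     private val currentSorterRow = new UnsafeRow(numFieldsPerRow)
   
     override def hasNext: Boolean =
       currentIndex < inMemoryBuffer.length || sorterIterator.hasNext
   
     override def next(): UnsafeRow = {
       if (currentIndex < inMemoryBuffer.length) {
         // In-memory prefix: hand back the buffered row itself without mutating it.
         val result = inMemoryBuffer(currentIndex)
         currentIndex += 1
         result
       } else {
         // Spilled suffix: point the dedicated holder at the next spilled record.
         sorterIterator.loadNext()
         currentSorterRow.pointTo(
           sorterIterator.getBaseObject, sorterIterator.getBaseOffset, sorterIterator.getRecordLength)
         currentSorterRow
       }
     }
   }
   
   With this shape, the only object that `pointTo` ever mutates is the dedicated sorter row, which is the intent behind the rename suggested above.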

