liran-funaro commented on a change in pull request #10593:
URL: https://github.com/apache/druid/pull/10593#discussion_r533174614



##########
File path: processing/src/main/java/org/apache/druid/segment/incremental/OffheapIncrementalIndex.java
##########
@@ -150,18 +150,13 @@ protected AddToFactsResult addToFacts(
       boolean skipMaxRowsInMemoryCheck // ignored, we always want to check this for offheap
   ) throws IndexSizeExceededException
   {
-    ByteBuffer aggBuffer;
-    int bufferIndex;
-    int bufferOffset;
-
     synchronized (this) {
       final AggregatorFactory[] metrics = getMetrics();
       final int priorIndex = facts.getPriorIndex(key);
       if (IncrementalIndexRow.EMPTY_ROW_INDEX != priorIndex) {
         final int[] indexAndOffset = indexAndOffsets.get(priorIndex);
-        bufferIndex = indexAndOffset[0];
-        bufferOffset = indexAndOffset[1];
-        aggBuffer = aggBuffers.get(bufferIndex).get();
+        ByteBuffer aggBuffer = aggBuffers.get(indexAndOffset[0]).get();

Review comment:
       Before this change, the code responsible for the aggregation ran after the new row was inserted into `indexAndOffsets` (see line 209 below). This meant the new row was visible before any data had been aggregated into it.
   This does not match the on-heap index behavior, which first aggregates the data and only then inserts the row into the index.
   According to `IncrementalIndexIngestionTest.testMultithreadAddFacts()`, the on-heap behavior is the correct one, so I changed the offheap index accordingly so that the test passes for this index as well.
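   To make the ordering concrete, here is a minimal, self-contained sketch (not the actual Druid code; the class, fields, and method names below are made up for illustration) of why publishing the row index before writing the aggregated values can expose an empty row to a reader that does not hold the lock:

   ```java
   import java.nio.ByteBuffer;
   import java.util.ArrayList;
   import java.util.List;

   // Hypothetical sketch of the two orderings; stand-ins for the real structures.
   public class PublishOrderSketch
   {
     private final List<int[]> indexAndOffsets = new ArrayList<>(); // published row index/offset pairs
     private final ByteBuffer aggBuffer = ByteBuffer.allocate(1024); // stand-in for an aggregation buffer

     // Ordering after this change (mirrors the on-heap index):
     // write the aggregated data first, publish the row last.
     public synchronized int addRow(int bufferOffset, long metricValue)
     {
       aggBuffer.putLong(bufferOffset, metricValue);    // 1. aggregate into the buffer
       indexAndOffsets.add(new int[]{0, bufferOffset}); // 2. only now make the row visible
       return indexAndOffsets.size() - 1;
     }

     // Ordering before this change: the row is published at step 1, so a reader
     // that scans indexAndOffsets without taking this lock can observe a row
     // whose aggregate slots have not been written yet.
     public synchronized int addRowOldOrdering(int bufferOffset, long metricValue)
     {
       indexAndOffsets.add(new int[]{0, bufferOffset}); // 1. row visible too early
       aggBuffer.putLong(bufferOffset, metricValue);    // 2. data written afterwards
       return indexAndOffsets.size() - 1;
     }
   }
   ```

   The first method reflects the ordering this change enforces; the second reflects the previous ordering that `testMultithreadAddFacts()` catches.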



