Akshat-Jain commented on code in PR #17038:
URL: https://github.com/apache/druid/pull/17038#discussion_r1774508005
##########
extensions-core/multi-stage-query/src/main/java/org/apache/druid/msq/querykit/WindowOperatorQueryFrameProcessor.java:
##########
@@ -154,150 +130,60 @@ public List<WritableFrameChannel> outputChannels()
   @Override
   public ReturnOrAwait<Object> runIncrementally(IntSet readableInputs)
   {
-    /*
-    There are 2 scenarios:
-
-    *** Scenario 1: Query has atleast one window function with an OVER() clause without a PARTITION BY ***
-
-    In this scenario, we add all the RACs to a single RowsAndColumns to be processed. We do it via ConcatRowsAndColumns, and run all the operators on the ConcatRowsAndColumns.
-    This is done because we anyway need to run the operators on the entire set of rows when we have an OVER() clause without a PARTITION BY.
-    This scenario corresponds to partitionColumnNames.isEmpty()=true code flow.
-
-    *** Scenario 2: All window functions in the query have OVER() clause with a PARTITION BY ***
-
-    In this scenario, we need to process rows for each PARTITION BY group together, but we can batch multiple PARTITION BY keys into the same RAC before passing it to the operators for processing.
-    Batching is fine since the operators list would have the required NaivePartitioningOperatorFactory to segregate each PARTITION BY group during the processing.
-
-    The flow for this scenario can be summarised as following:
-    1. Frame Reading and Cursor Initialization: We start by reading a frame from the inputChannel and initializing frameCursor to iterate over the rows in that frame.
-    2. Row Comparison: For each row in the frame, we decide whether it belongs to the same PARTITION BY group as the previous row.
-         This is determined by comparePartitionKeys() method.
-         Please refer to the Javadoc of that method for further details and an example illustration.
-       2.1. If the PARTITION BY columns of current row matches the PARTITION BY columns of the previous row,
-            they belong to the same PARTITION BY group, and gets added to rowsToProcess.
-            If the number of total rows materialized exceed maxRowsMaterialized, we process the pending batch via processRowsUpToLastPartition() method.
-       2.2. If they don't match, then we have reached a partition boundary.
-            In this case, we update the value for lastPartitionIndex.
-    3. End of Input: If the input channel is finished, any remaining rows in rowsToProcess are processed.
-
-    *Illustration of Row Comparison step*
+    if (inputChannel.canRead()) {
+      final Frame frame = inputChannel.read();
+      convertRowFrameToRowsAndColumns(frame);
-
-    Let's say we have window_function() OVER (PARTITION BY A ORDER BY B) in our query, and we get 3 frames in the input channel to process.
-
-    Frame 1
-    A, B
-    1, 2
-    1, 3
-    2, 1 --> PARTITION BY key (column A) changed from 1 to 2.
-    2, 2
-
-    Frame 2
-    A, B
-    3, 1 --> PARTITION BY key (column A) changed from 2 to 3.
-    3, 2
-    3, 3
-    3, 4
-
-    Frame 3
-    A, B
-    3, 5
-    3, 6
-    4, 1 --> PARTITION BY key (column A) changed from 3 to 4.
-    4, 2
-
-    *Why batching?*
-    We batch multiple PARTITION BY keys for processing together to avoid the overhead of creating different RACs for each PARTITION BY keys, as that would be unnecessary in scenarios where we have a large number of PARTITION BY keys, but each key having a single row.
-
-    *Future thoughts: https://github.com/apache/druid/issues/16126*
-    Current approach with R&C and operators materialize a single R&C for processing. In case of data with low cardinality a single R&C might be too big to consume. Same for the case of empty OVER() clause.
-    Most of the window operations like SUM(), RANK(), RANGE() etc. can be made with 2 passes of the data. We might think to reimplement them in the MSQ way so that we do not have to materialize so much data.
-    */
-
-    if (partitionColumnNames.isEmpty()) {
-      // Scenario 1: Query has atleast one window function with an OVER() clause without a PARTITION BY.
-      if (inputChannel.canRead()) {
-        final Frame frame = inputChannel.read();
-        convertRowFrameToRowsAndColumns(frame);
-        return ReturnOrAwait.runAgain();
-      } else if (inputChannel.isFinished()) {
-        runAllOpsOnMultipleRac(frameRowsAndCols);
-        return ReturnOrAwait.returnObject(Unit.instance());
-      } else {
-        return ReturnOrAwait.awaitAll(inputChannels().size());
-      }
-    }
-
-    // Scenario 2: All window functions in the query have OVER() clause with a PARTITION BY
-    if (frameCursor == null || frameCursor.isDone()) {
-      if (readableInputs.isEmpty()) {
-        return ReturnOrAwait.awaitAll(1);
-      } else if (inputChannel.canRead()) {
-        final Frame frame = inputChannel.read();
-        frameCursor = FrameProcessors.makeCursor(frame, frameReader);
-        makeRowSupplierFromFrameCursor();
-      } else if (inputChannel.isFinished()) {
-        // Handle any remaining data.
-        lastPartitionIndex = rowsToProcess.size() - 1;
-        processRowsUpToLastPartition();
-        return ReturnOrAwait.returnObject(Unit.instance());
-      } else {
-        return ReturnOrAwait.runAgain();
-      }
-    }
-
-    while (!frameCursor.isDone()) {
-      final ResultRow currentRow = rowSupplierFromFrameCursor.get();
-      if (outputRow == null) {
-        outputRow = currentRow;
-        rowsToProcess.add(currentRow);
-      } else if (comparePartitionKeys(outputRow, currentRow, partitionColumnNames)) {
-        // Add current row to the same batch of rows for processing.
-        rowsToProcess.add(currentRow);
-        if (rowsToProcess.size() > maxRowsMaterialized) {
-          // We don't want to materialize more than maxRowsMaterialized rows at any point in time, so process the pending batch.
-          processRowsUpToLastPartition();
+      if (needToProcessBatch()) {
+        runAllOpsOnBatch();
+        try {
+          flushAllRowsAndCols(resultRowAndCols);
+        }
+        catch (IOException e) {
+          throw new RuntimeException(e);
         }
-        ensureMaxRowsInAWindowConstraint(rowsToProcess.size());
-      } else {
-        lastPartitionIndex = rowsToProcess.size() - 1;
-        outputRow = currentRow.copy();
-        return ReturnOrAwait.runAgain();
       }
-      frameCursor.advance();
+      return ReturnOrAwait.runAgain();
+    } else if (inputChannel.isFinished()) {
+      runAllOpsOnBatch();
+      return ReturnOrAwait.returnObject(Unit.instance());
+    } else {
+      return ReturnOrAwait.awaitAll(inputChannels().size());
     }
-    return ReturnOrAwait.runAgain();
   }
-  /**
-   * @param listOfRacs Concat this list of {@link RowsAndColumns} to a {@link ConcatRowsAndColumns} to use as a single input for the operators to be run
-   */
-  private void runAllOpsOnMultipleRac(ArrayList<RowsAndColumns> listOfRacs)
+  private void initialiseOperator()
   {
-    Operator op = new Operator()
+    op = new Operator()
     {
       @Nullable
       @Override
       public Closeable goOrContinue(Closeable continuationObject, Receiver receiver)
       {
-        RowsAndColumns rac = new ConcatRowsAndColumns(listOfRacs);
+        RowsAndColumns rac = new ConcatRowsAndColumns(new ArrayList<>(frameRowsAndCols));
+        frameRowsAndCols.clear();
+        numRowsInFrameRowsAndCols = 0;
         ensureMaxRowsInAWindowConstraint(rac.numRows());
         receiver.push(rac);
-        receiver.completed();
-        return null;
+
+        if (inputChannel.isFinished()) {
+          // Only call completed() when the input channel is finished.
+          receiver.completed();
+          return null; // Signal that the operator has completed its work
+        }
+
+        // Return a non-null continuation object to indicate that we want to continue processing.
+        return () -> {};
       }
     };
-    runOperatorsAfterThis(op);
-  }
-
-  /**
-   * @param op Base operator for the operators to be run. Other operators are wrapped under this to run
-   */
-  private void runOperatorsAfterThis(Operator op)
-  {
     for (OperatorFactory of : operatorFactoryList) {
       op = of.wrap(op);
     }
-    Operator.go(op, new Operator.Receiver()
+  }
+
+  private void runAllOpsOnBatch()
+  {
+    op.goOrContinue(null, new Operator.Receiver()
Review Comment:
I originally tried putting `flushAllRowsAndCols(resultRowAndCols)` inside the `runAllOpsOnBatch() -> push()` method, like the following:
```java
private void runAllOpsOnBatch()
{
  op.goOrContinue(null, new Operator.Receiver()
  {
    @Override
    public Operator.Signal push(RowsAndColumns rac)
    {
      resultRowAndCols.add(rac);
      try {
        flushAllRowsAndCols(resultRowAndCols);
      }
      catch (IOException e) {
        throw new RuntimeException(e);
      }
      return Operator.Signal.GO;
    }

    @Override
    public void completed()
    {
      try {
        flushAllRowsAndCols(resultRowAndCols);
      }
      catch (IOException e) {
        throw new RuntimeException(e);
      }
    }
  });
}
```
But this runs into the following error:
```java
java.lang.RuntimeException: org.apache.druid.java.util.common.ISE: Channel has no capacity
```
When I dug further into this, it seems that the contract of `runIncrementally()` doesn't allow writing multiple frames to the output channel in a single iteration, which is exactly what happens here, since `push()` can get called multiple times within a single iteration of `runIncrementally()`.
I also stumbled upon this PR from Gian where he had to handle the same restriction in `ScanQueryFrameProcessor`: https://github.com/apache/druid/pull/13036
I'm not sure of the historical reason behind this design, but because of this restriction, I had to move `flushAllRowsAndCols(resultRowAndCols)` into `runIncrementally()`.
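
To make the constraint concrete, here's a toy sketch of the pattern this forces (plain Java, not Druid code; names like `pendingOutputs` and `channelSlot` are hypothetical stand-ins for `resultRowAndCols` and the writable channel): `push()` only buffers results, and each `runIncrementally()`-style iteration writes at most one of them to the capacity-one channel.
```java
import java.util.ArrayDeque;
import java.util.Queue;

/**
 * Toy model (not Druid code) of the runIncrementally() contract described above:
 * the output channel has capacity for one pending frame, so the receiver's push()
 * must only buffer, and each iteration may perform at most one write.
 */
public class OneWritePerIteration
{
  // Stand-in for a writable frame channel that can hold a single unconsumed frame.
  private String channelSlot = null;

  // Results accumulated by the operator pipeline (analogous to resultRowAndCols).
  private final Queue<String> pendingOutputs = new ArrayDeque<>();

  // Analogous to Receiver.push(): may run many times per batch, so it must only
  // buffer; writing to the channel here is what triggers "Channel has no capacity".
  void push(String result)
  {
    pendingOutputs.add(result);
  }

  // One iteration: drain at most one buffered result into the channel.
  // Returns true once all buffered output has been written.
  boolean runIncrementally()
  {
    if (channelSlot != null) {
      throw new IllegalStateException("Channel has no capacity");
    }
    if (!pendingOutputs.isEmpty()) {
      channelSlot = pendingOutputs.poll();
    }
    return pendingOutputs.isEmpty();
  }

  public static void main(String[] args)
  {
    OneWritePerIteration processor = new OneWritePerIteration();
    processor.push("frame-1"); // a single batch of operator work can
    processor.push("frame-2"); // produce multiple output frames
    while (!processor.runIncrementally()) {
      processor.channelSlot = null; // downstream consumes the frame between iterations
    }
  }
}
```
If `push()` wrote directly instead of buffering (or the downstream hadn't drained the slot between iterations), the second write would hit the same ISE as above, which is why the flush has to live in `runIncrementally()` where it runs once per iteration.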