2010YOUY01 opened a new pull request, #16996:
URL: https://github.com/apache/datafusion/pull/16996
## Which issue does this PR close?
- Closes #.
## Rationale for this change
# Summary
This PR rewrites the NLJ operator from scratch with a different approach: limit the extra intermediate data overhead to a single `RecordBatch`, and eliminate other redundant conversions in the old implementation.
Using the NLJ micro-bench introduced in
https://github.com/apache/datafusion/pull/16819, this PR can achieve up to a
**3.5X speed-up** and, in extreme cases, **use only 1% of the memory**.
### Speed benchmark
Note: the micro-bench PR https://github.com/apache/datafusion/pull/16819 has not been merged yet, so it has to be cherry-picked to reproduce these results.
```
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Query        ┃   pr-16819 ┃ nlj-rewrite ┃        Change ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ QQuery 1     │  218.38 ms │    86.36 ms │ +2.53x faster │
│ QQuery 2     │  258.65 ms │   116.88 ms │ +2.21x faster │
│ QQuery 3     │  335.23 ms │   146.01 ms │ +2.30x faster │
│ QQuery 4     │  900.49 ms │   371.38 ms │ +2.42x faster │
│ QQuery 5     │  634.22 ms │   240.42 ms │ +2.64x faster │
│ QQuery 6     │ 5897.75 ms │  1694.10 ms │ +3.48x faster │
│ QQuery 7     │  617.83 ms │   248.83 ms │ +2.48x faster │
│ QQuery 8     │ 5712.21 ms │  1693.09 ms │ +3.37x faster │
│ QQuery 9     │  671.34 ms │   269.08 ms │ +2.49x faster │
│ QQuery 10    │ 1731.51 ms │   488.54 ms │ +3.54x faster │
└──────────────┴────────────┴─────────────┴───────────────┘
```
### Memory Usage Benchmark
TODO
@ding-young could you help re-run it? I have pushed a change to the micro-bench PR to fix the memory issue discussed in https://github.com/apache/datafusion/pull/16889#issuecomment-3121126110
### Next Steps
I think it is ready for review; the major potential optimizations have all been done. There is one minor chore left:
- [ ] Add join metrics tracking
# Why a Rewrite?
(TL;DR: it's the easiest way to address the existing problem)
The original implementation performs a Cartesian product of
(all-left-batches x right-batch), materializes that intermediate result for
predicate evaluation, and then materializes the (potentially very large) final
result all at once. This design is inherently inefficient, and although many
patches have attempted to alleviate the problem, the fundamental issue remains.
A key challenge is that the original design and the ideal design (i.e., one
that produces small intermediates during execution) are fundamentally
different. As a result, it's practically impossible to make small incremental
changes that fully address the inefficiency. These patches may also increase
code complexity, making long-term maintenance more difficult.
### Example of Prior Work
Here's a recent example of a small patch intended to improve the situation:
https://github.com/apache/datafusion/pull/16443
Even with careful engineering, I still feel that the entropy of the code increases.
Since NLJ is a relatively straightforward operator, a full rewrite seemed
worthwhile. This allows for a clean, simplified design focused on current
goals—performance and memory efficiency—without being constrained by the legacy
implementation.
## What changes are included in this PR?
### Implementation
The implementation/design doc can be found in the source code.
A brief comparison between the old implementation and this PR:
#### Old implementation
1. Compute a Cartesian product of (`all_buffered_left_batch x one_right_batch`)
to calculate the join indices
2. Construct the intermediate batch and output it chunk by chunk (a simplified sketch of this flow follows below)
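To make the two flows easier to compare, here is a minimal, purely illustrative sketch of the old flow, using plain `Vec`s of `i64` as stand-ins for Arrow batches; the function name, parameters, and types are hypothetical, not the actual DataFusion code:
```rust
/// Illustrative model of the old NLJ flow for a single right batch: index
/// pairs for the full cross product of (all buffered left rows x right batch)
/// are materialized before the join filter is applied.
fn old_nlj_one_right_batch(
    left_rows: &[i64],                 // stand-in for all buffered left batches
    right_batch: &[i64],               // stand-in for one right RecordBatch
    filter: impl Fn(i64, i64) -> bool, // stand-in for the join filter
) -> Vec<(i64, i64)> {
    // 1. Build the full cross-product index pairs up front:
    //    left_rows.len() * right_batch.len() entries live in memory at once.
    let mut left_idx: Vec<u64> = Vec::new();
    let mut right_idx: Vec<u32> = Vec::new();
    for l in 0..left_rows.len() {
        for r in 0..right_batch.len() {
            left_idx.push(l as u64);
            right_idx.push(r as u32);
        }
    }
    // 2. Evaluate the filter over the materialized intermediate, then gather
    //    ("take") the matching rows into one large output.
    let mut output = Vec::new();
    for (&l, &r) in left_idx.iter().zip(right_idx.iter()) {
        if filter(left_rows[l as usize], right_batch[r as usize]) {
            output.push((left_rows[l as usize], right_batch[r as usize]));
        }
    }
    output
}
```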
#### This PR
- For the inner loop, only evaluate (`one_left_row x one_right_batch`), and
perform filtering and output construction directly on this small intermediate
- Eagerly yield output once the output buffer has reached the `batch_size`
threshold (a simplified sketch of this flow follows below)
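And a matching illustrative sketch of this PR's flow under the same hypothetical stand-ins, showing the (one left row x one right batch) inner loop and the eager yield at the `batch_size` threshold (again a sketch, not the actual implementation):
```rust
/// Illustrative model of this PR's NLJ flow for a single right batch: the
/// intermediate is only (one left row x one right batch), and finished rows
/// are yielded as soon as the output buffer reaches `batch_size`.
fn new_nlj_one_right_batch(
    left_rows: &[i64],                     // stand-in for buffered left rows
    right_batch: &[i64],                   // stand-in for one right RecordBatch
    filter: impl Fn(i64, i64) -> bool,     // stand-in for the join filter
    batch_size: usize,                     // e.g. the default 8192
    mut emit: impl FnMut(Vec<(i64, i64)>), // stand-in for yielding an output batch
) {
    let mut output: Vec<(i64, i64)> = Vec::with_capacity(batch_size);
    for &l in left_rows {
        // Filter and output construction run directly on this small
        // (one_left_row x one_right_batch) intermediate.
        for &r in right_batch {
            if filter(l, r) {
                output.push((l, r));
            }
        }
        // Eagerly yield once the buffer reaches the batch_size threshold.
        if output.len() >= batch_size {
            emit(std::mem::take(&mut output));
        }
    }
    if !output.is_empty() {
        emit(output);
    }
}
```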
#### Old vs. this PR
- The old implementation requires multiple conversions between `indices <-->
batch`, while this PR can use the right batch directly. This avoids unnecessary
transformations and makes the implementation more cache-friendly.
- The old implementation has an extra memory overhead of
`left_buffered_batches_total_rows * right_batch_size (default 8192) * 12 Bytes`
(left and right indices are represented by `uint64` and `uint32`), which can be
significant for large datasets (a worked example follows below).
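For a rough, purely illustrative sense of scale (numbers chosen for illustration, not measured): with 1,000,000 buffered left rows and the default right batch size of 8192, the formula above gives `1,000,000 * 8192 * 12 B ≈ 98 GB` of index data for a single right batch, whereas under the design described above this PR only keeps the current right batch plus one `batch_size`-bounded output buffer in flight.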
## Are these changes tested?
Existing tests
## Are there any user-facing changes?
The old implementation can maintain the right input order in certain cases; this
PR's new design is not able to maintain that property. Preserving it would
require a different design with significant performance and memory overhead.
If this property is important to some users, we can keep the old
implementation (perhaps renamed to `RightOrderPreservingNLJ` and controlled by a
configuration option).