mhilton opened a new pull request, #12634:
URL: https://github.com/apache/datafusion/pull/12634
## Which issue does this PR close?
Closes #12633.
## Rationale for this change
Some joins use an excessive amount of memory because they create very large
record batches. This PR reduces that memory use.
## What changes are included in this PR?
At a high level this PR introduces two changes.
The first is to process the probe-side input batches in smaller chunks. The
processing loop only processes as many rows of the probe-side input as are
likely to fit in a single output record batch. This estimate is somewhat
pessimistic: it assumes that each probe-side row will produce one output row
per build-side row (INNER joins excepted). It could be tuned in the future to
balance processing speed against memory use. To guarantee progress, at least
one probe-side row is processed on each iteration of the loop.
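The chunking described above can be sketched as follows. This is an illustrative sketch, not the PR's actual code; `batch_size` and `build_rows` are hypothetical parameter names:

```rust
/// Illustrative sketch (not the PR's actual implementation): decide how
/// many probe-side rows to process in one loop iteration, pessimistically
/// assuming each probe-side row may match every build-side row.
fn probe_rows_per_iteration(batch_size: usize, build_rows: usize) -> usize {
    // Pessimistic estimate: each probe row could emit `build_rows` output
    // rows, so `batch_size / build_rows` probe rows fit in one output batch.
    // Always process at least one probe row so the join makes progress.
    (batch_size / build_rows.max(1)).max(1)
}

fn main() {
    // With an 8192-row target batch and 100_000 build-side rows, the
    // pessimistic estimate processes a single probe row per iteration.
    assert_eq!(probe_rows_per_iteration(8192, 100_000), 1);
    // With only 10 build-side rows, up to 819 probe rows fit per iteration.
    assert_eq!(probe_rows_per_iteration(8192, 10), 819);
    println!("ok");
}
```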
The second change is to introduce an output buffer. It consolidates small
record batches when the join condition has low selectivity. If the join
condition has high selectivity and therefore produces large batches, the
output buffer breaks these into smaller batches for further processing. The
output buffer will always produce at least one batch, even if that batch is
empty.
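The buffering behaviour can be sketched like this. This is a simplified model, not the PR's actual code: a record batch is modelled as a plain `Vec<i64>` of rows, and `OutputBuffer` is a hypothetical name:

```rust
/// Illustrative sketch (not the PR's actual implementation) of an output
/// buffer that coalesces small batches and splits oversized ones. A record
/// batch is modelled as a plain Vec<i64> of rows.
struct OutputBuffer {
    target: usize,  // configured batch size
    rows: Vec<i64>, // buffered rows awaiting emission
}

impl OutputBuffer {
    fn new(target: usize) -> Self {
        Self { target, rows: Vec::new() }
    }

    /// Buffer an incoming batch and return any full-sized batches ready to
    /// emit. Low-selectivity joins produce many small inputs that are
    /// consolidated here; high-selectivity joins produce large inputs that
    /// are sliced into `target`-sized pieces.
    fn push(&mut self, batch: Vec<i64>) -> Vec<Vec<i64>> {
        self.rows.extend(batch);
        let mut out = Vec::new();
        while self.rows.len() >= self.target {
            let rest = self.rows.split_off(self.target);
            out.push(std::mem::replace(&mut self.rows, rest));
        }
        out
    }

    /// Flush remaining rows; always yields one final batch, even if empty.
    fn finish(self) -> Vec<i64> {
        self.rows
    }
}

fn main() {
    let mut buf = OutputBuffer::new(4);
    // Small batches are coalesced until a full-sized batch is available.
    assert!(buf.push(vec![1, 2]).is_empty());
    assert_eq!(buf.push(vec![3, 4, 5]), vec![vec![1, 2, 3, 4]]);
    // An oversized batch is split into target-sized pieces.
    assert_eq!(buf.push(vec![6, 7, 8, 9, 10]), vec![vec![5, 6, 7, 8]]);
    // finish() always returns a final (possibly empty) batch.
    assert_eq!(buf.finish(), vec![9, 10]);
    println!("ok");
}
```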
## Are these changes tested?
There is a new test that ensures the output batches from
`NestedLoopJoinExec` are no bigger than the configured batch size.
Existing tests are assumed to be sufficient to show that the behaviour
hasn't changed.
Repeating the example from #12633 gives:
```
> SHOW datafusion.execution.batch_size;
+---------------------------------+-------+
| name | value |
+---------------------------------+-------+
| datafusion.execution.batch_size | 8192 |
+---------------------------------+-------+
1 row(s) fetched.
Elapsed 0.039 seconds.
> CREATE TABLE test AS VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8),
(9);
0 row(s) fetched.
Elapsed 0.010 seconds.
> EXPLAIN ANALYZE WITH test_t AS (SELECT concat(t1.column1, t2.column1,
t3.column1, t4.column1, t5.column1) AS v FROM test t1, test t2, test t3, test
t4, test t5) SELECT * FROM test_t tt1 FULL OUTER JOIN test_t tt2 ON
tt1.v<>tt2.v;
Plan with Metrics:
NestedLoopJoinExec: join_type=Full, filter=v@0 != v@1, metrics=[output_rows=9999900000, build_input_batches=10000, build_input_rows=100000, input_batches=10000, input_rows=100000, output_batches=1300010, build_mem_used=2492500, build_time=170.59826ms, join_time=309.369402772s]
  CoalescePartitionsExec, metrics=[output_rows=100000, elapsed_compute=30.75µs]
    ProjectionExec: expr=[concat(CAST(column1@1 AS Utf8), CAST(column1@2 AS Utf8), CAST(column1@3 AS Utf8), CAST(column1@4 AS Utf8), CAST(column1@0 AS Utf8)) as v], metrics=[output_rows=100000, elapsed_compute=67.949286ms]
      CrossJoinExec, metrics=[output_rows=100000, build_input_batches=1, build_input_rows=10, input_batches=1000, input_rows=10000, output_batches=10000, build_mem_used=224, build_time=139.458µs, join_time=8.338651ms]
        MemoryExec: partitions=1, partition_sizes=[1], metrics=[]
        RepartitionExec: partitioning=RoundRobinBatch(10), input_partitions=1, metrics=[fetch_time=1.661829ms, repartition_time=1ns, send_time=10.136821ms]
          ProjectionExec: expr=[column1@1 as column1, column1@2 as column1, column1@3 as column1, column1@0 as column1], metrics=[output_rows=10000, elapsed_compute=348.255µs]
            CrossJoinExec, metrics=[output_rows=10000, build_input_batches=1, build_input_rows=10, input_batches=100, input_rows=1000, output_batches=1000, build_mem_used=224, build_time=9.917µs, join_time=464.211µs]
              MemoryExec: partitions=1, partition_sizes=[1], metrics=[]
              ProjectionExec: expr=[column1@1 as column1, column1@2 as column1, column1@0 as column1], metrics=[output_rows=1000, elapsed_compute=33.044µs]
                CrossJoinExec, metrics=[output_rows=1000, build_input_batches=1, build_input_rows=10, input_batches=10, input_rows=100, output_batches=100, build_mem_used=224, build_time=1.375µs, join_time=53.299µs]
                  MemoryExec: partitions=1, partition_sizes=[1], metrics=[]
                  CrossJoinExec, metrics=[output_rows=100, build_input_batches=1, build_input_rows=10, input_batches=1, input_rows=10, output_batches=10, build_mem_used=224, build_time=1.083µs, join_time=244.708µs]
                    MemoryExec: partitions=1, partition_sizes=[1], metrics=[]
                    MemoryExec: partitions=1, partition_sizes=[1], metrics=[]
  ProjectionExec: expr=[concat(CAST(column1@1 AS Utf8), CAST(column1@2 AS Utf8), CAST(column1@3 AS Utf8), CAST(column1@4 AS Utf8), CAST(column1@0 AS Utf8)) as v], metrics=[output_rows=100000, elapsed_compute=262.67843ms]
    CrossJoinExec, metrics=[output_rows=100000, build_input_batches=1, build_input_rows=10, input_batches=1000, input_rows=10000, output_batches=10000, build_mem_used=224, build_time=5.916µs, join_time=60.39301ms]
      MemoryExec: partitions=1, partition_sizes=[1], metrics=[]
      RepartitionExec: partitioning=RoundRobinBatch(10), input_partitions=1, metrics=[fetch_time=1.857489ms, repartition_time=1ns, send_time=31.12258491s]
        ProjectionExec: expr=[column1@1 as column1, column1@2 as column1, column1@3 as column1, column1@0 as column1], metrics=[output_rows=10000, elapsed_compute=408.628µs]
          CrossJoinExec, metrics=[output_rows=10000, build_input_batches=1, build_input_rows=10, input_batches=100, input_rows=1000, output_batches=1000, build_mem_used=224, build_time=792ns, join_time=926.525µs]
            MemoryExec: partitions=1, partition_sizes=[1], metrics=[]
            ProjectionExec: expr=[column1@1 as column1, column1@2 as column1, column1@0 as column1], metrics=[output_rows=1000, elapsed_compute=44.416µs]
              CrossJoinExec, metrics=[output_rows=1000, build_input_batches=1, build_input_rows=10, input_batches=10, input_rows=100, output_batches=100, build_mem_used=224, build_time=416ns, join_time=95.039µs]
                MemoryExec: partitions=1, partition_sizes=[1], metrics=[]
                CrossJoinExec, metrics=[output_rows=100, build_input_batches=1, build_input_rows=10, input_batches=1, input_rows=10, output_batches=10, build_mem_used=224, build_time=417ns, join_time=4.499µs]
                  MemoryExec: partitions=1, partition_sizes=[1], metrics=[]
                  MemoryExec: partitions=1, partition_sizes=[1], metrics=[]
1 row(s) fetched.
Elapsed 32.145 seconds.
```
This gives a mean output batch size of 7692.17 rows (9,999,900,000 output
rows across 1,300,010 batches), just under the configured batch size of 8192.
## Are there any user-facing changes?
No. Users should not notice any behavioural difference.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]