github-actions[bot] commented on code in PR #37471:
URL: https://github.com/apache/doris/pull/37471#discussion_r1668284776
##########
be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:
##########
@@ -100,30 +100,33 @@ size_t PartitionedHashJoinSinkLocalState::revocable_mem_size(RuntimeState* state
}
Status PartitionedHashJoinSinkLocalState::_revoke_unpartitioned_block(RuntimeState* state) {
Review Comment:
warning: function '_revoke_unpartitioned_block' has cognitive complexity of 52 (threshold 50) [readability-function-cognitive-complexity]
```cpp
Status PartitionedHashJoinSinkLocalState::_revoke_unpartitioned_block(RuntimeState* state) {
^
```
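For context on the numbers in the list below: this check charges +1 for each branch or loop plus a penalty equal to its nesting depth, while lambdas raise the nesting level without adding an increment of their own (hence the entries that only say "nesting level increased"). A minimal, generic illustration of how the score accumulates; this is not code from this PR. A sketch of one way to reduce the score follows the context list.

```cpp
#include <vector>

// Generic sketch: each nested construct costs 1 + the current nesting depth.
int count_positive_even(const std::vector<int>& values) {
    int count = 0;
    for (int v : values) {      // +1 (nesting 0), nesting level -> 1
        if (v > 0) {            // +2 (1 + nesting penalty of 1), nesting level -> 2
            if (v % 2 == 0) {   // +3 (1 + nesting penalty of 2)
                ++count;        // running total: 1 + 2 + 3 = 6
            }
        }
    }
    return count;
}
```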
<details>
<summary>Additional context</summary>
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:108:** +1,
including nesting penalty of 0, nesting level increased to 1
```cpp
if (inner_sink_state_) {
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:113:** +1,
including nesting penalty of 0, nesting level increased to 1
```cpp
if (build_block.rows() <= 1) {
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:119:** +1,
including nesting penalty of 0, nesting level increased to 1
```cpp
if (build_block.columns() > num_slots) {
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:123:**
nesting level increased to 1
```cpp
auto spill_func = [build_block = std::move(build_block), state, this]() mutable {
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:130:**
nesting level increased to 2
```cpp
[](std::vector<uint32_t>& indices) { indices.reserve(reserved_size); });
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:132:**
nesting level increased to 2
```cpp
auto flush_rows = [&state, this](std::unique_ptr<vectorized::MutableBlock>& partition_block,
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:137:** +3,
including nesting penalty of 2, nesting level increased to 3
```cpp
if (!status.ok()) {
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:149:** +2,
including nesting penalty of 1, nesting level increased to 2
```cpp
while (offset < total_rows) {
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:153:** +3,
including nesting penalty of 2, nesting level increased to 3
```cpp
for (size_t i = 0; i != build_block.columns(); ++i) {
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:167:** +3,
including nesting penalty of 2, nesting level increased to 3
```cpp
for (size_t i = 0; i != sub_block.rows(); ++i) {
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:171:** +3,
including nesting penalty of 2, nesting level increased to 3
```cpp
for (uint32_t partition_idx = 0; partition_idx != p._partition_count; ++partition_idx) {
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:177:** +4,
including nesting penalty of 3, nesting level increased to 4
```cpp
if (UNLIKELY(!partition_block)) {
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:185:** +4,
including nesting penalty of 3, nesting level increased to 4
```cpp
if (!st.ok()) {
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:195:** +4,
including nesting penalty of 3, nesting level increased to 4
```cpp
if (partition_block->rows() >= reserved_size || is_last_block) {
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:195:** +1
```cpp
if (partition_block->rows() >= reserved_size || is_last_block) {
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:196:** +5,
including nesting penalty of 4, nesting level increased to 5
```cpp
if (!flush_rows(partition_block, spilling_stream)) {
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:208:**
nesting level increased to 1
```cpp
auto exception_catch_func = [spill_func, this]() mutable {
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:209:**
nesting level increased to 2
```cpp
auto status = [&]() {
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:210:** +3,
including nesting penalty of 2, nesting level increased to 3
```cpp
RETURN_IF_CATCH_EXCEPTION(spill_func());
^
```
**be/src/common/exception.h:89:** expanded from macro
'RETURN_IF_CATCH_EXCEPTION'
```cpp
do { \
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:210:** +4,
including nesting penalty of 3, nesting level increased to 4
```cpp
RETURN_IF_CATCH_EXCEPTION(spill_func());
^
```
**be/src/common/exception.h:94:** expanded from macro
'RETURN_IF_CATCH_EXCEPTION'
```cpp
} catch (const doris::Exception& e) { \
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:210:** +5,
including nesting penalty of 4, nesting level increased to 5
```cpp
RETURN_IF_CATCH_EXCEPTION(spill_func());
^
```
**be/src/common/exception.h:95:** expanded from macro
'RETURN_IF_CATCH_EXCEPTION'
```cpp
if (e.code() == doris::ErrorCode::MEM_ALLOC_FAILED) { \
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:214:** +2,
including nesting penalty of 1, nesting level increased to 2
```cpp
if (!status.ok()) {
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:228:** +1,
including nesting penalty of 0, nesting level increased to 1
```cpp
DBUG_EXECUTE_IF(
^
```
**be/src/util/debug_points.h:36:** expanded from macro 'DBUG_EXECUTE_IF'
```cpp
if (UNLIKELY(config::enable_debug_points)) { \
^
```
**be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp:228:** +2,
including nesting penalty of 1, nesting level increased to 2
```cpp
DBUG_EXECUTE_IF(
^
```
**be/src/util/debug_points.h:38:** expanded from macro 'DBUG_EXECUTE_IF'
```cpp
if (dp) { \
^
```
</details>
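If the aim is simply to get back under the threshold of 50, one common approach is to hoist the most deeply nested decisions (for example the per-partition "flush when full or on the last block" check inside `spill_func`) into small named helpers, because code moved into a flat helper stops paying the nesting penalties listed above. Below is a minimal, generic sketch of that pattern; every name in it is hypothetical and none of it is taken from this PR.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Generic sketch of "extract the innermost branches into helpers".
// All names here are hypothetical and not taken from the Doris code base.
namespace sketch {

struct Partition {
    std::vector<uint32_t> rows;
    size_t flush_threshold = 0;
};

// Flat helper: the `if` below costs +1 inside this helper instead of
// +4/+5 inside the caller's nested loops.
bool maybe_flush(Partition& partition, bool is_last_block) {
    if (partition.rows.size() < partition.flush_threshold && !is_last_block) {
        return false;
    }
    partition.rows.clear(); // stand-in for spilling the accumulated rows
    return true;
}

void distribute_and_flush(std::vector<Partition>& partitions,
                          const std::vector<uint32_t>& row_to_partition,
                          bool is_last_block) {
    for (uint32_t row = 0; row != row_to_partition.size(); ++row) { // +1
        partitions[row_to_partition[row]].rows.push_back(row);
    }
    for (auto& partition : partitions) {       // +1
        maybe_flush(partition, is_last_block); // the flush decision no longer nests here
    }
}

} // namespace sketch
```

Splitting out the `exception_catch_func` wrapper or the outer row-slicing loop into their own functions would have a similar effect; which cut makes sense here is up to the author.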
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]