This is an automated email from the ASF dual-hosted git repository.
mrhhsg pushed a commit to branch spill_and_reserve
in repository https://gitbox.apache.org/repos/asf/doris.git
The following commit(s) were added to refs/heads/spill_and_reserve by this push:
new 9a81b8d78ed fix some bugs (#45532)
9a81b8d78ed is described below
commit 9a81b8d78ed87c8dbf74608b31e50d57ebd72574
Author: Jerry Hu <[email protected]>
AuthorDate: Wed Dec 18 15:23:01 2024 +0800
fix some bugs (#45532)
### What problem does this PR solve?
Issue Number: close #xxx
Related PR: #xxx
Problem Summary:
### Release note
None
### Check List (For Author)
- Test <!-- At least one of them must be included. -->
    - [ ] Regression test
    - [ ] Unit Test
    - [ ] Manual test (add detailed scripts or steps below)
    - [ ] No need to test or manual test. Explain why:
        - [ ] This is a refactor/code format and no logic has been changed.
        - [ ] Previous test can cover this change.
        - [ ] No code files have been changed.
        - [ ] Other reason <!-- Add your reason? -->
- Behavior changed:
    - [ ] No.
    - [ ] Yes. <!-- Explain the behavior change -->
- Does this need documentation?
    - [ ] No.
    - [ ] Yes. <!-- Add document PR link here. eg: https://github.com/apache/doris-website/pull/1214 -->
### Check List (For Reviewer who merge this PR)
- [ ] Confirm the release note
- [ ] Confirm test cases
- [ ] Confirm document
- [ ] Add branch pick label <!-- Add branch pick label that this PR should merge into -->
---
 be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp | 1 +
 be/src/runtime/workload_group/workload_group_manager.cpp     | 3 ++-
 be/src/vec/core/block.cpp                                    | 1 +
 3 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp b/be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp
index 6cf9e658a60..95675004c70 100644
--- a/be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp
+++ b/be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp
@@ -568,6 +568,7 @@ Status PartitionedHashJoinSinkOperatorX::sink(RuntimeState* state, vectorized::B
     if (UNLIKELY(!local_state._shared_state->inner_runtime_state)) {
         RETURN_IF_ERROR(_setup_internal_operator(state));
     }
+
     DBUG_EXECUTE_IF("fault_inject::partitioned_hash_join_sink::sink_eos", {
         return Status::Error<INTERNAL_ERROR>(
                 "fault_inject partitioned_hash_join_sink "
diff --git a/be/src/runtime/workload_group/workload_group_manager.cpp b/be/src/runtime/workload_group/workload_group_manager.cpp
index 4b61cd7892d..ade4f228850 100644
--- a/be/src/runtime/workload_group/workload_group_manager.cpp
+++ b/be/src/runtime/workload_group/workload_group_manager.cpp
@@ -718,7 +718,8 @@ bool WorkloadGroupMgr::handle_single_query_(const std::shared_ptr<QueryContext>&
         // Should not consider about process memory. For example, the query's limit is 100g, workload
         // group's memlimit is 10g, process memory is 20g. The query reserve will always failed in wg
         // limit, and process is always have memory, so that it will resume and failed reserve again.
-        if (!GlobalMemoryArbitrator::is_exceed_hard_mem_limit()) {
+        const size_t test_memory_size = std::max<size_t>(size_to_reserve, 32L * 1024 * 1024);
+        if (!GlobalMemoryArbitrator::is_exceed_soft_mem_limit(test_memory_size)) {
             LOG(INFO) << "Query: " << query_id
                       << ", process limit not exceeded now, resume this query"
                       << ", process memory info: "
diff --git a/be/src/vec/core/block.cpp b/be/src/vec/core/block.cpp
index 38185ded5fb..5d029db6ace 100644
--- a/be/src/vec/core/block.cpp
+++ b/be/src/vec/core/block.cpp
@@ -735,6 +735,7 @@ void Block::clear_column_data(int64_t column_size) noexcept {
     for (auto& d : data) {
         if (d.column) {
             // Temporarily disable reference count check because a column might be referenced multiple times within a block.
+            // Queries like this: `select c, c from t1;`
             // DCHECK_EQ(d.column->use_count(), 1) << " " << print_use_count();
             (*std::move(d.column)).assume_mutable()->clear();
         }
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]