github-actions[bot] commented on code in PR #61495:
URL: https://github.com/apache/doris/pull/61495#discussion_r3015985239


##########
fe/fe-core/src/main/java/org/apache/doris/nereids/rules/implementation/SplitAggWithoutDistinct.java:
##########
@@ -165,6 +183,285 @@ public Void visitSessionVarGuardExpr(SessionVarGuardExpr expr, Map<String, Strin
                 aggregate.getLogicalProperties(), localAgg));
     }
 
+    /**
+     * Implements bucketed hash aggregation for single-BE deployments.
+     * Fuses two-phase aggregation into a single PhysicalBucketedHashAggregate operator,
+     * eliminating exchange overhead and serialization/deserialization costs.
+     *
+     * Only generated when:
+     * 1. enable_bucketed_hash_agg session variable is true
+     * 2. Cluster has exactly one alive BE
+     * 3. Aggregate has GROUP BY keys (global aggregation without keys is excluded)
+     * 4. Aggregate functions support two-phase execution
+     * 5. Data volume checks pass (min input rows, max group keys)
+     */
+    private List<Plan> implementBucketedPhase(LogicalAggregate<? extends Plan> aggregate, ConnectContext ctx) {
+        if (!ctx.getSessionVariable().enableBucketedHashAgg) {
+            return ImmutableList.of();
+        }

Review Comment:
   This starts generating a brand-new `BUCKETED_AGGREGATION_NODE` whenever the FE sees exactly one alive BE, but there is no BE-version/capability check anywhere in the selection path. During a rolling upgrade, an upgraded FE can plan this node against an older BE that does not recognize the new thrift enum/operator yet, and otherwise valid aggregate queries will fail. Please gate the optimization on BE capability / exec version (or keep it disabled until mixed-version support exists).
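   A minimal sketch of the kind of gate meant here; `getSingleAliveBackend()`, `getBeExecVersion()`, and `BUCKETED_AGG_MIN_EXEC_VERSION` are hypothetical names for illustration, since the concrete capability signal depends on what the BE heartbeat reports:
   ```java
   // Hypothetical capability gate: refuse to plan BUCKETED_AGGREGATION_NODE
   // unless the single alive BE is known to understand the new operator.
   private List<Plan> implementBucketedPhase(LogicalAggregate<? extends Plan> aggregate, ConnectContext ctx) {
       if (!ctx.getSessionVariable().enableBucketedHashAgg) {
           return ImmutableList.of();
       }
       Backend be = getSingleAliveBackend(); // assumed helper: the one alive BE
       if (be == null || getBeExecVersion(be) < BUCKETED_AGG_MIN_EXEC_VERSION) {
           // Mixed-version cluster (or unknown version): fall back to the
           // regular two-phase hash aggregation instead of failing the query.
           return ImmutableList.of();
       }
       // ... existing eligibility checks and plan construction ...
   }
   ```
   Whichever form the check takes, it should fail closed: if the BE version cannot be determined, fall back to the existing hash-agg path rather than emit the new node.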



##########
fe/fe-core/src/main/java/org/apache/doris/nereids/glue/translator/PhysicalPlanTranslator.java:
##########
@@ -1336,6 +1321,77 @@ public PlanFragment visitPhysicalHashAggregate(
         return inputPlanFragment;
     }
 
+    @Override
+    public PlanFragment visitPhysicalBucketedHashAggregate(
+            PhysicalBucketedHashAggregate<? extends Plan> aggregate,
+            PlanTranslatorContext context) {
+
+        PlanFragment inputPlanFragment = aggregate.child(0).accept(this, context);
+
+        List<Expression> groupByExpressions = aggregate.getGroupByExpressions();
+        List<NamedExpression> outputExpressions = aggregate.getOutputExpressions();
+
+        // 1. generate slot reference for each group expression
+        List<SlotReference> groupSlots = collectGroupBySlots(groupByExpressions, outputExpressions);
+        ArrayList<Expr> execGroupingExpressions = translateGroupByExprs(groupByExpressions, context);
+
+        // 2. collect agg expressions
+        List<Slot> aggFunctionOutput = Lists.newArrayList();
+        ArrayList<FunctionCallExpr> execAggregateFunctions = Lists.newArrayListWithCapacity(outputExpressions.size());
+        Set<AggregateExpression> processedAggregateExpressions = Sets.newIdentityHashSet();
+        for (NamedExpression o : outputExpressions) {
+            if (o.containsType(AggregateExpression.class)) {

Review Comment:
   `visitPhysicalHashAggregate()` has an explicit `SessionVarGuardExpr` branch here, but the new bucketed path only recognizes bare `AggregateExpression`s. If an aggregate output is wrapped by `SessionVarGuardExpr` (for example after `NormalizeAggregate` preserves session-sensitive decimal/type behavior), this code translates the inner `AggregateExpression` directly and drops the guard. That means the bucketed path can produce a different FE->BE aggregate signature from the regular hash-agg path under non-default session variables. Please mirror the guard-handling logic from `visitPhysicalHashAggregate()` here.
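   A sketch of the shape the loop needs, assuming the guard handling mirrors `visitPhysicalHashAggregate()`; `translateGuardedAggregate()` is a hypothetical stand-in for whatever that method actually does with the guard:
   ```java
   for (NamedExpression o : outputExpressions) {
       if (o.containsType(SessionVarGuardExpr.class)) {
           // Translate through the guard so session-sensitive decimal/type
           // behavior survives into the FE->BE aggregate signature.
           execAggregateFunctions.add(translateGuardedAggregate(o, context));
       } else if (o.containsType(AggregateExpression.class)) {
           // existing bare-AggregateExpression path, unchanged
       }
   }
   ```
   Extracting a helper shared by both visitors would also keep the two translation paths from drifting apart again.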



##########
be/src/exec/operator/bucketed_aggregation_sink_operator.cpp:
##########
@@ -0,0 +1,519 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+#include "exec/operator/bucketed_aggregation_sink_operator.h"
+
+#include <memory>
+#include <string>
+
+#include "common/status.h"
+#include "exec/common/hash_table/hash.h"
+#include "exec/operator/operator.h"
+#include "exprs/vectorized_agg_fn.h"
+#include "runtime/runtime_profile.h"
+#include "runtime/thread_context.h"
+
+namespace doris {
+#include "common/compile_check_begin.h"
+
+BucketedAggSinkLocalState::BucketedAggSinkLocalState(DataSinkOperatorXBase* parent,
+                                                     RuntimeState* state)
+        : Base(parent, state) {}
+
+Status BucketedAggSinkLocalState::init(RuntimeState* state, LocalSinkStateInfo& info) {
+    RETURN_IF_ERROR(Base::init(state, info));
+    SCOPED_TIMER(Base::exec_time_counter());
+    SCOPED_TIMER(Base::_init_timer);
+
+    _instance_idx = info.task_idx;
+
+    // Sink dependencies start as ready=false by default. We must explicitly
+    // set them to ready so the pipeline task can execute (call sink()).
+    // This follows the same pattern as HashJoinBuildSinkLocalState::init().
+    _dependency->set_ready();
+
+    _hash_table_size_counter = ADD_COUNTER(custom_profile(), "HashTableSize", TUnit::UNIT);
+    _hash_table_memory_usage =
+            ADD_COUNTER_WITH_LEVEL(Base::custom_profile(), "MemoryUsageHashTable", TUnit::BYTES, 1);
+
+    _build_timer = ADD_TIMER(Base::custom_profile(), "BuildTime");
+    _expr_timer = ADD_TIMER(Base::custom_profile(), "ExprTime");
+    _hash_table_compute_timer = ADD_TIMER(Base::custom_profile(), "HashTableComputeTime");
+    _hash_table_emplace_timer = ADD_TIMER(Base::custom_profile(), "HashTableEmplaceTime");
+    _hash_table_input_counter =
+            ADD_COUNTER(Base::custom_profile(), "HashTableInputCount", TUnit::UNIT);
+    _memory_usage_arena = ADD_COUNTER(custom_profile(), "MemoryUsageArena", TUnit::BYTES);
+
+    return Status::OK();
+}
+
+Status BucketedAggSinkLocalState::open(RuntimeState* state) {
+    SCOPED_TIMER(Base::exec_time_counter());
+    SCOPED_TIMER(Base::_open_timer);
+    RETURN_IF_ERROR(Base::open(state));
+
+    auto& p = Base::_parent->template cast<BucketedAggSinkOperatorX>();
+    auto& shared_state = *Base::_shared_state;
+
+    // Initialize per-instance data and shared metadata. Multiple sink instances call open()
+    // concurrently, so all shared-state writes must be inside call_once to avoid data races.
+    Status init_status;
+    shared_state.init_instances(state->task_num(), [&]() {

Review Comment:
   `init_instances()` is protected by `std::call_once`, but failures are returned through the outer local variable `init_status`. If the first sink thread hits a clone error inside this lambda, only that thread observes `RETURN_IF_ERROR(init_status)`. Later sink instances skip the `call_once`, keep their own local `init_status == OK`, and continue with partially initialized shared state. This turns a deterministic open failure into later undefined behavior. The init status needs to live in `BucketedAggSharedState` (or the lambda should throw) so every instance fails consistently.
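   A sketch of the shared-status variant, assuming a `Status` member can be added to `BucketedAggSharedState` (the field and method names here are illustrative):
   ```cpp
   #include <mutex> // std::once_flag / std::call_once

   #include "common/status.h" // doris::Status, already included by the sink

   struct BucketedAggSharedState {
       std::once_flag init_flag;
       Status init_status; // written once under call_once, read by every instance

       template <typename F>
       Status init_instances_once(F&& do_init) {
           // std::call_once synchronizes with later callers, so instances that
           // skip the lambda still observe the Status the first instance stored.
           std::call_once(init_flag, [&] { init_status = do_init(); });
           return init_status;
       }
   };
   ```
   With the status living in shared state, every sink's `open()` observes the same error, so a clone failure surfaces as a deterministic open failure on all instances instead of later undefined behavior.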



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

