fanlongmengjd opened a new issue #3603:
URL: https://github.com/apache/incubator-doris/issues/3603
**Describe the bug**
Create a routine load job with `desired_concurrent_number = 5`; after running for a while, there are only 1-2 running tasks.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a routine load job with `desired_concurrent_number = 5`.
2. Go to the web console and check the routine load job.
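For reference, a minimal routine load statement that sets this property might look like the following (the database, table, topic, and broker names here are placeholders, not from the original report):

```sql
CREATE ROUTINE LOAD example_db.test_routine_load ON example_tbl
PROPERTIES
(
    "desired_concurrent_number" = "5"
)
FROM KAFKA
(
    "kafka_broker_list" = "broker1:9092",
    "kafka_topic" = "example_topic"
);
```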
**Expected behavior**
There should be 5 running tasks.
**Version**
- 0.11.33.1
**Additional context**
We checked the JFR recording:
[fe.jfr.tar.gz](https://github.com/apache/incubator-doris/files/4636825/fe.jfr.tar.gz)

It looks like `DescriptorTable` accounts for most of the time, and the `DescriptorTable` grows every time the `plan` method in `fe/src/main/java/org/apache/doris/planner/StreamLoadPlanner.java` is called:
```java
// create the plan. the plan's query id and load id are same, using the parameter 'loadId'
public TExecPlanFragmentParams plan(TUniqueId loadId) throws UserException {
    // construct tuple descriptor, used for scanNode and dataSink
    TupleDescriptor tupleDesc = descTable.createTupleDescriptor("DstTableTuple");
    boolean negative = streamLoadTask.getNegative();
    // here we should be full schema to fill the descriptor table
    for (Column col : destTable.getFullSchema()) {
        SlotDescriptor slotDesc = descTable.addSlotDescriptor(tupleDesc);
        slotDesc.setIsMaterialized(true);
        slotDesc.setColumn(col);
        slotDesc.setIsNullable(col.isAllowNull());
        if (negative && !col.isKey() && col.getAggregationType() != AggregateType.SUM) {
            throw new DdlException("Column is not SUM AggreateType. column:" + col.getName());
        }
    }
    // ...
}
```
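If the planner reuses one `DescriptorTable` instance across `plan()` calls, each call appends a fresh set of slot descriptors, so the table (and the cost of serializing it into each plan fragment) grows with the number of tasks scheduled. A minimal sketch of that accumulation pattern, using hypothetical simplified `DescriptorTable`/`Planner` classes rather than the real Doris ones:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified stand-ins for the planner classes -- not the real
// Doris API -- used only to illustrate the suspected accumulation pattern.
class DescriptorTable {
    final List<String> slots = new ArrayList<>();
    void addSlot(String col) { slots.add(col); }
    int size() { return slots.size(); }
}

class Planner {
    // Long-lived table shared across plan() calls: the suspected root cause.
    final DescriptorTable descTable = new DescriptorTable();

    void plan(List<String> fullSchema) {
        // Mirrors the loop in StreamLoadPlanner.plan(): one new slot per
        // column, appended on every call instead of starting from an empty table.
        for (String col : fullSchema) {
            descTable.addSlot(col);
        }
    }
}

public class DescriptorGrowthSketch {
    public static void main(String[] args) {
        Planner planner = new Planner();
        List<String> schema = List.of("k1", "k2", "v1");
        for (int i = 0; i < 100; i++) {
            planner.plan(schema);
        }
        // After 100 plan() calls the table holds 100 * 3 = 300 slots instead of 3.
        System.out.println(planner.descTable.size());
    }
}
```

If this matches the real behavior, the fix would be to build a fresh `DescriptorTable` for each `plan()` call, but that needs confirmation against the actual `StreamLoadPlanner` code.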