PliskinZhang commented on issue #24889:
URL: https://github.com/apache/doris/issues/24889#issuecomment-1734776926

I deployed a test environment with 1 FE + 1 BE. I found that after increasing max_batch_interval to 60 (the maximum), the job looks normal, but consumption is very slow, about 400 rows per batch, and I don't know why.
```
I0926 11:37:36.238286 172134 data_consumer.cpp:234] kafka consume timeout: 63480f09a42d6165-591140e9b9db3fae
I0926 11:37:37.984771 172134 data_consumer.cpp:234] kafka consume timeout: 63480f09a42d6165-591140e9b9db3fae
I0926 11:37:39.251384 172134 data_consumer.cpp:234] kafka consume timeout: 63480f09a42d6165-591140e9b9db3fae
I0926 11:37:40.251472 172134 data_consumer.cpp:234] kafka consume timeout: 63480f09a42d6165-591140e9b9db3fae
I0926 11:37:40.251492 172134 data_consumer.cpp:257] kafka consumer done: 63480f09a42d6165-591140e9b9db3fae, grp: a648ee7bb83177ec-d82f3befe86d3cb3. cancelled: 0, left time(ms): -776, total cost(ms): 60776, consume cost(ms): 60774, received rows: 390, put rows: 390
I0926 11:37:40.251507 172134 data_consumer_group.cpp:87] all consumers are finished. shutdown queue. group id: a648ee7bb83177ec-d82f3befe86d3cb3
I0926 11:37:40.251523 160128 data_consumer_group.cpp:131] consumer group done: a648ee7bb83177ec-d82f3befe86d3cb3. consume time(ms)=60776, received rows=390, received bytes=42380, eos: 1, left_time: -776, left_rows: 299610, left_bytes: 209672820, blocking get time(us): 60749899, blocking put time(us): 156, id=f46cc7a6026a448f-b398133ce6e0eed1, job_id=44616, txn_id=77200, label=perf-44616-f46cc7a6026a448f-b398133ce6e0eed1-77200, elapse(s)=60
```
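The numbers in the "consumer group done" line can be worked through directly: received rows plus left_rows gives the batch's row target, and the blocking-get time shows how much of the 60 s window the BE spent waiting on Kafka rather than processing. A minimal sketch (plain Python, all values copied from the log above):

```python
# Figures copied from the "consumer group done" log line.
total_cost_ms = 60776        # total cost(ms)
received_rows = 390          # received rows
left_rows = 299610           # left_rows
blocking_get_us = 60749899   # blocking get time(us): time spent waiting for Kafka data

# Per-batch row target implied by the log (received + remaining).
batch_target_rows = received_rows + left_rows
print("batch row target:", batch_target_rows)          # 300000

# Effective throughput of this batch.
rows_per_sec = received_rows / (total_cost_ms / 1000)
print("rows/sec:", round(rows_per_sec, 2))             # ~6.42

# Fraction of the batch window spent blocked on Kafka fetches.
blocked_fraction = (blocking_get_us / 1000) / total_cost_ms
print("blocked on Kafka:", f"{blocked_fraction:.1%}")  # ~100.0%
```

If this reading is right, the BE is essentially idle waiting on the broker for the whole interval, which would point at the Kafka side (broker, network, or topic data availability) rather than Doris processing speed.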


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

