QingdongZeng3 opened a new issue #10497:
URL: https://github.com/apache/druid/issues/10497
### Affected Version
0.16.0
### Description
- Cluster size
12 nodes
- Configurations in use
  - indexer.runner.type = httpRemote
  - metadata store: MySQL; deep storage (segments): HDFS
  - segmentGranularity = 1 hour
  - taskDuration = 1 hour
  - lateMessageRejectionPeriod = 1800s
  - earlyMessageRejectionPeriod = 1800s
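For reference, the settings above would sit in a Kafka supervisor spec roughly like this. This is a partial sketch showing only the fields mentioned in this report (values converted to the ISO-8601 period format Druid expects); all other required spec fields are omitted:

```json
{
  "type": "kafka",
  "dataSchema": {
    "granularitySpec": { "segmentGranularity": "HOUR" }
  },
  "ioConfig": {
    "taskDuration": "PT1H",
    "lateMessageRejectionPeriod": "PT1800S",
    "earlyMessageRejectionPeriod": "PT1800S"
  }
}
```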
- Steps to reproduce the problem
1. With the configurations above, run a datasource, produce some data, and generate some segments. (Everything is normal at this point.)
2. During one complete index task lifecycle (new task starts --> task finishes; 1 hour with this configuration), produce only a few records, all of them outside the accepted time window. They are all thrown away and no new segments are generated, so the partition-offset metadata is not updated in MySQL, but it is updated in the overlord.
3. After the task from step 2 finishes, a new task starts. The new task uses the updated checkpoint from the overlord to generate segments, but when it tries to publish them, the publish fails because its begin checkpoint does not match the offsets stored in MySQL.
4. ERROR when updating metadata:
https://github.com/apache/druid/blob/master/server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java#L1135
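The failure in steps 2-3 can be sketched as a compare-and-swap on partition offsets. This is a hypothetical Python illustration of the check the metadata storage coordinator performs on publish (names like `publish_segments` are illustrative, not Druid's actual API):

```python
def publish_segments(stored_offsets, start_offsets, end_offsets):
    """Atomically advance partition offsets in the metadata store.

    The publish succeeds only if the offsets currently stored in the
    metadata store equal the offsets the task believes it started from.
    """
    if stored_offsets != start_offsets:
        # The mismatch described above: the overlord advanced its
        # in-memory checkpoint past the late data, but the metadata
        # store was never updated because no segments were published.
        return "FAILURE"
    stored_offsets.update(end_offsets)
    return "SUCCESS"


# MySQL still holds the offsets from the last successful publish...
metadata_store = {"partition-0": 100}
# ...but the new task starts from the overlord's advanced checkpoint,
# so its begin checkpoint (150) no longer matches the store (100).
result = publish_segments(metadata_store,
                          start_offsets={"partition-0": 150},
                          end_offsets={"partition-0": 200})
# result == "FAILURE"
```

When the two checkpoints agree (the normal case from step 1), the same call succeeds and the stored offsets advance to the end offsets.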