uditsharma opened a new issue, #14548:
URL: https://github.com/apache/druid/issues/14548
Segments becoming frequently unavailable when replica = 1 for large
datasource
### Affected Version
26.0.0
### Description
We have noticed that one of our datasources, which holds 3 TB of data across 30K segments, frequently has unavailable segments. From our findings it looks like a coordinator balancing issue: the coordinator loads a segment onto a new historical, and after the load completes the segment ends up being dropped from both places.
- Cluster size:
  - 12 historicals
  - 3 brokers
  - 6 MiddleManagers
- Configurations in use
> coordinator config

```
druid.service=druid/coordinator
druid.plaintextPort=8081
druid.indexer.logs.kill.enabled=true
druid.indexer.logs.kill.durationToRetain=259200000
druid.indexer.logs.kill.delay=21600000
druid.extensions.loadList=["druid-google-extensions", "postgresql-metadata-storage", "druid-kafka-indexing-service", "druid-datasketches", "kafka-emitter", "druid-multi-stage-query"]
druid.coordinator.loadqueuepeon.type=curator
druid.serverview.type=batch
druid.coordinator.startDelay=PT10S
druid.coordinator.period=PT200S
druid.coordinator.period.indexingPeriod=PT180S
```
- Steps to reproduce the problem
> I don't have reliable steps to reproduce it, as this only happens when the coordinator performs re-balancing.
- Findings
This is what we observed in the logs for a specific segment. Let me know if the complete logs are needed and I will try to get them.
> 1. The coordinator asks a new historical to load the segment.
> 2. Next it asks that same historical to drop the segment it just loaded, because it now sees a replica count of 2.
> 3. Next it asks the older historical to drop the segment as well; I assume a callback fired reporting that the new node had loaded the segment, so the coordinator decided the old node should drop it too.
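For reference, the replica count the coordinator enforces for this datasource comes from its retention (load) rules, not from the coordinator runtime properties above. A minimal sketch of a `loadForever` rule pinning a single replica, assuming the datasource sits in the default tier (the tier name and count here describe our setup, not a recommendation):

```json
[
  {
    "type": "loadForever",
    "tieredReplicants": {
      "_default_tier": 1
    }
  }
]
```

With a single replica, any load/drop race like the one above leaves a window where no historical serves the segment, which matches the unavailability we see.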
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]