Github user leventov commented on the issue:

    https://github.com/apache/curator/pull/282
  
    There is no peer evidence to point to here, because we are at the forefront 
of this optimization. See 
https://github.com/apache/incubator-druid/pull/6677#discussion_r237182258 and 
https://lists.apache.org/thread.html/1aff123193cec5c385821b2d745a4e846a8a5786146c047acbdf8ea3@%3Cdev.druid.apache.org%3E.
    
    I've seen a Druid heap with more than 10k finalizable Deflater objects, 
about 8k of which were already dead and waiting in the finalization queue. They 
come from `GzipCompressionProvider`.
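    To illustrate why those objects pile up: a `Deflater` holds native zlib 
state, and if it is not released explicitly, reclaiming it is deferred to the 
finalization machinery, so dead instances linger in the finalization queue. A 
minimal sketch (this is a hypothetical helper for illustration, not Curator's 
actual `GzipCompressionProvider` code) of calling `end()` explicitly:

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class DeflaterDemo {

    // Compresses a buffer with a fresh Deflater and releases it explicitly.
    static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater();
        try {
            deflater.setInput(input);
            deflater.finish();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[1024];
            while (!deflater.finished()) {
                int n = deflater.deflate(buf);
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        } finally {
            // Explicit end() frees the native zlib state immediately;
            // without it, the dead Deflater sits in the finalization
            // queue until a finalizer thread gets around to it.
            deflater.end();
        }
    }

    // Round-trips the compressed bytes to show the helper is correct.
    static byte[] decompress(byte[] data, int maxLen) throws DataFormatException {
        Inflater inflater = new Inflater();
        try {
            inflater.setInput(data);
            byte[] out = new byte[maxLen];
            int n = inflater.inflate(out);
            return Arrays.copyOf(out, n);
        } finally {
            inflater.end(); // Inflater has the same native-resource issue
        }
    }

    public static void main(String[] args) throws DataFormatException {
        byte[] original = "hello hello hello hello".getBytes();
        byte[] roundTripped = decompress(compress(original), 1024);
        System.out.println(Arrays.equals(original, roundTripped));
    }
}
```

    Pooling and reusing `Deflater` instances (with `reset()`) avoids both the 
allocation churn and the finalization pressure entirely.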
    
    Historically, Druid has used ZooKeeper somewhat incorrectly (not for what 
ZooKeeper was designed): it announces data segment placement via ZooKeeper, 
which leads to the creation of many new nodes in ZooKeeper every second. This 
means that, by accident, Druid is a good stress test for ZooKeeper (and 
consequently for Curator), and we probably run the largest Druid cluster.

