Did you ever figure out what to do here?
On Mon, Jul 4, 2016 at 12:19 AM Sathyakumar Seshachalam <
> Another follow-up question: am I right in assuming that per-topic
> retention minutes or cleanup policy only have an effect when
> log.cleaner.enable=false ?
> In other words, if I choose to truncate topic data, will the
> __consumer_offsets topic also be either deleted or compacted ?
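For reference, per-topic retention and cleanup policy can be altered at runtime; a hedged sketch follows (the ZooKeeper-based flags match a ~0.10-era Kafka, and the topic name and retention value are illustrative only; newer releases use --bootstrap-server instead):

```shell
# Sketch only: set a delete policy and 7-day retention on a hypothetical topic.
bin/kafka-configs.sh --zookeeper zk1:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config cleanup.policy=delete,retention.ms=604800000

# Note: __consumer_offsets ships with cleanup.policy=compact, and compaction
# only runs when the broker-level log.cleaner.enable is true. Time/size-based
# retention (the delete policy) is enforced by the retention thread regardless
# of whether the log cleaner is enabled.
```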
> On Mon, Jul 4, 2016 at 11:38 AM, Sathyakumar Seshachalam <
> sathyakumar_seshacha...@trimble.com> wrote:
> > OK, I am in a situation where all Kafka nodes are going to run out of space.
> > This is because I had been running an uncompacted __consumer_offsets topic
> > and topics with everything retained.
> > I am now at a point where I can afford to compact the __consumer_offsets
> > topic and also delete certain topics. I would like to know the right
> > process to do this.
> > Since I have close to 1.8 TB of data in the __consumer_offsets topic and
> > more in the other topics, any log compaction and log deletion/truncation is
> > going to take time. Should I do this node by node? Will Kafka's replication
> > come in the way? (I have read that uncompacted data from the leader is
> > replicated to the followers.)
> > Is there a clean process for this for a 3-node Kafka cluster? Last time I
> > triggered a log compaction on all 3 nodes simultaneously, all of them
> > broke (I raised this in the same email group and got an answer about
> > the memory). Eventually they self-healed, but this caused some serious
> > disruption to the service, so before trying again I want to make sure
> > there is a cleaner process here.
> > Any help/pointers will be greatly appreciated.
> > Thanks,
> > Sathya
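A hedged sketch of the broker-by-broker approach asked about above (this is not an official procedure; the buffer size is illustrative, and you should verify settings against your Kafka version):

```shell
# On ONE broker at a time:
#
# 1. Edit server.properties to enable the log cleaner and give its dedupe
#    buffer enough memory for a large __consumer_offsets partition
#    (256 MB here is an illustrative value, not a recommendation):
#      log.cleaner.enable=true
#      log.cleaner.dedupe.buffer.size=268435456
#
# 2. Restart that broker:
bin/kafka-server-stop.sh
bin/kafka-server-start.sh -daemon config/server.properties

# 3. Watch the cleaner's progress in logs/log-cleaner.log on that broker, and
#    wait for under-replicated partitions to drop back to 0 before moving on
#    to the next broker, so only one node is degraded at any time.
```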
Mario Ricci | Software Engineer | Trimble Navigation Limited | VirtualSite
Solutions | Office: +1 (303) 635-8604 / x228604