Re: Problem in Deleting Kafka Logs

2016-10-17 Thread Kunal Gupta
log.retention.bytes=5 — does it not mean that when the log size reaches 5 TB it will discard the log? What you have specified means that when a machine is left with 5.5 TB of disk space, it will discard the data. *Thanks, Kunal* *+91-9958189589* *Data Analyst* *First Paper Publication :
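[For context on the confusion above: the property takes a raw byte count, not terabytes, so a literal value of 5 means five bytes. A quick sanity check of what a 5 TB limit would actually look like (illustrative arithmetic only, not a value from this thread):

```python
# log.retention.bytes is a raw byte count, so a 5 TB (TiB) limit would be:
five_tb = 5 * 1024**4
print(five_tb)  # 5497558138880
```
]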

Re: Problem in Deleting Kafka Logs

2016-10-17 Thread Ben Davison
Here's an example from our server.properties:

log.segment.bytes=1073741824
message.max.bytes=5242880
num.partitions=3
log.retention.bytes=5
num.network.threads=6
num.io.threads=16
replica.fetch.max.bytes=6242880
default.replication.factor=3
zookeeper.connection.timeout.ms=6
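[Worth noting when reading a config like this: log.retention.bytes applies per partition, not per topic or per disk, so the worst-case footprint scales with partition count and replication factor. A minimal sketch of that arithmetic (the figures below are illustrative, not taken from the thread):

```python
def worst_case_topic_bytes(retention_bytes, partitions, replication_factor):
    # Every replica of every partition can grow to roughly retention_bytes
    # (plus up to one extra segment before the cleaner deletes old ones),
    # so the cluster-wide worst case for one topic is the product of all three.
    return retention_bytes * partitions * replication_factor

# Illustrative numbers: 5 GiB retention, 3 partitions, replication factor 3.
total = worst_case_topic_bytes(5 * 1024**3, 3, 3)
print(total / 1024**3, "GiB cluster-wide")  # 45.0 GiB cluster-wide
```
]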

Re: Problem in Deleting Kafka Logs

2016-10-17 Thread Kunal Gupta
I didn't get it ... Can you explain it to me with an example, or in whatever form is feasible for you? On Oct 17, 2016 2:08 PM, "Ben Davison" wrote: > We have it set up so that both log ms is set to 7 days and log delete > bytes (can't remember exactly what the setting is

Re: Problem in Deleting Kafka Logs

2016-10-17 Thread Ben Davison
We have it set up so that the log ms setting is 7 days and the log delete bytes setting (can't remember exactly what it's called) is also set, so we never run out of space. Don't set the value to something like 99% of your disk, as the log cleaner thread might not kick in in time; we leave it at 90% of disk space.
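[The 90%-of-disk rule of thumb above can be sketched as a sizing calculation. This is a hypothetical helper (the function name, the 1 TB figure, and the partition count are all illustrative, not from the thread):

```python
def retention_bytes_per_partition(total_disk_bytes, fraction=0.9,
                                  partitions_on_disk=1):
    # Size log.retention.bytes so that all partition replicas stored on this
    # disk together stay under `fraction` of its capacity, leaving headroom
    # for the log cleaner thread to kick in before the disk fills.
    return int(total_disk_bytes * fraction) // partitions_on_disk

# Example: a 1 TB disk hosting 4 partition replicas of a heavy topic.
# In practice total_disk_bytes could come from
# shutil.disk_usage("/var/kafka-logs").total.
print(retention_bytes_per_partition(10**12, 0.9, 4))  # 225000000000
```
]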

Re: Problem in Deleting Kafka Logs

2016-10-16 Thread Kunal Gupta
Please help me :( *Thanks, Kunal* *+91-9958189589* *Data Analyst* *First Paper Publication :* http://dl.acm.org/citation.cfm?id=2790798 *Blog:* http://learnhardwithkunalgupta.blogspot.in On Sun,

Problem in Deleting Kafka Logs

2016-10-15 Thread Kunal Gupta
In my organisation I have a 3-machine Kafka cluster, and each topic is assigned two machines for storing its data. There is one topic for which I get a lot of data from clients; that data exceeds my disk space on one machine, because that machine is the leader of that topic. When I look into kafka-logs