Re: Hitting integer limit when setting log segment.bytes

2015-05-14 Thread Lance Laursen
Great, thanks for the link Mike. From what I can tell, the only time opening a segment file would be slow is in the event of an unclean shutdown, where a segment file may not have been fsync'd and Kafka needs to CRC it and rebuild its index. This should really only be a problem for the "newest" l…
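To make that cost concrete, here is a hypothetical recovery sketch in Java (not Kafka's actual code; the record layout is invented for illustration): it scans a segment-like file of [length][crc][payload] records and CRC-checks each one, so the work grows linearly with segment size, and a large un-fsync'd segment takes proportionally longer to reopen after a crash.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.zip.CRC32;

// Hypothetical sketch, NOT Kafka's implementation: recover a segment-like file
// whose records are laid out as [4-byte length][4-byte crc][payload]. Every
// payload is re-read and CRC-verified, so recovery cost is O(segment size).
public final class SegmentRecoverySketch {
    public static long recover(Path segmentFile) throws IOException {
        long validBytes = 0;
        try (FileChannel ch = FileChannel.open(segmentFile, StandardOpenOption.READ)) {
            ByteBuffer header = ByteBuffer.allocate(8);
            while (true) {
                header.clear();
                if (ch.read(header, validBytes) < 8) break;            // truncated tail
                header.flip();
                int length = header.getInt();
                long expectedCrc = Integer.toUnsignedLong(header.getInt());
                if (length <= 0) break;                                // corrupt header
                ByteBuffer payload = ByteBuffer.allocate(length);
                if (ch.read(payload, validBytes + 8) < length) break;  // simplification: single read
                CRC32 crc = new CRC32();
                crc.update(payload.array(), 0, length);
                if (crc.getValue() != expectedCrc) break;              // corrupt record
                validBytes += 8 + length;                              // record is good; an index entry would be rebuilt here
            }
        }
        return validBytes; // the caller would truncate the segment to this point
    }
}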

Re: Hitting integer limit when setting log segment.bytes

2015-05-13 Thread Mike Axiak
Jay Kreps has commented on this before: https://issues.apache.org/jira/browse/KAFKA-1670?focusedCommentId=14161185&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14161185 Basically, you can always have more segment files. Having segment files that are too large will signif…
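As a back-of-the-envelope check on that advice (a sketch, not something from the thread): segment.bytes is a 32-bit int config, so its ceiling is Integer.MAX_VALUE (about 2 GiB), and covering an 8GB target simply means roughly four segment files per partition instead of one.

// Back-of-the-envelope only: segment.bytes is an int, so the largest legal
// segment is Integer.MAX_VALUE bytes (~2 GiB). Writing the same amount of data
// with that ceiling just produces a few more segment files per partition.
public final class SegmentMath {
    public static void main(String[] args) {
        long desiredSegmentBytes = 8L * 1024 * 1024 * 1024;  // the 8GB target from the thread
        long maxSegmentBytes = Integer.MAX_VALUE;            // hard ceiling for an int config
        System.out.println("max segment.bytes = " + maxSegmentBytes);
        System.out.printf("segments needed per 8GB of data: about %.1f%n",
                (double) desiredSegmentBytes / maxSegmentBytes); // ~4.0
    }
}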

Re: Hitting integer limit when setting log segment.bytes

2015-05-13 Thread Mayuresh Gharat
I suppose that is the way log management works in Kafka; I am not sure of the exact reason for this. Also, the index files that are constructed map an offset, stored relative to the log file's base offset, to the real position in the log. The key-value entry in the index file is of the form <relative offset, physical position>. Thanks, Mayuresh On Wed, May 1…
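Concretely, each index entry pairs an offset stored relative to the segment's base offset with the byte position of that record in the .log file, both as 4-byte ints. A minimal sketch of that layout (hypothetical class and method names, not Kafka's internals):

import java.nio.ByteBuffer;

// Illustration of the (relative offset, physical position) entry described
// above: storing the offset relative to the segment's base offset is what lets
// it fit in 4 bytes, and the 4-byte position field is one reason a single
// segment cannot usefully grow past Integer.MAX_VALUE bytes.
public final class IndexEntrySketch {
    static final int ENTRY_SIZE = 8; // 4-byte relative offset + 4-byte file position

    static void append(ByteBuffer index, long baseOffset, long absoluteOffset, long filePosition) {
        index.putInt((int) (absoluteOffset - baseOffset)); // relative offset
        index.putInt((int) filePosition);                  // byte position within the .log file
    }

    public static void main(String[] args) {
        ByteBuffer index = ByteBuffer.allocate(ENTRY_SIZE);
        long baseOffset = 5_000_000_000L;                  // segment file names carry the base offset
        append(index, baseOffset, 5_000_001_234L, 1_048_576L);
        index.flip();
        System.out.printf("relative offset=%d, position=%d%n", index.getInt(), index.getInt());
    }
}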

Re: Hitting integer limit when setting log segment.bytes

2015-05-13 Thread Lance Laursen
Hey folks, Any update on this?

On Thu, Apr 30, 2015 at 5:34 PM, Lance Laursen wrote:
> Hey all,
>
> I am attempting to create a topic which uses 8GB log segment sizes, like so:
> ./kafka-topics.sh --zookeeper localhost:2181 --create --topic perftest6p2r --partitions 6 --replication-factor 2 …
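For anyone hitting the same ceiling today: segment.bytes is an int, so the closest you can get to an 8GB segment is Integer.MAX_VALUE (about 2 GiB). Below is a sketch of the same topic creation using the modern Java AdminClient, which this 2015 thread predates; the bootstrap address is an assumption for illustration.

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

// Sketch: create the perftest topic with segment.bytes pinned at its int
// ceiling. "localhost:9092" is a placeholder; partitions and replication
// mirror the command quoted above.
public final class CreatePerfTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic topic = new NewTopic("perftest6p2r", 6, (short) 2)
                    .configs(Map.of("segment.bytes", String.valueOf(Integer.MAX_VALUE)));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}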