Hey all,

I am attempting to create a topic which uses 8GB log segment sizes, like so:
./kafka-topics.sh --zookeeper localhost:2181 --create --topic perftest6p2r
--partitions 6 --replication-factor 2 --config max.message.bytes=655360
--config segment.bytes=8589934592

And am getting the following error:
Error while executing topic command For input string: "8589934592"
java.lang.NumberFormatException: For input string: "8589934592"
at
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:583)
...
...

Upon further testing with --alter, it appears that segment.bytes will not
accept a value higher than 2,147,483,647, which is the upper limit of a
signed 32-bit int. This restricts log segment size to an upper limit of
~2GB.
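For illustration, the failure can be reproduced outside Kafka entirely: the
config value is evidently parsed as a Java int (the stack trace above goes
through Integer.parseInt), so anything above Integer.MAX_VALUE throws the
same NumberFormatException. A minimal sketch, not Kafka code:

```java
public class SegmentBytesLimit {
    public static void main(String[] args) {
        // A Java int tops out at 2^31 - 1
        System.out.println(Integer.MAX_VALUE); // 2147483647

        // 8 GiB = 8589934592 overflows a signed 32-bit int,
        // so parsing it as an int fails exactly as in the error above
        try {
            Integer.parseInt("8589934592");
        } catch (NumberFormatException e) {
            System.out.println("NumberFormatException: " + e.getMessage());
        }

        // The same string parses fine as a 64-bit long
        System.out.println(Long.parseLong("8589934592")); // 8589934592
    }
}
```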

We run Kafka on hard-drive-dense machines, each with 10 Gbit uplinks. We can
set ulimits higher in order to deal with all the open file handles (since
Kafka keeps all log segment file handles open), but it would be preferable
to minimize this number, as well as to minimize the amount of log segment
rollover experienced at high traffic (i.e. a rollover every 1-2 seconds or
so when saturating 10 GbE).
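The rollover rate follows from back-of-envelope arithmetic (assuming a single
broker/partition absorbing the full link rate, which is the worst case):

```java
public class RolloverMath {
    public static void main(String[] args) {
        double linkBytesPerSec = 10e9 / 8;              // 10 Gbit/s ≈ 1.25 GB/s
        double cappedSegment = Integer.MAX_VALUE;       // current ~2 GiB ceiling
        double desiredSegment = 8L * 1024 * 1024 * 1024; // desired 8 GiB segments

        // Seconds to fill one segment at line rate
        System.out.printf("rollover at the int cap: %.1f s%n",
                cappedSegment / linkBytesPerSec);        // ~1.7 s
        System.out.printf("rollover with 8 GiB segments: %.1f s%n",
                desiredSegment / linkBytesPerSec);       // ~6.9 s
    }
}
```

So a 2 GB cap means a new segment (and another open file handle) every couple
of seconds at saturation, while 8 GiB segments would cut the segment count,
and the handle count, by 4x.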

Is there a reason (performance or otherwise) that a 32-bit integer is used
rather than something larger?

Thanks,
-Lance
