You can add more brokers. Another option is to enable compression in the producer, if you haven't done so already.
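For example, with the 0.8 producer, compression is just a couple of config properties; the codec choice and topic names below are only placeholders, a minimal sketch of what the producer config could look like:

    # producer config -- enable message compression ("gzip" or "snappy")
    compression.codec=snappy
    # optionally compress only selected topics (comma-separated, placeholder names)
    compressed.topics=topicA,topicB

Snappy typically gives a better throughput/CPU trade-off than gzip, at a somewhat lower compression ratio, so it tends to help most when the bottleneck is network or disk rather than CPU.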
Thanks,

Jun

On Wed, Dec 11, 2013 at 11:42 PM, xingcan <xingc...@gmail.com> wrote:

> Guozhang,
>
> Thanks for your prompt reply. I have two 300GB SAS disks for each broker.
> At peak time, the produce speed for each broker is about 70MB/s.
> Apparently, this speed is already restricted by the network, while the
> consume speed is lower because some topics are consumed by more than one
> group. Under these circumstances, if the peak time lasts for hours, my
> disks will be fully used.
>
>
> On Thu, Dec 12, 2013 at 2:00 PM, Guozhang Wang <wangg...@gmail.com> wrote:
>
> > One possible approach is to change the retention policy on the broker.
> >
> > How much can your messages accumulate on the brokers at peak time?
> >
> > Guozhang
> >
> >
> > On Wed, Dec 11, 2013 at 9:09 PM, xingcan <xingc...@gmail.com> wrote:
> >
> > > Hi,
> > >
> > > In my application, the produce speed can be very high at certain
> > > times of day, while it returns to a low speed the rest of the time.
> > > Frequently, my data logs are flushed away before they are consumed
> > > by clients, due to a lack of disk space during the busy times.
> > >
> > > Increasing the consume speed seems difficult, and adding disk space
> > > for log files is not an ultimate solution either. Any suggestions for
> > > this problem? Is there any speed-control mechanism in Kafka?
> > >
> > > Thanks.
> > >
> > > --
> > > *Xingcan*
> > >
> >
> >
> > --
> > -- Guozhang
> >
>
>
> --
> *Xingcan*
>
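(For the retention-policy suggestion above, the relevant broker settings are the time- and size-based retention limits in server.properties; the values below are only placeholders, a minimal sketch rather than a recommendation:

    # broker config (server.properties) -- bound log retention by time and/or size
    log.retention.hours=24
    # per-partition size limit in bytes (placeholder, ~50GB)
    log.retention.bytes=53687091200

Whichever limit is hit first triggers deletion of old log segments, so size-based retention can keep disks from filling up during peak hours even if the time-based limit has not been reached.)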