When thinking about memory for the broker, the only thing you really need to
consider is the filesystem cache. The further behind production your consumers
are, the more memory matters (e.g. you want your cache window to be larger
than the gap between production and consumption).
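
To make that concrete, here's a back-of-envelope sketch in Python (the
throughput and lag numbers are made up for illustration, plug in your own):

    # How much page cache you'd want so a lagging consumer still reads from
    # memory instead of disk (hypothetical numbers, adjust to your workload).
    ingest_mb_per_sec = 20      # assumed producer throughput into the broker
    consumer_lag_sec = 5 * 60   # assumed worst-case consumer lag

    # Data produced during the lag window; if the filesystem cache can hold
    # at least this much, the lagging consumer is still served from memory.
    cache_window_mb = ingest_mb_per_sec * consumer_lag_sec
    print("want >= %.1f GB free for page cache" % (cache_window_mb / 1024.0))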

As a caveat, this only matters if you're really thrashing the brokers hard; we
don't even see a blip when consumers read from disk, even while we're pushing
the brokers as hard as we can :-).
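
For what it's worth, the disk math in the quoted question below looks right to
me. Here is the same estimate as a quick sketch (simplified: it ignores
replication, index/segment overhead, and assumes an even spread across
brokers):

    # Rough per-broker disk estimate from the quoted question (simplified:
    # one copy of the data, even spread across brokers, no overhead).
    daily_ingest_gb = 10
    retention_days = 10
    broker_count = 2

    per_broker_gb = daily_ingest_gb * retention_days / float(broker_count)
    print("~%.0f GB per broker before overhead" % per_broker_gb)  # ~50 GB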



On Tue, Oct 30, 2012 at 8:21 AM, Jun Rao <jun...@gmail.com> wrote:

> Yes.
>
> Thanks,
>
> Jun
>
> On Mon, Oct 29, 2012 at 9:34 PM, howard chen <howac...@gmail.com> wrote:
>
> > We only have a few internal consumers, so I assume they should be fine?
> >
> >
> > On Tue, Oct 30, 2012 at 12:27 PM, Jun Rao <jun...@gmail.com> wrote:
> > > A Kafka broker typically doesn't need a lot of memory, so 2GB is fine.
> > > ZK memory depends on the number of consumers: more consumers mean more
> > > offsets written to ZK.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Mon, Oct 29, 2012 at 9:07 PM, howard chen <howac...@gmail.com>
> wrote:
> > >
> > >> I understand the main limitation of a Kafka deployment is disk space.
> > >>
> > >> E.g.
> > >>
> > >> If I generate 10GB of messages per day, have 2 nodes, and need to keep
> > >> messages for 10 days, then I need
> > >>
> > >> 10GB * 10 / 2 = 50GB per node (of course there is overhead, but the
> > >> requirement is roughly proportional).
> > >>
> > >> So if I deploy machines using the following setup, do you think it is
> > >> reasonable?
> > >>
> > >> 2 x Kafka (100GB disk, 2 CPU, 2GB RAM)
> > >> 3 x ZooKeeper (10GB disk, 1 CPU, 512MB RAM)
> > >>
> > >> do you think it is okay?
> > >>
> >
>



-- 
Matthew Rathbone
Foursquare | Software Engineer | Server Engineering Team
matt...@foursquare.com | @rathboma <http://twitter.com/rathboma> |
4sq <http://foursquare.com/rathboma>
