We only have a few internal consumers, so I assume they should be fine?

On Tue, Oct 30, 2012 at 12:27 PM, Jun Rao <jun...@gmail.com> wrote:
> Kafka brokers typically don't need a lot of memory, so 2GB is fine. ZK memory
> depends on the number of consumers: more consumers mean more offsets written to ZK.
>
> Thanks,
>
> Jun
>
> On Mon, Oct 29, 2012 at 9:07 PM, howard chen <howac...@gmail.com> wrote:
>
>> I understand the main limitation of a Kafka deployment is disk space.
>>
>> E.g.
>>
>> If I generate 10GB of messages per day, have 2 nodes, and need to keep
>> data for 10 days, then I need
>>
>> 10GB * 10 / 2 = 50GB per node (of course there is overhead, but the
>> requirement is roughly proportional).
>>
>> So if I deploy machines using the following setup, do you think it
>> is reasonable?
>>
>> 2 x Kafka (100GB disk, 2 CPU, 2GB RAM)
>> 3 x Zookeeper (10GB disk, 1 CPU, 512MB RAM)
>>
>> do you think it is okay?
>>

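The sizing arithmetic from the quoted thread can be sketched as a quick calculation. The function name and the overhead multiplier are illustrative, not from the thread; the thread only notes that overhead exists and the requirement is roughly proportional:

```python
def disk_per_node_gb(daily_gb, retention_days, nodes, overhead=1.0):
    """Rough per-node disk requirement, assuming data is spread evenly
    across nodes and retention is the only driver of disk usage.
    `overhead` is a multiplier (>= 1.0) for indexes, headroom, etc."""
    return daily_gb * retention_days / nodes * overhead

# Example from the thread: 10 GB/day, 10-day retention, 2 nodes.
print(disk_per_node_gb(10, 10, 2))  # 50.0 GB per node
```

In practice you would pad `overhead` (e.g. 1.3 or more) to leave headroom for log segment overhead and temporary growth, and account for replication if each partition is stored on more than one node.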