Kafka broker and producer max message default config

2015-05-12 Thread Rendy Bambang Junior
Hi,

I see that the broker configuration max.message.bytes defaults to 1,000,000,
while the producer configuration max.request.size defaults to 1,048,576.

Why is the broker's default lower than the producer's? If that is the case,
then the producer could send a message that is bigger than what the broker
can receive.

Could anyone please clarify my understanding?

Rendy
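
For reference, a minimal config sketch of how these two limits relate. The
broker-level property is message.max.bytes (max.message.bytes is the
per-topic override); the values and adjustment below are illustrative, not a
recommendation:

    # server.properties (broker) -- default cap on a single message
    message.max.bytes=1000000

    # producer config -- default cap on an entire produce request
    max.request.size=1048576

    # one common adjustment: keep the producer's limit at or below the
    # broker's, so the producer never accepts a message the broker rejects
    max.request.size=1000000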


Re: Kafka broker and producer max message default config

2015-05-12 Thread Rendy Bambang Junior
Thanks, I get the difference now. This assumes the request to be sent
contains at least one message, doesn't it?

Rendy
On May 13, 2015 3:55 AM, Ewen Cheslack-Postava e...@confluent.io wrote:

 The max.request.size effectively caps the largest size message the producer
 will send, but the actual purpose is, as the name implies, to limit the
 size of a request, which could potentially include many messages. This
 keeps the producer from sending very large requests to the broker. The
 limitation on message size is just a side effect.




 --
 Thanks,
 Ewen
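
To illustrate the point above, a minimal Java sketch (the broker address and
topic name are placeholders): many small records are grouped into batches,
one request can carry several batches, and max.request.size bounds the whole
request; the cap on a single message is only a side effect of that bound.

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    import java.util.Properties;

    public class RequestSizeSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");   // placeholder
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("max.request.size", "1000000"); // caps a whole request
            props.put("batch.size", "16384");         // records are grouped into batches...
            props.put("linger.ms", "50");             // ...so one request carries many messages

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            for (int i = 0; i < 1000; i++) {
                // Each record is small; what max.request.size limits is the
                // request that eventually carries a batch of them.
                producer.send(new ProducerRecord<>("events",
                    Integer.toString(i), "payload-" + i));
            }
            producer.close();
        }
    }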



Differences between new and legacy scala producer API

2015-05-07 Thread Rendy Bambang Junior
Hi

- The legacy Scala producer API uses KeyedMessage with topic, key, partKey,
and message, while the new API has no partKey. What's the difference between
key and partKey? (See the sketch below.)
- In the Javadoc, the new producer API's send method is always async; is the
producer.type property overridden?
- Will the legacy Scala API be deprecated any time soon?

Rendy
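
A minimal sketch of the new (Java) producer API, with a placeholder broker
address and topic name. In the new API the key is stored with the record and,
under the default partitioner, is also hashed to choose the partition, so
there is no separate partKey; send() is asynchronous and takes an optional
callback, and the old producer.type setting does not apply to it.

    import org.apache.kafka.clients.producer.Callback;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    import java.util.Properties;

    public class NewProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");   // placeholder
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);

            // The key ("user-42") travels with the message and, with the default
            // partitioner, determines the partition -- no separate partKey.
            ProducerRecord<String, String> record =
                new ProducerRecord<>("page-views", "user-42", "/home");

            // send() is asynchronous; completion is reported via the callback.
            producer.send(record, new Callback() {
                public void onCompletion(RecordMetadata metadata, Exception exception) {
                    if (exception != null) {
                        exception.printStackTrace();
                    } else {
                        System.out.println("partition=" + metadata.partition()
                            + " offset=" + metadata.offset());
                    }
                }
            });

            producer.close();
        }
    }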


Re: 2 kafka cluster sharing same ZK Ensemble.

2015-03-27 Thread Rendy Bambang Junior
Based on the documentation, as long as you configure a different ZooKeeper
chroot path for each cluster's brokers, it should be OK. Correct me if I'm
wrong.

Disclaimer: I have never tried this scheme myself.

Rendy
On Mar 28, 2015 2:14 AM, Shrikant Patel spa...@pdxinc.com wrote:

 Can 2 separate Kafka clusters share the same ZK ensemble?
 If yes, how does ZK deal with the 2 clusters having brokers with the same id?

 Thanks,
 Shri
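
A minimal sketch of the chroot approach described above, with hypothetical
host names and paths: each cluster's brokers point zookeeper.connect at a
different chroot, so broker ids only need to be unique within their own
chroot namespace.

    # Cluster A brokers (server.properties) -- hosts and paths are illustrative
    zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka-cluster-a

    # Cluster B brokers
    zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka-cluster-b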


 



Common Form of Data Written to Kafka for Data Ingestion

2015-03-24 Thread Rendy Bambang Junior
Hi,

I'm a new Kafka user. I'm planning to use Kafka to send web usage data from
an application to S3 (for EMR) and to MongoDB.

What is a common format for messages written to Kafka in a data ingestion
use case? I've done a little homework and found Avro to be one of the
options.

Thanks.

Rendy
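
A minimal sketch of the Avro option mentioned above, using a hypothetical
web-usage schema; the encoded byte[] would be used as the Kafka message
value (distributing the schema, e.g. via a schema registry, is out of scope
here).

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.EncoderFactory;

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;

    public class UsageEventAvroSketch {
        // Hypothetical schema for a web-usage event.
        private static final Schema SCHEMA = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"WebUsageEvent\",\"fields\":["
            + "{\"name\":\"userId\",\"type\":\"string\"},"
            + "{\"name\":\"url\",\"type\":\"string\"},"
            + "{\"name\":\"timestamp\",\"type\":\"long\"}]}");

        // Encode one event to Avro binary; the bytes become the Kafka message value.
        public static byte[] encode(String userId, String url, long timestamp)
                throws IOException {
            GenericRecord event = new GenericData.Record(SCHEMA);
            event.put("userId", userId);
            event.put("url", url);
            event.put("timestamp", timestamp);

            ByteArrayOutputStream out = new ByteArrayOutputStream();
            BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
            new GenericDatumWriter<GenericRecord>(SCHEMA).write(event, encoder);
            encoder.flush();
            return out.toByteArray();
        }
    }

The consumers (the EMR jobs and the MongoDB loader) would then need the same
schema to decode the bytes they read from Kafka.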