Hi,
I started digging into the new producer and have a few questions:
1. What parts (if any) of the old producer config still apply to the new
producer, or is it just what is specified in New Producer Configs?
2. How do you specify a partitioner for the new producer? If there is no such
option, what usage is
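On question 2, my understanding (worth checking against the New Producer Configs page) is that the new producer does not take the old `partitioner.class` setting: you either let it derive a partition from the record key's hash, or pass an explicit partition number on each record. A minimal sketch of the key-hashing idea; `Arrays.hashCode` here is only a stand-in for the producer's actual hash (the real implementation uses murmur2 over the serialized key bytes):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class KeyPartitionSketch {
    // Map a key deterministically to one of numPartitions partitions,
    // mimicking the shape of the new producer's default behavior.
    static int partitionFor(String key, int numPartitions) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        // Mask off the sign bit so the modulo result is non-negative.
        return (Arrays.hashCode(keyBytes) & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // The same key always lands on the same partition.
        System.out.println(partitionFor("user-42", 6));
        System.out.println(partitionFor("user-42", 6));
    }
}
```

The upshot is that records sharing a key go to the same partition, which is how per-key ordering is preserved.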
Rajiv,
So, any time a broker's disk fills up, it will shut itself down immediately
(it will do this in response to any IO error on writing to disk).
Unfortunately, this means that the node will not be able to do any
housecleaning before shutdown, which is an 'unclean' shutdown. This means
that
Thanks, everyone. I'll try to clean up the disk space and try again.
On Sun, Nov 23, 2014 at 8:47 AM, Jason Rosenberg j...@squareup.com wrote:
Hi all,
Basically I used a lot of code from this project,
https://github.com/stealthly/scala-kafka . My idea is to send a key/value pair
to Kafka, so that I can design a partition function in the future.
I checked the documentation and it seems I should create a ProducerRecord, then I can
specify
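If the goal is a custom partition function, one route (an assumption based on the new producer API, not on the scala-kafka project's code) is to compute the partition yourself and pass it through the ProducerRecord constructor that accepts an explicit partition number. A standalone sketch; the producer call is shown only as a comment since compiling it needs kafka-clients on the classpath, and `choosePartition` with its "hot-" routing rule is entirely hypothetical:

```java
public class CustomPartitionSketch {
    // Hypothetical partition function: route keys starting with "hot-"
    // to partition 0, spread everything else by key hash.
    static int choosePartition(String key, int numPartitions) {
        if (key.startsWith("hot-")) {
            return 0;
        }
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // With kafka-clients available you would then send with:
        //   new ProducerRecord<String, String>(
        //       "my-topic", choosePartition(key, numPartitions), key, value);
        System.out.println(choosePartition("hot-item", 4));   // 0
        System.out.println(choosePartition("normal-key", 4));
    }
}
```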