unexpected consumer rebalance 0.9.0.1

2016-08-23 Thread Franco Giacosa
Hi, I am experiencing the following issue in Kafka 0.9.0.1: I have a consumer that is alone in its consumer group, processing and committing offsets, and at some point the group does a rebalance (I don't know why) and the group is removed. The weird part is that it seems that the consumer is
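
The thread never quotes a root cause, but a common one on 0.9.0.1 is spending longer than session.timeout.ms between poll() calls, since heartbeats are only sent from poll(). Below is a minimal sketch of a consumer that at least makes rebalances visible and commits before losing its partitions; the broker address, group id, topic name and the process() helper are placeholders, not from the thread:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class RebalanceLogger {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "my-group");                // placeholder group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("my-topic"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                // Log and commit what has been processed so far before the partitions move away.
                System.out.println("Rebalance: revoked " + partitions);
                consumer.commitSync();
            }
            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                System.out.println("Rebalance: assigned " + partitions);
            }
        });

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                process(record);       // hypothetical processing step
            }
            consumer.commitSync();      // commit after processing for at-least-once semantics
        }
    }

    private static void process(ConsumerRecord<String, String> record) { /* ... */ }
}
```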

Re: problem deleting topic

2016-02-24 Thread Franco Giacosa
_topics, > your deleted topic should be there. It takes a while before Kafka actually > deletes it. > > Here are some discussions around it > > http://stackoverflow.com/questions/23976670/when-how-does-a-topic-marked-for-deletion-get-finally-removed > > Best, > Leo > > On Tue,

Re: property block.on.buffer.full default value

2016-02-23 Thread Franco Giacosa
a/clients/producer/ProducerConfig.java#L232 > > On Tue, 23 Feb 2016 at 21:14 Franco Giacosa <fgiac...@gmail.com> wrote: > > > Hi Guys, > > > > I was going over the producer kafka configuration, and the > > property block.on.buffer.full in the documentation says: >

property block.on.buffer.full default value

2016-02-23 Thread Franco Giacosa
Hi Guys, I was going over the Kafka producer configuration, and the property block.on.buffer.full in the documentation says: "When our memory buffer is exhausted we must either stop accepting new records (block) or throw errors. *By default this setting is true* and we block, however in some
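
For reference, a hedged sketch of how the two related producer settings might be wired up on 0.9 (the older block.on.buffer.full switch and its max.block.ms replacement); the broker address, topic and the chosen values are illustrative placeholders, not recommendations from the thread:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BufferFullConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Older switch: when true, send() blocks instead of throwing once buffer.memory is exhausted.
        props.put("block.on.buffer.full", "false");
        // 0.9-style replacement: bound how long send()/partitionsFor() may block, in milliseconds.
        props.put("max.block.ms", "5000");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<String, String>("my-topic", "key", "value"));
        producer.close();
    }
}
```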

problem deleting topic

2016-02-23 Thread Franco Giacosa
Hello, I am having the following problem trying to delete a topic. The topic was auto-created with a default.replication.factor = 1, but my test cluster has only 1 machine, so now when I start Kafka I get this error: ERROR [KafkaApi-0] error when handling request Name: TopicMetadataRequest;

kafka health check

2016-02-15 Thread Franco Giacosa
Hi, To ping Kafka for a health check, what are my options if I am using the Java client 0.9.0? I know that the Confluent platform has an API proxy, but it needs the schema registry (which I am not running) (also I don't know if the schema registry is a dependency if I use only the health check
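
The plain 0.9 Java client has no dedicated ping, but a crude health check can be built from a metadata round trip, with no REST proxy or schema registry involved. A sketch under that assumption; isReachable and its arguments are made-up names:

```java
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.PartitionInfo;

public class KafkaHealthCheck {
    public static boolean isReachable(String bootstrapServers, String topic) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("max.block.ms", "3000"); // don't hang forever if the cluster is down

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        try {
            // Fetching partition metadata forces a round trip to the brokers.
            List<PartitionInfo> partitions = producer.partitionsFor(topic);
            return partitions != null && !partitions.isEmpty();
        } catch (Exception e) {
            return false; // timeout or any metadata error -> treat the cluster as unhealthy
        } finally {
            producer.close();
        }
    }
}
```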

Re: Callback Record Key

2016-02-11 Thread Franco Giacosa
Thanks Damian. 2016-02-11 12:01 GMT+01:00 Damian Guy <damian@gmail.com>: > Hi, > Pass the key into the callback you provide to kafka. You then have it > available when the callback is invoked. > > Cheers, > Damian > > On 11 February 2016 at 10:59, Franco Giac
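
A sketch of the approach Damian describes: the key is simply captured by the Callback passed to send(), so it is available when onCompletion fires. The markAsDelivered/markAsFailed helpers stand in for the poster's database ACK and are hypothetical:

```java
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class KeyAwareSend {
    public static void sendWithAck(KafkaProducer<String, String> producer, String topic,
                                   final String key, String value) {
        ProducerRecord<String, String> record = new ProducerRecord<>(topic, key, value);
        producer.send(record, new Callback() {
            @Override
            public void onCompletion(RecordMetadata metadata, Exception exception) {
                if (exception == null) {
                    markAsDelivered(key);         // hypothetical DB update keyed by the record key
                } else {
                    markAsFailed(key, exception); // hypothetical failure handler
                }
            }
        });
    }

    private static void markAsDelivered(String key) { /* update the database here */ }
    private static void markAsFailed(String key, Exception e) { /* record the failure */ }
}
```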

Callback Record Key

2016-02-11 Thread Franco Giacosa
Hi, Is there a way to get the record key in the callback of send() for a record? I would like to be able to identify which of the records I have sent a given callback belongs to, so I can ACK in the DB that the record landed successfully in Kafka. I am using 0.9.0. Thanks.

Re: at-least-once delivery

2016-02-02 Thread Franco Giacosa
block for X amount of ms, can someone tell me what a good value would be for this property in order to mimic the behaviour of block.on.buffer.full? Thanks 2016-01-31 6:09 GMT+01:00 James Cheng <jch...@tivo.com>: > > > On Jan 30, 2016, at 4:21 AM, Franco Giacosa <fgiac..

Re: at-least-once delivery

2016-01-30 Thread Franco Giacosa
ially change the ordering of records because if two records are sent to a single partition, and the first fails and is retried but the second succeeds, then the second record may appear first." 2016-01-30 13:18 GMT+01:00 Franco Giacosa <fgiac...@gmail.com>: > Hi, > > T

at-least-once delivery

2016-01-30 Thread Franco Giacosa
Hi, At-least-once delivery comes in part from network failures plus retries (which may generate duplicates), right? In the event of a duplicate (there was an error but the first message landed OK on partition P1), will the producer recalculate the partition on the retry? Is this
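
On the partition question: the producer picks the partition once when send() is called, and a retry resends the same batch to that same partition, so duplicates stay on the original partition. A hedged sketch of producer settings commonly used for at-least-once with ordering preserved; the values are illustrative:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;

public class AtLeastOnceProducer {
    public static KafkaProducer<String, String> create(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        props.put("acks", "all");   // wait for the full ISR before treating a send as successful
        props.put("retries", "3");  // retry transient failures; retries may produce duplicates
        // With more than one in-flight request, a failed-then-retried batch can land after a
        // later batch, reordering records within a partition. Cap it at 1 to keep the order.
        props.put("max.in.flight.requests.per.connection", "1");

        return new KafkaProducer<>(props);
    }
}
```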

Re: kafka 0.9.0 java version

2016-01-28 Thread Franco Giacosa
check http://kafka.apache.org/documentation.html#java 2016-01-28 16:58 GMT+01:00 Muresanu A.V. (Andrei Valentin) < andrei.mures...@ing.ro>: > Hi all, > > what is the oracle jdk version that is "supported" by kafka 0.9.0 ? > > 6/7/8... > >

re-consuming last offset

2016-01-25 Thread Franco Giacosa
Hi, I am facing the following issue: when I start my consumer I get that the offset for one of the partitions is going to be reset to the last committed offset: 14:55:28.788 [pool-1-thread-1] DEBUG o.a.k.c.consumer.internals.Fetcher - Resetting offset for partition t1-4 to the committed offset

Re: re-consuming last offset

2016-01-25 Thread Franco Giacosa
When poll() is called and there is no current position on the consumer, is the position returned the one of the last committed offset? (I thought it would return that position + 1 because it was already committed.) 2016-01-25 15:07 GMT+01:00 Franco Giacosa <fgiac...@gmail.com>: > Hi,
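
For what it's worth, the committed offset in the new consumer is the position of the next record to read (commitSync() commits position(), not the offset of the last record), so resuming at the committed offset does not re-deliver the last processed message, provided commits happen after processing. A small sketch for inspecting both values; the broker, group and the t1-4 partition from the log above are assumptions:

```java
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetInspector {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "my-group");                // placeholder group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        TopicPartition tp = new TopicPartition("t1", 4);  // partition t1-4 from the log above
        consumer.assign(Arrays.asList(tp));

        OffsetAndMetadata committed = consumer.committed(tp); // last committed offset, or null
        long position = consumer.position(tp);                // offset of the next record poll() returns
        System.out.println("committed=" + (committed == null ? "none" : committed.offset())
                + " position=" + position);
        consumer.close();
    }
}
```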

Re: commitSync CommitFailedException

2016-01-21 Thread Franco Giacosa
er blocking operation > such as commitSync(). > 2) If all consumers in the group die, the coordinator doesn't really do > anything other than clean up some group state. In particular, it does not > remove offset commits. > > -Jason > > On Sun, Jan 17, 2016 at 11:03 AM, Franco Giacosa

Re: Create Kafka Topic Programatically

2016-01-20 Thread Franco Giacosa
Hi Joe, There is a broker option called auto.create.topics.enable; with it enabled, a producer can just start sending data to a topic and the topic will be created with the default values. 2016-01-20 13:19 GMT+01:00 Joe San : > Kafka Users, > > How can I create a kafka
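
A sketch of what that looks like from the client side, assuming the broker is running with auto.create.topics.enable=true (it is a broker setting, not a producer one); the topic and broker names are placeholders:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AutoCreateExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // If the broker allows auto-creation, this first send creates "brand-new-topic"
        // using the broker defaults (num.partitions, default.replication.factor);
        // otherwise the metadata fetch for the topic fails.
        producer.send(new ProducerRecord<String, String>("brand-new-topic", "key", "value")).get();
        producer.close();
    }
}
```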

Re: commitSync CommitFailedException

2016-01-17 Thread Franco Giacosa
ing to add a new configuration > "max.poll.records" to set an upper limit on the number of messages returned > from poll() (assuming that KIP-41 is approved). This can make it easier to > limit the message processing time so that there is less risk of running > over the ses

Re: Partitions and consumer assignment

2016-01-16 Thread Franco Giacosa
A consumer group can cover many partitions: if the group has 1 consumer and there are N partitions, that consumer will consume from all N; if you have a spike you can add up to N-1 more consumers to that consumer group (any more than N in total just sit idle). 2016-01-16 11:32 GMT+01:00 Jason Williams : > Thanks Jens!
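
A minimal illustration of that scaling knob: group membership is just the shared group.id, so starting another instance of the same consumer grows the group until there is one consumer per partition. The names below are placeholders:

```java
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ScaleOut {
    // Consumers created with the same group.id split the topic's partitions between them;
    // adding more instances helps only while consumers <= partitions.
    public static KafkaConsumer<String, String> newGroupMember(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", groupId);
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("my-topic"));    // placeholder topic
        return consumer;
    }
}
```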

Re: Partitions and consumer assignment

2016-01-16 Thread Franco Giacosa
nsumers when I'm already at N will just add idle consumers? > > -J > > > Sent via iPhone > > > On Jan 16, 2016, at 03:21, Franco Giacosa <fgiac...@gmail.com> wrote: > > > > 1 consumer group can have many partitions, if the consumer group has 1 > > consume

commitSync CommitFailedException

2016-01-15 Thread Franco Giacosa
Hi, the documentation for commitSync() says the following about CommitFailedException: * @throws org.apache.kafka.clients.consumer.CommitFailedException if the commit failed and cannot be retried. * This can only occur if you are using automatic group management with {@link
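
A hedged sketch of a poll/process/commitSync loop with that exception handled; on 0.9 the usual trigger is processing between polls outrunning session.timeout.ms, after which the commit is rejected and the records are redelivered to whichever consumer now owns the partitions. The topic name and handle() helper are placeholders:

```java
import java.util.Arrays;

import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SafeCommitLoop {
    public static void run(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(Arrays.asList("my-topic"));    // placeholder topic
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                handle(record);                           // hypothetical processing step
            }
            try {
                consumer.commitSync();
            } catch (CommitFailedException e) {
                // The group rebalanced while we were processing (e.g. the gap between poll()
                // calls exceeded session.timeout.ms). The revoked partitions were handed to
                // another consumer, so the uncommitted records will be redelivered; keep going
                // and make processing idempotent.
                System.err.println("Commit failed after rebalance: " + e.getMessage());
            }
        }
    }

    private static void handle(ConsumerRecord<String, String> record) { /* ... */ }
}
```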

Re: consuming 0 records

2016-01-07 Thread Franco Giacosa
fetch.max.wait.ms. So you shouldn't expect to poll 500x. > > > > I'd suggest using a small, but non-zero timeout when polling. 100ms is > used > > in the docs quite a bit. > > > > -Dana > > > > On Wed, Dec 30, 2015 at 10:03 AM, Franco Giacosa <fgiac..
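
A tiny sketch of a polling loop with the non-zero timeout suggested above; 100 ms is the value used in the docs and in the thread, not a tuned recommendation:

```java
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollTimeoutExample {
    public static void drain(KafkaConsumer<String, String> consumer) {
        while (true) {
            // A 100 ms timeout lets poll() wait briefly for data instead of returning
            // immediately; poll(0) tends to come back with 0 records on the first calls
            // because the fetch responses have not arrived yet.
            ConsumerRecords<String, String> records = consumer.poll(100);
            System.out.println("fetched " + records.count() + " records");
        }
    }
}
```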

Re: How to reset a consumer-group's offset in kafka 0.9?

2015-12-29 Thread Franco Giacosa
Hi, If you want to reset the consumer offset (I am not sure about the group's offset) you can use this property in 0.9.0: props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); 2015-12-29 16:35 GMT+01:00 Han JU : > Hi Stevo, > > But by deleting and recreating
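
auto.offset.reset only applies when the group has no committed offset, so for a group that already has commits a rewind has to be done explicitly. A hedged sketch using the 0.9 consumer's seekToBeginning plus a commit; the broker, group, topic and partition are placeholders:

```java
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ResetToEarliest {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Only used when the group has NO committed offset for a partition.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        TopicPartition tp = new TopicPartition("my-topic", 0);                // placeholder partition
        consumer.assign(Arrays.asList(tp));

        consumer.seekToBeginning(tp);
        long start = consumer.position(tp); // forces the seek to resolve against the broker
        consumer.commitSync();              // persist the rewound offset for the group
        System.out.println("group now committed at offset " + start + " for " + tp);
        consumer.close();
    }
}
```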