Hi, I am experiencing the following issue in Kafka 0.9.0.1:
I have a consumer that is alone in a consumer group, processing and
committing the offsets, and at one point the group does a rebalance (I don't
know why) and the group is removed.
The weird situation is that it seems that the consumer is
_topics,
> your deleted topic should be there. It takes a while before Kafka actually
> deletes it.
>
> Here's some discussions around it
>
> http://stackoverflow.com/questions/23976670/when-how-does-a-topic-marked-for-deletion-get-finally-removed
>
> Best,
> Leo
>
> On Tue,
a/clients/producer/ProducerConfig.java#L232
>
> On Tue, 23 Feb 2016 at 21:14 Franco Giacosa <fgiac...@gmail.com> wrote:
>
> > Hi Guys,
> >
> > I was going over the producer kafka configuration, and the
> > property block.on.buffer.full in the documentation says:
Hi Guys,
I was going over the producer kafka configuration, and the
property block.on.buffer.full in the documentation says:
"When our memory buffer is exhausted we must either stop accepting new
records (block) or throw errors. *By default this setting is true* and we
block, however in some
Hello,
I am having the following problem trying to delete a topic.
The topic was auto-created with a default.replication.factor = 1, but my
test cluster has only 1 machine, so now when I start kafka I get this error
ERROR [KafkaApi-0] error when handling request Name: TopicMetadataRequest;
Hi,
To ping Kafka for a health check, what are my options if I am using the
Java client 0.9.0?
I know that the Confluent platform has an API proxy, but it needs the
schema registry (which I am not running), and I don't know if the schema
registry is a dependency if I use only the health check
Thanks Damian.
2016-02-11 12:01 GMT+01:00 Damian Guy <damian@gmail.com>:
> Hi,
> Pass the key into the callback you provide to kafka. You then have it
> available when the callback is invoked.
>
> Cheers,
> Damian
>
> On 11 February 2016 at 10:59, Franco Giac
Hi,
Is there a way to get the record key in the callback of send() for a
record? I would like to be able to identify which of the records I have
sent a callback belongs to, so I can ACK in the DB that the record landed
successfully in Kafka.
I am using 0.9.0.
Thanks.
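A runnable sketch of the pattern from the reply below: capture the key in the callback object so it is still in scope when the ack arrives. The Callback interface and send() here are local stand-ins so this compiles without the kafka-clients jar; with the real client you would pass the callback to KafkaProducer.send(record, callback).

```java
public class KeyInCallback {
    interface Callback { void onCompletion(long offset, Exception exception); } // stand-in

    static String ackedKey; // stand-in for the "ACK on the db" side effect

    // Pretends to be the producer: immediately "acks" at offset 42.
    static void send(String key, String value, Callback cb) {
        cb.onCompletion(42L, null);
    }

    public static void main(String[] args) {
        final String key = "order-123"; // hypothetical record key
        send(key, "payload", new Callback() {
            public void onCompletion(long offset, Exception exception) {
                if (exception == null) {
                    ackedKey = key; // the anonymous class captured the key
                }
            }
        });
        System.out.println("acked key: " + ackedKey);
    }
}
```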
block for X amount of ms; can someone tell me what a good value would
be for this property in order to mimic the behaviour of
block.on.buffer.full?
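A minimal configuration sketch, not a definitive answer: in the 0.9 producer, block.on.buffer.full is deprecated in favour of max.block.ms, which bounds how long send() may block when the buffer is full. A large value approximates the old blocking behaviour; the broker address below is a placeholder.

```java
import java.util.Properties;

public class ProducerBlockingConfig {
    static Properties baseConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker
        props.put("block.on.buffer.full", "false");
        props.put("max.block.ms", "60000"); // 60 s upper bound on send() blocking
        return props;
    }

    public static void main(String[] args) {
        System.out.println(baseConfig());
    }
}
```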
Thanks
2016-01-31 6:09 GMT+01:00 James Cheng <jch...@tivo.com>:
>
> > On Jan 30, 2016, at 4:21 AM, Franco Giacosa <fgiac..
ially change the ordering of records because if two records are sent
to a single partition, and the first fails and is retried but the second
succeeds, then the second record may appear first."
2016-01-30 13:18 GMT+01:00 Franco Giacosa <fgiac...@gmail.com>:
> Hi,
>
> T
Hi,
The at-least-once delivery comes in part from network failures and
the retries (which may generate duplicates), right?
In the event of a duplicate (there was an error but the first message
landed OK on partition P1), will the producer recalculate the partition
on the retry? is this
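A sketch under stated assumptions: the default partitioner is a pure function of the key (murmur2 in the real client; hashCode here as a stand-in), and the client picks the partition once per record, so a retry of the same record targets the same partition rather than recalculating.

```java
public class PartitionSketch {
    // Deterministic keyed partitioning: same key, same partition, every time.
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int first = partitionFor("order-1", 6);
        int retry = partitionFor("order-1", 6); // same key, same result
        System.out.println(first == retry);
    }
}
```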
check http://kafka.apache.org/documentation.html#java
2016-01-28 16:58 GMT+01:00 Muresanu A.V. (Andrei Valentin) <
andrei.mures...@ing.ro>:
> Hi all,
>
> what is the oracle jdk version that is "supported" by kafka 0.9.0 ?
>
> 6/7/8...
>
>
Hi,
I am facing the following issue:
When I start my consumer I get that the offset for one of the partitions is
going to be reset to the last committed offset
14:55:28.788 [pool-1-thread-1] DEBUG o.a.k.c.consumer.internals.Fetcher -
Resetting offset for partition t1-4 to the committed offset
When doing poll() when there is no current position on the consumer, is
the position returned then the one of the last committed offset? (I thought
it would return that position + 1, because it was already committed)
2016-01-25 15:07 GMT+01:00 Franco Giacosa <fgiac...@gmail.com>:
> Hi,
er blocking operation
> such as commitSync().
> 2) If all consumers in the group die, the coordinator doesn't really do
> anything other than clean up some group state. In particular, it does not
> remove offset commits.
>
> -Jason
>
> On Sun, Jan 17, 2016 at 11:03 AM, Franco Giacosa
Hi Joe,
There is a broker option called auto.create.topics.enable, so the
producer can just start sending data to a topic and the topic will be
created with the default values.
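For reference, a hedged sketch of the broker-side settings involved (a server.properties fragment; the values shown are the usual defaults, not a recommendation):

```properties
auto.create.topics.enable=true
num.partitions=1
default.replication.factor=1
```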
2016-01-20 13:19 GMT+01:00 Joe San :
> Kafka Users,
>
> How can I create a kafka
ing to add a new configuration
> "max.poll.records" to set an upper limit on the number of messages returned
> from poll() (assuming that KIP-41 is approved). This can make it easier to
> limit the message processing time so that there is less risk of running
> over the ses
1 consumer group can consume from many partitions. If the consumer group
has 1 consumer and there are N partitions, that consumer will consume from
all N; if you have a spike you can add up to N-1 more consumers to that
consumer group (beyond one consumer per partition, the extras sit idle).
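A back-of-the-envelope sketch (a round-robin stand-in, not Kafka's actual assignor): N partitions spread over M consumers in one group; any consumer beyond the partition count ends up with nothing to do.

```java
public class AssignmentMath {
    // counts[i] = number of partitions consumer i would own.
    static int[] share(int partitions, int consumers) {
        int[] counts = new int[consumers];
        for (int p = 0; p < partitions; p++) {
            counts[p % consumers]++; // partition p goes to consumer p mod M
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(share(6, 4))); // [2, 2, 1, 1]
        System.out.println(java.util.Arrays.toString(share(3, 5))); // [1, 1, 1, 0, 0] -> two idle
    }
}
```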
2016-01-16 11:32 GMT+01:00 Jason Williams :
> Thanks Jens!
nsumers when I'm already at N will just add idle consumers?
>
> -J
>
>
> Sent via iPhone
>
> > On Jan 16, 2016, at 03:21, Franco Giacosa <fgiac...@gmail.com> wrote:
> >
> > 1 consumer group can have many partitions, if the consumer group has 1
> > consume
Hi,
on the documentation for commitSync it says the following about the
CommitFailedException
* @throws org.apache.kafka.clients.consumer.CommitFailedException if the
commit failed and cannot be retried.
* This can only occur if you are using automatic group management with
{@link
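A stand-in sketch of the pattern that javadoc implies: with automatic group management, commitSync() can fail unretryably after a rebalance, and the usual response is to fall back to reprocessing from the last committed offset. The exception class here is local so this runs without the kafka-clients jar.

```java
public class CommitHandling {
    static class CommitFailedException extends RuntimeException {} // stand-in

    // Simulates a consumer whose group rebalanced underneath it.
    static void commitSync() {
        throw new CommitFailedException();
    }

    static String commitOrRecover() {
        try {
            commitSync();
            return "committed";
        } catch (CommitFailedException e) {
            // Commit cannot be retried: records since the last successful
            // commit will be redelivered to whichever consumer now owns them.
            return "rebalanced: records since the last commit will be redelivered";
        }
    }

    public static void main(String[] args) {
        System.out.println(commitOrRecover());
    }
}
```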
fetch.max.wait.ms. So you shouldn't expect to poll 500x.
> >
> > I'd suggest using a small, but non-zero timeout when polling. 100ms is
> used
> > in the docs quite a bit.
> >
> > -Dana
> >
> > On Wed, Dec 30, 2015 at 10:03 AM, Franco Giacosa <fgiac..
Hi,
If you want to reset the consumer offset (I am not sure about the group's
offset) you can use this property in 0.9.0
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
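A minimal sketch of that setting in context: the string behind ConsumerConfig.AUTO_OFFSET_RESET_CONFIG is "auto.offset.reset", and "earliest" only kicks in when the group has no committed offset (or the committed offset is out of range). The broker address and group id below are placeholders.

```java
import java.util.Properties;

public class OffsetResetConfig {
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker
        props.put("group.id", "demo-group");              // hypothetical group
        props.put("auto.offset.reset", "earliest");       // start from the beginning if no commit
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps());
    }
}
```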
2015-12-29 16:35 GMT+01:00 Han JU :
> Hi Stevo,
>
> But by deleting and recreating