Hey Mohit,
Unfortunately, I don't think there's any such configuration.
By the way, there are some pretty cool things you can do with keys in Kafka
(such as semantic partitioning and log compaction). I don't know if they
would help in your use case, but it might be worth checking out.
Hi Srividhya,
I'm a little confused about your setup. You have both clusters pointed to
the same zookeeper, right? You don't appear to be using the zookeeper
chroot option, so I think they would just form a single cluster.
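For reference, a chroot is just a path appended to the Zookeeper connection string in each broker's server.properties. A sketch with placeholder hostnames:

```properties
# Datacenter A brokers (the path after the host list is the chroot)
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka-dcA

# Datacenter B brokers
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka-dcB
```

Brokers registered under different chroots form separate clusters even when they share the same Zookeeper ensemble.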
-Jason
On Mon, Jun 22, 2015 at 3:50 PM, Srividhya Anantharamakrishnan
.
Datacenter B has the same set up.
Now, I am trying to publish message from one of the nodes in A to the ZK in
A and make one of the nodes in B consume the message by connecting to A's
ZK.
On Mon, Jun 22, 2015 at 4:25 PM, Jason Gustafson ja...@confluent.io
wrote:
Hi Srividhya,
I'm
Hi Shushant,
Write throughput on zookeeper can be a problem depending on your commit
policy. Typically you can handle this by just committing less frequently
(with the obvious tradeoff). The consumer also supports storing offsets in
Kafka itself through the offsets.storage option (see
We have a couple open tickets to address these issues (see KAFKA-1894 and
KAFKA-2168). It's definitely something we want to fix.
On Wed, Jun 17, 2015 at 4:21 AM, Jan Stette jan.ste...@gmail.com wrote:
Adding some more details to the previous question:
The indefinite wait doesn't happen on
It looks like you might have bootstrap servers pointed to zookeeper. It
should point to the brokers instead since the new consumer doesn't use
zookeeper.
As for the hanging, it is a known bug that we're still working on.
-Jason
On Tue, Aug 18, 2015 at 3:02 AM, Krogh-Moe, Espen
Hey Kashif, to subscribe, send a message to users-subscr...@kafka.apache.org
.
-Jason
On Tue, Jun 30, 2015 at 1:16 AM, Kashif Hussain kash.t...@gmail.com wrote:
Hi,
I want to subscribe to the Kafka users mailing list.
Regards,
Kashif
Hey Rajasekar,
Are you updating zookeeper itself or just the image? Either way, it's
probably best to preserve the data if possible. Usually people update
zookeeper using a rolling reboot to make sure no data is lost. You just
have to make sure you give the rebooted host enough time to rejoin
Hi Bhavesh,
I'm not totally sure I understand the expected behavior, but I think this
can work. Instead of seeking to the start of the range before the poll
loop, you should probably provide a ConsumerRebalanceCallback to get
notifications when group assignment has changed (e.g. when one of your
Hey Valibhav,
With only one partition, all of the consumers will end up hitting a single
broker (since partitions cannot be split). Whether it is possible to get
that number of consumers on a single broker may depend on the message load
through the topic. I think there has been some interest in
I couldn't find a jira for this, so I added KAFKA-2403.
-Jason
On Tue, Aug 4, 2015 at 9:36 AM, Jay Kreps j...@confluent.io wrote:
Hey James,
You are right the intended use of that was to have a way to capture some
very small metadata about your state at the time of offset commit in an
Hey Simon,
The new consumer has the ability to forego group management and assign
partitions directly. Once assigned, you can seek to any offset you want.
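A minimal sketch of that pattern with the 0.9 API (the topic name, partition, offset, and broker address below are made up for illustration):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ManualAssignExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

        // Forego group management: assign the partition directly, then seek anywhere.
        TopicPartition tp = new TopicPartition("my-topic", 0);
        consumer.assign(Collections.singletonList(tp));
        consumer.seek(tp, 42L); // arbitrary starting offset

        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records)
            System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
        consumer.close();
    }
}
```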
-Jason
On Tue, Aug 4, 2015 at 5:08 AM, Simon Cooper
simon.coo...@featurespace.co.uk wrote:
Reading on the consumer docs, there's no
Hey Neville, I tried just now and the artifact seems accessible. Perhaps
you can post your full pom to the mailing list that Grant linked to above
and we can investigate a bit more?
-Jason
On Wed, Aug 5, 2015 at 3:36 PM, Grant Henke ghe...@cloudera.com wrote:
It looks like your usage lines up
Hey Stevo,
That's a good point. I think the javadoc is pretty clear that this could
return no partitions when the consumer has no active assignment, but it may
be a little unintuitive to have to call poll() after subscribing before you
can get the assigned partitions. I can't think of a strong
Hey Stevo,
I agree that it's a little unintuitive that what you are committing is the
next offset that should be read from and not the one that has already been
read. We're probably constrained in that we already have a consumer which
implements this behavior. Would it help if we added a method
Hey Stevo,
I think ConsumerRecords only contains the partitions which had messages.
Would you mind creating a jira for the feature request? You're welcome to
submit a patch as well.
-Jason
On Tue, Jul 21, 2015 at 2:27 AM, Stevo Slavić ssla...@gmail.com wrote:
Hello Apache Kafka community,
Hey Stevo,
Thanks for the early testing on the new consumer! This might be a bug. I
wonder if it could also be explained by partition rebalancing. In the
current implementation, a rebalance will clear the old positions (including
those that were seeked to). I think it's debatable whether this
Hey Stefan,
I only see a commit in the failure case. Were you planning to use
auto-commits otherwise? You'd probably want to handle all commits directly
or you'd always be left guessing. But even if you did, I think the main
problem is that your process could fail before a needed commit is sent
Hey Stevo,
The new consumer doesn't have any threads of its own, so I think
construction should be fairly cheap.
-Jason
On Sun, Jul 19, 2015 at 2:13 PM, Stevo Slavić ssla...@gmail.com wrote:
Hello Guozhang,
It would be enough if consumer group could, besides at construction time,
be set
Hey Zhuo,
I suspect the authorization errors are occurring when the producer tries to
fetch topic metadata. Since authorization wasn't supported in 0.8.2, it
probably ignores the errors silently and retries. I think this has been
fixed in the 0.9.0 branch if you want to give it a try.
Thanks,
Hey Luke,
I agree the null check seems questionable. I went ahead and created
https://issues.apache.org/jira/browse/KAFKA-2805. At least we should have a
comment clarifying why the check is correct.
-Jason
On Tue, Nov 10, 2015 at 2:15 PM, Luke Steensen <
luke.steen...@braintreepayments.com>
I added KAFKA-2691 as well, which improves client handling of authorization
errors.
-Jason
On Mon, Nov 2, 2015 at 10:25 AM, Becket Qin wrote:
> Hi Jun,
>
> I added KAFKA-2722 as a blocker for 0.9. It fixes the ISR propagation
> scalability issue we saw.
>
> Thanks,
>
>
Hi Siyuan,
Your understanding about assign/subscribe is correct. We think of topic
subscription as enabling automatic assignment as opposed to doing manual
assignment through assign(). We don't currently allow them to be mixed.
Can you elaborate on your findings with respect to using one thread per
Hi Martin,
Thanks for reporting this problem. I think maybe we're just not doing a
very good job of handling auto-commit errors internally and they end up
spilling into user logs. I added a JIRA to address this issue:
https://issues.apache.org/jira/browse/KAFKA-2860.
-Jason
On Wed, Nov 18, 2015
separate threads(consuming from 2
> different brokers concurrently). That seems a more optimal solution than
> another, right?
>
> On Tue, Nov 17, 2015 at 2:54 PM, Jason Gustafson <ja...@confluent.io>
> wrote:
>
> > Hi Siyuan,
> >
> > Your understanding about
Hey Hema,
I'm not too familiar with ZkClient, but I took a look at the code and it
seems like there may still be a race condition around reconnects which
could cause the NPE you're seeing. I left a comment on the github issue on
the slim chance I'm not wrong:
> So what is the temporary workaround for this until its fixed? For now, we
> just restart the app server having this issue, but we keep seeing this
> issue time and again.
>
>
> -Original Message-
> From: Jason Gustafson [mailto:ja...@confluent.io]
> Sent: Thursday, Sep
The major changes in 0.9 are for the new consumer. At the moment, the
design is spread across a couple documents:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Consumer+Rewrite+Design
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal
I'm trying
Looks like you need to use a different MessageFormatter class, since it was
renamed in 0.9. Instead use something like
"kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter".
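Something along these lines (paths and hosts are placeholders; you may also need to pass a consumer config with exclude.internal.topics=false to read the internal topic):

```shell
bin/kafka-console-consumer.sh --zookeeper localhost:2181 \
  --topic __consumer_offsets \
  --formatter "kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter"
```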
-Jason
On Wed, Dec 2, 2015 at 10:57 AM, Dhyan Muralidharan <
d.muralidha...@yottaa.com> wrote:
> I have this
:06 AM, Jason Gustafson <ja...@confluent.io> wrote:
> Hi Martin,
>
> I'm also not sure why the poll timeout would affect this. Perhaps the
> handler is still doing work (e.g. sending requests) when the record set is
> empty?
>
> As a general rule, I would recommend
Hey Tao, other than high latency between the brokers and the consumer, I'm
not sure what would cause this. Can you turn on debug logging and run
again? I'm looking for any connection problems or metadata/fetch request
errors. And I have to ask a dumb question, how do you know that more
messages
ding some threading to my consumer and add
> more partitions to my topics.
>
> That is all fine, but it doesn't really explain why increasing poll timeout
> made the problem go away :-/
>
> Martin
>
> On 30 November 2015 at 19:30, Jason Gustafson <ja...@confluent.io> wr
sages to consume. BTW I commit
> offset manually so the lag should accurately reflect how many messages
> remaining.
>
> I will turn on debug logging and test again.
>
> On Wed, 2 Dec 2015 at 07:17 Jason Gustafson <ja...@confluent.io> wrote:
>
> > Hey Tao, oth
Hi Li,
I think reducing the client's complexity and improving performance were two
of the main reasons for the change. The rebalance protocol on top of
Zookeeper was difficult to implement correctly, and I think a number of
Kafka clients never actually got it working. Removing it as a dependence
group id do I?
>
> On Thu, Dec 10, 2015, 2:37 PM Jason Gustafson <ja...@confluent.io> wrote:
>
> > And just to be clear, the broker is on 0.9? Perhaps you can enable debug
> > logging and send a snippet?
> >
> > -Jason
> >
> > On Thu, Dec 10, 2015 at 12
broker list to consume.
>
> On Thu, Dec 10, 2015, 2:18 PM Jason Gustafson <ja...@confluent.io> wrote:
>
> > Hi Kevin,
> >
> > At the moment, the timeout parameter in poll() really only applies when
> the
> > consumer has an active partition assignment. In particular,
your response. See replies inline:
>
> On Tuesday, December 15, 2015, Jason Gustafson <ja...@confluent.io> wrote:
>
> > Hey Jens,
> >
> > I'm not sure I understand why increasing the session timeout is not an
> > option. Is the issue that there's too much un
olutions really seem like
> the absolute best solution to our problem as long we can overcome this
> issue.
>
> Thanks,
> Jens
>
> On Tuesday, December 15, 2015, Jason Gustafson <ja...@confluent.io>
Hey Rajiv,
I agree the Set/List inconsistency is a little unfortunate (another
annoying one is pause() which uses a vararg). I think we should probably
add the following variants:
assign(Collection)
subscribe(Collection)
pause(Collection)
I can open a JIRA to fix this. As for returning the
> List wayTooManyCopies = new ArrayList<>(yetAnotherCopy);
> consumer.assign(wayTooManyCopies);
>
> Thanks,
> Rajiv
>
>
> On Tue, Dec 15, 2015 at 2:35 PM, Jason Gustafson <ja...@confluent.io>
> wrote:
>
> > Hey Rajiv,
> >
> > I agree the Set/List inco
to set the session timeout according to
the expected time to handle a single message. It'd be a bit more work to
implement this, but if the use case is common enough, it might be
worthwhile.
-Jason
On Tue, Dec 15, 2015 at 10:31 AM, Jason Gustafson <ja...@confluent.io>
wrote:
> Hey Jens,
At the moment, there is no direct way to do this, but you could use the
commit API to include metadata with each committed offset:
public void commitSync(final Map
offsets);
public OffsetAndMetadata committed(TopicPartition partition);
The OffsetAndMetadata
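A sketch of that usage (topic, offset, and the metadata string are invented; assumes an already-constructed consumer):

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Attach an application-defined string to the committed offset:
TopicPartition tp = new TopicPartition("my-topic", 0);
Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
offsets.put(tp, new OffsetAndMetadata(100L, "state-checkpoint-123"));
consumer.commitSync(offsets);

// Later (possibly from another consumer instance), read it back:
OffsetAndMetadata committed = consumer.committed(tp);
String metadata = committed.metadata(); // whatever string was stored with the commit
```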
Hey Brian,
I think we've made these methods public again in trunk, but that won't help
you with 0.9. Another option would be to write a parser yourself since the
format is fairly straightforward. This would let you remove a dependence on
Kafka internals which probably doesn't have strong
or the particular partitions and close the consumer. Is this solution
> viable?
>
> On Tue, 5 Jan 2016 at 09:56 Jason Gustafson <ja...@confluent.io> wrote:
>
> > Hey Tao,
> >
> > Interesting that you're seeing a lot of overhead constructing the new
> > consumer in
Hey Tao,
Interesting that you're seeing a lot of overhead constructing the new
consumer instance each time. Granted it does have to fetch topic metadata
and lookup the coordinator, but I wouldn't have expected that to be a big
problem. How long is it typically taking?
-Jason
On Mon, Jan 4, 2016
> The reason we put the reset offset outside of the consumer process is that
> we can keep the consumer code as generic as possible since the offset reset
> process is not needed for all consumer logics.
>
> On Tue, 5 Jan 2016 at 11:18 Jason Gustafson <ja...@confluent.io> w
Hi Rajiv,
Answers below:
i) How do I get the last log offset from the Kafka consumer?
To get the last offset, first call seekToEnd() and then use position().
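In code (the partition is illustrative; assumes the consumer already has it assigned):

```java
import org.apache.kafka.common.TopicPartition;

TopicPartition tp = new TopicPartition("my-topic", 0);
consumer.seekToEnd(tp);                  // 0.9 API takes varargs TopicPartition
long lastOffset = consumer.position(tp); // offset the next produced message would get
```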
ii) If I ask the consumer to seek to the beginning via the consumer
> .seekToBeginning(newTopicPartition) call, will it handle the
be by passing a timeout parameter. I only
> use manual assignments so I am hoping that there is no consequence of
> infrequent heart beats etc through poll starvation.
>
> Thanks,
> Rajiv
>
>
>
> On Wed, Jan 6, 2016 at 1:58 PM, Jason Gustafson <ja...@confluent.io>
Hi Ben,
The new consumer is single-threaded, so each instance should be given a
dedicated thread. Using multiple consumers in the same thread won't really
work as expected because poll() blocks while the group is rebalancing. If
both consumers aren't actively calling poll(), then they won't be both
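The one-consumer-per-thread arrangement can be sketched like this (buildProps() and process() are hypothetical helpers):

```java
import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

ExecutorService executor = Executors.newFixedThreadPool(2);
for (int i = 0; i < 2; i++) {
    executor.submit(() -> {
        // Each thread owns its own consumer; KafkaConsumer is not thread-safe.
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(buildProps());
        consumer.subscribe(Arrays.asList("my-topic"));
        try {
            while (!Thread.currentThread().isInterrupted()) {
                for (ConsumerRecord<String, String> record : consumer.poll(100))
                    process(record);
            }
        } finally {
            consumer.close();
        }
    });
}
```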
artitions
>
> I experienced this yesterday and was wondering why Kafka allows commits to
> partitions from other consumers than the assigned one. Does any one know of
> the reasoning behind this?
>
> Martin
> On 5 Jan 2016 18:29, "Jason Gustafson" <ja...@confluent.io>
Hey Pradeep,
Can you include the output from one of the ConsumerDemo runs?
-Jason
On Mon, Dec 21, 2015 at 9:47 PM, pradeep kumar
wrote:
> Can someone please help me on this.
>
>
I took your demo code and ran it locally. So far I haven't seen any
duplicates. In addition to the output showing duplicates, it might be
helpful to include your producer code.
Thanks,
Jason
On Tue, Dec 22, 2015 at 11:02 AM, Jason Gustafson <ja...@confluent.io>
wrote:
> Hey Pradeep,
The consumer metadata request was renamed to group coordinator request
since the coordinator plays a larger role in 0.9 for managing groups, but
its protocol format is exactly the same on the wire.
As Gwen suggested, I would recommend trying the new consumer API which
saves the trouble of
Can you provide some more detail? What version of Kafka are you using?
Which consumer are you using? Are you getting errors in the consumer logs?
It would probably be helpful to see your consumer configuration as well.
-Jason
On Tue, Nov 24, 2015 at 7:18 AM, Kudumula, Surender <
Hey Martin,
At a glance, it looks like your consumer's session timeout is expiring.
This shouldn't happen unless there is a delay between successive calls to
poll which is longer than the session timeout. It might help if you include
a snippet of your poll loop and your configuration (i.e. any
Hey Siyuan,
The commit API should work the same regardless whether subscribe() or
assign() was used. Does this not appear to be working?
Thanks,
Jason
On Wed, Nov 18, 2015 at 4:40 PM, hsy...@gmail.com wrote:
> In the new API, the explicit commit offset method call only works
of
that group.
-Jason
On Fri, Nov 20, 2015 at 3:41 PM, Jason Gustafson <ja...@confluent.io> wrote:
> Hey Siyuan,
>
> The commit API should work the same regardless whether subscribe() or
> assign() was used. Does this not appear to be working?
>
> Thanks,
> Jason
>
> On
Hi Anatoly,
I spent a little time this afternoon updating the request types and error
codes. This wiki is getting a little difficult to manage, especially in
regard to error codes, so I opened KAFKA-2865 to hopefully improve the
situation. Probably we need to pull this documentation into the
Hey Richard,
Yeah, I think you're right. I think this is the same issue from KAFKA-2478,
which appears to have been forgotten about. I'll see if we can get the
patch merged.
-Jason
On Mon, Jan 11, 2016 at 4:27 PM, Richard Lee wrote:
> Apologies if this has been discussed
Looks like you might have bootstrap.servers pointed at Zookeeper. It should
point to the Kafka brokers instead. The behavior of poll() currently is to
block until the group's coordinator is found, but sending the wrong kind of
request to Zookeeper probably results in a server-side disconnect. In
FIG,
> "org.apache.kafka.common.serialization.StringDeserializer");
> consumer = new KafkaConsumer<>(props);
>
>
>
>
> Thanks.
>
> Howard
>
> On 1/11/16, 12:55 PM, "Jason Gustafson" <ja...@confluent.io> wrote:
>
> >Sorry, w
Sorry, wrong property, I meant enable.auto.commit.
-Jason
On Mon, Jan 11, 2016 at 9:52 AM, Jason Gustafson <ja...@confluent.io> wrote:
> Hi Howard,
>
> The offsets are persisted in the __consumer_offsets topic indefinitely.
> Since you're using manual commit
Hi Howard,
The offsets are persisted in the __consumer_offsets topic indefinitely.
Since you're using manual commit, have you ensured that auto.offset.reset
is disabled? It might also help if you provide a little more detail on how
you're verifying that offsets were lost.
-Jason
On Mon, Jan 11,
Hi Franco,
The new consumer combines the functionality of the older simple and
high-level consumers. When used in simple mode, you have to assign the
partitions that you want to read from using assign(). In this case, the
consumer works alone and not in a group. Alternatively, if you use the
Hey Jens,
The heartbeat response is used by the coordinator to tell group members
that the group needs to rebalance. For example, if a new member joins the
consumer group, then the coordinator will wait for the heartbeat from each
member and set a REBALANCE_NEEDED error code in the response.
Hey Rajiv,
Just to be clear, when you received the empty fetch response, did you check
the error codes? It would help to also include some more information (such
as broker and topic settings). If you can come up with a way to reproduce
it, that will help immensely.
Also, would you mind updating
The new Java consumer in 0.9.0 will not work with 0.8.2 since it depends on
the group management protocol built into Kafka, but the older consumer
should still work.
-Jason
On Thu, Feb 11, 2016 at 2:44 AM, Joe San wrote:
> I have a 0.9.0 version of the Kafka consumer.
We have them in the Confluent docs:
http://docs.confluent.io/2.0.0/kafka/monitoring.html#new-consumer-metrics.
-Jason
On Thu, Feb 11, 2016 at 4:40 AM, Avi Flax wrote:
> On Thursday, December 17, 2015 at 18:08, Guozhang Wang wrote:
> > We should add a section for that. Siyuan
Hey Alexey,
The API of the new consumer is designed around an event loop in which all
IO is driven by the poll() API. To make this work, you need to call poll()
in a loop (see the javadocs for examples). So in this example, when you
call commitAsync(), the request is basically just queued up to
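In other words, the loop looks something like this (process() is a hypothetical handler):

```java
// commitAsync() only queues the commit request; it is actually sent
// and completed during subsequent poll() calls.
while (running) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records)
        process(record);
    consumer.commitAsync();
}
```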
Hey Yifan,
As far as how the consumer works internally, there's not a big difference
between using a long timeout or a short timeout. Which you choose really
depends on the needs of your application. Typically people use a short
timeout in order to be able to break from the loop with a boolean
> Krzysztof
> On 26 January 2016 at 19:04:58, Jason Gustafson (ja...@confluent.io)
> wrote:
>
> Hey Krzysztof,
>
> So far I haven't had any luck figuring out the cause of the 5 second pause,
> but I've reproduced it with the old consumer on 0.8.2, so that rules out
> anything
Hey Rajiv,
Thanks for the detailed report. Can you go ahead and create a JIRA? I do
see the exceptions locally, but not nearly at the rate that you're
reporting. That might be a factor of the number of partitions, so I'll do
some investigation.
-Jason
On Wed, Jan 27, 2016 at 8:40 AM, Rajiv
That is correct. KIP-19 has the details:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-19+-+Add+a+request+timeout+to+NetworkClient
.
-Jason
On Fri, Jan 29, 2016 at 3:08 AM, tao xiao wrote:
> Hi team,
>
> I want to understanding the meaning of request.timeout.ms
Hey Tom,
Yes, it is possible that the poll() will rebalance and resume fetching for
a previously paused partition. First thought is to use a
ConsumerRebalanceListener to re-pause the partitions after the rebalance
completes. The rebalance listener offers two hooks: onPartitionsRevoked() is
called
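A sketch of the re-pausing approach (the paused-set bookkeeping is invented for illustration):

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

final Set<TopicPartition> paused = new HashSet<>(); // partitions we want kept paused
consumer.subscribe(Arrays.asList("my-topic"), new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // commit or checkpoint progress here before the partitions are lost
    }
    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        for (TopicPartition tp : partitions)
            if (paused.contains(tp))
                consumer.pause(tp); // re-pause; the 0.9 pause() takes varargs
    }
});
```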
Most of the use cases of pause/resume that I've seen work only on single
partitions (e.g in Kafka Streams), so the current varargs method is kind of
nice. It would also be nice to be able to do the following:
consumer.pause(consumer.assignment());
Both variants seem convenient in different
Hi Pierre,
Thanks for your persistence on this issue. I've gone back and forth on this
a few times. The current API can definitely be annoying in some cases, but
breaking compatibility still sucks. We do have the @Unstable annotation on
the API, but it's unclear what exactly it means and I'm
Nope. Pausing a partition just stops the consumer from sending any more
fetches for it. It will not trigger a partition reassignment. One thing to
be wary of, however, is that the partition will automatically be unpaused
after the next normal rebalance.
-Jason
On Mon, Feb 22, 2016 at 7:19 AM,
hile some consumers had
> partitions paused, those consumers that were paused would become unpaused?
>
> On Mon, Feb 22, 2016 at 2:02 PM, Jason Gustafson <ja...@confluent.io>
> wrote:
>
> > Nope. Pausing a partition just stops the consumer from sending any more
>
since I may not be assigned the same partitions as
> before.
>
> On Wed, Feb 24, 2016 at 1:44 PM, Jason Gustafson <ja...@confluent.io>
> wrote:
>
> > Sure, but in that case, the commits are still being stored in Kafka, so
> > resetting to the last committed posi
he topic/partition is
> resumed
> > and poll is called again. However, during this it's possible that the
> > consumers get restarted (as part of an upgrade, etc) or a consumer dies
> and
> > a new one starts up.
> >
> > On Mon, Feb 22, 2016 at 2:05 PM, Jason Gust
one being
> processed, I invoke "commitSync" passing it the map of commits to sync.
>
> On Wed, Feb 24, 2016 at 1:35 PM, Jason Gustafson <ja...@confluent.io>
> wrote:
>
> > I think the problem is the call to position() from within the callback.
> > When onA
Hey Guven,
This problem is what KIP-41 was created for:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-41%3A+KafkaConsumer+Max+Records
.
The patch for this was committed yesterday and will be included in 0.10. If
you need something in the shorter term, you could probably use the client
Hey Luke,
I took a look at the code and it does look like the whitelist argument is
handled differently between the old and new consumers. For the new
consumer, we just treat it as a raw regular expression, but the old
consumer does some preprocessing. We should probably do the preprocessing
in
Hi there,
I think what you're asking is how the group protocol can guarantee that
each partition is assigned to one and only consumer in the group at any
point in time. Is that right? The short answer is that it can't. Because of
unexpected pauses on the client (e.g. for garbage collection),
ing as well
>
> other than that, i will experiment with the pause() api, separate thread
> for the actual message processing and poll()'ing with all partitions paused
>
> guven
>
>
> > On 25 Feb 2016, at 20:19, Jason Gustafson <ja...@confluent.io> wrote:
> >
e a
> ticket with Confluent and attach the logs to it.
>
> Regards
> Venkat
>
>
>
> On 2/19/16, 11:22 AM, "Jason Gustafson" <ja...@confluent.io> wrote:
>
> >Hi Venkatesan,
> >
> >Autocreation of topics happens when the broker receives a to
t a work around.
> Consumer is definitely picking up messages with some delay.
>
> -Sam
>
>
> > On 22-Jan-2016, at 11:54 am, Jason Gustafson <ja...@confluent.io> wrote:
> >
> > Hi Krzysztof,
> >
> > This is definitely weird. I see the data in
Hi Krzysztof,
This is definitely weird. I see the data in the broker's send queue, but
there's a delay of 5 seconds before it's sent to the client. Can you create
a JIRA?
Thanks,
Jason
On Thu, Jan 21, 2016 at 11:30 AM, Samya Maiti
wrote:
> +1, facing same issue.
>
Apologies for the late arrival to this thread. There was a bug in the
0.9.0.0 release of Kafka which could cause the consumer to stop fetching
from a partition after a rebalance. If you're seeing this, please checkout
the 0.9.0 branch of Kafka and see if you can reproduce this problem. If you
can,
Rajiv
>
> On Mon, Jan 25, 2016 at 11:23 AM, Jason Gustafson <ja...@confluent.io>
> wrote:
>
> > Apologies for the late arrival to this thread. There was a bug in the
> > 0.9.0.0 release of Kafka which could cause the consumer to stop fetching
> > from a partition
Hey Krzysztof,
So far I haven't had any luck figuring out the cause of the 5 second pause,
but I've reproduced it with the old consumer on 0.8.2, so that rules out
anything specific to the new consumer. Can you tell me which os/jvm you're
seeing it with? Also, can you try changing the
l with do poll(0)
> does it renew the token?
> (2) What happens to the coordinator if all consumers die?
>
> Franco.
>
>
>
>
> 2016-01-15 19:30 GMT+01:00 Jason Gustafson <ja...@confluent.io>:
>
> > Hi Franco,
> >
> > The new consumer combines the funct
Woops. Looks like Alex got there first. Glad you were able to figure it out.
-Jason
On Thu, Feb 18, 2016 at 9:55 AM, Jason Gustafson <ja...@confluent.io> wrote:
> Hi Robin,
>
> It would be helpful if you posted the full code you were trying to use.
> How to seek largely depe
The consumer is single-threaded, so we only trigger commits in the call to
poll(). As long as you consume all the records returned from each poll
call, the committed offset will never get ahead of the consumed offset, and
you'll have at-least-once delivery. Note that the implication is that "
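The at-least-once loop that describes (process() is a hypothetical handler):

```java
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(1000);
    for (ConsumerRecord<String, String> record : records)
        process(record); // must finish before the commit below
    // Committing only after all returned records are processed keeps the
    // committed offset at or behind the consumed offset.
    consumer.commitSync();
}
```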
Hi Venkatesan,
Autocreation of topics happens when the broker receives a topic metadata
request. That should mean that both topics get created when the consumer
does the initial poll() since that is the first time that topic metadata
would be fetched (fetching topic metadata allows the consumer
Tough to answer. Definitely the rate of reported bugs has fallen. Other
than the one Becket found a few weeks back, I haven't seen anything major
since the start of the year. My advice would probably be "proceed with
caution."
-Jason
On Fri, Feb 19, 2016 at 1:06 PM, allen chan
To clarify, the bug I mentioned has been fixed in 0.9.0.1.
-Jason
On Fri, Feb 19, 2016 at 1:33 PM, Ismael Juma wrote:
> Even though we did not remove the beta label, all significant bugs we are
> aware of have been fixed (thanks Jason!). I'd say you should try it out. :)
>
>
Hey Tao,
This error indicates that a rebalance completed successfully before the
consumer could rejoin. Basically it works like this:
1. Consumer 1 joins the group and is assigned member id A
2. Consumer 1's session timeout expires before successfully heartbeating.
3. The group is rebalanced
Hey Rajiv,
That sounds suspiciously like one of the bugs from 0.9.0.0. Have you
updated kafka-clients to 0.9.0.1?
-Jason
On Mon, Mar 14, 2016 at 11:18 AM, Rajiv Kurian wrote:
> Has any one run into similar problems. I have experienced the same problem
> again. This time