The default value for max.poll.interval.ms is 5 minutes (300000 ms), so
if you are executing a poll regularly each 6 minutes, you will see
rebalancing.

2018-04-05 19:01 GMT-03:00 Scott Thibault <scott.thiba...@multiscalehn.com>:

> No, there is only one consumer in the group.
>
> On Thu, Apr 5, 2018 at 2:39 PM, Gabriel Giussi <gabrielgiu...@gmail.com>
> wrote:
>
> > Is there some other consumer (in the same process or another) using the
> > same group.id?
> >
> > 2018-04-05 14:36 GMT-03:00 Scott Thibault <scott.thiba...@multiscalehn.com>:
I'm using the Kafka 1.0.1 Java client with 1 consumer and 1 partition, and
using the ConsumerRebalanceListener I can see that the partition keeps
getting revoked and then reassigned. My consumer is in its own thread to
ensure poll is invoked regularly. Is there some other reason this might be
happening?
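For reference, a minimal sketch of the consumer configuration under discussion. The property names are the real Kafka client configs; the broker address, group id, and values are placeholders:

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "my-group");                // placeholder group id
        // Default is 300000 ms (5 minutes): polling less often than this
        // triggers a rebalance, matching the behavior described above.
        props.put("max.poll.interval.ms", "300000");
        // Fewer records per poll() makes it easier to finish processing
        // and call poll() again within max.poll.interval.ms.
        props.put("max.poll.records", "100");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("max.poll.interval.ms"));
    }
}
```

Raising max.poll.interval.ms is the other option, at the cost of slower detection of a genuinely stuck consumer.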
My read of the documentation is that no records should be returned when the
partition is paused. I have this consumer loop which is meant to keep the
heartbeat going while the processing is busy:

while (!closed.get) {
  val records = client.poll(timeout)
  if (records.count() > 0 &&
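The behavior described above (poll() returning no records for a paused partition while still keeping the consumer alive) can be sketched with MockConsumer, the in-memory test double that ships with kafka-clients. The topic name, key, and value here are made up:

```java
import java.time.Duration;
import java.util.Collections;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.MockConsumer;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;
import org.apache.kafka.common.TopicPartition;

public class PausedPollDemo {
    public static void main(String[] args) {
        MockConsumer<String, String> consumer =
            new MockConsumer<>(OffsetResetStrategy.EARLIEST);
        TopicPartition tp = new TopicPartition("events", 0); // topic name is made up
        consumer.assign(Collections.singletonList(tp));
        consumer.updateBeginningOffsets(Collections.singletonMap(tp, 0L));

        consumer.addRecord(new ConsumerRecord<>("events", 0, 0L, "k", "v"));
        consumer.pause(Collections.singleton(tp));

        // While paused, poll() should return no records for this partition,
        // even though one is queued.
        ConsumerRecords<String, String> whilePaused = consumer.poll(Duration.ofMillis(0));
        System.out.println("records while paused: " + whilePaused.count());

        consumer.resume(Collections.singleton(tp));
        ConsumerRecords<String, String> afterResume = consumer.poll(Duration.ofMillis(0));
        System.out.println("records after resume: " + afterResume.count());
    }
}
```

This is a sketch against the kafka-clients test double, not the author's code; with a real broker the same pause/resume pattern applies.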
Did you verify that the process has the correct limit applied?
cat /proc/<pid>/limits
--Scott Thibault
On Sun, Jul 31, 2016 at 4:14 PM, Kessiler Rodrigues <kessi...@callinize.com>
wrote:
> I’m still experiencing this issue…
>
> Here are the kafka logs.
>
> [2016-07-31 20:1
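A quick way to check the limits in question; the pgrep pattern for finding the broker process is an assumption and may need adjusting for your setup:

```shell
# Open-file limit of the current shell (a broker started from this
# shell inherits it unless overridden by systemd or similar)
ulimit -n

# Limit actually applied to a running broker process, if one exists
pid=$(pgrep -f kafka.Kafka | head -n 1)
if [ -n "$pid" ]; then
  grep "Max open files" "/proc/$pid/limits"
fi
```

Checking /proc/<pid>/limits matters because a limit raised in /etc/security/limits.conf does not apply to processes started before the change or started by an init system with its own limit settings.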
Hi all,
There was some discussion on the list a while back about a possible feature
to archive log segments when they expire rather than deleting them. Did
anything like that ever become a realization?
Thanks
--
*This e-mail is not encrypted. Due to the unsecured nature of unencrypted
e-mail, there may be some level of risk that the information in this e-mail
could be read by a third party.
Hi,
If we add a new broker and then assign it as a new replica for a topic,
does the entire log for the topic get copied to that new node or does the
new node just get new data?
Thanks
--Scott Thibault
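For context, assigning the new broker as a replica is expressed with a reassignment file like the following (topic name and broker ids are made up), passed to kafka-reassign-partitions.sh via --reassignment-json-file with --execute, then checked with --verify:

```json
{
  "version": 1,
  "partitions": [
    { "topic": "my-topic", "partition": 0, "replicas": [1, 2, 3] }
  ]
}
```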
, you can't simply access that data from another cluster b/c of metadata
being stored in ZooKeeper rather than in the log.
--Scott Thibault
On Mon, Jul 13, 2015 at 4:44 AM, Daniel Schierbeck
daniel.schierb...@gmail.com wrote:
Would it be possible to document how to configure Kafka to never delete
messages?
On Monday, 13 July 2015 at 15:41, Scott Thibault wrote:
We've tried to use Kafka not as a persistent store, but as a long-term
archival store. An outstanding issue we've had with that is that the
broker holds
software design. As mentioned above, aggregate topics using
key-based partitioning can help with this.
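A sketch of the broker settings usually involved when Kafka is asked to keep data indefinitely; values are illustrative, not a recommendation:

```properties
# Disable time- and size-based deletion at the broker level
log.retention.ms=-1
log.retention.bytes=-1
# Alternatively, per topic, keep only the latest record per key
# (set with: kafka-topics.sh ... --config cleanup.policy=compact)
```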
Regards,
On Wed, Jun 3, 2015 at 7:47 AM, Scott Thibault
scott.thiba...@multiscalehn.com wrote:
Hi,
I'm running into the common issue of too many files open by the broker.
While increasing ...
Is there some way to prevent the broker from holding an open descriptor for
every file?
--Scott Thibault
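One knob that reduces the number of files (and therefore descriptors) per partition is the segment size; a hedged sketch, with illustrative values close to the defaults:

```properties
# Larger segments mean fewer files per partition (default is ~1 GiB)
log.segment.bytes=1073741824
# Avoid rolling segments on a short timer (default is 168 hours)
log.roll.hours=168
```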