You could fetch the message at the current consumer offset, examine its
timestamp, and compare it with the timestamp of the message at the high
water mark. That's what I do today, so I'm also all ears if there is a
more obvious solution.
On Feb 6, 2018 9:40 PM, "Jeff Widman" wrote:
>
I would like to monitor how far behind our consumer groups are using
wall-clock time in addition to the normal integer offset lag. This way
services that have tight latency SLAs can alert when a consumer falls
behind by N minutes.
Is there a way to do this by querying the cluster/brokers?
It's ea
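A minimal sketch of the timestamp-based approach described above, assuming the kafka-python client; the helper names, broker address, group, and topic here are hypothetical, not part of any Kafka API:

```python
import time

def wall_clock_lag_seconds(msg_timestamp_ms, now_ms=None):
    """Seconds between 'now' and the timestamp of the message sitting at
    the group's committed offset, i.e. how far behind in wall-clock time
    the consumer is."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return max(0.0, (now_ms - msg_timestamp_ms) / 1000.0)

def committed_message_lag(bootstrap, group, topic, partition):
    """Fetch the record at the committed offset and return its lag in seconds."""
    from kafka import KafkaConsumer, TopicPartition  # pip install kafka-python
    tp = TopicPartition(topic, partition)
    consumer = KafkaConsumer(bootstrap_servers=bootstrap, group_id=group,
                             enable_auto_commit=False)
    consumer.assign([tp])
    committed = consumer.committed(tp)
    if committed is None:
        return None  # group has no committed offset for this partition yet
    consumer.seek(tp, committed)
    batches = consumer.poll(timeout_ms=5000)
    consumer.close()
    for records in batches.values():
        if records:
            return wall_clock_lag_seconds(records[0].timestamp)
    return 0.0  # nothing at or after the committed offset: caught up
```

An alert for a tight-SLA service can then simply compare the returned value against N minutes.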
+1
Checked signature
Ran test suite where there was one flaky test (KAFKA-5889):
kafka.metrics.MetricsTest > testMetricsLeak FAILED
    java.lang.AssertionError: expected:<1365> but was:<1368>
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failNotEquals(Assert.java:8
By the way, is this a bug that was fixed in a later release?
https://issues.apache.org/jira/browse/KAFKA-6030
Can you please confirm?
On Tue, Feb 6, 2018 at 1:38 PM, Ted Yu wrote:
> In the log file, the log cleaner abort preceded the log deletion.
>
> On Tue, Feb 6, 2018 at 1:36 PM, Raghav w
In the log file, the log cleaner abort preceded the log deletion.
On Tue, Feb 6, 2018 at 1:36 PM, Raghav wrote:
> Ted
>
> Sorry, I did not understand your point here.
>
> On Tue, Feb 6, 2018 at 1:09 PM, Ted Yu wrote:
>
> > bq. but is aborted.
> >
> > See the following in LogManager#asyncDelete():
Ted
Sorry, I did not understand your point here.
On Tue, Feb 6, 2018 at 1:09 PM, Ted Yu wrote:
> bq. but is aborted.
>
> See the following in LogManager#asyncDelete():
>
> //We need to wait until there is no more cleaning task on the log to
> be deleted before actually deleting it.
>
>
Hi Tony,
Your Streams configs look good to me, and the additional streams logs from
StreamThread are normal operational logs that are not related to the issue.
I suspect there is a network partition between your client and the broker
node; to investigate which host this `node -1` is referring to
bq. but is aborted.
See the following in LogManager#asyncDelete():
// We need to wait until there is no more cleaning task on the log to
// be deleted before actually deleting it.
if (cleaner != null && !isFuture) {
  cleaner.abortCleaning(topicPartition)
FYI
On Tue, Feb 6, 2018
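The ordering in that snippet can be illustrated with a toy model (all names here are mine, not Kafka's): any in-flight cleaning for the partition is aborted first, and only then is the log deleted, which is why the "aborted" line shows up in log-cleaner.log right before a deletion.

```python
class ToyCleaner:
    """Records the order of operations, standing in for LogCleaner."""
    def __init__(self):
        self.events = []

    def abort_cleaning(self, topic_partition):
        # Corresponds to cleaner.abortCleaning(topicPartition) in the snippet.
        self.events.append(("abort-cleaning", topic_partition))

    def delete_log(self, topic_partition):
        self.events.append(("delete-log", topic_partition))

def async_delete(cleaner, topic_partition, is_future=False):
    """Toy version of the ordering in LogManager#asyncDelete: abort any
    cleaning task on the log before actually deleting it."""
    if cleaner is not None and not is_future:
        cleaner.abort_cleaning(topic_partition)
    cleaner.delete_log(topic_partition)

cleaner = ToyCleaner()
async_delete(cleaner, "topic043-27")
print(cleaner.events)
# [('abort-cleaning', 'topic043-27'), ('delete-log', 'topic043-27')]
```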
From the log-cleaner.log, I see the following. It seems like it resumed but
was aborted. Not sure how to read this:
[2018-02-06 18:06:22,178] INFO Compaction for partition topic043-27 is
resumed (kafka.log.LogCleaner)
[2018-02-06 18:06:22,178] INFO The cleaning for partition topic043-27 is
aborted
Could you provide any broker/ZK logs? ZooKeeper and Kafka log a lot of
info during housekeeping ops such as log retention; there must be
something there.
On 6 Feb 2018 8:24 pm, "Raghav" wrote:
> Hi
>
> While configuring a topic, we are specifying the retention bytes per topic
> as follows
Linux. CentOS.
On Tue, Feb 6, 2018 at 12:26 PM, M. Manna wrote:
> Is this Windows or Linux?
>
> On 6 Feb 2018 8:24 pm, "Raghav" wrote:
>
> > Hi
> >
> > While configuring a topic, we are specifying the retention bytes per
> topic
> > as follows. Our retention time in hours is 48.
> >
> > *bin/ka
We are on Kafka 0.10.2.1 and facing a similar issue. Upgrading to 1.0 is
disruptive. Is there any other way this can be circumvented?
Thanks.
On Fri, Jan 12, 2018 at 1:24 AM, Wim Van Leuven <
wim.vanleu...@highestpoint.biz> wrote:
> awesome!
>
> On Thu, 11 Jan 2018 at 23:48 Thunder Stumpges
> wrote:
>
> >
Is this Windows or Linux?
On 6 Feb 2018 8:24 pm, "Raghav" wrote:
> Hi
>
> While configuring a topic, we are specifying the retention bytes per topic
> as follows. Our retention time in hours is 48.
>
> bin/kafka-topics.sh --zookeeper zk-1:2181,zk-2:2181,zk-3:2181 --create
> --topic AmazingTopi
Hi
While configuring a topic, we are specifying the retention bytes per topic
as follows. Our retention time in hours is 48.
bin/kafka-topics.sh --zookeeper zk-1:2181,zk-2:2181,zk-3:2181 --create
--topic AmazingTopic --replication-factor 2 --partitions 64 --config
retention.bytes=16106127360 --
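One thing worth double-checking with a command like this: retention.bytes is enforced per partition, not per topic, so the disk footprint is roughly the per-partition cap times the partition count times the replication factor. A quick back-of-the-envelope with the numbers above:

```python
retention_bytes = 16_106_127_360   # the --config retention.bytes value above
partitions = 64
replication_factor = 2

per_partition_gib = retention_bytes / 2**30          # cap per partition
per_topic_gib = per_partition_gib * partitions       # log data for the topic
cluster_gib = per_topic_gib * replication_factor     # total across the cluster
print(per_partition_gib, per_topic_gib, cluster_gib)
# 15.0 960.0 1920.0
```

Whichever of retention.bytes or the 48-hour time retention is hit first triggers deletion for a given segment.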
Hi everyone,
I wanted to share with you some common utilities we open sourced (
http://engineering.cerner.com/blog/cerner-open-sources-its-kafka-utilities/).
We hope others find them useful.
Bryan
Hi Guozhang,
Thanks for looking into this. Below are the stream config values.
INFO 2018-02-02 08:33:25.708 [main] org.apache.kafka.streams.StreamsConfig
- StreamsConfig values:
application.id = cv-v1
application.server =
bootstrap.servers = [172.31.10.35:9092, 172.31.14.8:9092]
buffered.records