kafka-streams app(s) stopped consuming new events

2017-06-28 Thread Dmitriy Vsekhvalnov
Hi all, looking for some assistance in debugging a kafka-streams application. Kafka broker 0.10.2.1 - x3 node cluster; kafka-streams 0.10.2.1 - x2 application nodes x 1 stream thread each. In the streams configuration only: - SSL transport - kafka.streams.commitIntervalMs set to 5000 (instead of d
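For context, a minimal sketch (kafka-streams 0.10.2-era API; the application id, broker address, and keystore paths are hypothetical) of the configuration described above - SSL transport plus a 5-second commit interval instead of the 30-second default:

  import java.util.Properties;
  import org.apache.kafka.common.config.SslConfigs;
  import org.apache.kafka.streams.StreamsConfig;

  public class StreamsProps {
      public static Properties build() {
          Properties props = new Properties();
          props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");        // hypothetical
          props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9093");       // hypothetical SSL listener
          // Commit processed offsets every 5 seconds instead of the 30-second default.
          props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 5000);
          // SSL transport for the embedded consumer/producer clients.
          props.put("security.protocol", "SSL");
          props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/truststore.jks");
          props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "changeit");
          return props;
      }
  }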

Re: kafka-streams app(s) stopped consuming new events

2017-06-28 Thread Dmitriy Vsekhvalnov
could use more information. Can you share the > streams logs and broker logs? > > Have you confirmed messages are still being delivered to topics (via > console consumer)? > > Thanks, > Bill > > On Wed, Jun 28, 2017 at 8:24 AM, Dmitriy Vsekhvalnov < > dvsekhval...@gma

Re: kafka-streams app(s) stopped consuming new events

2017-06-28 Thread Dmitriy Vsekhvalnov
ms logs for the last 30 minutes up to > the time the application stopped processing records. > > Thanks, > Bill > > On Wed, Jun 28, 2017 at 9:04 AM, Dmitriy Vsekhvalnov < > dvsekhval...@gmail.com > > wrote: > > > Hi Bill, > > > > 1. sure, can extra

Re: kafka-streams app(s) stopped consuming new events

2017-06-28 Thread Dmitriy Vsekhvalnov
Wed, Jun 28, 2017 at 9:51 AM, Dmitriy Vsekhvalnov < > dvsekhval...@gmail.com > > wrote: > > > Here are logs: > > > > app: > > https://gist.github.com/dvsekhvalnov/f98afc3463f0c63b1722417e3710a8 > > e7#file-kafka-streams-log > > brokers: > >

Re: kafka-streams app(s) stopped consuming new events

2017-06-30 Thread Dmitriy Vsekhvalnov
> any chance you could start streaming with DEBUG logs on and collect those > logs? I’m hoping something shows up there. > > Thanks, > Eno > > > > On Jun 28, 2017, at 5:30 PM, Dmitriy Vsekhvalnov > wrote: > > > > Nothing for stat-change.log for giving time w

Re: kafka-streams app(s) stopped consuming new events

2017-06-30 Thread Dmitriy Vsekhvalnov
Set org.apache.kafka.streams to DEBUG. Here is gist: https://gist.github.com/dvsekhvalnov/b84b72349837f6c6394f1adfe18cdb61#file-debug-logs On Fri, Jun 30, 2017 at 12:37 PM, Dmitriy Vsekhvalnov < dvsekhval...@gmail.com> wrote: > Sure, how to enable debug logs? Just adjust logba

Re: kafka-streams app(s) stopped consuming new events

2017-06-30 Thread Dmitriy Vsekhvalnov
store. > It might be helpful if you take some thread dumps to see where it is > blocked. > > Thanks, > Damian > > On Fri, 30 Jun 2017 at 16:04 Dmitriy Vsekhvalnov > wrote: > > > Set org.apache.kafka.streams to DEBUG. > > > > Here is gist: > > > >

Re: kafka-streams app(s) stopped consuming new events

2017-06-30 Thread Dmitriy Vsekhvalnov
, 2017 at 6:33 PM, Damian Guy wrote: > Yep, if you take another thread dump is it in the same spot? > Which version of streams are you running? Are you using docker? > > Thanks, > Damian > > On Fri, 30 Jun 2017 at 16:22 Dmitriy Vsekhvalnov > wrote: > > &g

Re: kafka-streams app(s) stopped consuming new events

2017-06-30 Thread Dmitriy Vsekhvalnov
vior-in-1-core-environments > It seems like the same issue. > > Thanks, > Damian > > On Fri, 30 Jun 2017 at 17:16 Dmitriy Vsekhvalnov > wrote: > > > Yes, StreamThread-1 #93 daemon is still at at org.rocksdb.RocksDB.put. > > > > No, AWS machine. > > >

Re: kafka-streams app(s) stopped consuming new events

2017-07-03 Thread Dmitriy Vsekhvalnov
Thanks Damian! That was it: after setting the number of compaction threads higher than 1, it finally continued to consume the stream. On Fri, Jun 30, 2017 at 7:48 PM, Dmitriy Vsekhvalnov wrote: > Yeah, can confirm there is only 1 vCPU. > > Okay, will try that configuration and get ba
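For reference, a hedged sketch of the fix arrived at here (assuming kafka-streams 0.10.2 with the bundled RocksDB JNI; the exact option calls are an assumption, not quoted from the thread): a custom RocksDBConfigSetter that gives RocksDB more than one background thread, so compaction cannot starve the single stream thread on a 1-vCPU box:

  import java.util.Map;
  import org.apache.kafka.streams.state.RocksDBConfigSetter;
  import org.rocksdb.Options;

  public class CompactionThreadsConfigSetter implements RocksDBConfigSetter {
      @Override
      public void setConfig(String storeName, Options options, Map<String, Object> configs) {
          // On a 1-vCPU host RocksDB can end up with a single background thread; the
          // thread dumps above showed the stream thread stuck in org.rocksdb.RocksDB.put.
          options.setIncreaseParallelism(2);
          options.setMaxBackgroundCompactions(2);
      }
  }

  // Wired in via the streams config:
  //   props.put("rocksdb.config.setter", CompactionThreadsConfigSetter.class);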

Consumers re-consuming messages again after re-balance?

2017-07-03 Thread Dmitriy Vsekhvalnov
Hi all, looking for some explanations. We are running 2 instances of a consumer (same consumer group) and getting slightly weird behavior after 3 days of inactivity. Env: kafka broker 0.10.2.1, consumer java 0.10.2.1 + spring-kafka + enable.auto.commit = true (all default settings). Scenario: 1. r

Re: Consumers re-consuming messages again after re-balance?

2017-07-03 Thread Dmitriy Vsekhvalnov
lanced, you will likely start consuming the > messages > > from the earliest offset again. I'd recommend setting this higher than > the > > default of 24 hours. > > > > Thanks, > > Damian > > > > On Mon, 3 Jul 2017 at 15:56 Dmitriy Vsekhvalnov >

Re: Consumers re-consuming messages again after re-balance?

2017-07-04 Thread Dmitriy Vsekhvalnov
Thanks guys, it was exactly `offsets.retention.minutes`. Figured out that `enable.auto.commit` was actually set to false, somewhere deep in the Spring properties, and that's what had been causing offset removal when idle. On Mon, Jul 3, 2017 at 7:04 PM, Dmitriy Vsekhvalnov wrote: >
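A minimal sketch of the interplay described here (hypothetical broker address and group id): with auto-commit effectively off and no manual commits, an idle group's offsets expire after offsets.retention.minutes, and the next rebalance falls back to auto.offset.reset:

  import java.util.Properties;
  import org.apache.kafka.clients.consumer.ConsumerConfig;

  public class IdleGroupConfig {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");      // hypothetical
          props.put(ConsumerConfig.GROUP_ID_CONFIG, "service-group");             // hypothetical
          // If this ends up false (e.g. overridden deep in Spring properties) and the app
          // never commits manually, the committed offsets eventually expire broker-side.
          props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
          // Where the group restarts from once its committed offsets have been removed.
          props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
          System.out.println(props);
          // Broker side (server.properties), not a client property:
          //   offsets.retention.minutes should exceed the longest expected idle period.
      }
  }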

Where to run kafka-consumer-groups.sh from?

2017-07-07 Thread Dmitriy Vsekhvalnov
Hi all, a question about lag checking. We've tried to periodically sample consumer lag with: kafka-consumer-groups.sh --bootstrap-server broker:9092 --new-consumer --group service-group --describe. It's all fine, but depending on the host we run it from it gives different results. E.g.: - when runn
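For comparison, the same check with the full broker list passed to --bootstrap-server (hypothetical host names), which is one way to rule out differences caused by which single broker the tool happens to contact:

  kafka-consumer-groups.sh \
    --bootstrap-server broker1:9092,broker2:9092,broker3:9092 \
    --new-consumer \
    --group service-group \
    --describe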

Re: Where to run kafka-consumer-groups.sh from?

2017-07-07 Thread Dmitriy Vsekhvalnov
> > Also, could you paste some results from the console printout? > > On 7 July 2017 at 12:47, Dmitriy Vsekhvalnov > wrote: > > > Hi all, > > > > question about lag checking. We've tried to periodically sample consumer > > lag with: > > > >

Re: Where to run kafka-consumer-groups.sh from?

2017-07-10 Thread Dmitriy Vsekhvalnov
Guys, let me bump this one again. Still looking for comments about the kafka-consumer-groups.sh tool. Thank you. On Fri, Jul 7, 2017 at 3:14 PM, Dmitriy Vsekhvalnov wrote: > I've tried 3 brokers on the command line, like that: > > /usr/local/kafka/bin/kafka-consumer-groups.sh --bootstrap-s

Re: Where to run kafka-consumer-groups.sh from?

2017-07-12 Thread Dmitriy Vsekhvalnov
ommit interval makes everything blury anyways. If you can specify your > pain more precisely maybe we can work around it. > > Best Jan > > > On 10.07.2017 10:31, Dmitriy Vsekhvalnov wrote: > >> Guys, let me up this one again. Still looking for comments about >> kafka

Correct way to increase replication factor for kafka-streams internal topics

2017-10-04 Thread Dmitriy Vsekhvalnov
Hi all, what is the correct way to increase the RF for the existing internal topics that kafka-streams creates (repartition topics)? We are increasing the RF for the source topics and would like to align kafka-streams as well. The app-side configuration is simple, but what to do with the existing internal topics? Remov

Re: Correct way to increase replication factor for kafka-streams internal topics

2017-10-05 Thread Dmitriy Vsekhvalnov
application-reset-tool > - https://www.confluent.io/blog/data-reprocessing-with-kafka-streams-resetting-a-streams-application/ > > If you want to avoid any downtime, deploy the application with a new > application.id to reprocess all data. Afterward, stop the old > applicat
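Besides the reset-tool / new-application.id route quoted above, a commonly used alternative (not taken from this thread; the topic name, partition count, and broker ids below are hypothetical) is the standard partition reassignment tool, which raises the replication factor of an existing internal topic when the reassignment JSON lists more replicas per partition:

  # increase-rf.json - list every partition of the internal topic with the desired replica set
  {"version": 1,
   "partitions": [
     {"topic": "my-app-some-store-changelog", "partition": 0, "replicas": [1, 2, 3]},
     {"topic": "my-app-some-store-changelog", "partition": 1, "replicas": [2, 3, 1]},
     {"topic": "my-app-some-store-changelog", "partition": 2, "replicas": [3, 1, 2]}
   ]}

  kafka-reassign-partitions.sh --zookeeper zk:2181 \
    --reassignment-json-file increase-rf.json --execute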

kafka-streams configuration for high resilience (crashing when broker crashing) ?

2017-10-05 Thread Dmitriy Vsekhvalnov
Hi all, we were testing Kafka cluster outages by randomly crashing broker nodes (1 of 3, for instance) while still keeping the majority of replicas available. From time to time our kafka-streams app crashes with an exception: [ERROR] [StreamThread-1] [org.apache.kafka.streams.processor.internals.StreamThread

Re: kafka-streams configuration for high resilience (crashing when broker crashing) ?

2017-10-05 Thread Dmitriy Vsekhvalnov
; > On Thu, 5 Oct 2017 at 18:45, Dmitriy Vsekhvalnov < > dvsekhval...@gmail.com > > wrote: > > > Hi all, > > > > we were testing Kafka cluster outages by randomly crashing broker nodes > (1 > > of 3 for instance) while still keeping majority of replica

Re: kafka-streams configuration for high resilience (crashing when broker crashing) ?

2017-10-05 Thread Dmitriy Vsekhvalnov
iguration-parameters > > 2017-10-05 19:12 GMT+02:00 Dmitriy Vsekhvalnov : > > > replication.factor set to match source topics (3 in our case). > > > > what do you mean by retries? I don't see a retries property in the StreamsConfig > > class. > > > > On Thu

Re: kafka-streams configuration for high resilience (crashing when broker crashing) ?

2017-10-05 Thread Dmitriy Vsekhvalnov
Ok, we can try that. Some other settings to try? On Thu, Oct 5, 2017 at 20:42 Stas Chizhov wrote: > I would set it to Integer.MAX_VALUE > > 2017-10-05 19:29 GMT+02:00 Dmitriy Vsekhvalnov : > > > I see, but producer.retries set to 10 by default. > > > > What value
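A hedged sketch of the two settings discussed in this thread (kafka-streams 0.11-era API; application id and broker list are hypothetical): replication.factor matched to the source topics, and the embedded producer's retries raised so transient broker failures are retried rather than fatal:

  import java.util.Properties;
  import org.apache.kafka.clients.producer.ProducerConfig;
  import org.apache.kafka.streams.StreamsConfig;

  public class ResilienceConfig {
      public static Properties build() {
          Properties props = new Properties();
          props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");                 // hypothetical
          props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "b1:9092,b2:9092,b3:9092");     // hypothetical
          // Match the replication factor of the source topics (3 in this thread).
          props.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 3);
          // Keep the embedded producer retrying through broker failovers instead of
          // surfacing a fatal exception (kafka-streams' producer default is 10, per this thread).
          props.put(StreamsConfig.producerPrefix(ProducerConfig.RETRIES_CONFIG), Integer.MAX_VALUE);
          return props;
      }
  }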

kafka broker losing offsets?

2017-10-06 Thread Dmitriy Vsekhvalnov
Hi all, we have several times faced a situation where a consumer group started to re-consume old events from the beginning. Here is the scenario: 1. x3 broker kafka cluster on top of x3 node zookeeper 2. RF=3 for all topics 3. log.retention.hours=168 and offsets.retention.minutes=20160 4. running sustained load
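For reference, the broker-side settings from this scenario as they would appear in server.properties (the unclean-leader line reflects the change described later in the thread, where unclean leader election was disabled):

  # server.properties (values from this scenario)
  log.retention.hours=168
  offsets.retention.minutes=20160
  # discussed later in the thread: do not elect out-of-sync replicas as leader
  unclean.leader.election.enable=false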

Re: kafka broker losing offsets?

2017-10-06 Thread Dmitriy Vsekhvalnov
Hi Ted, Broker: v0.11.0.0 Consumer: kafka-clients v0.11.0.0 auto.offset.reset = earliest On Fri, Oct 6, 2017 at 6:24 PM, Ted Yu wrote: > What's the value for auto.offset.reset ? > > Which release are you using ? > > Cheers > > On Fri, Oct 6, 2017 at 7:

Re: kafka broker losing offsets?

2017-10-06 Thread Dmitriy Vsekhvalnov
wrote: > > > > > > > normally, log.retention.hours (168hrs) should be higher than > > > > offsets.retention.minutes (336 hrs)? > > > > > > > > > > > > On Fri, Oct 6, 2017 at 8:58 PM, Dmitriy Vsekhvalnov < > > >

Re: kafka broker losing offsets?

2017-10-06 Thread Dmitriy Vsekhvalnov
s partition assigned during > next rebalance it can commit old stale offset - can this be the case? > > > On Fri, 6 Oct 2017 at 17:59, Dmitriy Vsekhvalnov < > dvsekhval...@gmail.com > > wrote: > > > Reprocessing same events again - is fine for us (idempotent). While >

Re: kafka broker losing offsets?

2017-10-06 Thread Dmitriy Vsekhvalnov
@olamobile.com.invalid>: > > > is there a way to read messages on a topic partition from a specific node > > we that we choose (and not by the topic partition leader) ? > > I would like to read myself that each of the __consumer_offsets partition > > replicas have the same

Re: kafka broker losing offsets?

2017-10-06 Thread Dmitriy Vsekhvalnov
ve you checked > offsets of your consumer - right after offsets jump back - does it start > from the topic start or does it go back to some random position? Have you > checked if all offsets are actually being committed by consumers? > > fre 6 okt. 2017 kl. 20:59 skrev Dmitriy Vsek

Re: kafka broker losing offsets?

2017-10-09 Thread Dmitriy Vsekhvalnov
ned on? If killing 100 is the only > way to reproduce the problem, it is possible with unclean leader election > turned on that leadership was transferred to out of ISR follower which may > not have the latest high watermark > On Sat, Oct 7, 2017 at 3:51 AM Dmitriy Vsekhvalnov >

kafka-streams dying if can't create internal topics

2017-10-10 Thread Dmitriy Vsekhvalnov
Hi all, still doing disaster testing with the Kafka cluster; when crashing several brokers at once we sometimes observe an exception in the kafka-streams app about the inability to create internal topics: [WARN ] [org.apache.kafka.streams.processor.internals.InternalTopicManager] [Could not create internal topic
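Until the behavior is fixed (the Jira discussed below), a hedged sketch (kafka-streams 1.0-style API; topics and ids are placeholders) of registering an uncaught exception handler so a dead stream thread turns into a fast, supervised process restart instead of a silently dying app:

  import java.util.Properties;
  import org.apache.kafka.streams.KafkaStreams;
  import org.apache.kafka.streams.StreamsBuilder;
  import org.apache.kafka.streams.StreamsConfig;

  public class SupervisedStreamsApp {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");   // hypothetical
          props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "b1:9092");       // hypothetical

          StreamsBuilder builder = new StreamsBuilder();
          builder.stream("input-topic").to("output-topic");                   // placeholder topology

          KafkaStreams streams = new KafkaStreams(builder.build(), props);
          streams.setUncaughtExceptionHandler((thread, throwable) -> {
              // A StreamThread died (e.g. internal topic creation timed out after broker
              // crashes); fail fast so an external supervisor restarts the whole process.
              System.err.println("Stream thread " + thread.getName() + " died: " + throwable);
              System.exit(1);
          });
          streams.start();
      }
  }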

Re: kafka-streams dying if can't create internal topics

2017-10-10 Thread Dmitriy Vsekhvalnov
Hi Matthias, thanks. Would you mind pointing me to the correct Jira URL where I can file an issue? Thanks again. On Tue, Oct 10, 2017 at 8:38 PM, Matthias J. Sax wrote: > Yes, please file a Jira. We need to fix this. Thanks a lot! > > -Matthias > > On 10/10/17 5:24 AM, Dmitriy Vs

Re: kafka-streams dying if can't create internal topics

2017-10-10 Thread Dmitriy Vsekhvalnov
orite search engine... > > > -Matthias > > > On 10/10/17 10:48 AM, Dmitriy Vsekhvalnov wrote: > > Hi Matthias, > > > > thanks. Would you mind point me to correct Jira URL where i can file an > > issue? > > > > Thanks again. > > > > On Tue, Oct 10, 2

Re: kafka broker losing offsets?

2017-10-11 Thread Dmitriy Vsekhvalnov
expected that Kafka will fail symmetrically with respect to any broker. On Mon, Oct 9, 2017 at 6:26 PM, Dmitriy Vsekhvalnov wrote: > Hi tao, > > we had unclean leader election enabled at the beginning. But then disabled > it and also reduced the 'max.poll.records' value. It help

Re: kafka broker losing offsets?

2017-10-11 Thread Dmitriy Vsekhvalnov
ffsets after broker > restart 0.11.0.0" from Phil Luckhurst, it sounds similar. > > Thanks, > > Ben > > On Wed, Oct 11, 2017 at 4:44 PM Dmitriy Vsekhvalnov < > dvsekhval...@gmail.com> > wrote: > > > Hey, want to resurrect this thread. > > > > De

Re: kafka broker losing offsets?

2017-10-20 Thread Dmitriy Vsekhvalnov
port it here if it ever happens again in the future. > We’ll also upgrade all our clusters to 0.11.0.1 in the next days. > > 🤞🏻! > > > On 11 Oct 2017 at 17:47, Dmitriy Vsekhvalnov wrote: > > > > Yeah, it just popped up in my list. Thanks, I'll take a look. >

kafka-streams died with org.apache.kafka.clients.consumer.CommitFailedException

2018-02-27 Thread Dmitriy Vsekhvalnov
Good day everybody, we faced an unexpected kafka-streams application death after 3 months of operation, with the exception below. Our setup: - 2 instances (both died) of the kafka-streams app - kafka-streams 0.11.0.0 - kafka broker 1.0.0 Sounds like a rebalance happened and something went terribly wrong this t
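For context, a hedged sketch of the consumer knobs usually involved in this CommitFailedException (kafka-streams 0.11-era API; values are illustrative only - and, per the reply below, the more complete fix is upgrading to 1.0, where streams recovers from this automatically):

  import java.util.Properties;
  import org.apache.kafka.clients.consumer.ConsumerConfig;
  import org.apache.kafka.streams.StreamsConfig;

  public class CommitTimeoutConfig {
      public static Properties build() {
          Properties props = new Properties();
          props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");   // hypothetical
          props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "b1:9092");       // hypothetical
          // Allow more time between poll() calls (e.g. during long rebalances or state
          // restores) before the coordinator evicts the member and commits start failing.
          props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG), 600000);
          // Smaller batches per poll() shorten the processing time between polls.
          props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_RECORDS_CONFIG), 100);
          return props;
      }
  }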

Re: kafka-streams died with org.apache.kafka.clients.consumer.CommitFailedException

2018-02-27 Thread Dmitriy Vsekhvalnov
would not die but recover from the error automatically. > > Thus, I would recommend to upgrade to 1.0 eventually. > > > -Matthias > > On 2/27/18 8:06 AM, Dmitriy Vsekhvalnov wrote: > > Good day everybody, > > > > we faced unexpected kafka-streams application

kafka-streams, late data, tumbling windows aggregations and event time

2018-03-05 Thread Dmitriy Vsekhvalnov
Good morning, we have a simple use case where we want to count the number of events per hour, grouped by some fields from the event itself. Our event timestamp is embedded in the message itself (JSON) and we are using a trivial custom timestamp extractor (which is called and works as expected). What we are facing is
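For reference, a minimal sketch of the kind of "trivial custom timestamp extractor" described here, assuming the value has already been deserialized by a JSON serde into a Map and carries a hypothetical "timestamp" field in epoch milliseconds:

  import java.util.Map;
  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.streams.processor.TimestampExtractor;

  public class EmbeddedJsonTimestampExtractor implements TimestampExtractor {
      @Override
      public long extract(ConsumerRecord<Object, Object> record, long previousTimestamp) {
          Object value = record.value();
          if (value instanceof Map) {
              Object ts = ((Map<?, ?>) value).get("timestamp");   // hypothetical field name
              if (ts instanceof Number) {
                  return ((Number) ts).longValue();               // event time from the payload
              }
          }
          // Fall back to the record's own (producer/broker) timestamp otherwise.
          return record.timestamp();
      }
  }

  // Registered via the default.timestamp.extractor property (kafka-streams 1.0 name):
  //   props.put("default.timestamp.extractor", EmbeddedJsonTimestampExtractor.class);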

Re: kafka-streams, late data, tumbling windows aggregations and event time

2018-03-05 Thread Dmitriy Vsekhvalnov
e as the > one in the payload, if you do observe that is not the case, this is > unexpected. In that case could you share your complete code snippet, > especially how input stream "in" is defined, and your config properties > defined for us to investigate? > > Guozhang &g

Re: kafka-streams, late data, tumbling windows aggregations and event time

2018-03-05 Thread Dmitriy Vsekhvalnov
it with the append time. So when the messages are fetched by downstream > processors which always use the metadata timestamp extractor, it will get > the append timestamp set by brokers. > > > Guozhang > > On Mon, Mar 5, 2018 at 9:53 AM, Dmitriy Vsekhvalnov < > dvse

Re: kafka-streams, late data, tumbling windows aggregations and event time

2018-03-05 Thread Dmitriy Vsekhvalnov
onfig.html#TOPIC_PREFIX > > > We are also discussing to always override the log.message.timestamp.type > config for internal topics to CreateTime, I vaguely remember there is a > JIRA open for it in case you are interested in contributing to Streams > library. > > Gu
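A hedged sketch of the workaround referenced above (kafka-streams 1.0 topicPrefix mechanism; application id and broker address are hypothetical): ask streams to create its internal topics with message.timestamp.type=CreateTime so they preserve the extracted event time:

  import java.util.Properties;
  import org.apache.kafka.common.config.TopicConfig;
  import org.apache.kafka.streams.StreamsConfig;

  public class InternalTopicTimestampConfig {
      public static Properties build() {
          Properties props = new Properties();
          props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");   // hypothetical
          props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "b1:9092");       // hypothetical
          // Create internal (repartition/changelog) topics with CreateTime so downstream
          // processors see the extracted event time rather than the broker append time.
          props.put(StreamsConfig.topicPrefix(TopicConfig.MESSAGE_TIMESTAMP_TYPE_CONFIG), "CreateTime");
          return props;
      }
  }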

Re: kafka-streams, late data, tumbling windows aggregations and event time

2018-03-06 Thread Dmitriy Vsekhvalnov
What is the logic behind that? Honestly, I expected sink messages to get a "now" timestamp. On Mon, Mar 5, 2018 at 11:48 PM, Guozhang Wang wrote: > Sounds great! :) > > On Mon, Mar 5, 2018 at 12:28 PM, Dmitriy Vsekhvalnov < > dvsekhval...@gmail.com > > wrote:

Re: Developing with kafka and other non jvm languages

2018-03-26 Thread Dmitriy Vsekhvalnov
Hi Gary, I don't have experience with other Go libs (they seem to be way younger), but Sarama is quite low level, which is at the same time powerful and, to some extent, more complicated to work with. With the pure Sarama client you have to implement wildcard (or pattern-based) topic subscription yours

Kafka-streams: mix Processor API with windowed grouping

2018-04-06 Thread Dmitriy Vsekhvalnov
Hey, good day everyone, another kafka-streams Friday question. We hit a wall with the DSL implementation and would like to try the low-level Processor API. What we are looking for is to: - repartition the incoming source stream by grouping records by some fields + windowing (hourly, daily, etc.) - and
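For reference, the pure-DSL shape of the "group by some fields + hourly window" part (kafka-streams 1.0-style API; topic name, serdes, and the key-extraction helper are hypothetical), which is roughly what the replies suggest exhausting before dropping down to the Processor API:

  import java.util.concurrent.TimeUnit;
  import org.apache.kafka.common.serialization.Serdes;
  import org.apache.kafka.streams.StreamsBuilder;
  import org.apache.kafka.streams.kstream.KStream;
  import org.apache.kafka.streams.kstream.KTable;
  import org.apache.kafka.streams.kstream.Serialized;
  import org.apache.kafka.streams.kstream.TimeWindows;
  import org.apache.kafka.streams.kstream.Windowed;

  public class HourlyCounts {
      public static void build(StreamsBuilder builder) {
          KStream<String, String> events = builder.stream("events");          // hypothetical topic
          KTable<Windowed<String>, Long> counts = events
              // Re-keying by the grouping fields is what triggers the repartition.
              .groupBy((key, value) -> extractGroupingKey(value),
                       Serialized.with(Serdes.String(), Serdes.String()))
              .windowedBy(TimeWindows.of(TimeUnit.HOURS.toMillis(1)))
              .count();
          // counts can then be written out or queried interactively; elided here.
      }

      private static String extractGroupingKey(String jsonValue) {
          return jsonValue;   // placeholder: pull the relevant fields out of the payload
      }
  }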

Re: Kafka-streams: mix Processor API with windowed grouping

2018-04-06 Thread Dmitriy Vsekhvalnov
elaborate a bit more on this logic > so maybe we can still around it around with the pure high-level DSL? > > > Guozhang > > > On Fri, Apr 6, 2018 at 8:49 AM, Dmitriy Vsekhvalnov < > dvsekhval...@gmail.com> > wrote: > > > Hey, good day everyone, > > >

Re: Kafka-streams: mix Processor API with windowed grouping

2018-04-09 Thread Dmitriy Vsekhvalnov
ck > if the new count replaces and existing count in the array/list. > > > -Matthias > > On 4/6/18 10:36 AM, Dmitriy Vsekhvalnov wrote: > > Thanks guys, > > > > ok, question then - is it possible to use state store with .aggregate()? > > >

Re: Kafka-streams: mix Processor API with windowed grouping

2018-04-09 Thread Dmitriy Vsekhvalnov
> Btw: here is an example of TopN: > https://github.com/confluentinc/kafka-streams-examples/blob/4.0.0-post/src/main/java/io/confluent/examples/streams/TopArticlesExampleDriver.java > > -Matthias > > On 4/9/18 4:46 AM, Dmitriy Vsekhvalnov wrote: >

Re: Kafka-streams: mix Processor API with windowed grouping

2018-04-10 Thread Dmitriy Vsekhvalnov
fka 1.0. > > The example implements a custom (fault-tolerant) state store backed by CMS, > which is then used in a Transformer. The Transformer is then plugged into > the DSL for a mix-and-match setup of DSL and Processor API. > > > On Mon, Apr 9, 2018 at 9:34 PM, Dmitriy Vs

Re: Debugging message timestamps in Sarama

2018-07-23 Thread Dmitriy Vsekhvalnov
Hey Craig, what exact problem do you have with the Sarama client? On Mon, Jul 23, 2018 at 5:11 PM Craig Ching wrote: > Hi! > > I'm working on debugging a problem with how message timestamps are handled > in the sarama client. In some cases, the sarama client won't associate a > timestamp with a messa

Re: Debugging message timestamps in Sarama

2018-07-24 Thread Dmitriy Vsekhvalnov
ient.ConsumePartition(topic, 0, sarama.OffsetOldest) > if err != nil { > panic(err) > } > > defer func() { > if err := client.Close(); err != nil { > panic(err) > } > }() > > // Count how many message processed > msgCount := 0 > > go func() { > for { > select {

Re: Debugging message timestamps in Sarama

2018-07-26 Thread Dmitriy Vsekhvalnov
ng wrote: > > > > Hey, thanks for that Dmitriy! I'll have a look. > > > >> On Tue, Jul 24, 2018 at 11:18 AM Dmitriy Vsekhvalnov < > dvsekhval...@gmail.com> wrote: > >> Not really associated with Sarama. > >> > >> But your i

[Ann]: Experimental generic web ui for kafka. open source, maintainers wanted.

2020-10-09 Thread Dmitriy Vsekhvalnov
Good time of the day everyone, I hope this is an appropriate mailing list, my apologies if not. We'd like to announce that we open-sourced a web interface to the Kafka broker that allows you to view, search, and examine what's going on on the wire (messages, keys, etc.) from the browser (bye, by

Kafka: LogAppendTime + compressed messages

2017-05-30 Thread Dmitriy Vsekhvalnov
Hi all, we noticed something when the kafka broker is configured with log.message.timestamp.type=LogAppendTime to timestamp incoming messages on its own, and the producer is configured to use any kind of compression. What we end up with on the wire for the consumer is: - outer compressed envelope - LogAppendTime, by
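A small hedged sketch (plain Java consumer, 0.10.x-era API; broker address, group id, and topic name are hypothetical) for observing the behavior described here - with LogAppendTime on the broker, each delivered record should report type LogAppendTime and the broker's append time, even inside a compressed batch:

  import java.util.Collections;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.ConsumerConfig;
  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.consumer.ConsumerRecords;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.serialization.StringDeserializer;

  public class TimestampProbe {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");   // hypothetical
          props.put(ConsumerConfig.GROUP_ID_CONFIG, "timestamp-probe");
          props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
          props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

          try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
              consumer.subscribe(Collections.singletonList("some-topic"));     // hypothetical topic
              ConsumerRecords<String, String> records = consumer.poll(5000);
              for (ConsumerRecord<String, String> r : records) {
                  // With log.message.timestamp.type=LogAppendTime the type should read
                  // LogAppendTime and the value should be the broker append time.
                  System.out.printf("offset=%d type=%s ts=%d%n", r.offset(), r.timestampType(), r.timestamp());
              }
          }
      }
  }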

Re: Kafka: LogAppendTime + compressed messages

2017-05-30 Thread Dmitriy Vsekhvalnov
://github.com/apache/kafka/blob/trunk/clients/src/ > main/java/org/apache/kafka/common/record/DefaultRecord.java#L341 > > Hope this helps. > > Out of curiosity, which clients do this differently? > > Ismael > > On Tue, May 30, 2017 at 8:30 AM, Dmitriy Vsekhvalnov < > d