Re: [VOTE] 1.0.1 RC1

2018-02-14 Thread Guozhang Wang
+1 Ran tests, verified web docs. On Wed, Feb 14, 2018 at 6:00 PM, Satish Duggana wrote: > +1 (non-binding) > > - Ran testAll/releaseTarGzAll on 1.0.1-rc1 > tag > - Ran through quickstart of core/streams > > Thanks, >

Re: [VOTE] 1.0.1 RC1

2018-02-14 Thread Satish Duggana
+1 (non-binding) - Ran testAll/releaseTarGzAll on 1.0.1-rc1 tag - Ran through quickstart of core/streams Thanks, Satish. On Tue, Feb 13, 2018 at 11:30 PM, Damian Guy wrote: > +1 > > Ran tests, verified streams quickstart

Re: unable to find custom JMX metrics

2018-02-14 Thread Guozhang Wang
Salah, I'm cross-posting my answer from SO here: Looking at your code closely again, I realized you may have forgotten to add the metric to your sensor, i.e. you need to call `sensorStartTs.add(metricName, MeasurableStat)`, where `MeasurableStat` defines the type of the stat, such as Sum, Avg, Count, etc.
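
For reference, a minimal sketch of registering a metric on a sensor via the plain org.apache.kafka.common.metrics API -- the standalone Metrics registry, the "start-ts" sensor name, metric group, and the Avg/Count stats are illustrative, not taken from the original code:

import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Avg;
import org.apache.kafka.common.metrics.stats.Count;

public class CustomMetricExample {
    public static void main(String[] args) {
        Metrics metrics = new Metrics(); // standalone registry for illustration

        // A sensor only exposes a metric value once at least one stat is added to it.
        Sensor sensorStartTs = metrics.sensor("start-ts");
        MetricName avgName = metrics.metricName("start-ts-avg", "custom-group", "average recorded start timestamp");
        MetricName countName = metrics.metricName("start-ts-count", "custom-group", "number of recorded start timestamps");
        sensorStartTs.add(avgName, new Avg());     // the add() call the answer refers to
        sensorStartTs.add(countName, new Count());

        // Recording without the add() calls above produces no visible metric.
        sensorStartTs.record(System.currentTimeMillis());

        metrics.close();
    }
}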

Re: error when attempting a unit test of spring kafka producer

2018-02-14 Thread Ian Ewing
Also using these dependencies - Gradle: org.springframework.kafka:spring-kafka-test:1.1.7.RELEASE - Gradle: org.springframework.kafka:spring-kafka:1.3.2.RELEASE On Wed, Feb 14, 2018 at 2:13 PM, Ian Ewing wrote: > From my build.gradle: > > buildscript { >
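
For reference, a minimal sketch of a producer test against the spring-kafka-test 1.x embedded broker; the class name, topic name, and test body are made up for illustration:

import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.utils.KafkaTestUtils;

import java.util.Map;

public class ProducerSmokeTest {

    // Starts an in-process broker (and Zookeeper) for the duration of the test class.
    @ClassRule
    public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, "test-topic");

    @Test
    public void sendsWithoutError() throws Exception {
        Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka);
        KafkaTemplate<Integer, String> template =
                new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(producerProps));
        template.setDefaultTopic("test-topic");
        template.sendDefault("hello").get(); // block until the send is acknowledged
    }
}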

Re: unable to find custom JMX metrics

2018-02-14 Thread Matthias J. Sax
Cross-posted at SO: https://stackoverflow.com/questions/48745642/kstreams-streamsmetrics-recordthroughput-where-are-they-in-jconsole-adding-ow On 2/12/18 3:52 AM, Salah Alkawari wrote: > hi, > i have a processor that generates custom jmx metrics: > public class ProcessorJMX implements

Re: error when attempting a unit test of spring kafka producer

2018-02-14 Thread Ian Ewing
>From my build.gradle: buildscript { repositories { mavenCentral() } dependencies { classpath("org.springframework.boot:spring-boot-gradle-plugin:1.5.10.RELEASE") } } apply plugin: 'java' apply plugin: 'eclipse' apply plugin: 'idea' apply plugin:

Re: Kafka Streams 0.11 consumers losing offsets for all group.ids

2018-02-14 Thread Matthias J. Sax
Sorry for the long delay. Just rediscovered this... Hard to tell without logs. Can you still reproduce the issue? Debug logs for broker and stream application would be helpful to dig into it. -Matthias On 1/2/18 6:26 AM, Adam Gurson wrote: > Thank you for the response! The

Re: ProducerFencedException: Producer attempted an operation with an old epoch.

2018-02-14 Thread Matthias J. Sax
We discovered and fixed some bugs in the upcoming 1.0.1 and 1.1.0 releases. Maybe you can try those out? A ProducerFencedException should actually be self-healing and resolve over time. How long did the application retry to rebalance? Without logs, it's hard to tell what might be causing the issue
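
For context, the usual handling pattern with a plain transactional producer (Kafka Streams manages this internally) is that ProducerFencedException is not retriable and the fenced producer must be closed. A sketch, with placeholder topic, transactional.id, and broker address:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.ProducerFencedException;

import java.util.Properties;

public class TransactionalSendExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("transactional.id", "my-transactional-id"); // placeholder id
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions();
        try {
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
            producer.commitTransaction();
        } catch (ProducerFencedException e) {
            // Another producer with the same transactional.id has a newer epoch;
            // the fenced instance cannot continue and must be closed.
        } finally {
            producer.close();
        }
    }
}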

Re: Store not ready

2018-02-14 Thread Matthias J. Sax
What version do you use? Kafka Streams should be able to keep running while you restart your brokers. If not, it seems to be a bug in Kafka Streams itself. -Matthias On 2/3/18 7:39 PM, dizzy0ny wrote: > Hi, We have a recurring problem that I wonder if there is a better way to > solve.  Currently
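
On the client side, a common way to cope with a store that is temporarily not ready is to retry the lookup until it becomes queryable again. A sketch assuming a key-value store; the store name, key/value types, and 100 ms backoff are placeholders:

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class StoreLookup {

    // Retries until the named store is queryable again, e.g. after a rebalance or broker restart.
    public static ReadOnlyKeyValueStore<String, Long> waitForStore(KafkaStreams streams, String storeName)
            throws InterruptedException {
        while (true) {
            try {
                return streams.store(storeName, QueryableStoreTypes.<String, Long>keyValueStore());
            } catch (InvalidStateStoreException notReadyYet) {
                Thread.sleep(100); // store is migrating or still restoring; back off and retry
            }
        }
    }
}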

Re: how to enhance Kafka streaming Consumer rate ?

2018-02-14 Thread Matthias J. Sax
Is your network saturated? If yes, you can try to start more instances of Kafka Streams instead of running with multiple threads within one instance, to increase the available network capacity. -Matthias On 2/8/18 12:30 AM, ? ? wrote: > Hi: > I used kafka streaming for real time analysis. > and I put
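
As a sketch of the difference: every instance started with the same application.id joins the same group and splits the partitions, so extra instances on extra hosts add network capacity, while num.stream.threads only adds parallelism behind one host's NIC. The property values below are placeholders:

import org.apache.kafka.streams.StreamsConfig;

import java.util.Properties;

public class ScalingConfig {
    public static Properties streamsProps() {
        Properties props = new Properties();
        // All instances of the same app share this id and split the partitions among themselves.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        // Threads within a single instance share that host's network; more instances on
        // more hosts add network capacity, more threads here do not.
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);
        return props;
    }
}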

Re: Kafka Stream tuning.

2018-02-14 Thread Guozhang Wang
Hello Brilly, If you commit every second (note the commit interval unit is milliseconds, so 1000 means one second), and each commit takes 23 millis, you will get about that throughput. The questions are: 1) do you really need to commit every second? 2) if you really do, how to reduce it. For 2) since
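
As an illustration, the commit frequency Guozhang refers to is the Streams commit.interval.ms setting; apart from the 1000 ms figure from the thread, the values below are placeholders:

import org.apache.kafka.streams.StreamsConfig;

import java.util.Properties;

public class CommitIntervalConfig {
    public static Properties streamsProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        // 1000 ms = commit once per second; raising this value amortizes the ~23 ms
        // per-commit cost over more records if per-second commits are not required.
        props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000);
        return props;
    }
}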

Re: Kafka cluster instability

2018-02-14 Thread Ted Yu
For #2 and #3, you would get better stability if Zookeeper and Kafka get dedicated machines. Have you profiled the performance of the nodes where multiple processes ran (Zookeeper / Kafka / Druid)? What was disk and network IO like? Cheers On Wed, Feb 14, 2018 at 9:38 AM, Avinash Herle

Kafka cluster instability

2018-02-14 Thread Avinash Herle
Hi, I'm using Kafka version 0.11.0.2. In my cluster, I have 4 nodes running Kafka, of which 3 nodes are also running Zookeeper. I have a few producer processes that publish to Kafka and multiple consumer processes: a streaming engine (Spark) that ingests from Kafka and also publishes data to Kafka, and a

Re: Compression in Kafka

2018-02-14 Thread Uddhav Arote
Oh, that makes sense. So, to summarize: 1. producer and broker compression codecs different: the broker decompresses and re-compresses the message batches; 2. producer and broker compression codecs same (lz4 & lz4): retain the producer compression **; 3. producer and broker compression codec (lz4

Re: Compression in Kafka

2018-02-14 Thread Manikumar
It is not double compression. When I say re-compression, the brokers decompress the messages and compress them again with the new codec. On Wed, Feb 14, 2018 at 5:18 PM, Uddhav Arote wrote: > Thanks. > > I am using console-producer with following settings with lz4 broker >

Re: Compression in Kafka

2018-02-14 Thread Manikumar
If the broker "compression.type" is "producer", then the broker retains the original compression codec set by the producer. If the producer and broker codecs are different, then the broker recompresses the data using the broker "compression.type". On Wed, Feb 14, 2018 at 10:58 AM, Uddhav Arote
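
A small sketch of the two knobs involved; topic name, broker address, and serializers are placeholders. The producer picks its own codec, and the broker/topic-side compression.type decides whether that codec is retained ("producer") or rewritten:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class CompressedProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Producer-side codec: batches are sent to the broker compressed with lz4.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");

        // Broker/topic side: with compression.type=producer the broker keeps these lz4
        // batches as-is; with any other codec it decompresses and recompresses them.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}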

Re: Finding consumer group coordinator from CLI?

2018-02-14 Thread Manikumar
KIP-175/KAFKA-5526 added this support. It is part of the upcoming Kafka 1.1.0 release. On Wed, Feb 14, 2018 at 1:36 PM, Devendar Rao wrote: > Hi, Is there a way to find out the consumer group coordinator using kafka > sh util from CLI? Thanks >
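
Once on 1.1.0, the coordinator shows up in the new --state describe view added by KIP-175; group name and bootstrap address below are placeholders:

bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group --state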

Finding consumer group coordinator from CLI?

2018-02-14 Thread Devendar Rao
Hi, Is there a way to find out the consumer group coordinator using kafka sh util from CLI? Thanks