Re: Compression in Kafka

2018-02-14 Thread Manikumar
It is not double compression. When I say re-compression, the brokers
decompress the messages and compress them again with the
new codec.

On Wed, Feb 14, 2018 at 5:18 PM, Uddhav Arote  wrote:

> Thanks.
>
> I am using the console producer with the following settings, with lz4 as the
> broker compression codec:
> 1. None producer compression codec
> 2. Snappy producer compression codec
> 3. lz4 producer compression codec
>
> I send a 354 Byte message with each of the above settings. However, I do
> not see any kind of double compression happening, when the producer and
> broker compression codecs are different
>
> *Output 1:*
>  offset: 0 position: 0 CreateTime: 1518607686194 isvalid: true payloadsize:
> 61 magic: 1 compresscodec: LZ4CompressionCodec crc:  3693540371 payload:
>   compressed 354Byte message
>
>  using --deep-iteration
>  offset: 0 position: 0 CreateTime: 1518607686194 isvalid: true payloadsize:
> 354 magic: 1 compresscodec: NoCompressionCodec crc: 4190573446 payload: 354
> byte message
>
>
> *Output 2:*
> offset: 7 position: 517 CreateTime: 1518608039723 isvalid: true
> payloadsize: 61 magic: 1 compresscodec: LZ4CompressionCodec crc: 4075439033
> payload: compressed 354B message
>
> using --deep-iteration
> offset: 7 position: 517 CreateTime: 1518608039723 isvalid: true
> payloadsize: 354 magic: 1 compresscodec: NoCompressionCodec crc: 4061704088
> payload: Same 354B message
>
> *Output 3:*
> offset: 11 position: 883 CreateTime: 1518608269618 isvalid: true
> payloadsize: 61 magic: 1 compresscodec: LZ4CompressionCodec crc: 981370250
> payload: compressed 354B message
>
> using --deep-iteration
> offset: 11 position: 883 CreateTime: 1518608269618 isvalid: true
> payloadsize: 354 magic: 1 compresscodec: NoCompressionCodec crc: 468622988
> payload: same 354B message
>
> Please note the compression codecs in the --deep-iteration case,
>  Case 1 is OK, but in case 2 shouldn't it be SnappyCompressionCodec, and in case 3
> maybe LZ4CompressionCodec?
>
> Or is it only visible when the messages are batched into larger batches?
>
> Thanks
> Uddhav
>
> On Wed, Feb 14, 2018 at 6:05 PM, Manikumar 
> wrote:
>
> >   If the broker "compression.type" is "producer", then the broker retains
> > the original compression codec set by the producer.
> >   If the producer and broker codecs are different, then broker recompress
> > the data using broker "compression.type".
> >
> > On Wed, Feb 14, 2018 at 10:58 AM, Uddhav Arote 
> > wrote:
> >
> > > Hi Kafka users,
> > >
> > > I am trying to understand the behavior of compression in Kafka.
> Consider
> > a
> > > scenario, where producer sets compression.codec "snappy" and broker's
> > > compression.code  "lz4"?
> > > In this scenario, what is the behavior of the compression?
> > >
> > > As far as I have understood is the following,
> > > The messages compressed by the producer are wrapped in the wrapper
> > message
> > > and send to the broker. If the broker compression.codec is "producer",
> > the
> > > message is written as is to the log.
> > > In the code,
> > >
> > > https://github.com/apache/kafka/blob/962bc638f9c2ab249e5008a587ee78
> > > e3ba35fcb9/core/src/main/scala/kafka/log/LogValidator.scala#L218
> > >
> > >
> > > what I understand is that if the producer and broker codecs are not
> same,
> > > then the compression should happen again.
> > >
> > > But I am not sure about this. Can somebody tell me how this works?
> > >
> > > Thanks,
> > > Uddhav
> > >
> >
>
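
For readers following along, here is a minimal sketch of the producer side of
this thread (the broker address, topic, and message are placeholders, and the
codec choice is only illustrative):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class CompressionSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // Batches are compressed on the producer with this codec; whether the broker
        // keeps it or recompresses depends on the broker/topic "compression.type".
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "a small test message"));
        }
    }
}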


Re: Finding consumer group coordinator from CLI?

2018-02-14 Thread Manikumar
KIP-175/KAFKA-5526 added this support. This is part of the upcoming Kafka
1.1.0 release.

On Wed, Feb 14, 2018 at 1:36 PM, Devendar Rao 
wrote:

> Hi,  Is there a way to find out the consumer group coordinator using kafka
> sh util from CLI? Thanks
>
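
For anyone landing on this thread later: KIP-175 extends the describe output of
kafka-consumer-groups.sh with a --state view that includes the coordinator. If I
recall the new options correctly, an invocation on 1.1.0+ looks roughly like this
(the group name and broker address are placeholders):

bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group my-group --state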


Re: Compression in Kafka

2018-02-14 Thread Manikumar
  If the broker "compression.type" is "producer", then the broker retains
the original compression codec set by the producer.
  If the producer and broker codecs are different, then the broker recompresses
the data using the broker's "compression.type".

On Wed, Feb 14, 2018 at 10:58 AM, Uddhav Arote 
wrote:

> Hi Kafka users,
>
> I am trying to understand the behavior of compression in Kafka. Consider a
> scenario where the producer sets compression.codec to "snappy" and the broker's
> compression.codec to "lz4".
> In this scenario, what is the behavior of the compression?
>
> As far as I have understood, the following happens:
> The messages compressed by the producer are wrapped in a wrapper message
> and sent to the broker. If the broker compression.codec is "producer", the
> message is written as-is to the log.
> In the code,
>
> https://github.com/apache/kafka/blob/962bc638f9c2ab249e5008a587ee78
> e3ba35fcb9/core/src/main/scala/kafka/log/LogValidator.scala#L218
>
>
> what I understand is that if the producer and broker codecs are not the same,
> then the compression should happen again.
>
> But I am not sure about this. Can somebody tell me how this works?
>
> Thanks,
> Uddhav
>
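
To make the broker side concrete, the setting under discussion is the broker (or
per-topic) "compression.type"; a sketch of the server.properties entry, with the
value purely illustrative:

# "producer" keeps whatever codec the producer used; an explicit codec such as
# gzip, snappy, or lz4 makes the broker recompress batches that arrive with a
# different codec.
compression.type=producer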


Re: Compression in Kafka

2018-02-14 Thread Uddhav Arote
Oh, that makes sense.
So, to summarize
1. producer and broker compression codecs different: the broker
decompresses and re-compresses the message batches
2. producer and broker compression codecs same: (lz4 & lz4) -- retain the
producer compression **
3. producer and broker compression codec (lz4 and 'producer') -- retain the
original compression codec

Are cases 2 and 3 the same process? **
Why is there a need to recompress when the codecs are different?

Thanks

On Wed, Feb 14, 2018 at 8:53 PM, Manikumar 
wrote:

> It is not double compression. When I say re-compression, the brokers
> decompress the messages and compress them again with the
> new codec.
>
> On Wed, Feb 14, 2018 at 5:18 PM, Uddhav Arote 
> wrote:
>
> > Thanks.
> >
> > I am using console-producer with following settings with lz4 broker
> > compression codec
> > 1. None producer compression codec
> > 2. Snappy producer compression codec
> > 3. lz4 producer compression codec
> >
> > I send a 354 Byte message with each of the above settings. However, I do
> > not see any kind of double compression happening, when the producer and
> > broker compression codecs are different
> >
> > *Output 1:*
> >  offset: 0 position: 0 CreateTime: 1518607686194 isvalid: true
> payloadsize:
> > 61 magic: 1 compresscodec: LZ4CompressionCodec crc:  3693540371 payload:
> >   compressed 354Byte message
> >
> >  using --deep-iteration
> >  offset: 0 position: 0 CreateTime: 1518607686194 isvalid: true
> payloadsize:
> > 354 magic: 1 compresscodec: NoCompressionCodec crc: 4190573446 payload:
> 354
> > byte message
> >
> >
> > *Output 2:*offset: 7 position: 517 CreateTime: 1518608039723 isvalid:
> true
> > payloadsize: 61 magic: 1 compresscodec: LZ4CompressionCodec crc:
> 4075439033
> > payload: compressed 354B message
> >
> > using --deep-iteration
> > offset: 7 position: 517 CreateTime: 1518608039723 isvalid: true
> > payloadsize: 354 magic: 1 compresscodec: NoCompressionCodec crc:
> 4061704088
> > payload: Same 354B message
> >
> > *Output 3:*
> > offset: 11 position: 883 CreateTime: 1518608269618 isvalid: true
> > payloadsize: 61 magic: 1 compresscodec: LZ4CompressionCodec crc:
> 981370250
> > payload: compressed 354B message
> >
> > using --deep-iteration
> > offset: 11 position: 883 CreateTime: 1518608269618 isvalid: true
> > payloadsize: 354 magic: 1 compresscodec: NoCompressionCodec crc:
> 468622988
> > payload: same 354B message
> >
> > Please note the compression codecs in the --deep-iteration case,
> >  Case 1 is OK, but in case 2 shouldn't it be SnappyCompression and 3 may
> be
> > LZ4Compression
> >
> > Or is it visible when the message batch into large batches?
> >
> > Thanks
> > Uddhav
> >
> > On Wed, Feb 14, 2018 at 6:05 PM, Manikumar 
> > wrote:
> >
> > >   If the broker "compression.type" is "producer", then the broker
> retains
> > > the original compression codec set by the producer.
> > >   If the producer and broker codecs are different, then broker
> recompress
> > > the data using broker "compression.type".
> > >
> > > On Wed, Feb 14, 2018 at 10:58 AM, Uddhav Arote 
> > > wrote:
> > >
> > > > Hi Kafka users,
> > > >
> > > > I am trying to understand the behavior of compression in Kafka.
> > Consider
> > > a
> > > > scenario, where producer sets compression.codec "snappy" and broker's
> > > > compression.code  "lz4"?
> > > > In this scenario, what is the behavior of the compression?
> > > >
> > > > As far as I have understood is the following,
> > > > The messages compressed by the producer are wrapped in the wrapper
> > > message
> > > > and send to the broker. If the broker compression.codec is
> "producer",
> > > the
> > > > message is written as is to the log.
> > > > In the code,
> > > >
> > > > https://github.com/apache/kafka/blob/962bc638f9c2ab249e5008a587ee78
> > > > e3ba35fcb9/core/src/main/scala/kafka/log/LogValidator.scala#L218
> > > >
> > > >
> > > > what I understand is that if the producer and broker codecs are not
> > same,
> > > > then the compression should happen again.
> > > >
> > > > But I am not sure about this. Can somebody tell me how this works?
> > > >
> > > > Thanks,
> > > > Uddhav
> > > >
> > >
> >
>
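
For reference, the per-message codec shown in the outputs above can be inspected
with the log dump tool; the listings in this thread look like they came from an
invocation along these lines (the segment path is a placeholder):

bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /var/kafka-logs/test-topic-0/00000000000000000000.log \
  --print-data-log --deep-iteration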


Finding consumer group coordinator from CLI?

2018-02-14 Thread Devendar Rao
Hi,  Is there a way to find out the consumer group coordinator using kafka
sh util from CLI? Thanks


Kafka cluster instablility

2018-02-14 Thread Avinash Herle
Hi,

I'm using Kafka version 0.11.0.2. In my cluster, I have 4 nodes running Kafka,
of which 3 are also running Zookeeper. I have a few producer processes that
publish to Kafka and multiple consumer processes: a streaming engine
(Spark) that ingests from Kafka and also publishes data back to Kafka, and a
distributed data store (Druid) which reads all messages from Kafka and
stores them in the DB. Druid also uses the same Zookeeper cluster being used by
Kafka for cluster state management.

*Kafka Configs:*
1) No replication being used
2) Number of network threads 30
3) Number of IO threads 8
4) Machines have 64GB RAM and 16 cores
5) 3 topics with 64 partitions per topic

*Questions:*

1) *Partitions going offline*
I frequently see partitions going offline, which increases the scheduling
delay of the Spark application and makes the input rate jittery. I
tried enabling replication too, to see if it helped with the problem. It
didn't make much of a difference. What could be the cause of this issue? Lack
of resources or cluster misconfiguration? Could the cause be the large number of
receiver processes?

*2) Colocation of Zookeeper and Kafka:*
As I mentioned above, I'm running 3 nodes with both Zookeeper and Kafka
colocated. Both the components are containerized, so they are running
inside docker containers. I found a few blogs that suggested not colocating
them for performance reasons. Is it necessary to run them on dedicated
machines?

*3) Using same Zookeeper cluster across different components*
In my cluster, I use the same Zookeeper cluster for state management of the
Kafka cluster and the Druid cluster. Could this cause instability of the
overall system?

Hope I've covered all the necessary information needed. Please let me know
if more information about my cluster is needed.

Thanks in advance,
Avinash
-- 

Excuse brevity and typos. Sent from mobile device.


Re: Kafka Streams 0.11 consumers losing offsets for all group.ids

2018-02-14 Thread Matthias J. Sax
Sorry for the long delay. Just rediscovered this...

Hard to tell without logs. Can you still reproduce the issue? Debug logs
for the broker and the Streams application would be helpful to dig into it.

-Matthias

On 1/2/18 6:26 AM, Adam Gurson wrote:
> Thank you for the response! The offsets.topic.replication.factor is set to
> 2 for Cluster A (the size of the cluster). It is 3 for Cluster B, but the
> number of in-sync replicas was manually increased to 4 (cluster size) for
> the __consumer_offsets topic after the cluster was created.
> 
> In addition, these topics are written to at least once a minute, so it's
> not the case that a retention interval is being exceeded and the offsets
> being purged as far as I can tell.
> 
> On Fri, Dec 22, 2017 at 4:01 PM, Matthias J. Sax 
> wrote:
> 
>> Thanks for reporting this.
>>
>> What is your `offsets.topic.replication.factor`?
>>
>>
>>
>> -Matthias
>>
>>
>>
>> On 12/19/17 8:32 AM, Adam Gurson wrote:
>>> I am running two kafka 0.11 clusters. Cluster A has two 0.11.0.0 brokers
>>> with 3 zookeepers. Cluster B has 4 0.11.0.1 brokers with 5 zookeepers.
>>>
>>> We have recently updated from running 0.8.2 client and brokers to 0.11.
>> In
>>> addition, we added two kafka streams group.id that process data from
>> one of
>>> the topics that all of the old code processes from.
>>>
>>> Most of the time, scaling the streams clients up or down works as
>>> expected. The streams clients go into a rebalance and come up with all
>>> consumer offsets correct for the topic.
>>>
>>> However, I have found two cases where a severe loss of offsets is occurring:
>>>
>>> On Cluster A (min.insync.replicas=1), I do a normal "cycle" of the
>> brokers,
>>> to stop/start them one at a time, giving time for the brokers to
>> handshake
>>> and exchange leadership as necessary. Twice now I have done this, and
>> both
>>> kafka streams consumers have rebalanced only to come up with totally
>> messed
>>> up offsets. The offsets for one group.id is set to 5,000,000 for all
>>> partitions, and the other group.id offsets were set to a number just
>> short
>>> of 7,000,000.
>>>
>>> On Cluster B (min.insync.replicas=2), I am running the exact same streams
>>> code. I have seen cases where if I scale up or down too quickly (i.e.
>> add
>>> or remove too many streams clients at once) before a rebalance has
>>> finished, the offsets for the group.ids are completely lost. This causes
>>> the streams consumers to reset according to "auto.offset.reset".
>>>
>>> In both cases, streams is calculating real-time metrics for data flowing
>>> through our brokers. These are serious issues because it causes them to
>>> completely get the counting wrong, either doubly counting or skipping
>> data
>>> altogether. I have scoured the web and have been unable to find anyone
>> else
>>> having this issue with streams.
>>>
>>> I should also mention that all of our old 0.8.2 consumer code (which is
>>> updated to 0.11 client library) never has any problems with offsets. My
>>> guess is because they are still using zookeeper to store their offsets.
>>>
>>> This implies to me that the __consumer_offsets topic isn't being utilized
>>> by streams clients correctly.
>>>
>>> I'm at a total loss at this point and would greatly appreciate any
>> advice.
>>> Thank you.
>>>
>>
>>
> 
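
For others debugging similar offset loss, the broker settings that usually matter
here are sketched below; the values are illustrative, not a recommendation for
either cluster:

# Replication of the internal __consumer_offsets topic (only honored when the
# internal topic is first created).
offsets.topic.replication.factor=3
# How long committed offsets are retained before being purged.
offsets.retention.minutes=10080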





Re: error when attempting a unit test of spring kafka producer

2018-02-14 Thread Ian Ewing
From my build.gradle:

buildscript {
repositories {
mavenCentral()
}
dependencies {

classpath("org.springframework.boot:spring-boot-gradle-plugin:1.5.10.RELEASE")
}
}

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'idea'
apply plugin: 'org.springframework.boot'

jar {
baseName = 'calculator'
version =  '0.1.0'
}

repositories {
mavenCentral()
}

sourceCompatibility = 1.8
targetCompatibility = 1.8

dependencies {
compile("org.springframework.boot:spring-boot-starter-web")
testCompile("org.springframework.boot:spring-boot-starter-test")
compile("org.springframework.kafka:spring-kafka:1.3.2.RELEASE")
testCompile("org.springframework.kafka:spring-kafka-test")
}

And this is from my project structure. I wonder if that is part of the
problem, having 0.10 and 0.11?


   - Gradle: org.apache.kafka:kafka-clients:0.11.0.0
   - Gradle: org.apache.kafka:kafka-clients:test:0.11.0.0
   - Gradle: org.apache.kafka:kafka_2.11:0.10.1.1
   - Gradle: org.apache.kafka:kafka_2.11:test:0.10.1.1

On Feb 13, 2018 21:09, "Ted Yu"  wrote:

> LoginType was in 0.10.x release.
>
> This seems to indicate Kafka version mismatch.
>
> Can you check the dependencies of your test ?
>
> Thanks
>
> On Tue, Feb 13, 2018 at 8:03 PM, Ian Ewing  wrote:
>
> > I have been trying to figure out how to unit test a kafka producer.
> Should
> > take in a simple integer and perform some addition. Followed what I could
> > find on spring kafka unit testing but keep running into this error:
> >
> > 19:53:12.788 [main] ERROR kafka.server.KafkaServer - [Kafka Server 0],
> > Fatal error during KafkaServer startup. Prepare to shutdown
> > java.lang.NoClassDefFoundError: org/apache/kafka/common/networ
> k/LoginType
> > at kafka.network.Processor.(SocketServer.scala:406)
> > at kafka.network.SocketServer.newProcessor(SocketServer.scala:141)
> > at
> > kafka.network.SocketServer$$anonfun$startup$1$$anonfun$
> > apply$1.apply$mcVI$sp(SocketServer.scala:94)
> > at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
> > at
> > kafka.network.SocketServer$$anonfun$startup$1.apply(SocketSe
> rver.scala:93)
> > at
> > kafka.network.SocketServer$$anonfun$startup$1.apply(SocketSe
> rver.scala:89)
> > at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> > at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> > at scala.collection.MapLike$DefaultValuesIterable.foreach(
> > MapLike.scala:206)
> > at kafka.network.SocketServer.startup(SocketServer.scala:89)
> > at kafka.server.KafkaServer.startup(KafkaServer.scala:219)
> > at kafka.utils.TestUtils$.createServer(TestUtils.scala:120)
> > at kafka.utils.TestUtils.createServer(TestUtils.scala)
> > at
> > org.springframework.kafka.test.rule.KafkaEmbedded.
> > before(KafkaEmbedded.java:154)
> > at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:46)
> > at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> > at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> > at
> > org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(
> > SpringJUnit4ClassRunner.java:191)
> > at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> > at
> > com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(
> > JUnit4IdeaTestRunner.java:68)
> > at
> > com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.
> > startRunnerWithArgs(IdeaTestRunner.java:51)
> > at
> > com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(
> > JUnitStarter.java:242)
> > at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStart
> er.java:70)
> > Caused by: java.lang.ClassNotFoundException:
> > org.apache.kafka.common.network.LoginType
> > at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> > at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
> > at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> > ... 23 common frames omitted
> >
> >
> > Has anyone come across this situation? Any ideas on the direction a
> > solution would take? I can provide more information, code, etc. Whatever
> > extra is needed. Don't want to bog down the email with too much.
> >
> > Thanks
> > Ian
> >
>


Re: how to enhance Kafka streaming Consumer rate ?

2018-02-14 Thread Matthias J. Sax
Is your network saturated? If yes, you can try to start more instances
of Kafka Streams instead of running with multiple threads within one
instance, to increase the available network capacity.


-Matthias

On 2/8/18 12:30 AM, ? ? wrote:
> Hi:
> I use Kafka Streams for real-time analysis,
> and I set the stream thread count to the same value as the number of partitions of the topic,
> and set ConsumerConfig.max_poll_records = 500.
> I only use the foreach method in this application,
> but I find that with a large number of records in Kafka, the consumer LAG is sometimes big and
> triggers a Kafka topic rebalance.
> How can I enhance the Kafka Streams consumer rate?
> 
> funk...@live.com
> 
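
As a concrete sketch of the knobs mentioned in this thread, assuming the 1.0.x
Streams API; the application id, broker address, topic, and values are
placeholders:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class ScalingSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-analysis-app");   // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");    // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // Fewer threads per instance, with more instances on separate hosts, keeps the
        // same application.id while spreading the partitions over more network links.
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);
        // Per-consumer override for how many records each poll() may return.
        props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_RECORDS_CONFIG), 500);

        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("input-topic").foreach((k, v) -> { /* analysis */ });
        new KafkaStreams(builder.build(), props).start();
    }
}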





Re: Store not ready

2018-02-14 Thread Matthias J. Sax
What version do you use?

Kafka Streams should be able to keep running while you restart your
brokers. If not, it seems to be a bug in Kafka Streams itself.

-Matthias

On 2/3/18 7:39 PM, dizzy0ny wrote:
> Hi, We have a recurring problem that I wonder if there is a better way to
> solve.  Currently, when we restart our back-end Kafka services and then our
> datastreams app, the app is unable to reach many of the Kafka topic state
> stores. We currently retry, but more often than not, it requires a restart of
> the app to clear up.
> I think this is because perhaps a partition leader has not been elected when
> the app starts.  Two questions:
> 1. Is there a good way to know when a partition leader has been elected, such
> that our restart script can wait?
> 2. Is it possible to have the app deterministically wait/retry until the stores
> are ready?  As mentioned, we have retry logic for up to 200 tries with a few
> seconds sleep in between, but it seems in some cases we have to restart the app
> as the retries get exceeded.
> Thanks for any assistance
> 
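
On question 2, one pattern that is often used (a sketch only, assuming the
0.11/1.0 interactive-query API; the store name, retry count, and sleep are
placeholders) is to retry around KafkaStreams#store until it stops throwing
InvalidStateStoreException:

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public final class StoreReadiness {
    // Blocks until the named store is queryable or the retry budget is exhausted.
    public static <K, V> ReadOnlyKeyValueStore<K, V> waitForStore(
            KafkaStreams streams, String storeName, int maxRetries) throws InterruptedException {
        for (int i = 0; i < maxRetries; i++) {
            try {
                return streams.store(storeName, QueryableStoreTypes.<K, V>keyValueStore());
            } catch (InvalidStateStoreException notReadyYet) {
                Thread.sleep(2000); // store is still restoring or a rebalance is in flight
            }
        }
        throw new InvalidStateStoreException("Store " + storeName + " never became ready");
    }
}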





Re: ProducerFencedException: Producer attempted an operation with an old epoch.

2018-02-14 Thread Matthias J. Sax
We discovered and fixed some bugs in upcoming 1.0.1 and 1.1.0 releases.

Maybe you can try those out?

A ProducerFencedException should actually be self-healing and resolve
over time. How long did the application retry to rebalance?

Without logs, it's hard to tell what might cause the issue though. EOS
can be subtle and it could be something different than reported before.


-Matthias


On 2/8/18 9:54 AM, dan bress wrote:
> Hi,
> 
> I recently switched my Kafka Streams 1.0.0 app to use exactly_once
> semantics and since then my cluster has been stuck in rebalancing.  Is
> there an explanation as to what is going on, or how I can resolve it?
> 
> I saw a similar issue discussed on the mailing list, but I don't know if a
> ticket was created or there was a resolution.
> 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201711.mbox/%3CCAKkfnUY0C311Yq%3Drt8kyna4cyucV8HbgWpiYj%3DfnYMt9%2BAb8Mw%40mail.gmail.com%3E
> 
> This is the exception I'm seeing:
> 2018-02-08 17:09:20,763 ERR [kafka-producer-network-thread |
> dp-app-devel-dbress-a92a93de-c1b1-4655-b487-0e9f3f3f3409-StreamThread-4-0_414-producer]
> Sender [Producer
> clientId=dp-app-devel-dbress-a92a93de-c1b1-4655-b487-0e9f3f3f3409-StreamThread-4-0_414-producer,
> transactionalId=dp-app-devel-dbress-0_414] Aborting producer batches due to
> fatal error
> org.apache.kafka.common.errors.ProducerFencedException: Producer attempted
> an operation with an old epoch. Either there is a newer producer with the
> same transactionalId, or the producer's transaction has been expired by the
> broker.
> 2018-02-08 17:09:20,764 ERR
> [dp-app-devel-dbress-a92a93de-c1b1-4655-b487-0e9f3f3f3409-StreamThread-4]
> ProcessorStateManager task [0_414] Failed to flush state store
> summarykey-to-summary:
> org.apache.kafka.common.KafkaException: Cannot perform send because at
> least one previous transactional or idempotent request has failed with
> errors.
> at
> org.apache.kafka.clients.producer.internals.TransactionManager.failIfNotReadyForSend(TransactionManager.java:278)
> at
> org.apache.kafka.clients.producer.internals.TransactionManager.maybeAddPartitionToTransaction(TransactionManager.java:263)
> at
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:804)
> at
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:760)
> at
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:100)
> at
> org.apache.kafka.streams.state.internals.StoreChangeFlushingLogger.flush(StoreChangeFlushingLogger.java:92)
> at
> org.apache.kafka.streams.state.internals.InMemoryKeyValueFlushingLoggedStore.flush(InMemoryKeyValueFlushingLoggedStore.java:139)
> at
> org.apache.kafka.streams.state.internals.InnerMeteredKeyValueStore.flush(InnerMeteredKeyValueStore.java:268)
> at
> org.apache.kafka.streams.state.internals.MeteredKeyValueStore.flush(MeteredKeyValueStore.java:126)
> at
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:245)
> at
> org.apache.kafka.streams.processor.internals.AbstractTask.flushState(AbstractTask.java:196)
> at
> org.apache.kafka.streams.processor.internals.StreamTask.flushState(StreamTask.java:324)
> at
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:304)
> at
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
> at
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:299)
> at
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:289)
> at
> org.apache.kafka.streams.processor.internals.AssignedTasks$2.apply(AssignedTasks.java:87)
> at
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:451)
> at
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:380)
> at
> org.apache.kafka.streams.processor.internals.TaskManager.commitAll(TaskManager.java:309)
> at
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:1018)
> at
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:835)
> at
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
> at
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
> 
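
For context, the EOS switch in question is a single Streams config; a minimal
sketch (the application id and broker address are placeholders):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class EosConfigSketch {
    public static Properties eosProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-eos-app");      // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");  // placeholder
        // Turns on transactions and idempotence under the hood; a producer can then
        // be fenced when its task migrates to another thread during a rebalance.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        return props;
    }
}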





Re: unable to find custom JMX metrics

2018-02-14 Thread Matthias J. Sax
Cross posted at SO:
https://stackoverflow.com/questions/48745642/kstreams-streamsmetrics-recordthroughput-where-are-they-in-jconsole-adding-ow



On 2/12/18 3:52 AM, Salah Alkawari wrote:
> hi,
> i have a processor that generates custom jmx metrics:
>
> public class ProcessorJMX implements Processor<String, GenericRecord> {
>
>     private StreamsMetrics streamsMetrics;
>     private Sensor sensorStartTs;
>
>     @Override
>     public void init(ProcessorContext processorContext) {
>         streamsMetrics = processorContext.metrics();
>         sensorStartTs = streamsMetrics.addSensor("start_ts",
>             Sensor.RecordingLevel.INFO);
>     }
>
>     @Override
>     public void process(String key, GenericRecord val) {
>         streamsMetrics.recordThroughput(sensorStartTs,
>             Long.valueOf(val.get("start_ts").toString()));
>     }
>
>     @Override
>     public void punctuate(long l) { }
>
>     @Override
>     public void close() { }
> }
>
> i have this in my ProcessorSupplier - ProcessorSupplierJMX and use it like
> this: builder.stream(stringSerde, avroSerdeValue, topicOutput).process(new
> ProcessorSupplierJMX());
> when i start my long running integration test, i go to jconsole, MBeans,
> kafka.streams - but i don't see this metric in any of the subfolders anywhere.
> what am i missing? am i looking in the wrong location? or do i need to
> activate something before i can see this metric?
> Thanks,
> Sal
> 





Re: error when attempting a unit test of spring kafka producer

2018-02-14 Thread Ian Ewing
Also using these dependencies

   - Gradle: org.springframework.kafka:spring-kafka-test:1.1.7.RELEASE
   - Gradle: org.springframework.kafka:spring-kafka:1.3.2.RELEASE


On Wed, Feb 14, 2018 at 2:13 PM, Ian Ewing  wrote:

> From my build.gradle:
>
> buildscript {
> repositories {
> mavenCentral()
> }
> dependencies {
> 
> classpath("org.springframework.boot:spring-boot-gradle-plugin:1.5.10.RELEASE")
> }
> }
>
> apply plugin: 'java'
> apply plugin: 'eclipse'
> apply plugin: 'idea'
> apply plugin: 'org.springframework.boot'
>
> jar {
> baseName = 'calculator'
> version =  '0.1.0'
> }
>
> repositories {
> mavenCentral()
> }
>
> sourceCompatibility = 1.8
> targetCompatibility = 1.8
>
> dependencies {
> compile("org.springframework.boot:spring-boot-starter-web")
> testCompile("org.springframework.boot:spring-boot-starter-test")
> compile("org.springframework.kafka:spring-kafka:1.3.2.RELEASE")
> testCompile("org.springframework.kafka:spring-kafka-test")
> }
>
> And this is from my project structure. I wonder if that is part of the 
> problem, having .10 and .11?
>
>
>- Gradle: org.apache.kafka:kafka-clients:0.11.0.0
>- Gradle: org.apache.kafka:kafka-clients:test:0.11.0.0
>- Gradle: org.apache.kafka:kafka_2.11:0.10.1.1
>- Gradle: org.apache.kafka:kafka_2.11:test:0.10.1.1
>
> On Feb 13, 2018 21:09, "Ted Yu"  wrote:
>
>> LoginType was in 0.10.x release.
>>
>> This seems to indicate Kafka version mismatch.
>>
>> Can you check the dependencies of your test ?
>>
>> Thanks
>>
>> On Tue, Feb 13, 2018 at 8:03 PM, Ian Ewing  wrote:
>>
>> > I have been trying to figure out how to unit test a kafka producer.
>> Should
>> > take in a simple integer and perform some addition. Followed what I
>> could
>> > find on spring kafka unit testing but keep running into this error:
>> >
>> > 19:53:12.788 [main] ERROR kafka.server.KafkaServer - [Kafka Server 0],
>> > Fatal error during KafkaServer startup. Prepare to shutdown
>> > java.lang.NoClassDefFoundError: org/apache/kafka/common/networ
>> k/LoginType
>> > at kafka.network.Processor.(SocketServer.scala:406)
>> > at kafka.network.SocketServer.newProcessor(SocketServer.scala:141)
>> > at
>> > kafka.network.SocketServer$$anonfun$startup$1$$anonfun$
>> > apply$1.apply$mcVI$sp(SocketServer.scala:94)
>> > at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
>> > at
>> > kafka.network.SocketServer$$anonfun$startup$1.apply(SocketSe
>> rver.scala:93)
>> > at
>> > kafka.network.SocketServer$$anonfun$startup$1.apply(SocketSe
>> rver.scala:89)
>> > at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>> > at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>> > at scala.collection.MapLike$DefaultValuesIterable.foreach(
>> > MapLike.scala:206)
>> > at kafka.network.SocketServer.startup(SocketServer.scala:89)
>> > at kafka.server.KafkaServer.startup(KafkaServer.scala:219)
>> > at kafka.utils.TestUtils$.createServer(TestUtils.scala:120)
>> > at kafka.utils.TestUtils.createServer(TestUtils.scala)
>> > at
>> > org.springframework.kafka.test.rule.KafkaEmbedded.
>> > before(KafkaEmbedded.java:154)
>> > at org.junit.rules.ExternalResource$1.evaluate(ExternalResource
>> .java:46)
>> > at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>> > at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>> > at
>> > org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(
>> > SpringJUnit4ClassRunner.java:191)
>> > at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>> > at
>> > com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(
>> > JUnit4IdeaTestRunner.java:68)
>> > at
>> > com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.
>> > startRunnerWithArgs(IdeaTestRunner.java:51)
>> > at
>> > com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(
>> > JUnitStarter.java:242)
>> > at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStart
>> er.java:70)
>> > Caused by: java.lang.ClassNotFoundException:
>> > org.apache.kafka.common.network.LoginType
>> > at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>> > at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>> > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>> > at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>> > ... 23 common frames omitted
>> >
>> >
>> > Has anyone come across this situation? Any ideas on the direction a
>> > solution would take? I can provide more information, code, etc. Whatever
>> > extra is needed. Don't want to bog down the email with too much.
>> >
>> > Thanks
>> > Ian
>> >
>>
>
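
From the dependency list above, the embedded broker classes (kafka_2.11) come in
at 0.10.1.1 via spring-kafka-test, while kafka-clients is 0.11.0.0, which matches
the missing LoginType symptom. One possible way to line the test classpath up,
sketched in Gradle (versions are illustrative; please check the spring-kafka
compatibility matrix before copying):

dependencies {
    compile("org.springframework.kafka:spring-kafka:1.3.2.RELEASE")
    // Pin spring-kafka-test to the same 1.3.x line and pull in matching 0.11 broker
    // jars so the embedded Kafka used by the test agrees with kafka-clients 0.11.0.0.
    testCompile("org.springframework.kafka:spring-kafka-test:1.3.2.RELEASE")
    testCompile("org.apache.kafka:kafka_2.11:0.11.0.0")
    testCompile("org.apache.kafka:kafka_2.11:0.11.0.0:test")
}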


Re: Kafka Stream tuning.

2018-02-14 Thread Guozhang Wang
Hello Brilly,


If you commit every second (note the commit interval unit is milliseconds,
so 1000 means a second), and each commit takes 23 millis, you will get
about that throughput. The question is 1) do you really need to commit
every second? 2) If you really do, how to reduce it. For 2) since you
mentioned you only have a simple filtering application I'd assume it is
stateless, so most of the time spent would be the sync-commit-offset call
to the broker. 23 millis does look a bit too long for a single rpc, but
again that depends very much on your client-server network. If you
cannot tune your client-server rpc latency better, then you'd have to
consider option 1) to reduce your commit frequency.


Guozhang




On Tue, Feb 13, 2018 at 7:29 PM, TSANG, Brilly 
wrote:

> I have also checked the commit-latency-avg; it's around 23 millis per
> commit.  That translates to about the same throughput that I'm getting now
> (0.04 messages/millis).  Does anyone have any benchmark for kafka stream's
> commit-latency-avg?  Is it possible to tune it to be faster?  I just want
> to verify if this is supposed to be latency limit and we will have to work
> with horizontal scaling with more partition and stream processes if the
> input throughput is higher.
>
> Another side question will be is custom consumer/publisher going to be
> faster than default kafka stream implementation?
>
> Regards,
> Brilly
>
> -Original Message-
> From: TSANG, Brilly [mailto:brilly.ts...@hk.daiwacm.com]
> Sent: Wednesday, February 14, 2018 11:01 AM
> To: users@kafka.apache.org
> Subject: RE: Kafka Stream tuning.
>
> Hey Damian and folks,
>
> I've also tried 1000 and 500 and the performance state is exactly the
> same.  Any other ideas?
>
> Regards,
> Brilly
>
> -Original Message-
> From: Damian Guy [mailto:damian@gmail.com]
> Sent: Tuesday, February 13, 2018 4:48 PM
> To: users@kafka.apache.org
> Subject: Re: Kafka Stream tuning.
>
> Hi Brilly,
>
> My initial guess is that it is the overhead of committing. Commit is
> synchronous and you have the commit interval set to 50ms. Perhaps try
> increasing it.
>
> Thanks,
> Damian
>
> On Tue, 13 Feb 2018 at 07:49 TSANG, Brilly 
> wrote:
>
> > Hi kafka users,
> >
> > I created a filtering stream with the Processor API;  the input topic has an
> > input rate of ~5 records per millisecond.  The filtering function
> > on average takes 0.05 milliseconds to complete, which in the ideal case
> > would translate to (1/0.05 =) 20 records per millisecond.  However,
> > when I benchmark the whole process, the stream is only processing
> > 0.05 records per millisecond.
> >
> > Does anyone have any idea on how to tune the streaming system to be faster,
> > as
> > 0.05 records is very far away from the theoretical max of 20?  The
> > results above are per partition, where I have 16 partitions for
> > the input topic and all partitions have similar throughput.
> >
> > I've only set the streams to have the following config:
> > Properties config = new Properties();
> > config.put(StreamsConfig.APPLICATION_ID_CONFIG, appId);
> > config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
> > config.put(StreamsConfig.STATE_DIR_CONFIG, stateDir);
> > config.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 50);
> >
> > I'm not defining TimeExtractor so the default one is used.
> >
> > Thanks for any help in advance.
> >
> > Regards,
> > Brilly
> >
> > 
> >
> >
> >
>
> 
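
To make the suggestion concrete, the change under discussion is just the commit
interval in the snippet quoted above; a one-line sketch with an illustrative value:

// Commit every 30 seconds instead of every 50 ms: fewer synchronous commit
// round-trips means less time spent blocked on the ~23 ms commit latency.
config.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 30000);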

Re: Kafka cluster instablility

2018-02-14 Thread Ted Yu
For #2 and #3, you would get better stability if zookeeper and Kafka get
dedicated machines.

Have you profiled the performance of the nodes where multiple processes ran
(zookeeper / Kafka / Druid)? What was disk and network IO like?

Cheers

On Wed, Feb 14, 2018 at 9:38 AM, Avinash Herle 
wrote:

> Hi,
>
> I'm using Kafka version 0.11.0.2. In my cluster, I've 4 nodes running Kafka
> of which 3 nodes also running Zookeeper. I've a few producer processes that
> publish to Kafka and multiple consumer processes, a streaming engine
> (Spark) that ingests from Kafka and also publishes data to Kafka, and a
> distributed data store (Druid) which reads all messages from Kafka and
> stores in the DB. Druid also uses the same Zookeeper cluster being used by
> Kafka for cluster state management.
>
> *Kafka Configs:*
> 1) No replication being used
> 2) Number of network threads 30
> 3) Number of IO threads 8
> 4) Machines have 64GB RAM and 16 cores
> 5) 3 topics with 64 partitions per topic
>
> *Questions:*
>
> 1) *Partitions going offline*
> I frequently see partitions going offline because of which the scheduling
> delay of the Spark application increases and input rate gets jittery. I
> tried enabling replication too to see if it helped with the problem. It
> didn't quite make a difference. What could be the cause of this issue? Lack
> of resources or cluster misconfigurations? Can the cause be large number of
> receiver processes?
>
> *2) Colocation of Zookeeper and Kafka:*
> As I mentioned above, I'm running 3 nodes with both Zookeeper and Kafka
> colocated. Both the components are containerized, so they are running
> inside docker containers. I found a few blogs that suggested not colocating
> them for performance reasons. Is it necessary to run them on dedicated
> machines?
>
> *3) Using same Zookeeper cluster across different components*
> In my cluster, I use the same Zookeeper cluster for state management of the
> Kafka cluster and the Druid cluster. Could this cause instability of the
> overall system?
>
> Hope I've covered all the necessary information needed. Please let me know
> if more information about my cluster is needed.
>
> Thanks in advance,
> Avinash
> --
>
> Excuse brevity and typos. Sent from mobile device.
>


Re: unable to find custom JMX metrics

2018-02-14 Thread Guozhang Wang
Salah,

I'm cross-posting my answer from SO here:

Looking at your code closely again, I realized you may have forgotten to add the
metric to your sensor, i.e. you need to call
`sensorStartTs.add(metricName, MeasurableStat)` where `MeasurableStat`
defines the type of the stat, like Sum, Avg, Count, etc.
More specifically:

sensorStartTs = streamsMetrics.addSensor("start_ts",
Sensor.RecordingLevel.INFO );
sensorStartTs.add("my metric name", myStat);

This is because you used the general `addSensor` API to add the sensor; if
you used the more advanced `addThroughputSensor` etc., it would call `sensor.add`
for you.

Then you should search for your metrics in the thread-level sensors, i.e.
in `stream-metrics`.

Guozhang

On Wed, Feb 14, 2018 at 1:14 PM, Matthias J. Sax 
wrote:

> Cross posted at SO:
> https://stackoverflow.com/questions/48745642/kstreams-streamsmetrics-
> recordthroughput-where-are-they-in-jconsole-adding-ow
>
>
>
> On 2/12/18 3:52 AM, Salah Alkawari wrote:
> > hi,
> > i have a processor that generates custom jmx metrics:
> > public class ProcessorJMX implements Processor {
> >
> > private StreamsMetrics streamsMetrics;
> > private Sensor sensorStartTs;
> >
> > @Override
> > public void init(ProcessorContext processorContext) {
> > streamsMetrics = processorContext.metrics();
> > sensorStartTs = streamsMetrics.addSensor("start_ts",
> Sensor.RecordingLevel.INFO);
> > }
> >
> > @Override
> > public void process(String key, GenericRecord val) {
> > streamsMetrics.recordThroughput(sensorStartTs,
> Long.valueOf(val.get("start_ts").toString()));
> > }
> >
> > @Override
> > public void punctuate(long l) { }
> >
> > @Override
> > public void close() { }
> > }i have this in my ProcessorSupplier - ProcessorSupplierJMX and use it
> like this:builder.stream(stringSerde, avroSerdeValue,
> topicOutput).process(new ProcessorSupplierJMX());
> > when i start my long running integration test, i goto jconsole, MBeans,
> kafka.streams - but i dont see this metric in any of the subfolder
> anywhere. what am i missing? am i looking in the wrong location? or do i
> need to activate something before i can see this metric?
> > Thanks,
> > Sal
> >
>
>


-- 
-- Guozhang
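
To flesh out the `sensor.add(...)` call above, a hypothetical sketch of registering
a stat on the sensor inside Processor#init(); the metric name, group, tags, and
recorded value are made up for illustration:

import java.util.Collections;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.stats.Avg;

// after: sensorStartTs = streamsMetrics.addSensor("start_ts", Sensor.RecordingLevel.INFO);
MetricName avgStartTs = new MetricName(
        "start-ts-avg",                                      // metric name (illustrative)
        "stream-processor-metrics",                          // JMX group (illustrative)
        "Average start_ts seen by the processor",
        Collections.singletonMap("client-id", "my-app"));    // tags (illustrative)
sensorStartTs.add(avgStartTs, new Avg());

// later, e.g. in process():
sensorStartTs.record(Long.valueOf(val.get("start_ts").toString()));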


Re: [VOTE] 1.0.1 RC1

2018-02-14 Thread Satish Duggana
+1 (non-binding)

- Ran testAll/releaseTarGzAll on the 1.0.1-rc1 tag
- Ran through quickstart of core/streams

Thanks,
Satish.


On Tue, Feb 13, 2018 at 11:30 PM, Damian Guy  wrote:

> +1
>
> Ran tests, verified streams quickstart works
>
> On Tue, 13 Feb 2018 at 17:52 Damian Guy  wrote:
>
> > Thanks Ewen - I had the staging repo set up as a profile that I forgot to
> > add to my maven command. All good.
> >
> > On Tue, 13 Feb 2018 at 17:41 Ewen Cheslack-Postava 
> > wrote:
> >
> >> Damian,
> >>
> >> Which quickstart are you referring to? The streams quickstart only
> >> executes
> >> pre-built stuff afaict.
> >>
> >> In any case, if you're building a maven streams project, did you modify
> it
> >> to point to the staging repository at
> >> https://repository.apache.org/content/groups/staging/ in addition to
> the
> >> default repos? During rc it wouldn't fetch from maven central since it
> >> hasn't been published there yet.
> >>
> >> If that is configured, more complete maven output would be helpful to
> track
> >> down where it is failing to resolve the necessary archetype.
> >>
> >> -Ewen
> >>
> >> On Tue, Feb 13, 2018 at 3:03 AM, Damian Guy 
> wrote:
> >>
> >> > Hi Ewen,
> >> >
> >> > I'm trying to run the streams quickstart and I'm getting:
> >> > [ERROR] Failed to execute goal
> >> > org.apache.maven.plugins:maven-archetype-plugin:3.0.1:generate
> >> > (default-cli) on project standalone-pom: The desired archetype does
> not
> >> > exist (org.apache.kafka:streams-quickstart-java:1.0.1)
> >> >
> >> > Something i'm missing?
> >> >
> >> > Thanks,
> >> > Damian
> >> >
> >> > On Tue, 13 Feb 2018 at 10:16 Manikumar 
> >> wrote:
> >> >
> >> > > +1 (non-binding)
> >> > >
> >> > > ran quick-start, unit tests on the src.
> >> > >
> >> > >
> >> > >
> >> > > On Tue, Feb 13, 2018 at 5:31 AM, Ewen Cheslack-Postava <
> >> > e...@confluent.io>
> >> > > wrote:
> >> > >
> >> > > > Thanks for the heads up, I forgot to drop the old ones, I've done
> >> that
> >> > > and
> >> > > > rc1 artifacts should be showing up now.
> >> > > >
> >> > > > -Ewen
> >> > > >
> >> > > >
> >> > > > On Mon, Feb 12, 2018 at 12:57 PM, Ted Yu 
> >> wrote:
> >> > > >
> >> > > > > +1
> >> > > > >
> >> > > > > Ran test suite which passed.
> >> > > > >
> >> > > > > BTW it seems the staging repo hasn't been updated yet:
> >> > > > >
> >> > > > > https://repository.apache.org/content/groups/staging/org/
> >> > > > > apache/kafka/kafka-clients/
> >> > > > >
> >> > > > > On Mon, Feb 12, 2018 at 10:16 AM, Ewen Cheslack-Postava <
> >> > > > e...@confluent.io
> >> > > > > >
> >> > > > > wrote:
> >> > > > >
> >> > > > > > And of course I'm +1 since I've already done normal release
> >> > > validation
> >> > > > > > before posting this.
> >> > > > > >
> >> > > > > > -Ewen
> >> > > > > >
> >> > > > > > On Mon, Feb 12, 2018 at 10:15 AM, Ewen Cheslack-Postava <
> >> > > > > e...@confluent.io
> >> > > > > > >
> >> > > > > > wrote:
> >> > > > > >
> >> > > > > > > Hello Kafka users, developers and client-developers,
> >> > > > > > >
> >> > > > > > > This is the second candidate for release of Apache Kafka
> >> 1.0.1.
> >> > > > > > >
> >> > > > > > > This is a bugfix release for the 1.0 branch that was first
> >> > released
> >> > > > > with
> >> > > > > > > 1.0.0 about 3 months ago. We've fixed 49 significant issues
> >> since
> >> > > > that
> >> > > > > > > release. Most of these are non-critical, but in aggregate
> >> these
> >> > > fixes
> >> > > > > > will
> >> > > > > > > have significant impact. A few of the more significant fixes
> >> > > include:
> >> > > > > > >
> >> > > > > > > * KAFKA-6277: Make loadClass thread-safe for class loaders
> of
> >> > > Connect
> >> > > > > > > plugins
> >> > > > > > > * KAFKA-6185: Selector memory leak with high likelihood of
> >> OOM in
> >> > > > case
> >> > > > > of
> >> > > > > > > down conversion
> >> > > > > > > * KAFKA-6269: KTable state restore fails after rebalance
> >> > > > > > > * KAFKA-6190: GlobalKTable never finishes restoring when
> >> > consuming
> >> > > > > > > transactional messages
> >> > > > > > > * KAFKA-6529: Stop file descriptor leak when client
> >> disconnects
> >> > > with
> >> > > > > > > staged receives
> >> > > > > > >
> >> > > > > > > Release notes for the 1.0.1 release:
> >> > > > > > > http://home.apache.org/~ewencp/kafka-1.0.1-rc1/
> >> > RELEASE_NOTES.html
> >> > > > > > >
> >> > > > > > > *** Please download, test and vote by Thursday, Feb 15, 5pm
> PT
> >> > ***
> >> > > > > > >
> >> > > > > > > Kafka's KEYS file containing PGP keys we use to sign the
> >> release:
> >> > > > > > > http://kafka.apache.org/KEYS
> >> > > > > > >
> >> > > > > > > * Release artifacts to be voted upon (source and binary):
> >> > > > > > > http://home.apache.org/~ewencp/kafka-1.0.1-rc1/
> >> > > > > > >
> >> > > > > > > * Maven 

Re: [VOTE] 1.0.1 RC1

2018-02-14 Thread Guozhang Wang
+1

Ran tests, verified web docs.

On Wed, Feb 14, 2018 at 6:00 PM, Satish Duggana 
wrote:

> +1 (non-binding)
>
> - Ran testAll/releaseTarGzAll on the 1.0.1-rc1 tag
> - Ran through quickstart of core/streams
>
> Thanks,
> Satish.
>
>
> On Tue, Feb 13, 2018 at 11:30 PM, Damian Guy  wrote:
>
> > +1
> >
> > Ran tests, verified streams quickstart works
> >
> > On Tue, 13 Feb 2018 at 17:52 Damian Guy  wrote:
> >
> > > Thanks Ewen - I had the staging repo set up as a profile that I forgot to
> > > add to my maven command. All good.
> > >
> > > On Tue, 13 Feb 2018 at 17:41 Ewen Cheslack-Postava 
> > > wrote:
> > >
> > >> Damian,
> > >>
> > >> Which quickstart are you referring to? The streams quickstart only
> > >> executes
> > >> pre-built stuff afaict.
> > >>
> > >> In any case, if you're building a maven streams project, did you
> modify
> > it
> > >> to point to the staging repository at
> > >> https://repository.apache.org/content/groups/staging/ in addition to
> > the
> > >> default repos? During rc it wouldn't fetch from maven central since it
> > >> hasn't been published there yet.
> > >>
> > >> If that is configured, more complete maven output would be helpful to
> > track
> > >> down where it is failing to resolve the necessary archetype.
> > >>
> > >> -Ewen
> > >>
> > >> On Tue, Feb 13, 2018 at 3:03 AM, Damian Guy 
> > wrote:
> > >>
> > >> > Hi Ewen,
> > >> >
> > >> > I'm trying to run the streams quickstart and I'm getting:
> > >> > [ERROR] Failed to execute goal
> > >> > org.apache.maven.plugins:maven-archetype-plugin:3.0.1:generate
> > >> > (default-cli) on project standalone-pom: The desired archetype does
> > not
> > >> > exist (org.apache.kafka:streams-quickstart-java:1.0.1)
> > >> >
> > >> > Something i'm missing?
> > >> >
> > >> > Thanks,
> > >> > Damian
> > >> >
> > >> > On Tue, 13 Feb 2018 at 10:16 Manikumar 
> > >> wrote:
> > >> >
> > >> > > +1 (non-binding)
> > >> > >
> > >> > > ran quick-start, unit tests on the src.
> > >> > >
> > >> > >
> > >> > >
> > >> > > On Tue, Feb 13, 2018 at 5:31 AM, Ewen Cheslack-Postava <
> > >> > e...@confluent.io>
> > >> > > wrote:
> > >> > >
> > >> > > > Thanks for the heads up, I forgot to drop the old ones, I've
> done
> > >> that
> > >> > > and
> > >> > > > rc1 artifacts should be showing up now.
> > >> > > >
> > >> > > > -Ewen
> > >> > > >
> > >> > > >
> > >> > > > On Mon, Feb 12, 2018 at 12:57 PM, Ted Yu 
> > >> wrote:
> > >> > > >
> > >> > > > > +1
> > >> > > > >
> > >> > > > > Ran test suite which passed.
> > >> > > > >
> > >> > > > > BTW it seems the staging repo hasn't been updated yet:
> > >> > > > >
> > >> > > > > https://repository.apache.org/content/groups/staging/org/
> > >> > > > > apache/kafka/kafka-clients/
> > >> > > > >
> > >> > > > > On Mon, Feb 12, 2018 at 10:16 AM, Ewen Cheslack-Postava <
> > >> > > > e...@confluent.io
> > >> > > > > >
> > >> > > > > wrote:
> > >> > > > >
> > >> > > > > > And of course I'm +1 since I've already done normal release
> > >> > > validation
> > >> > > > > > before posting this.
> > >> > > > > >
> > >> > > > > > -Ewen
> > >> > > > > >
> > >> > > > > > On Mon, Feb 12, 2018 at 10:15 AM, Ewen Cheslack-Postava <
> > >> > > > > e...@confluent.io
> > >> > > > > > >
> > >> > > > > > wrote:
> > >> > > > > >
> > >> > > > > > > Hello Kafka users, developers and client-developers,
> > >> > > > > > >
> > >> > > > > > > This is the second candidate for release of Apache Kafka
> > >> 1.0.1.
> > >> > > > > > >
> > >> > > > > > > This is a bugfix release for the 1.0 branch that was first
> > >> > released
> > >> > > > > with
> > >> > > > > > > 1.0.0 about 3 months ago. We've fixed 49 significant
> issues
> > >> since
> > >> > > > that
> > >> > > > > > > release. Most of these are non-critical, but in aggregate
> > >> these
> > >> > > fixes
> > >> > > > > > will
> > >> > > > > > > have significant impact. A few of the more significant
> fixes
> > >> > > include:
> > >> > > > > > >
> > >> > > > > > > * KAFKA-6277: Make loadClass thread-safe for class loaders
> > of
> > >> > > Connect
> > >> > > > > > > plugins
> > >> > > > > > > * KAFKA-6185: Selector memory leak with high likelihood of
> > >> OOM in
> > >> > > > case
> > >> > > > > of
> > >> > > > > > > down conversion
> > >> > > > > > > * KAFKA-6269: KTable state restore fails after rebalance
> > >> > > > > > > * KAFKA-6190: GlobalKTable never finishes restoring when
> > >> > consuming
> > >> > > > > > > transactional messages
> > >> > > > > > > * KAFKA-6529: Stop file descriptor leak when client
> > >> disconnects
> > >> > > with
> > >> > > > > > > staged receives
> > >> > > > > > >
> > >> > > > > > > Release notes for the 1.0.1 release:
> > >> > > > > > > http://home.apache.org/~ewencp/kafka-1.0.1-rc1/
> > >> > RELEASE_NOTES.html
> > >> > > > > > >
> > >> > > > > > > ***