Build failed in Jenkins: kafka-trunk-jdk8 #3333

2019-01-20 Thread Apache Jenkins Server
See 


Changes:

[mjsax] MINOR: Handle case where connector status endpoints returns 404 (#6176)

--
[...truncated 4.53 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord PASSED

> Task :streams:upgrade-system-tests-21:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-21:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-21:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-21:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-21:compileTestJava
> Task :streams:upgrade-system-tests-21:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-21:testClasses
> Task :streams:upgrade-system-tests-21:checkstyleTest
> Task :streams:upgrade-system-tests-21:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-21:test
> Task :streams:streams-scala:spotbugsMain

> Task :streams:streams-scala:test

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes > testShouldCountClicksPerRegionWithNamedRepartitionTopic STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes > testShouldCountClicksPerRegionWithNamedRepartitionTopic PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes > testShouldCountClicksPerRegionJava STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes > testShouldCountClicksPerRegionJava PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes > testShouldCountClicksPerRegion STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes > testShouldCountClicksPerRegion PASSED

org.apache.kafka.streams.scala.TopologyTest > shouldBuildIdenticalTopologyInJavaNScalaJoin STARTED

org.apache.kafka.streams.scala.TopologyTest > shouldBuildIdenticalTopologyInJavaNScalaJoin PASSED

org.apache.kafka.streams.scala.TopologyTest > shouldBuildIdenticalTopologyInJavaNScalaSimple STARTED


Re: [DISCUSS] KIP-390: Add producer option to adjust compression level

2019-01-20 Thread Dongjin Lee
I see. Let me check. If it's not needed, of course, we don't have to
waste configuration options.

Since the KIP deadline is imminent, I just opened the voting thread. Let's
continue the discussion here.

Best,
Dongjin

On Mon, Jan 21, 2019 at 1:30 AM Ismael Juma  wrote:

> Hi Dongjin,
>
> When the compression type is "producer", then the broker doesn't recompress
> though. Thinking about it some more, there are some uncommon cases where
> recompression does happen (the old (and hopefully hardly used by now)
> message format == 0 and some edge cases), so it is a good point you raised.
>
> It's a bit unfortunate to add so many topic configs for cases that probably
> don't matter. That is, if you are using "producer" compression, you
> probably don't need to configure these settings and can live with the
> defaults. Perhaps we should only support the topic config for the cases
> where you are actually recompressing in the broker.
>
> What do you think? I'd be interested in other people's thoughts too.
>
> Ismael
>
> On Sun, Jan 20, 2019 at 2:14 AM Dongjin Lee  wrote:
>
> > Hi Ismael,
> >
> > It seems like it needs more explanation. Here is the detailed reasoning.
> >
> > You know, topic and broker's 'compression.type' allows 'uncompressed',
> > 'producer' with standard codecs (i.e., gzip, snappy, lz4, zstd.) And this
> > configuration is used by the broker in the re-compressing process after
> > offset assignment. After this feature, the new configs,
> 'compression.level'
> > and 'compression.buffer.size', also will be used in this process.
> >
> > The problem arises when a given topic's compression type (whether it was
> > inherited from the broker's configuration or explicitly set) is 'producer.'
> > With this setting, the compression codec to be used is decided by the
> > producer client. Since there is no way to restore the compression level and
> > buffer size from the message, we can take the following strategies:
> >
> > 1. Just use the given 'compression.level' and 'compression.buffer.size'
> > settings.
> >
> > It will cause many errors. Let's imagine the case where the topic's
> > configuration is { compression.type=producer, compression.level=10,
> > compression.buffer.size=8192 }. In this case, all producers with gzip or
> > lz4 compressed messages will result in an error. (gzip doesn't allow
> > compression level 10, and lz4 doesn't allow a buffer size of 8192.)
> >
> > 2. Extend the message format to include compression configurations.
> >
> > With this strategy, we need to change the message format - it's too big a
> > change.
> >
> > 3. If topic's compression.type is 'producer', use the default
> configuration
> > for the given codec.
> >
> > With this strategy, allowing fine-grained compression configuration is
> > meaningless.
> >
> > For the above reasons, I think the only alternative is providing options
> > that can be used when the topic's 'compression.type' is 'producer.' In
> > other words, adding compression.[gzip, lz4, zstd].level and
> > compression.[gzip,snappy,lz4].buffer.size options - and that is what I did
> > in the last modification.
> >
> > (wait, the reasoning above should be included in the KIP in the rejected
> > alternatives section, shouldn't it?)
> >
> > Thanks,
> > Dongjin
> >
> > On Sun, Jan 20, 2019 at 2:33 AM Ismael Juma  wrote:
> >
> > > Hi Dongjin,
> > >
> > > For topic level, you can only have a single compression type so the way
> > it
> > > was before was fine, right? The point you raise is how to set broker
> > > defaults that vary depending on the compression type, correct?
> > >
> > > Ismael
> > >
> > > On Mon, Jan 14, 2019 at 10:18 AM Dongjin Lee 
> wrote:
> > >
> > > > I just realized that there was a hole in the KIP, so I fixed it.
> > > > The draft implementation will be updated soon.
> > > >
> > > > In short, the proposed change did not cover the case where the topic's
> > > > or broker's 'compression.type' is 'producer'; in this case, the broker
> > > > has to handle all of the supported codecs. So I added additional options
> > > > (compression.[gzip,snappy,lz4,zstd].level,
> > > > compression.[gzip,snappy,lz4,zstd].buffer.size) with handling routines.
> > > >
> > > > Please have a look when you are free.
> > > >
> > > > Thanks,
> > > > Dongjin
> > > >
> > > > On Mon, Jan 7, 2019 at 6:23 AM Dongjin Lee 
> wrote:
> > > >
> > > > > Thanks for pointing out Ismael. It's now updated.
> > > > >
> > > > > Best,
> > > > > Dongjin
> > > > >
> > > > > On Mon, Jan 7, 2019 at 4:36 AM Ismael Juma 
> > wrote:
> > > > >
> > > > >> Thanks Dongjin. One minor suggestion: we should mention that the
> > > broker
> > > > >> side configs are also topic configs (i.e. can be set for a given
> > > topic).
> > > > >>
> > > > >> Ismael
> > > > >>
> > > > >> On Sun, Jan 6, 2019, 10:37 AM Dongjin Lee  wrote:
> > > > >>
> > > > >> > Happy new year.
> > > > >> >
> > > > >> > I just updated the title and contents of KIP and Jira issue,
> with
> > > > >> updated
> > 
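As an aside on strategy 1 above: gzip levels are bounded by the underlying DEFLATE implementation, so a topic-wide compression.level=10 can never be applied to a gzip batch. A minimal sketch against the JDK's Deflater (which Kafka's gzip path builds on); the class name is illustrative:

import java.util.zip.Deflater;

public class GzipLevelCheck {
    public static void main(String[] args) {
        Deflater deflater = new Deflater();
        deflater.setLevel(9);       // 0..9 and -1 (default) are the only valid DEFLATE levels
        try {
            deflater.setLevel(10);  // what a topic-wide compression.level=10 would imply
        } catch (IllegalArgumentException e) {
            System.out.println("level 10 rejected: " + e);
        } finally {
            deflater.end();
        }
    }
}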

[VOTE] KIP-390: Allow fine-grained configuration for compression

2019-01-20 Thread Dongjin Lee
Hi dev,

I would like to open a vote for KIP-390: Allow fine-grained configuration
for compression. Here is the KIP:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-390%3A+Allow+fine-grained+configuration+for+compression

You can check the (on-going) discussion here:

https://lists.apache.org/thread.html/967980193088e2c8cbb39a4782e358961fee828c6c50d880610a8353@%3Cdev.kafka.apache.org%3E

Thanks,
Dongjin

-- 
*Dongjin Lee*



*A hitchhiker in the mathematical world.*

github: github.com/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
slideshare: www.slideshare.net/dongjinleekr


Build failed in Jenkins: kafka-trunk-jdk11 #230

2019-01-20 Thread Apache Jenkins Server
See 


Changes:

[mjsax] MINOR: Handle case where connector status endpoints returns 404 (#6176)

--
[...truncated 2.27 MB...]
kafka.utils.PasswordEncoderTest > testEncoderConfigChange STARTED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecode STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecode PASSED

kafka.utils.json.JsonValueTest > testJsonObjectIterator STARTED

kafka.utils.json.JsonValueTest > testJsonObjectIterator PASSED

kafka.utils.json.JsonValueTest > testDecodeLong STARTED

kafka.utils.json.JsonValueTest > testDecodeLong PASSED

kafka.utils.json.JsonValueTest > testAsJsonObject STARTED

kafka.utils.json.JsonValueTest > testAsJsonObject PASSED

kafka.utils.json.JsonValueTest > testDecodeDouble STARTED

kafka.utils.json.JsonValueTest > testDecodeDouble PASSED

kafka.utils.json.JsonValueTest > testDecodeOption STARTED

kafka.utils.json.JsonValueTest > testDecodeOption PASSED

kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArray STARTED

kafka.utils.json.JsonValueTest > testAsJsonArray PASSED

kafka.utils.json.JsonValueTest > testJsonValueHashCode STARTED

kafka.utils.json.JsonValueTest > testJsonValueHashCode PASSED

kafka.utils.json.JsonValueTest > testDecodeInt STARTED

kafka.utils.json.JsonValueTest > testDecodeInt PASSED

kafka.utils.json.JsonValueTest > testDecodeMap STARTED

kafka.utils.json.JsonValueTest > testDecodeMap PASSED

kafka.utils.json.JsonValueTest > testDecodeSeq STARTED

kafka.utils.json.JsonValueTest > testDecodeSeq PASSED

kafka.utils.json.JsonValueTest > testJsonObjectGet STARTED

kafka.utils.json.JsonValueTest > testJsonObjectGet PASSED

kafka.utils.json.JsonValueTest > testJsonValueEquals STARTED

kafka.utils.json.JsonValueTest > testJsonValueEquals PASSED

kafka.utils.json.JsonValueTest > testJsonArrayIterator STARTED

kafka.utils.json.JsonValueTest > testJsonArrayIterator PASSED

kafka.utils.json.JsonValueTest > testJsonObjectApply STARTED

kafka.utils.json.JsonValueTest > testJsonObjectApply PASSED

kafka.utils.json.JsonValueTest > testDecodeBoolean STARTED

kafka.utils.json.JsonValueTest > testDecodeBoolean PASSED

kafka.utils.CoreUtilsTest > testGenerateUuidAsBase64 STARTED

kafka.utils.CoreUtilsTest > testGenerateUuidAsBase64 PASSED

kafka.utils.CoreUtilsTest > testAbs STARTED

kafka.utils.CoreUtilsTest > testAbs PASSED

kafka.utils.CoreUtilsTest > testReplaceSuffix STARTED

kafka.utils.CoreUtilsTest > testReplaceSuffix PASSED

kafka.utils.CoreUtilsTest > testCircularIterator STARTED

kafka.utils.CoreUtilsTest > testCircularIterator PASSED

kafka.utils.CoreUtilsTest > testReadBytes STARTED

kafka.utils.CoreUtilsTest > testReadBytes PASSED

kafka.utils.CoreUtilsTest > testCsvList STARTED

kafka.utils.CoreUtilsTest > testCsvList PASSED

kafka.utils.CoreUtilsTest > testReadInt STARTED

kafka.utils.CoreUtilsTest > testReadInt PASSED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate STARTED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate PASSED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID STARTED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID PASSED

kafka.utils.CoreUtilsTest > testCsvMap STARTED

kafka.utils.CoreUtilsTest > testCsvMap PASSED

kafka.utils.CoreUtilsTest > testInLock STARTED

kafka.utils.CoreUtilsTest > testInLock PASSED

kafka.utils.CoreUtilsTest > testTryAll STARTED

kafka.utils.CoreUtilsTest > testTryAll PASSED

kafka.utils.CoreUtilsTest > testSwallow STARTED

kafka.utils.CoreUtilsTest > testSwallow PASSED

kafka.utils.ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart STARTED

kafka.utils.ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart PASSED

kafka.utils.TopicFilterTest > testWhitelists STARTED

kafka.utils.TopicFilterTest > testWhitelists PASSED

kafka.utils.JsonTest > testParseToWithInvalidJson STARTED

kafka.utils.JsonTest > testParseToWithInvalidJson PASSED

kafka.utils.JsonTest > testParseTo STARTED

kafka.utils.JsonTest > testParseTo PASSED

kafka.utils.JsonTest > testJsonParse STARTED

kafka.utils.JsonTest > testJsonParse PASSED

kafka.utils.JsonTest > testLegacyEncodeAsString STARTED

kafka.utils.JsonTest > testLegacyEncodeAsString PASSED

kafka.utils.JsonTest > 

[DISCUSS] KIP-221: Enhance KStream with Connecting Topic Creation and Repartition Hint

2019-01-20 Thread Boyang Chen
Hey all,

I would like to start a new discussion thread for refined KIP-221: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-221%3A+Enhance+KStream+with+Connecting+Topic+Creation+and+Repartition+Hint

The major goal for this KIP has been simplified: give the KStream #to and
#groupBy APIs the ability to rescale the repartition logic independently of
upstream partition counts.

Let me know your thoughts, and credit to Jeyhun who is the original KIP owner!

Best,
Boyang
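For context, a minimal sketch of the pre-KIP workaround, using only APIs that exist in Kafka 2.1 (AdminClient#createTopics and KStream#through); topic names and partition counts are illustrative:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class RepartitionWorkaround {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Pre-create the repartition topic with the desired partition count,
        // independent of the source topic's partition count.
        try (AdminClient admin = AdminClient.create(props)) {
            admin.createTopics(Collections.singleton(
                    new NewTopic("clicks-by-region-repartition", 32, (short) 3))).all().get();
        }
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> clicks = builder.stream("clicks");
        clicks.selectKey((k, v) -> v)                  // new key forces a repartition
              .through("clicks-by-region-repartition") // route through the pre-created topic
              .groupByKey()
              .count();
    }
}

The KIP would fold the partition-count hint into #to/#groupBy so the manual topic creation above becomes unnecessary.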


Re: [DISCUSS] KIP-300: Add Windowed KTable API in StreamsBuilder

2019-01-20 Thread Boyang Chen
Hey Matthias,

I have addressed the comments in the KIP. Feel free to take another look.

Also, you are right, those are implementation details that we could discuss in
the diff.

Boyang


From: Matthias J. Sax 
Sent: Saturday, January 19, 2019 3:16 AM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-300: Add Windowed KTable API in StreamsBuilder

Thank Boyang!

>> I think it should be good to just extend ConsumedInternal and
>> MaterializedInternal with window size, and keep the external API clean.
>> Just so you know, it would be even messier for the internal implementation
>> if we don't carry the window size around within the existing data structures.

I cannot follow here. But because this is internal stuff anyway, I would
prefer to discuss this on the PR instead of the mailing list.


-Matthias

On 1/18/19 10:58 AM, Boyang Chen wrote:
> Hey Matthias,
>
> thanks a lot for the comments!
>
> It seems that `windowSize` is a mandatory argument for windowed-tables,
> thus all overload should have the first two parameters being `String
> topic` and `Duration windowSize`.
> Yep, that sounds good to me.
>
> For session-tables, there should be no `windowSize` parameter because
> each session can have a different size and as a matter of fact, both the
> window start and window end timestamp are contained in the key anyway
> for this reason. (This is different to time windows as the KIP mentions.)
> Good suggestion, I think we should be able to skip the window size for the
> session store.
>
> Thus, I don't think that there is any need to extend `Consumed` or
> `Materialized` -- in contrast, extending both as suggested would result
> in bad API, because those new methods would be available for
> key-value-tables, too.
> I think it should be good to just extend ConsumedInternal and
> MaterializedInternal with window size, and keep the external API clean.
> Just so you know, it would be even messier for the internal implementation
> if we don't carry the window size around within the existing data structures.
>
> About generic types: why is `windowedTable()` using `Consumed<K>`
> while `sessionTable` is using `Consumed<Windowed<K>>`? The KIP
> mentions that we can wrap provided key-serdes automatically with
> corresponding window serdes. Thus, it seems the correct type should be `K`?
> Yes that's a typo, and I already fixed it.
>
> I will let you know when the KIP updates are done.
>
> Best,
> Boyang
> 
> From: Matthias J. Sax 
> Sent: Thursday, January 17, 2019 7:52 AM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-300: Add Windowed KTable API in StreamsBuilder
>
> Couple of follow up comment on the KIP:
>
> It seems that `windowSize` is a mandatory argument for windowed-tables,
> thus all overload should have the first two parameters being `String
> topic` and `Duration windowSize`.
>
> For session-tables, there should be no `windowSize` parameter because
> each session can have a different size and as a matter of fact, both the
> window start and window end timestamp are contained in the key anyway
> for this reason. (This is different to time windows as the KIP mentions.)
>
> Thus, I don't think that there is any need to extend `Consumed` or
> `Materialized` -- in contrast, extending both as suggested would result
> in bad API, because those new methods would be available for
> key-value-tables, too.
>
> About generic types: why is `windowedTable()` using `Consumed<K>`
> while `sessionTable` is using `Consumed<Windowed<K>>`? The KIP
> mentions that we can wrap provided key-serdes automatically with
> corresponding window serdes. Thus, it seems the correct type should be `K`?
>
>
> -Matthias
>
>
> On 1/12/19 8:35 PM, Boyang Chen wrote:
>> Hey Matthias,
>>
>> thanks for taking a look! It would be great to see this pushed in 2.2. 
>> Depending on the tight timeline, I hope to at least get the KIP approved so 
>> that we don't see back and forth again as the KTable API has been constantly 
>> changing. I couldn't guarantee the implementation timeline until we agree on 
>> the updated high level APIs first. Does that make sense?
>>
>> Best,
>> Boyang
>> 
>> From: Matthias J. Sax 
>> Sent: Sunday, January 13, 2019 10:53 AM
>> To: dev@kafka.apache.org
>> Subject: Re: [DISCUSS] KIP-300: Add Windowed KTable API in StreamsBuilder
>>
>> Do you want to get this into 2.2 release? KIP deadline is 1/24, so quite
>> soon.
>>
>> Overall, the KIP is very useful. I can review again in more details if
>> you aim for 2.2 -- did you address all previous comment about the KIP
>> already?
>>
>>
>> -Matthias
>>
>>
>>
>> On 1/8/19 2:50 PM, Boyang Chen wrote:
>>> Hey folks,
>>>
>>> I would like to start a discussion thread on adding new time/session 
>>> windowed KTable APIs for KStream:
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-300%3A+Add+Windowed+KTable+API+in+StreamsBuilder
>>>
>>> We started working on this thread around 7 months ago, and it is
>>> successfully 
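For reference, what reading a time-windowed topic takes today, via the existing WindowedSerdes helper; a KIP-300-style builder.windowedTable(...) would do this wrapping internally (hypothetical API per the draft). The topic name is illustrative:

import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Windowed;
import org.apache.kafka.streams.kstream.WindowedSerdes;

public class WindowedTopicRead {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // Today: wrap the key serde by hand to read a time-windowed topic.
        Serde<Windowed<String>> windowedKey =
                WindowedSerdes.timeWindowedSerdeFrom(String.class);
        KStream<Windowed<String>, Long> counts = builder.stream(
                "clicks-per-region-counts",
                Consumed.with(windowedKey, Serdes.Long()));
        // A windowedTable("clicks-per-region-counts", Duration.ofMinutes(5), ...)
        // overload would hide this wrapping and return a windowed KTable instead.
    }
}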

[jira] [Created] (KAFKA-7847) KIP-421: Support resolving externalized secrets in AbstractConfig

2019-01-20 Thread TEJAL ADSUL (JIRA)
TEJAL ADSUL created KAFKA-7847:
--

 Summary: KIP-421: Support resolving externalized secrets in 
AbstractConfig
 Key: KAFKA-7847
 URL: https://issues.apache.org/jira/browse/KAFKA-7847
 Project: Kafka
  Issue Type: Improvement
  Components: config
Reporter: TEJAL ADSUL


This proposal intends to enhance the AbstractConfig base class to support 
replacing variables in configurations just prior to parsing and validation. 
This simple change will make it very easy for client applications, Kafka 
Connect, and Kafka Streams to use shared code to easily incorporate 
externalized secrets and other variable replacements within their 
configurations. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7846) KIP-421: Support resolving externalized secrets in AbstractConfig

2019-01-20 Thread TEJAL ADSUL (JIRA)
TEJAL ADSUL created KAFKA-7846:
--

 Summary: KIP-421: Support resolving externalized secrets in 
AbstractConfig
 Key: KAFKA-7846
 URL: https://issues.apache.org/jira/browse/KAFKA-7846
 Project: Kafka
  Issue Type: Improvement
  Components: config
Reporter: TEJAL ADSUL


This proposal intends to enhance the AbstractConfig base class to support 
replacing variables in configurations just prior to parsing and validation. 
This simple change will make it very easy for client applications, Kafka 
Connect, and Kafka Streams to use shared code to easily incorporate 
externalized secrets and other variable replacements within their 
configurations. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[DISCUSS] KIP-421: Support resolving externalized secrets in AbstractConfig

2019-01-20 Thread tejal
Hi all,

I have published KIP-421: Support resolving externalized secrets in 
AbstractConfig
on the wiki here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-421%3A+Support+resolving+externalized+secrets+in+AbstractConfig

This KIP intends to enhance the AbstractConfig base class to support replacing 
variables in configurations just prior to parsing and validation. This simple 
change will make it very easy for client applications, Kafka Connect, and Kafka 
Streams to use shared code to easily incorporate externalized secrets and other 
variable replacements within their configurations. 

Please let me know your insightful feedback. 

Regards,
Tejal
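For background, the placeholder mechanics already exist via KIP-297's ConfigProvider; below is a minimal sketch using the built-in FileConfigProvider (the file path and keys are illustrative, and the file is assumed to exist). KIP-421 would have AbstractConfig resolve placeholders such as ${file:/etc/kafka/secrets.properties:db.password} itself, so all clients get this for free:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.config.ConfigData;
import org.apache.kafka.common.config.provider.FileConfigProvider;

public class SecretResolutionSketch {
    public static void main(String[] args) {
        FileConfigProvider provider = new FileConfigProvider();
        provider.configure(new HashMap<>());
        // Reads every key in the given properties file.
        ConfigData data = provider.get("/etc/kafka/secrets.properties");
        Map<String, String> secrets = data.data();
        System.out.println(secrets.keySet());   // values stay out of the log
    }
}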


Re: Permissions to create KIP

2019-01-20 Thread Ismael Juma
Hi Tejal,

Thanks for your interest. Access has been granted.

Ismael

On Sun, Jan 20, 2019 at 3:08 PM Tejal Adsul  wrote:

> Hi,
>
> I work at Confluent. Please could you grant me permission to create a KIP
> for apache kafka, I wanted to propose a change to AbstractConfig in Kafka.
> Following are my details
> Full Name: TEJAL ADSUL
> email: te...@confluent.io
> ID: tejal
>
> Thanks,
> Tejal
>


Build failed in Jenkins: kafka-trunk-jdk8 #3332

2019-01-20 Thread Apache Jenkins Server
See 


Changes:

[vahid.hashemian] MINOR: Remove unused imports, exceptions, and values (#6117)

--
[...truncated 4.21 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: [DISCUSS] KIP-394: Require member.id for initial join group request

2019-01-20 Thread Boyang Chen
Hey Ismael,

thanks for the suggestion here! I think the reason is that an id generated on
the client is purely random (or at least I couldn't think of how to make sure
it is "known to be unique"), so id collisions would not be handled gracefully.

However, letting the client generate a unique id is a very good idea, which we
proposed in KIP-345 to let the end user supply instance identities. This could
save us one round trip in solving the member identity problem. Since enabling
the new feature requires user action that is not guaranteed to happen, KIP-394
is a patch to alleviate the same issue for existing consumer use cases.

Best,
Boyang


From: Ismael Juma 
Sent: Monday, January 21, 2019 6:07 AM
To: dev
Subject: Re: [DISCUSS] KIP-394: Require member.id for initial join group request

Hi,

I'm late to the discussion, so apologies. One question, did we consider
having the client generate a member id in the first join group? This could
be random or known to be unique and would avoid a second join group request
in the common case. If we considered and rejected this option, it would be
good to include why in the "Rejected Alternatives" section.

Ismael

On Mon, Nov 26, 2018, 5:48 PM Boyang Chen  wrote:

> Hey friends,
>
>
> I would like to start a discussion thread for KIP-394 which is trying to
> mitigate broker cache bursting issue due to anonymous join group requests:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-394%3A+Require+member.id+for+initial+join+group+request
>
>
> Thanks!
>
> Boyang
>
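To make the handshake concrete, a toy model of the proposed two-round-trip join (hypothetical names, not the actual GroupCoordinator code):

import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

public class JoinGroupSketch {
    static final Set<String> members = new HashSet<>();

    static String handleJoin(String memberId) {
        if (memberId.isEmpty()) {
            // MEMBER_ID_REQUIRED path: hand out an id but allocate no group
            // state, so anonymous joins that never retry cannot bloat the cache.
            return "RETRY_WITH:" + UUID.randomUUID();
        }
        members.add(memberId);        // second round trip: member admitted
        return "WELCOME:" + memberId;
    }

    public static void main(String[] args) {
        String first = handleJoin("");                            // round trip 1
        String assigned = first.substring("RETRY_WITH:".length());
        System.out.println(handleJoin(assigned));                 // round trip 2
    }
}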


Build failed in Jenkins: kafka-trunk-jdk11 #229

2019-01-20 Thread Apache Jenkins Server
See 


Changes:

[vahid.hashemian] MINOR: Remove unused imports, exceptions, and values (#6117)

--
[...truncated 2.27 MB...]
org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > testReplaceVariableWithTTLFirstCancelThenScheduleRestart STARTED

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > testReplaceVariableWithTTLFirstCancelThenScheduleRestart PASSED

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > testTransformNullConfiguration STARTED

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > testTransformNullConfiguration PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitSuccessFollowedByFailure STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitSuccessFollowedByFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitConsumerFailure STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitConsumerFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitTimeout STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitTimeout PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testAssignmentPauseResume STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testAssignmentPauseResume PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testRewind STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testRewind PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testRewindOnRebalanceDuringPoll STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testRewindOnRebalanceDuringPoll PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitFailure STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testPollsInBackground STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testPollsInBackground PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommit STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommit PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testPutConnectorConfig STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testCreateConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testCreateConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testCreateConnectorFailedBasicValidation STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testCreateConnectorFailedBasicValidation PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testCreateConnectorFailedCustomValidation STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testCreateConnectorFailedCustomValidation PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testCreateConnectorAlreadyExists STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testDestroyConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testDestroyConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testRestartConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testRestartConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testRestartTask STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testRestartTask PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testAccessors STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testAccessors PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testJoinAssignment STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testJoinAssignment PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testRebalance STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testRebalance PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testRebalanceFailedConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testRebalanceFailedConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > testHaltCleansUpWorker STARTED


Permissions to create KIP

2019-01-20 Thread Tejal Adsul
Hi,

I work at Confluent. Please could you grant me permission to create a KIP for 
apache kafka, I wanted to propose a change to AbstractConfig in Kafka.
Following are my details
Full Name: TEJAL ADSUL
email: te...@confluent.io
ID: tejal

Thanks,
Tejal


Re: [DISCUSS] KIP-394: Require member.id for initial join group request

2019-01-20 Thread Ismael Juma
Hi,

I'm late to the discussion, so apologies. One question, did we consider
having the client generate a member id in the first join group? This could
be random or known to be unique and would avoid a second join group request
in the common case. If we considered and rejected this option, it would be
good to include why in the "Rejected Alternatives" section.

Ismael

On Mon, Nov 26, 2018, 5:48 PM Boyang Chen  wrote:

> Hey friends,
>
>
> I would like to start a discussion thread for KIP-394 which is trying to
> mitigate broker cache bursting issue due to anonymous join group requests:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-394%3A+Require+member.id+for+initial+join+group+request
>
>
> Thanks!
>
> Boyang
>


Re: [DISCUSS] KIP-419 Safely notify Kafka Connect SourceTask is stopped

2019-01-20 Thread Christopher Bogan
On 2019/01/18 18:02:24, Andrew Schofield  wrote:
> Hi,
>
> I’ve created a new KIP to enhance the SourceTask interface in Kafka Connect.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-419:+Safely+notify+Kafka+Connect+SourceTask+is+stopped
>
> Comments welcome.
>
> Andrew Schofield
> IBM Event Streams


Re: [DISCUSS] KIP-390: Add producer option to adjust compression level

2019-01-20 Thread Ismael Juma
Hi Dongjin,

When the compression type is "producer", then the broker doesn't recompress
though. Thinking about it some more, there are some uncommon cases where
recompression does happen (the old (and hopefully hardly used by now)
message format == 0 and some edge cases), so it is a good point you raised.

It's a bit unfortunate to add so many topic configs for cases that probably
don't matter. That is, if you are using "producer" compression, you
probably don't need to configure these settings and can live with the
defaults. Perhaps we should only support the topic config for the cases
where you are actually recompressing in the broker.

What do you think? I'd be interested in other people's thoughts too.

Ismael

On Sun, Jan 20, 2019 at 2:14 AM Dongjin Lee  wrote:

> Hi Ismael,
>
> It seems like it needs more explanation. Here is the detailed reasoning.
>
> You know, topic and broker's 'compression.type' allows 'uncompressed',
> 'producer' with standard codecs (i.e., gzip, snappy, lz4, zstd.) And this
> configuration is used by the broker in the re-compressing process after
> offset assignment. After this feature, the new configs, 'compression.level'
> and 'compression.buffer.size', also will be used in this process.
>
> The problem arises when a given topic's compression type (whether it was
> inherited from the broker's configuration or explicitly set) is 'producer.'
> With this setting, the compression codec to be used is decided by the
> producer client. Since there is no way to restore the compression level and
> buffer size from the message, we can take the following strategies:
>
> 1. Just use the given 'compression.level' and 'compression.buffer.size'
> settings.
>
> It will cause many errors. Let's imagine the case where the topic's
> configuration is { compression.type=producer, compression.level=10,
> compression.buffer.size=8192 }. In this case, all producers with gzip or
> lz4 compressed messages will result in an error. (gzip doesn't allow
> compression level 10, and lz4 doesn't allow a buffer size of 8192.)
>
> 2. Extend the message format to include compression configurations.
>
> With this strategy, we need to change the message format - it's too big a
> change.
>
> 3. If topic's compression.type is 'producer', use the default configuration
> for the given codec.
>
> With this strategy, allowing fine-grained compression configuration is
> meaningless.
>
> For the above reasons, I think the only alternative is providing options
> that can be used when the topic's 'compression.type' is 'producer.' In
> other words, adding compression.[gzip, lz4, zstd].level and
> compression.[gzip,snappy,lz4].buffer.size options - and that is what I did in
> the last modification.
>
> (wait, the reasoning above should be included in the KIP in the rejected
> alternatives section, shouldn't it?)
>
> Thanks,
> Dongjin
>
> On Sun, Jan 20, 2019 at 2:33 AM Ismael Juma  wrote:
>
> > Hi Dongjin,
> >
> > For topic level, you can only have a single compression type so the way
> it
> > was before was fine, right? The point you raise is how to set broker
> > defaults that vary depending on the compression type, correct?
> >
> > Ismael
> >
> > On Mon, Jan 14, 2019 at 10:18 AM Dongjin Lee  wrote:
> >
> > > I just realized that there was a hole in the KIP, so I fixed it.
> > > The draft implementation will be updated soon.
> > >
> > > In short, the proposed change did not cover the case where the topic's or
> > > broker's 'compression.type' is 'producer'; in this case, the broker has to
> > > handle all of the supported codecs. So I added additional options
> > > (compression.[gzip,snappy,lz4,zstd].level,
> > > compression.[gzip,snappy,lz4,zstd].buffer.size) with handling routines.
> > >
> > > Please have a look when you are free.
> > >
> > > Thanks,
> > > Dongjin
> > >
> > > On Mon, Jan 7, 2019 at 6:23 AM Dongjin Lee  wrote:
> > >
> > > > Thanks for pointing out Ismael. It's now updated.
> > > >
> > > > Best,
> > > > Dongjin
> > > >
> > > > On Mon, Jan 7, 2019 at 4:36 AM Ismael Juma 
> wrote:
> > > >
> > > >> Thanks Dongjin. One minor suggestion: we should mention that the
> > broker
> > > >> side configs are also topic configs (i.e. can be set for a given
> > topic).
> > > >>
> > > >> Ismael
> > > >>
> > > >> On Sun, Jan 6, 2019, 10:37 AM Dongjin Lee  wrote:
> > > >>
> > > >> > Happy new year.
> > > >> >
> > > >> > I just updated the title and contents of KIP and Jira issue, with
> > > >> updated
> > > >> > draft implementation. Now both of compression level and buffer
> size
> > > >> options
> > > >> > are available to producer and broker configuration. You can check
> > the
> > > >> > updated KIP from modified URL:
> > > >> >
> > > >> >
> > > >> >
> > > >>
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-390%3A+Allow+fine-grained+configuration+for+compression
> > > >> >
> > > >> > Please have a look when you are free.
> > > >> >
> > > >> > Thanks,
> > > >> > Dongjin
> > > >> >
> > > >> > On Mon, Dec 3, 
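To make the per-codec configs concrete, a sketch of setting one of the draft names on a topic with the 2.x AdminClient; note that compression.gzip.level is the KIP draft's proposed name, not a config that exists in Kafka today, and the topic name is illustrative:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class PerCodecTopicConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic =
                    new ConfigResource(ConfigResource.Type.TOPIC, "logs");
            // With compression.type=producer the broker cannot know which codec
            // a batch uses until it arrives, hence one level per codec.
            Config cfg = new Config(Collections.singleton(
                    new ConfigEntry("compression.gzip.level", "9")));
            admin.alterConfigs(Collections.singletonMap(topic, cfg)).all().get();
        }
    }
}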

Re: [DISCUSS] KIP-400 - Improve exit status in case of errors in ConsoleProducer

2019-01-20 Thread Dongjin Lee
Thank you for the KIP. In my opinion, this feature must work well with
shell scripts, improving interoperability. Isn't that right?

Thanks,
Dongjin

On Fri, Jan 18, 2019, 6:30 AM kamal kaur  wrote:

> Hi everyone,
>
> This is ready for discussion.
>
> *Jira* - https://issues.apache.org/jira/browse/KAFKA-6812
>
> *KIP -*
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-400:+Improve+exit+status+in+case+of+errors+in+ConsoleProducer
>
>
> Please let me know if there is anything else I am missing to start this
> discussion.
>
> thanks
>
> Kamal
>
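The shell-interoperability point is that a wrapper can only branch on failure if the tool returns a non-zero exit status. A minimal sketch that drives the console producer and checks the code (paths, topic, and input file are illustrative):

import java.io.File;

public class ExitStatusCheck {
    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder(
                "bin/kafka-console-producer.sh",
                "--broker-list", "localhost:9092",
                "--topic", "test")
                .inheritIO()                              // show the tool's own output
                .redirectInput(new File("payload.txt"))   // feed records from a file
                .start();
        int status = p.waitFor();
        // Before this KIP the tool could exit 0 even after send errors,
        // so a branch like this never fired.
        if (status != 0) {
            System.err.println("console producer failed: exit " + status);
        }
    }
}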


Re: [DISCUSS] KIP-390: Add producer option to adjust compression level

2019-01-20 Thread Dongjin Lee
Hi Ismael,

It seems like it needs more explanation. Here is the detailed reasoning.

You know, topic and broker's 'compression.type' allows 'uncompressed',
'producer' with standard codecs (i.e., gzip, snappy, lz4, zstd.) And this
configuration is used by the broker in the re-compressing process after
offset assignment. After this feature, the new configs, 'compression.level'
and 'compression.buffer.size', also will be used in this process.

The problem arises when a given topic's compression type (whether it was
inherited from the broker's configuration or explicitly set) is 'producer.'
With this setting, the compression codec to be used is decided by the
producer client. Since there is no way to restore the compression level and
buffer size from the message, we can take the following strategies:

1. Just use the given 'compression.level' and 'compression.buffer.size'
settings.

It will cause many errors. Let's imagine the case where the topic's
configuration is { compression.type=producer, compression.level=10,
compression.buffer.size=8192 }. In this case, all producers with gzip or
lz4 compressed messages will result in an error. (gzip doesn't allow
compression level 10, and lz4 doesn't allow a buffer size of 8192.)

2. Extend the message format to include compression configurations.

With this strategy, we need to change the message format - it's too big a
change.

3. If topic's compression.type is 'producer', use the default configuration
for the given codec.

With this strategy, allowing fine-grained compression configuration is
meaningless.

For the above reasons, I think the only alternative is providing options
that can be used when the topic's 'compression.type' is 'producer.' In
other words, adding compression.[gzip, lz4, zstd].level and
compression.[gzip,snappy,lz4].buffer.size options - and that is what I did in
the last modification.

(wait, the reasoning above should be included in the KIP in the rejected
alternatives section, shouldn't it?)

Thanks,
Dongjin

On Sun, Jan 20, 2019 at 2:33 AM Ismael Juma  wrote:

> Hi Dongjin,
>
> For topic level, you can only have a single compression type so the way it
> was before was fine, right? The point you raise is how to set broker
> defaults that vary depending on the compression type, correct?
>
> Ismael
>
> On Mon, Jan 14, 2019 at 10:18 AM Dongjin Lee  wrote:
>
> > I just realized that there was a hole in the KIP, so I fixed it.
> > The draft implementation will be updated soon.
> >
> > In short, the proposed change did not cover the case where the topic's or
> > broker's 'compression.type' is 'producer'; in this case, the broker has to
> > handle all of the supported codecs. So I added additional options
> > (compression.[gzip,snappy,lz4,zstd].level,
> > compression.[gzip,snappy,lz4,zstd].buffer.size) with handling routines.
> >
> > Please have a look when you are free.
> >
> > Thanks,
> > Dongjin
> >
> > On Mon, Jan 7, 2019 at 6:23 AM Dongjin Lee  wrote:
> >
> > > Thanks for pointing out Ismael. It's now updated.
> > >
> > > Best,
> > > Dongjin
> > >
> > > On Mon, Jan 7, 2019 at 4:36 AM Ismael Juma  wrote:
> > >
> > >> Thanks Dongjin. One minor suggestion: we should mention that the
> broker
> > >> side configs are also topic configs (i.e. can be set for a given
> topic).
> > >>
> > >> Ismael
> > >>
> > >> On Sun, Jan 6, 2019, 10:37 AM Dongjin Lee  wrote:
> > >> > Happy new year.
> > >> >
> > >> > I just updated the title and contents of KIP and Jira issue, with
> > >> updated
> > >> > draft implementation. Now both of compression level and buffer size
> > >> options
> > >> > are available to producer and broker configuration. You can check
> the
> > >> > updated KIP from modified URL:
> > >> >
> > >> >
> > >> >
> > >>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-390%3A+Allow+fine-grained+configuration+for+compression
> > >> >
> > >> > Please have a look when you are free.
> > >> >
> > >> > Thanks,
> > >> > Dongjin
> > >> >
> > >> > On Mon, Dec 3, 2018 at 12:50 AM Ismael Juma 
> > wrote:
> > >> >
> > >> > > The updated title sounds fine to me.
> > >> > >
> > >> > > Ismael
> > >> > >
> > >> > > On Sun, Dec 2, 2018, 5:25 AM Dongjin Lee  wrote:
> > >> > >
> > >> > > > Hi Ismael,
> > >> > > >
> > >> > > > Got it. Your direction is perfectly reasonable. I am now
> updating
> > >> the
> > >> > KIP
> > >> > > > document and the implementation.
> > >> > > >
> > >> > > > By allowing the buffer/block size to be configurable, it would
> be
> > >> > better
> > >> > > to
> > >> > > > update the title of the KIP like 'Allow fine-grained
> configuration
> > >> for
> > >> > > > compression'. Is that right?
> > >> > > >
> > >> > > > @Other committers:
> > >> > > >
> > >> > > > Is there any other opinion on allowing the buffer/block size to
> be
> > >> > > > configurable?
> > >> > > >
> > >> > > > Thanks,
> > >> > > > Dongjin
> > >> > > >
> > >> > > > On Thu, Nov 29, 2018 at 1:45 AM Ismael Juma 
> > >> wrote:
> > >> > > >
> > >> > > > > Hi Dongjin,
> > >> > > > 
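On the producer side, the proposal would read like the sketch below; compression.level and compression.buffer.size are the draft's proposed names and are not recognized by a 2.1 producer (unknown configs only produce a warning):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerCompressionLevel {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("compression.type", "gzip");
        props.put("compression.level", "9");           // proposed by the KIP draft
        props.put("compression.buffer.size", "32768"); // proposed by the KIP draft
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("logs", "key", "value"));
        }
    }
}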

Build failed in Jenkins: kafka-trunk-jdk11 #228

2019-01-20 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-3522: Add internal RecordConverter interface (#6150)

--
[...truncated 2.47 MB...]

kafka.utils.json.JsonValueTest > testDecodeLong PASSED

kafka.utils.json.JsonValueTest > testAsJsonObject STARTED

kafka.utils.json.JsonValueTest > testAsJsonObject PASSED

kafka.utils.json.JsonValueTest > testDecodeDouble STARTED

kafka.utils.json.JsonValueTest > testDecodeDouble PASSED

kafka.utils.json.JsonValueTest > testDecodeOption STARTED

kafka.utils.json.JsonValueTest > testDecodeOption PASSED

kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArray STARTED

kafka.utils.json.JsonValueTest > testAsJsonArray PASSED

kafka.utils.json.JsonValueTest > testJsonValueHashCode STARTED

kafka.utils.json.JsonValueTest > testJsonValueHashCode PASSED

kafka.utils.json.JsonValueTest > testDecodeInt STARTED

kafka.utils.json.JsonValueTest > testDecodeInt PASSED

kafka.utils.json.JsonValueTest > testDecodeMap STARTED

kafka.utils.json.JsonValueTest > testDecodeMap PASSED

kafka.utils.json.JsonValueTest > testDecodeSeq STARTED

kafka.utils.json.JsonValueTest > testDecodeSeq PASSED

kafka.utils.json.JsonValueTest > testJsonObjectGet STARTED

kafka.utils.json.JsonValueTest > testJsonObjectGet PASSED

kafka.utils.json.JsonValueTest > testJsonValueEquals STARTED

kafka.utils.json.JsonValueTest > testJsonValueEquals PASSED

kafka.utils.json.JsonValueTest > testJsonArrayIterator STARTED

kafka.utils.json.JsonValueTest > testJsonArrayIterator PASSED

kafka.utils.json.JsonValueTest > testJsonObjectApply STARTED

kafka.utils.json.JsonValueTest > testJsonObjectApply PASSED

kafka.utils.json.JsonValueTest > testDecodeBoolean STARTED

kafka.utils.json.JsonValueTest > testDecodeBoolean PASSED

kafka.utils.CoreUtilsTest > testGenerateUuidAsBase64 STARTED

kafka.utils.CoreUtilsTest > testGenerateUuidAsBase64 PASSED

kafka.utils.CoreUtilsTest > testAbs STARTED

kafka.utils.CoreUtilsTest > testAbs PASSED

kafka.utils.CoreUtilsTest > testReplaceSuffix STARTED

kafka.utils.CoreUtilsTest > testReplaceSuffix PASSED

kafka.utils.CoreUtilsTest > testCircularIterator STARTED

kafka.utils.CoreUtilsTest > testCircularIterator PASSED

kafka.utils.CoreUtilsTest > testReadBytes STARTED

kafka.utils.CoreUtilsTest > testReadBytes PASSED

kafka.utils.CoreUtilsTest > testCsvList STARTED

kafka.utils.CoreUtilsTest > testCsvList PASSED

kafka.utils.CoreUtilsTest > testReadInt STARTED

kafka.utils.CoreUtilsTest > testReadInt PASSED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate STARTED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate PASSED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID STARTED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID PASSED

kafka.utils.CoreUtilsTest > testCsvMap STARTED

kafka.utils.CoreUtilsTest > testCsvMap PASSED

kafka.utils.CoreUtilsTest > testInLock STARTED

kafka.utils.CoreUtilsTest > testInLock PASSED

kafka.utils.CoreUtilsTest > testTryAll STARTED

kafka.utils.CoreUtilsTest > testTryAll PASSED

kafka.utils.CoreUtilsTest > testSwallow STARTED

kafka.utils.CoreUtilsTest > testSwallow PASSED

kafka.utils.ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart STARTED

kafka.utils.ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart PASSED

kafka.utils.TopicFilterTest > testWhitelists STARTED

kafka.utils.TopicFilterTest > testWhitelists PASSED

kafka.utils.JsonTest > testParseToWithInvalidJson STARTED

kafka.utils.JsonTest > testParseToWithInvalidJson PASSED

kafka.utils.JsonTest > testParseTo STARTED

kafka.utils.JsonTest > testParseTo PASSED

kafka.utils.JsonTest > testJsonParse STARTED

kafka.utils.JsonTest > testJsonParse PASSED

kafka.utils.JsonTest > testLegacyEncodeAsString STARTED

kafka.utils.JsonTest > testLegacyEncodeAsString PASSED

kafka.utils.JsonTest > testEncodeAsBytes STARTED

kafka.utils.JsonTest > testEncodeAsBytes PASSED

kafka.utils.JsonTest > testEncodeAsString STARTED

kafka.utils.JsonTest > testEncodeAsString PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg PASSED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsOverwriteExisting STARTED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsOverwriteExisting PASSED

kafka.utils.CommandLineUtilsTest > testParseSingleArg STARTED

kafka.utils.CommandLineUtilsTest > testParseSingleArg PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #3331

2019-01-20 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-3522: Add internal RecordConverter interface (#6150)

--
[...truncated 2.27 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED


[jira] [Resolved] (KAFKA-6964) Add ability to print all internal topic names

2019-01-20 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-6964.
--
Resolution: Won't Fix

> Add ability to print all internal topic names
> -
>
> Key: KAFKA-6964
> URL: https://issues.apache.org/jira/browse/KAFKA-6964
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Bill Bejeck
>Assignee: Jagadesh Adireddi
>Priority: Major
>  Labels: needs-kip
>
> For security access reasons some streams users need to build all internal 
> topics before deploying their streams application.  While it's possible to 
> get all internal topic names from the {{Topology#describe()}} method, it 
> would be nice to have a separate method that prints out only the internal 
> topic names to ease the process.
> I think this change will require a KIP, so I've added the appropriate label.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
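
The workaround the ticket refers to, sketched below: repartition topics appear directly in Topology#describe(), while changelog topic names follow from the state store names as <application.id>-<store>-changelog. Topic names are illustrative:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;

public class InternalTopicNames {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("clicks")
               .selectKey((k, v) -> v)   // forces a repartition topic
               .groupByKey()
               .count();                 // backed by a changelog topic
        Topology topology = builder.build();
        // Today the internal names must be fished out of the full description;
        // the requested feature would expose them directly.
        System.out.println(topology.describe());
    }
}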