Re: [DISCUSS] KIP-580: Exponential Backoff for Kafka Clients

2020-03-13 Thread Boyang Chen
Thanks for the KIP, Sanjana. I think the motivation is good, but it lacks
quantitative analysis. For instance:

1. How many retries are we saving by applying exponential backoff vs
static backoff? There should be a mathematical relation between the
static retry ms, the initial exponential retry ms, and the max exponential
retry ms over a given time interval (see the sketch after this list).
2. How does this affect client timeouts? With exponential backoff, a
parent-level caller becomes more likely to time out; for instance, Streams
retries initializing producer transactions within a given 5-minute
interval. With exponential backoff this mechanism could time out more
frequently, which we should be careful about.
3. With regard to #2, we should have a more detailed checklist of all the
existing static retry scenarios, and adjust the initial exponential retry
ms to make sure we won't easily time out at a higher level due to too few
attempts.
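
For concreteness, here is a minimal sketch (in Java; all names and values
are illustrative, not the KIP's actual defaults) of the retry-count
relation in question:

// Sketch: how many attempts fit in a fixed window under static backoff
// vs. capped exponential backoff.
public class BackoffMath {

    // Backoff before the n-th retry: initialMs * 2^n, capped at maxMs.
    static long exponentialBackoff(long initialMs, long maxMs, int attempt) {
        return (long) Math.min(initialMs * Math.pow(2, attempt), maxMs);
    }

    // Number of retries that fit into a window of windowMs.
    static int retriesWithin(long windowMs, long initialMs, long maxMs) {
        long elapsed = 0;
        int attempts = 0;
        while (elapsed + exponentialBackoff(initialMs, maxMs, attempts) <= windowMs) {
            elapsed += exponentialBackoff(initialMs, maxMs, attempts);
            attempts++;
        }
        return attempts;
    }

    public static void main(String[] args) {
        long window = 5 * 60 * 1000L; // e.g. the 5-minute interval from point 2
        System.out.println(window / 100);                     // static 100 ms backoff: ~3000 attempts
        System.out.println(retriesWithin(window, 100, 1000)); // 100 ms doubling, capped at 1 s: ~300 attempts
    }
}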

Boyang

On Fri, Mar 13, 2020 at 4:38 PM Sanjana Kaundinya 
wrote:

> Hi Everyone,
>
> I’ve written a KIP about introducing exponential backoff for Kafka
> clients. Would appreciate any feedback on this.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-580%3A+Exponential+Backoff+for+Kafka+Clients
>
> Thanks,
> Sanjana
>


[DISCUSS] KIP-580: Exponential Backoff for Kafka Clients

2020-03-13 Thread Sanjana Kaundinya
Hi Everyone,

I’ve written a KIP about introducing exponential backoff for Kafka clients. 
Would appreciate any feedback on this.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-580%3A+Exponential+Backoff+for+Kafka+Clients

Thanks,
Sanjana


[jira] [Created] (KAFKA-9720) Update gradle to 6.0+

2020-03-13 Thread David Arthur (Jira)
David Arthur created KAFKA-9720:
---

 Summary: Update gradle to 6.0+ 
 Key: KAFKA-9720
 URL: https://issues.apache.org/jira/browse/KAFKA-9720
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: David Arthur
 Fix For: 2.6.0


Gradle 6.x has been out for a few months and has a few bug fixes and 
performance improvements. 

* https://docs.gradle.org/6.0/release-notes.html
* https://docs.gradle.org/6.1/release-notes.html
* https://docs.gradle.org/6.2/release-notes.html

We should consider updating the build to the latest version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1237

2020-03-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9714; Eliminate unused reference to IBP in


--
[...truncated 2.91 MB...]
[elided: repeated STARTED/PASSED entries for org.apache.kafka.streams.TopologyTestDriverTest (Eos enabled = false); all listed tests passed, log cut off mid-entry]

Re: [DISCUSS] KIP-577: Allow HTTP Response Headers Configured for Kafka Connect

2020-03-13 Thread Jeff Huang
Hi Aneel,

It is a great idea. I will update the KIP based on your suggestion. 

Jeff.

On 2020/03/13 18:58:23, Aneel Nazareth  wrote: 
> If we're including and excluding paths, it seems like it might make
> sense to allow for the configuration of multiple filters.
> 
> We could do this with a pattern similar to how Kafka listeners are
> configured. Something like:
> 
> response.http.header.filters = myfilter1,myfilter2
> response.http.header.myfilter1.included.paths = ...
> response.http.header.myfilter1.included.mime.types = ...
> response.http.header.myfilter1.config = set X-Frame-Options: DENY,"add
> Cache-Control: no-cache, no-store, must-revalidate", ...
> 
> response.http.header.myfilter2.included.paths = ...
> response.http.header.myfilter2.included.mime.types = ...
> response.http.header.myfilter2.config = setDate Expires: 3154000 ...
> 
> But before we go down that road: are people going to want to be able
> to set multiple different header filters? Or is one header filter for
> all of the responses good enough?
> 
> On Fri, Mar 13, 2020 at 10:56 AM Jeff Huang  wrote:
> >
> > Hi Aneel,
> >
> > That is a really great point. I will update the KIP. We need to add the 
> > following properties, combined with the header configs:
> > includedPaths - CSV of path specs to include
> > excludedPaths - CSV of path specs to exclude
> > includedMimeTypes - CSV of mime types to include
> > excludedMimeTypes - CSV of mime types to exclude
> > includedHttpMethods - CSV of http methods to include
> > excludedHttpMethods - CSV of http methods to exclude
> >
> > Jeff.
> >
> > On 2020/03/13 14:28:11, Aneel Nazareth  wrote:
> > > Hi Jeff,
> > >
> > > Thanks for the KIP.
> > >
> > > Will users always want to set identical headers on all responses? Does
> > > it make sense to also allow configuration of the HeaderFilter init
> > > parameters like "includedPaths", "excludedHttpMethods", etc.? Does it
> > > make sense to allow multiple configurations (so that eg. different
> > > paths have different headers?)
> > >
> > > Cheers,
> > > Aneel
> > >
> > > On Thu, Mar 12, 2020 at 7:05 PM Zhiguo Huang  
> > > wrote:
> > > >
> > > >
> > >
> 


Build failed in Jenkins: kafka-trunk-jdk11 #1236

2020-03-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8768: DeleteRecords request/response automated protocol (#7957)

[github] KAFKA-9718; Don't log passwords for AlterConfigs in request logs 
(#8294)


--
[...truncated 2.91 MB...]
[elided: repeated STARTED/PASSED entries for org.apache.kafka.streams.internals.WindowStoreFacadeTest and org.apache.kafka.streams.TestTopicsTest (all listed tests passed), followed by Gradle task output for :streams:upgrade-system-tests-0100]

[jira] [Resolved] (KAFKA-9715) TransactionStateManager: Eliminate unused reference to interBrokerProtocolVersion

2020-03-13 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9715.

Resolution: Fixed

> TransactionStateManager: Eliminate unused reference to 
> interBrokerProtocolVersion
> -
>
> Key: KAFKA-9715
> URL: https://issues.apache.org/jira/browse/KAFKA-9715
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Kowshik Prakasam
>Priority: Minor
>
> In TransactionStateManager, the attribute interBrokerProtocolVersion is 
> unused within the class. It can therefore be eliminated from the code. Please 
> refer to this LOC:
> [https://github.com/apache/kafka/blob/07db26c20fcbccbf758591607864f7fd4bd8975f/core/src/main/scala/kafka/coordinator/transaction/TransactionStateManager.scala#L78]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-577: Allow HTTP Response Headers Configured for Kafka Connect

2020-03-13 Thread Aneel Nazareth
If we're including and excluding paths, it seems like it might make
sense to allow for the configuration of multiple filters.

We could do this with a pattern similar to how Kafka listeners are
configured. Something like:

response.http.header.filters = myfilter1,myfilter2
response.http.header.myfilter1.included.paths = ...
response.http.header.myfilter1.included.mime.types = ...
response.http.header.myfilter1.config = set X-Frame-Options: DENY,"add
Cache-Control: no-cache, no-store, must-revalidate", ...

response.http.header.myfilter2.included.paths = ...
response.http.header.myfilter2.included.mime.types = ...
response.http.header.myfilter2.config = setDate Expires: 3154000 ...

But before we go down that road: are people going to want to be able
to set multiple different header filters? Or is one header filter for
all of the responses good enough?
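
For illustration, here is a rough sketch (hypothetical parsing code; only
the property names come from the proposal above) of how such per-filter
configs could be resolved into one map per filter:

import java.util.*;

public class HeaderFilterConfigs {
    // Groups "response.http.header.<name>.*" properties by filter name.
    public static Map<String, Map<String, String>> parse(Properties props) {
        Map<String, Map<String, String>> filters = new LinkedHashMap<>();
        for (String name : props.getProperty("response.http.header.filters", "").split(",")) {
            name = name.trim();
            if (name.isEmpty()) continue;
            String prefix = "response.http.header." + name + ".";
            Map<String, String> filterProps = new HashMap<>();
            for (String key : props.stringPropertyNames()) {
                if (key.startsWith(prefix)) {
                    filterProps.put(key.substring(prefix.length()), props.getProperty(key));
                }
            }
            filters.put(name, filterProps);
        }
        return filters;
    }
}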

On Fri, Mar 13, 2020 at 10:56 AM Jeff Huang  wrote:
>
> Hi Aneel,
>
> That is a really great point. I will update the KIP. We need to add the 
> following properties, combined with the header configs:
> includedPaths - CSV of path specs to include
> excludedPaths - CSV of path specs to exclude
> includedMimeTypes - CSV of mime types to include
> excludedMimeTypes - CSV of mime types to exclude
> includedHttpMethods - CSV of http methods to include
> excludedHttpMethods - CSV of http methods to exclude
>
> Jeff.
>
> On 2020/03/13 14:28:11, Aneel Nazareth  wrote:
> > Hi Jeff,
> >
> > Thanks for the KIP.
> >
> > Will users always want to set identical headers on all responses? Does
> > it make sense to also allow configuration of the HeaderFilter init
> > parameters like "includedPaths", "excludedHttpMethods", etc.? Does it
> > make sense to allow multiple configurations (so that eg. different
> > paths have different headers?)
> >
> > Cheers,
> > Aneel
> >
> > On Thu, Mar 12, 2020 at 7:05 PM Zhiguo Huang  
> > wrote:
> > >
> > >
> >


[jira] [Resolved] (KAFKA-9718) Don't log passwords for AlterConfigs requests in request logs

2020-03-13 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-9718.
---
  Reviewer: Manikumar
Resolution: Fixed

> Don't log passwords for AlterConfigs requests in request logs
> -
>
> Key: KAFKA-9718
> URL: https://issues.apache.org/jira/browse/KAFKA-9718
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.6.0
>
>
> We currently avoid logging passwords in log files by logging only parsed 
> values, where passwords are logged as `[hidden]`. But for AlterConfigs requests 
> in request logs, we log all entries since they just appear as string entries. 
> Since we allow altering password configs like SSL key passwords and JAAS 
> config, we shouldn't include these in log files.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-579: new exception on min.insync.replicas > replication.factor

2020-03-13 Thread Paolo Moriello
Hi Mickael,

Thanks for your interest in this. The main motivation to NOT make topic
creation fail when this mismatch happens is that, at the moment, it is
possible to produce/consume on topics if acks is not set to all. I'm not
sure we want to disable this behavior (as we would by failing at topic
creation). That's why I decided to go for a softer approach, which at least
gives some more clarity to the users and avoids other issues mentioned in
the KIP.

Let's see what others think!

On Fri, 13 Mar 2020 at 17:16, Mickael Maison 
wrote:

> Hi Paolo,
>
> Thanks for looking at this issue. This can indeed be a source of confusion.
>
> I'm wondering if we should prevent the creation of topics with
> min.insync.replicas > replication.factor?
> You listed that as a rejected alternative because it requires more
> changes. However, I can't think of any scenarios where a user would
> want to create such a topic. I'm guessing it's probably always by
> mistake.
>
> Let's see what other people think but I think it's worth checking what
> needs to be done if we wanted to prevent topics with bogus configs
>
> On Fri, Mar 13, 2020 at 3:28 PM Paolo Moriello
>  wrote:
> >
> > Hi,
> >
> > Following this Jira ticket (
> https://issues.apache.org/jira/browse/KAFKA-4680),
> > I've created a proposal (
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-579%3A+new+exception+on+min.insync.replicas+%3E+replication.factor
> )
> > to add a new exception/error to be used on min.insync.replicas >
> > replication.factor.
> >
> > The proposal aims to introduce a new exception specific to the
> > configuration mismatch above, to be used when producers require acks = all.
> > At the moment we are using NotEnoughReplicasException, which is a retriable
> > exception and is used to fail on in-sync replicas < min ISR. The plan is to
> > have a new, non-retriable exception, to separate the two cases.
> >
> > I've also submitted a PR for the change mentioned above:
> > https://github.com/apache/kafka/pull/8225
> >
> > Please have a look and let me know what you think.
> >
> > Thanks,
> > Paolo
>


Build failed in Jenkins: kafka-trunk-jdk11 #1235

2020-03-13 Thread Apache Jenkins Server
See 


Changes:

[manikumar] KAFKA-9685: Solve Set concatenation perf issue in AclAuthorizer


--
[...truncated 2.91 MB...]
[elided: same org.apache.kafka.streams.TopologyTestDriverTest STARTED/PASSED output as in build #1237 above]

Re: [DISCUSS] KIP-579: new exception on min.insync.replicas > replication.factor

2020-03-13 Thread Mickael Maison
Hi Paolo,

Thanks for looking at this issue. This can indeed be a source of confusion.

I'm wondering if we should prevent the creation of topics with
min.insync.replicas > replication.factor?
You listed that as a rejected alternative because it requires more
changes. However, I can't think of any scenarios where a user would
want to create such a topic. I'm guessing it's probably always by
mistake.

Let's see what other people think but I think it's worth checking what
needs to be done if we wanted to prevent topics with bogus configs

On Fri, Mar 13, 2020 at 3:28 PM Paolo Moriello
 wrote:
>
> Hi,
>
> Following this Jira ticket (https://issues.apache.org/jira/browse/KAFKA-4680),
> I've created a proposal (
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-579%3A+new+exception+on+min.insync.replicas+%3E+replication.factor)
> to add a new exception/error to be used on min.insync.replicas >
> replication.factor.
>
> The proposal aims to introduce a new exception specific to the
> configuration mismatch above, to be used when producers require acks = all.
> At the moment we are using NotEnoughReplicasException, which is a retriable
> exception and is used to fail on in-sync replicas < min ISR. The plan is to have
> a new, non-retriable exception, to separate the two cases.
>
> I've also submitted a PR for the change mentioned above:
> https://github.com/apache/kafka/pull/8225
>
> Please have a look and let me know what you think.
>
> Thanks,
> Paolo


Re: 回复:回复:回复:[Vote] KIP-571: Add option to force remove members in StreamsResetter

2020-03-13 Thread Boyang Chen
Thanks Matthias and Guozhang for the feedback. I'm not worried too much
about the member.id exposure, as we have already done so in a couple of
areas. As for the recommended admin client change, I think it makes sense
from an encapsulation perspective. I'm still a bit hesitant, as we would
potentially lose the flexibility of closing only a subset of `dynamic
members`, but we could always come back and address that if some user
finds it necessary to have it.
My short answer would be, LGTM :)

Boyang

On Thu, Mar 12, 2020 at 5:26 PM Guozhang Wang  wrote:

> Hi Matthias,
>
> About the AdminClient param API: that's a great point here. I think overall
> if users want to just "remove all members" they should not need to first
> get all the member.ids themselves, but instead internally the admin client
> can first issue a describe-group request to get all the member.ids, and
> then use them in the next issued leave-group request, all abstracted away
> from the users. With that in mind, maybe in
> RemoveMembersFromConsumerGroupOptions we can just introduce an overloaded
> flag param besides "members" that indicates "remove all"?
>
> Guozhang
>
> On Thu, Mar 12, 2020 at 2:59 PM Matthias J. Sax  wrote:
>
> > Feyman,
> >
> > some more comments/questions:
> >
> > The description of `LeaveGroupRequest` is clear but it's unclear how
> > `MemberToRemove` should behave. Which parameter is required? Which is
> > optional? What is the relationship between both.
> >
> > The `LeaveGroupRequest` description clearly states that specifying a
> > `memberId` is optional if the `groupInstanceId` is provided. If
> > `MemberToRemove` applies the same pattern, it must be explicitly defined
> > in the KIP (and explained in the JavaDocs of `MemberToRemove`) because
> > we cannot expect that an admin-client users knows that internally a
> > `LeaveGroupRequest` is used nor what the semantics of a
> > `LeaveGroupRequest` are.
> >
> >
> > About Admin API:
> >
> > In general, I am also confused that we allow specifying a `memberId` at
> > all, because the `memberId` is an internal id that is not really exposed
> > to the user. Hence, from a AdminClient point of view, accepting a
> > `memberId` as input seems questionable to me? Of course, `memberId` can
> > be collected via `describeConsumerGroups()` but it will return the
> > `memberId` of _all_ consumer in the group and thus how would a user know
> > which member should be removed for a dynamic group (if an individual
> > member should be removed)?
> >
> > Hence, how can any user get to know the `memberId` of an individual
> > client in a programtic way?
> >
> > Also I am wondering in general why the removal of a single dynamic member
> > is important? In general, I would expect a short `session.timeout` for
> > dynamic groups and thus removing a specific member from the group seems
> > not to be an important feature -- for static groups we expect a long
> > `session.timeout` and a user can also identify individual clients via
> > `groupInstandId`, hence the feature makes sense for this case and is
> > straight forward to use.
> >
> >
> > About StreamsResetter:
> >
> > For this case we just say "remove all members" and thus the
> > `describeConsumerGroup` approach works. However, it seems to be a
> > special case?
> >
> > Or, if we expected that the "remove all members" use case is the norm,
> > why can't we change the admin client to directly accept a `group.id`?
> > The admin-client can internal first do a `DescribeGroupRequest` and
> > afterward corresponding `LeaveGroupRequest` -- i.e., instead of building
> > this pattern in `StreamsResetter` we build it directly into
> `AdminClient`.
> >
> > Last, for static group the main use case seems to be to remove an
> > individual member from the group but this feature is not covered by the
> > KIP: I think using `--force` to remove all members makes sense, but an
> > important second feature to remove an individual static member would
> > require its own flag to specify a single `group.instance.id`.
> >
> >
> > Thoughts?
> >
> >
> > -Matthias
> >
> >
> >
> >
> >
> > On 3/11/20 8:43 PM, feyman2009 wrote:
> > > Hi, Sophie
> > > For 1) Sorry, I found that my expression is kind of misleading,
> > what I actually mean is: "if --force not specified, an exception saying
> > there are still active members on broker side will be thrown and
> > suggesting using StreamsResetter with --force", I just updated the KIP
> > page.
> > >
> > > For 2)
> > > I may also had some misleading expression previous, to clarify
> :
> > >
> > > Also, it's more efficient to just send a single "clear the group"
> > request vs sending a LeaveGroup
> > > request for every single member. What do you think?
> > > => the comparison is to send a single "clear the group" request vs
> > sending a "get members" + a "remove members" request, since
> > adminClient.removeMembersFromConsumerGroup supports batch removal. We
> > don't need to send lots of leaveGroup requests.
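
For concreteness, a hypothetical sketch of the "remove all" option shape
discussed in this thread; the removeAll flag is the proposal under debate,
not existing AdminClient API:

import java.util.Collection;
import java.util.Collections;

// Sketch of the option class shape; names are illustrative.
public class RemoveMembersFromConsumerGroupOptions {
    private Collection<String> groupInstanceIds = Collections.emptySet();
    private boolean removeAll = false;

    // Existing-style path: the caller names specific (static) members.
    public RemoveMembersFromConsumerGroupOptions members(Collection<String> groupInstanceIds) {
        this.groupInstanceIds = groupInstanceIds;
        return this;
    }

    // Proposed path: the admin client would internally describe the group to
    // collect all member.ids, then issue LeaveGroup for every member.
    public RemoveMembersFromConsumerGroupOptions removeAll() {
        this.removeAll = true;
        return this;
    }

    public boolean shouldRemoveAll() {
        return removeAll;
    }
}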

[jira] [Created] (KAFKA-9719) Add Integration Test For ensuring the EOS-beta app would crash with broker downgrade

2020-03-13 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9719:
--

 Summary: Add Integration Test For ensuring the EOS-beta app would 
crash with broker downgrade
 Key: KAFKA-9719
 URL: https://issues.apache.org/jira/browse/KAFKA-9719
 Project: Kafka
  Issue Type: Sub-task
Reporter: Boyang Chen


Now that KAFKA-9657 is finished, we need to make sure the mechanism actually 
works by adding an integration test for it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9657) Add configurable throw on unsupported protocol

2020-03-13 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen resolved KAFKA-9657.

Resolution: Fixed

> Add configurable throw on unsupported protocol
> --
>
> Key: KAFKA-9657
> URL: https://issues.apache.org/jira/browse/KAFKA-9657
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> Right now Streams cannot handle the case where the brokers are downgraded, 
> and thus could potentially violate the EOS requirement. We could add an 
> (internal) config to either the consumer or the producer to crash on an 
> unsupported version when the broker it connects to is unexpectedly on an 
> older version, to prevent this case from causing correctness concerns.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-577: Allow HTTP Response Headers Configured for Kafka Connect

2020-03-13 Thread Jeff Huang
Hi Aneel,

That is a really great point. I will update the KIP. We need to add the following 
properties, combined with the header configs:
includedPaths - CSV of path specs to include
excludedPaths - CSV of path specs to exclude
includedMimeTypes - CSV of mime types to include
excludedMimeTypes - CSV of mime types to exclude
includedHttpMethods - CSV of http methods to include
excludedHttpMethods - CSV of http methods to exclude

Jeff.

On 2020/03/13 14:28:11, Aneel Nazareth  wrote: 
> Hi Jeff,
> 
> Thanks for the KIP.
> 
> Will users always want to set identical headers on all responses? Does
> it make sense to also allow configuration of the HeaderFilter init
> parameters like "includedPaths", "excludedHttpMethods", etc.? Does it
> make sense to allow multiple configurations (so that eg. different
> paths have different headers?)
> 
> Cheers,
> Aneel
> 
> On Thu, Mar 12, 2020 at 7:05 PM Zhiguo Huang  wrote:
> >
> >
> 


NetworkException: The server disconnected before a response was received

2020-03-13 Thread Madan Mohan Mohanty
Hi,
we have:
Brokers: 3, ZooKeepers: 3, Servers: 3, Kafka: 0.10.0.1, ZooKeeper: 3.4.3
Very rarely we get NetworkException: The server disconnected before a response 
was received. The cluster consists of 3 brokers and 3 ZooKeepers. The producer 
server and the Kafka cluster are in the same network. Below is the configuration:


spring.kafka.producer.properties.connections.max.idle.ms=72
spring.kafka.producer.retries=3
spring.kafka.producer.batch-size=1024
spring.kafka.producer.properties.request.timeout.ms=72
spring.kafka.producer.properties.retry.backoff.ms=8000
spring.kafka.producer.properties.linger.ms=100
spring.kafka.producer.acks=0



[DISCUSS] KIP-579: new exception on min.insync.replicas > replication.factor

2020-03-13 Thread Paolo Moriello
Hi,

Following this Jira ticket (https://issues.apache.org/jira/browse/KAFKA-4680),
I've created a proposal (
https://cwiki.apache.org/confluence/display/KAFKA/KIP-579%3A+new+exception+on+min.insync.replicas+%3E+replication.factor)
to add a new exception/error to be used on min.insync.replicas >
replication.factor.

The proposal aims to introduce a new exception specific to the
configuration mismatch above, to be used when producers require acks = all.
At the moment we are using NotEnoughReplicasException, which is a retriable
exception and is used to fail on in-sync replicas < min ISR. The plan is to have
a new, non-retriable exception, to separate the two cases.

I've also submitted a PR for the change mentioned above:
https://github.com/apache/kafka/pull/8225

Please have a look and let me know what you think.

Thanks,
Paolo
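
For illustration only, a minimal sketch of the kind of non-retriable
exception the proposal describes (the class name here is hypothetical; see
the KIP for the actual proposal):

import org.apache.kafka.common.errors.ApiException;

// Unlike NotEnoughReplicasException, this does not extend RetriableException,
// so a producer using acks=all would fail fast instead of retrying.
public class MinIsrExceedsReplicationFactorException extends ApiException {
    public MinIsrExceedsReplicationFactorException(String message) {
        super(message);
    }
}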


[jira] [Resolved] (KAFKA-9685) Solve Set concatenation perf issue in AclAuthorizer

2020-03-13 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-9685.
--
Fix Version/s: 2.6.0
   Resolution: Fixed

Issue resolved by pull request 8261
[https://github.com/apache/kafka/pull/8261]

> Solve Set concatenation perf issue in AclAuthorizer
> ---
>
> Key: KAFKA-9685
> URL: https://issues.apache.org/jira/browse/KAFKA-9685
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.1.0
>Reporter: Jiao Zhang
>Priority: Minor
> Fix For: 2.6.0
>
>
> In version 1.1, 
> [https://github.com/apache/kafka/blob/71b1e19fc60b5e1f9bba33025737ec2b7fb1c2aa/core/src/main/scala/kafka/security/auth/SimpleAclAuthorizer.scala#L110]
>  the logic for checking acls is preparing a merged acl Set with
> {code:java}
> acls = getAcls(new Resource(resource.resourceType, 
> Resource.WildCardResource)) ++ getAcls(resource);{code}
> and then passes it as aclMatch's parameter.
>  We found Scala's Set ++ operation is very slow, for example when the Set on 
> the right-hand side of ++ has more than 100 entries.
>  The bad performance of ++ comes from iterating over every entry of the Set on 
> the right-hand side of ++, in which the hash-code calculation seems heavy.
>  The performance of 'authorize' is important, as each request delivered to the 
> broker goes through this logic; that's the reason we can't leave it as-is, 
> although the change in this proposal seems trivial.
> Here is the approach. We propose to solve this issue by introducing a new 
> class 'AclSets' which takes multiple Sets as parameters and does 'find' against 
> them one by one.
> {code:java}
> class AclSets(sets: Set[Acl]*) {
>   def find(p: Acl => Boolean): Option[Acl] = sets.flatMap(_.find(p)).headOption
>   def isEmpty: Boolean = !sets.exists(_.nonEmpty)
> }
> {code}
> This approach avoids the Set ++ operation, as in the following,
> {code:java}
> val acls = new AclSets(getAcls(new Resource(resource.resourceType, 
> Resource.WildCardResource)), getAcls(resource)){code}
> and thus performs much better than the old logic.
> The benchmark result (we did the test with Kafka version 1.1) shows a notable 
> difference under the following conditions:
>  1. the set on the left consists of 60 entries
>  2. the set on the right consists of 30 entries
>  3. searching for an absent entry (so that all entries are iterated)
> Benchmark results are as follows.
> Benchmark                       Mode  Cnt    Score      Error  Units
>  ScalaSetConcatination.Set     thrpt    3  281.974  ± 140.029  ops/ms
>  ScalaSetConcatination.AclSets thrpt    3  887.426  ±  40.261  ops/ms
> As the upstream also uses a similar ++ operation, 
> [https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/security/authorizer/AclAuthorizer.scala#L360]
>  we think it's necessary to fix this issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [ANNOUNCE] Apache Kafka 2.4.1

2020-03-13 Thread Vahid Hashemian
Thanks a lot for running this release Bill!

Regards,
--Vahid

On Fri, Mar 13, 2020 at 2:56 AM Mickael Maison 
wrote:

> Thanks Bill for managing this release!
>
> On Fri, Mar 13, 2020 at 5:58 AM Guozhang Wang  wrote:
> >
> > Thanks Bill for driving this. And many thanks to all who've contributed
> to
> > this release!
> >
> >
> > Guozhang
> >
> > On Thu, Mar 12, 2020 at 3:00 PM Matthias J. Sax 
> wrote:
> >
> > > Thanks for driving the release Bill!
> > >
> > > -Matthias
> > >
> > > On 3/12/20 1:22 PM, Bill Bejeck wrote:
> > > > The Apache Kafka community is pleased to announce the release for
> Apache
> > > > Kafka 2.4.1
> > > >
> > > > This is a bug fix release and it includes fixes and improvements
> from 39
> > > > JIRAs, including a few critical bugs.
> > > >
> > > > All of the changes in this release can be found in the release notes:
> > > > https://www.apache.org/dist/kafka/2.4.1/RELEASE_NOTES.html
> > > >
> > > >
> > > > You can download the source and binary release (Scala 2.11, 2.12, and
> > > 2.13)
> > > > from:
> > > > https://kafka.apache.org/downloads#2.4.1
> > > >
> > > >
> > >
> ---
> > > >
> > > >
> > > > Apache Kafka is a distributed streaming platform with four core APIs:
> > > >
> > > >
> > > > ** The Producer API allows an application to publish a stream of
> > > > records to one or more Kafka topics.
> > > >
> > > > ** The Consumer API allows an application to subscribe to one or more
> > > > topics and process the stream of records produced to them.
> > > >
> > > > ** The Streams API allows an application to act as a stream
> processor,
> > > > consuming an input stream from one or more topics and producing an
> > > > output stream to one or more output topics, effectively transforming
> the
> > > > input streams to output streams.
> > > >
> > > > ** The Connector API allows building and running reusable producers
> or
> > > > consumers that connect Kafka topics to existing applications or data
> > > > systems. For example, a connector to a relational database might
> > > > capture every change to a table.
> > > >
> > > >
> > > > With these APIs, Kafka can be used for two broad classes of
> application:
> > > >
> > > > ** Building real-time streaming data pipelines that reliably get data
> > > > between systems or applications.
> > > >
> > > > ** Building real-time streaming applications that transform or react
> > > > to the streams of data.
> > > >
> > > >
> > > > Apache Kafka is in use at large and small companies worldwide,
> including
> > > > Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest,
> Rabobank,
> > > > Target, The New York Times, Uber, Yelp, and Zalando, among others.
> > > >
> > > > A big thank you for the following 35 contributors to this release!
> > > >
> > > > A. Sophie Blee-Goldman, Alex Kokachev, bill, Bill Bejeck, Boyang
> Chen,
> > > > Brian Bushree, Brian Byrne, Bruno Cadonna, Chia-Ping Tsai, Chris
> Egerton,
> > > > Colin Patrick McCabe, David Jacot, David Kim, David Mao, Dhruvil
> Shah,
> > > > Gunnar Morling, Guozhang Wang, huxi, Ismael Juma, Ivan Yurchenko,
> Jason
> > > > Gustafson, John Roesler, Konstantine Karantasis, Lev Zemlyanov,
> Manikumar
> > > > Reddy, Matthew Wong, Matthias J. Sax, Michael Gyarmathy, Michael
> Viamari,
> > > > Nigel Liang, Rajini Sivaram, Randall Hauch, Tomislav, Vikas Singh,
> Xin
> > > Wang
> > > >
> > > > We welcome your help and feedback. For more information on how to
> > > > report problems, and to get involved, visit the project website at
> > > > https://kafka.apache.org/
> > > >
> > > > Thank you!
> > > >
> > > >
> > > > Regards,
> > > >
> > > > Bill Bejeck
> > > >
> > >
> > >
> >
> > --
> > -- Guozhang
>


-- 

Thanks!
--Vahid


Re: [DISCUSS] KIP-577: Allow HTTP Response Headers Configured for Kafka Connect

2020-03-13 Thread Aneel Nazareth
Hi Jeff,

Thanks for the KIP.

Will users always want to set identical headers on all responses? Does
it make sense to also allow configuration of the HeaderFilter init
parameters like "includedPaths", "excludedHttpMethods", etc.? Does it
make sense to allow multiple configurations (so that eg. different
paths have different headers?)

Cheers,
Aneel

On Thu, Mar 12, 2020 at 7:05 PM Zhiguo Huang  wrote:
>
>


[jira] [Resolved] (KAFKA-7908) retention.ms and message.timestamp.difference.max.ms are tied

2020-03-13 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona resolved KAFKA-7908.
--
Resolution: Fixed

> retention.ms and message.timestamp.difference.max.ms are tied
> -
>
> Key: KAFKA-7908
> URL: https://issues.apache.org/jira/browse/KAFKA-7908
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.1.0
>Reporter: Ciprian Pascu
>Priority: Minor
> Fix For: 2.4.0, 2.3.0
>
>
> When configuring retention.ms for a topic, the following warning will be printed:
> retention.ms for topic X is set to 180. It is smaller than 
> message.timestamp.difference.max.ms's value 9223372036854775807. This may 
> result in frequent log rolling. (kafka.log.Log)
>  
> message.timestamp.difference.max.ms has not been configured explicitly, so it 
> has the default value of 9223372036854775807. I haven't seen it mentioned 
> anywhere that this parameter also needs to be configured when retention.ms is 
> configured; moreover, the default values of these parameters already satisfy 
> retention.ms < message.timestamp.difference.max.ms. So what is the purpose of 
> this warning in this case?
> The warning is generated from this code 
> (core/src/main/scala/kafka/log/Log.scala):
> {code:java}
> def updateConfig(updatedKeys: Set[String], newConfig: LogConfig): Unit = {
>   if ((updatedKeys.contains(LogConfig.RetentionMsProp)
>       || updatedKeys.contains(LogConfig.MessageTimestampDifferenceMaxMsProp))
>       && topicPartition.partition == 0  // generate warnings only for one partition of each topic
>       && newConfig.retentionMs < newConfig.messageTimestampDifferenceMaxMs)
>     warn(s"${LogConfig.RetentionMsProp} for topic ${topicPartition.topic} is set to ${newConfig.retentionMs}. " +
>       s"It is smaller than ${LogConfig.MessageTimestampDifferenceMaxMsProp}'s value " +
>       s"${newConfig.messageTimestampDifferenceMaxMs}. This may result in frequent log rolling.")
>   this.config = newConfig
> }
> {code}
> Shouldn't the || operator in the condition above be replaced with &&?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-518: Allow listing consumer groups per state

2020-03-13 Thread Rajini Sivaram
+1 (binding)

Thanks for the KIP, Mickael!

Regards,

Rajini


On Thu, Mar 12, 2020 at 11:06 PM Colin McCabe  wrote:

> Thanks, Mickael.  +1 (binding)
>
> best,
> Colin
>
> On Fri, Mar 6, 2020, at 02:05, Mickael Maison wrote:
> > Thanks David and Gwen for the votes
> > Colin, I believe I've answered all your questions, can you take another
> look?
> >
> > So far we have 1 binding and 5 non binding votes.
> >
> > On Mon, Mar 2, 2020 at 4:56 PM Gwen Shapira  wrote:
> > >
> > > +1 (binding)
> > >
> > > Gwen Shapira
> > > Engineering Manager | Confluent
> > > 650.450.2760 | @gwenshap
> > > Follow us: Twitter | blog
> > >
> > > On Mon, Mar 02, 2020 at 8:32 AM, David Jacot <dja...@confluent.io> wrote:
> > > >
> > > > +1 (non-binding). Thanks for the KIP!
> > > >
> > > > David
> > > >
> > > > On Thu, Feb 6, 2020 at 10:45 PM Colin McCabe <cmcc...@apache.org> wrote:
> > > >>
> > > >> Hi Mickael,
> > > >>
> > > >> Thanks for the KIP. I left a comment on the DISCUSS thread as well.
> > > >>
> > > >> best,
> > > >> Colin
> > > >>
> > > >> On Thu, Feb 6, 2020, at 08:58, Mickael Maison wrote:
> > > >>>
> > > >>> Hi Manikumar,
> > > >>>
> > > >>> I believe I've answered David's comments in the DISCUSS thread. Thanks
> > > >>>
> > > >>> On Wed, Jan 15, 2020 at 10:15 AM Manikumar <manikumar.re...@gmail.com> wrote:
> > > >>>>
> > > >>>> Hi Mickael,
> > > >>>>
> > > >>>> Thanks for the KIP. Can you respond to the comments from David on the
> > > >>>> DISCUSS thread?
> > > >>>>
> > > >>>> Thanks,
>


[jira] [Created] (KAFKA-9718) Don't log passwords for AlterConfigs requests in request logs

2020-03-13 Thread Rajini Sivaram (Jira)
Rajini Sivaram created KAFKA-9718:
-

 Summary: Don't log passwords for AlterConfigs requests in request 
logs
 Key: KAFKA-9718
 URL: https://issues.apache.org/jira/browse/KAFKA-9718
 Project: Kafka
  Issue Type: Bug
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 2.6.0


We currently avoid logging passwords in log files by logging only parsed values, 
where passwords are logged as `[hidden]`. But for AlterConfigs requests in 
request logs, we log all entries since they just appear as string entries. 
Since we allow altering password configs like SSL key passwords and JAAS 
config, we shouldn't include these in log files.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9717) KafkaStreams#metrics() method randomly throws NullPointerException

2020-03-13 Thread Zygimantas (Jira)
Zygimantas created KAFKA-9717:
-

 Summary: KafkaStreams#metrics() method randomly throws 
NullPointerException
 Key: KAFKA-9717
 URL: https://issues.apache.org/jira/browse/KAFKA-9717
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.2.0
 Environment: Kubernetes
Reporter: Zygimantas


We have implemented a monitoring tool which monitors a Kafka Streams application 
and regularly (every 20s) calls the KafkaStreams.metrics() method in that 
application. But the metrics() method randomly throws NullPointerException. It 
happens almost every time after application startup, but may also happen at 
random points in time after the application has been running for a few hours.

Stacktrace:
{code:java}
java.lang.NullPointerException
 at 
org.apache.kafka.streams.processor.internals.StreamThread.producerMetrics(StreamThread.java:1320)
 at org.apache.kafka.streams.KafkaStreams.metrics(KafkaStreams.java:379)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9716) Values of compression-rate and compression-rate-avg are misleading

2020-03-13 Thread Christian Kosmowski (Jira)
Christian Kosmowski created KAFKA-9716:
--

 Summary: Values of compression-rate and compression-rate-avg are 
misleading
 Key: KAFKA-9716
 URL: https://issues.apache.org/jira/browse/KAFKA-9716
 Project: Kafka
  Issue Type: Bug
  Components: clients, compression
Affects Versions: 2.4.1
Reporter: Christian Kosmowski


The values of the following metrics:

compression-rate and compression-rate-avg (and basically every other 
compression rate, e.g. the topic compression rate)

are confusing.

They are calculated as follows:
{code:java}
if (numRecords == 0L) {
buffer().position(initialPosition);
builtRecords = MemoryRecords.EMPTY;
} else {
if (magic > RecordBatch.MAGIC_VALUE_V1)
this.actualCompressionRatio = (float) writeDefaultBatchHeader() / 
this.uncompressedRecordsSizeInBytes;
else if (compressionType != CompressionType.NONE)
this.actualCompressionRatio = (float) 
writeLegacyCompressedWrapperHeader() / this.uncompressedRecordsSizeInBytes;

ByteBuffer buffer = buffer().duplicate();
buffer.flip();
buffer.position(initialPosition);
builtRecords = MemoryRecords.readableRecords(buffer.slice());
}
{code}
Basically, the compressed size is divided by the uncompressed size, which leads 
to a value < 1 for high compression (good if you want compression) or > 1 for 
poor compression (bad if you want compression).

From the name "compression rate" I would expect the exact opposite. Apart from 
the fact that the word "rate" usually refers to comparisons between values of 
different units (miles per hour), the correct word "ratio" would refer to the 
uncompressed size divided by the compressed size.

So if the compressed data takes half the space of the uncompressed data, the 
correct value for the compression ratio (or rate) would be 2, and not 0.5 as 
Kafka reports it. That is really confusing, and I would at least expect this 
behaviour to be documented somewhere, but it's not; all documentation sources 
just say "the compression rate".
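
A trivial worked example of the two conventions (hypothetical sizes, for 
illustration only):
{code:java}
long uncompressedBytes = 1000;
long compressedBytes = 500;
double reportedRate = (double) compressedBytes / uncompressedBytes;      // 0.5 (what Kafka reports)
double conventionalRatio = (double) uncompressedBytes / compressedBytes; // 2.0 (what the name suggests)
{code}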



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Add a customized logo for Kafka Streams

2020-03-13 Thread Patrik Kleindl
Great idea, I would definitely buy the book with that on the cover :-)
best regards
Patrik

On Fri, 13 Mar 2020 at 09:57, Becket Qin  wrote:

> I also like this one!
>
> Jiangjie (Becket) Qin
>
> On Fri, Mar 13, 2020 at 9:07 AM Matthias J. Sax  wrote:
>
> > I personally love it!
> >
> > -Matthias
> >
> > On 3/12/20 11:31 AM, Sophie Blee-Goldman wrote:
> > > How reasonable of it. Let's try this again:
> > > Streams Logo option 2
> > >
> > <
> >
> https://docs.google.com/drawings/d/1WoWn0kF3E7dbL1FGYT8_-bIeT2IjfYORAm6gg8Ils4k/edit?usp=sharing
> > >
> > >
> > > On Thu, Mar 12, 2020 at 9:34 AM Guozhang Wang 
> > wrote:
> > >
> > >> Hi Sophie,
> > >>
> > >> I cannot find the attachment from your previous email --- in fact, ASF
> > >> mailing list usually blocks all attachments for security reasons. If
> you
> > >> can share a link to the image (google drawings etc) in your email that
> > >> would be great.
> > >>
> > >> Guozhang
> > >>
> > >> On Wed, Mar 11, 2020 at 1:02 PM Sophie Blee-Goldman <
> > sop...@confluent.io>
> > >> wrote:
> > >>
> > >>> Just to throw another proposal out there and inspire some debate,
> > here's
> > >>> a similar-but-different
> > >>> idea (inspired by John + some sketches I found on google):
> > >>>
> > >>> *~~ See attachment, inlined image is too large for the mailing list
> ~~*
> > >>>
> > >>> This one's definitely more fun, my only concern is that it doesn't
> > really
> > >>> scale well. At the lower end
> > >>> of sizes the otters will be pretty difficult to see; and I had to
> > stretch
> > >>> out the Kafka circles even at
> > >>> the larger end just to fit them through.
> > >>>
> > >>> But maybe with a cleaner drawing and some color it'll still look
> > good and
> > >>> be recognizable + distinct
> > >>> enough when small.
> > >>>
> > >>> Any thoughts? Any binding and/or non-binding votes?
> > >>>
> > >>> On Sun, Mar 8, 2020 at 1:00 AM Sophie Blee-Goldman <
> > sop...@confluent.io>
> > >>> wrote:
> > >>>
> >  Seems the mailing list may have filtered the inlined prototype logo,
> >  attaching it here instead
> > 
> >  On Sat, Mar 7, 2020 at 11:54 PM Sophie Blee-Goldman
> > 
> >  wrote:
> > 
> > > Matthias makes a good point about being careful not to position Streams
> > > as outside of Apache Kafka. One obvious thing we could do is just include
> > > the Kafka logo as-is in the Streams logo, somehow.
> > >
> > > I have some unqualified opinions on what that might look like:
> > > A good logo is simple and clean, so incorporating the Kafka logo
> as a
> > > minor
> > > detail within a more complicated image is probably not the best way
> > to
> > > get
> > > the quick and easy comprehension/recognition that we're going for.
> > >
> > > That said I'd throw out the idea of just attaching something to the
> > > Kafka logo,
> > > perhaps a stream-dwelling animal, perhaps a (river) otter? It could
> > be
> > > "swimming" left of the Kafka logo, with its head touching the upper
> > > circle and
> > > its tail touching the bottom one. Like Streams, it starts with
> Kafka
> > > and ends
> > > with Kafka (ie reading input topics and writing to output topics).
> > >
> > > Without further ado, here's my very rough prototype for the Kafka
> > > Streams logo:
> > >
> > > [image: image.png]
> > > Obviously the real thing would be colored and presumably done by
> > someone
> > > with actual artist talent/experience (or at least photoshop
> ability).
> > >
> > > Thoughts?
> > >
> > > On Sat, Mar 7, 2020, 1:08 PM Matthias J. Sax 
> > wrote:
> > >
> > > Boyang,
> > >
> > > thanks for starting this discussion. I like the idea in general
> > > however we need to be a little careful IMHO -- as you mentioned Kafka
> > > is one project and thus we should avoid the impression that Kafka
> > > Streams is not part of Apache Kafka.
> > >
> > > Besides this, many projects use animals that are often very adorable.
> > > Maybe we could find a cute Streams related mascot? :)
> > >
> > > I would love to hear opinions especially from the PMC if having a logo
> > > for Kafka Streams is a viable thing to do.
> > >
> > >
> > > -Matthias
> > >
> > > On 3/3/20 1:01 AM, Patrik Kleindl wrote:
> >  Hi Boyang Great idea, that would help in some discussions. To
> > throw
> >  in a first idea: https://imgur.com/a/UowXaMk best regards
> Patrik
> > 
> >  On Mon, 2 Mar 2020 at 18:23, Boyang Chen
> >   wrote:
> > 
> > > Hey Apache Kafka committers and community folks,
> > >
> > > over the years Kafka Streams has been widely adopted and tons
> of
> > > blog posts and tech talks have been trying to introduce it to
> > > people with need of stream processing. As it is part of Apache
> > > Kafka project, there is always an awkward situation where Kafka
> > 

Build failed in Jenkins: kafka-trunk-jdk11 #1234

2020-03-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9657: Throw upon offset fetch unsupported stable flag protocol 


--
[...truncated 2.10 MB...]

(Remaining test output: repeated STARTED/PASSED lines for KTableKTableForeignKeyJoinIntegrationTest and RocksDBMetricsIntegrationTest.)

Re: [ANNOUNCE] Apache Kafka 2.4.1

2020-03-13 Thread Mickael Maison
Thanks Bill for managing this release!

On Fri, Mar 13, 2020 at 5:58 AM Guozhang Wang  wrote:
>
> Thanks Bill for driving this. And many thanks to all who've contributed
> to this release!
>
>
> Guozhang
>
> On Thu, Mar 12, 2020 at 3:00 PM Matthias J. Sax  wrote:
>
> > Thanks for driving the release, Bill!
> >
> > -Matthias
> >
> > On 3/12/20 1:22 PM, Bill Bejeck wrote:
> > > The Apache Kafka community is pleased to announce the release of
> > > Apache Kafka 2.4.1.
> > >
> > > This is a bug fix release, and it includes fixes and improvements
> > > from 39 JIRAs, including fixes for a few critical bugs.
> > >
> > > All of the changes in this release can be found in the release notes:
> > > https://www.apache.org/dist/kafka/2.4.1/RELEASE_NOTES.html
> > >
> > >
> > > You can download the source and binary release (Scala 2.11, 2.12,
> > > and 2.13) from:
> > > https://kafka.apache.org/downloads#2.4.1
> > >
> > >
> > ---
> > >
> > >
> > > Apache Kafka is a distributed streaming platform with four core APIs:
> > >
> > >
> > > ** The Producer API allows an application to publish a stream of
> > > records to one or more Kafka topics (see the client sketch after
> > > this list).
> > >
> > > ** The Consumer API allows an application to subscribe to one or more
> > > topics and process the stream of records produced to them.
> > >
> > > ** The Streams API allows an application to act as a stream processor,
> > > consuming an input stream from one or more topics and producing an
> > > output stream to one or more output topics, effectively transforming the
> > > input streams to output streams.
> > >
> > > ** The Connector API allows building and running reusable producers or
> > > consumers that connect Kafka topics to existing applications or data
> > > systems. For example, a connector to a relational database might
> > > capture every change to a table.
> > >
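
As a quick illustration of the first two APIs, here is a minimal Java
sketch. It is not part of the announcement; the broker address
localhost:9092, the topic demo-topic, and the group id demo-group are
assumptions made for the example.

    import org.apache.kafka.clients.consumer.*;
    import org.apache.kafka.clients.producer.*;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.kafka.common.serialization.StringSerializer;
    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    public class ClientSketch {
        public static void main(String[] args) {
            // Producer API: publish a stream of records to a topic.
            Properties pp = new Properties();
            pp.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
            pp.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            pp.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (Producer<String, String> producer = new KafkaProducer<>(pp)) {
                producer.send(new ProducerRecord<>("demo-topic", "key", "hello"));
            }

            // Consumer API: subscribe to the topic and process its records.
            Properties cp = new Properties();
            cp.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            cp.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group"); // assumed group id
            cp.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
            cp.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            cp.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (Consumer<String, String> consumer = new KafkaConsumer<>(cp)) {
                consumer.subscribe(Collections.singletonList("demo-topic"));
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(5)))
                    System.out.printf("%s -> %s%n", record.key(), record.value());
            }
        }
    }

Connectors, by contrast, are configured rather than coded, so no Java
sketch is shown for the Connector API; a Streams sketch follows below.
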
> > >
> > > With these APIs, Kafka can be used for two broad classes of application:
> > >
> > > ** Building real-time streaming data pipelines that reliably get data
> > > between systems or applications.
> > >
> > > ** Building real-time streaming applications that transform or react
> > > to the streams of data (see the Streams sketch below).
> > >
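
For the second class, a Kafka Streams application consumes input topics,
transforms their records, and produces to output topics. Below is a
minimal, illustrative sketch; the application id streams-demo and the
topic names input-topic and output-topic are assumptions made for the
example.

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import java.util.Properties;

    public class StreamsSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-demo");      // assumed app id
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            // Read "input-topic", upper-case each value, write to "output-topic".
            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> source = builder.stream("input-topic");
            source.mapValues(value -> value.toUpperCase()).to("output-topic");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }
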
> > >
> > > Apache Kafka is in use at large and small companies worldwide, including
> > > Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> > > Target, The New York Times, Uber, Yelp, and Zalando, among others.
> > >
> > > A big thank you to the following 35 contributors to this release!
> > >
> > > A. Sophie Blee-Goldman, Alex Kokachev, bill, Bill Bejeck, Boyang Chen,
> > > Brian Bushree, Brian Byrne, Bruno Cadonna, Chia-Ping Tsai, Chris Egerton,
> > > Colin Patrick McCabe, David Jacot, David Kim, David Mao, Dhruvil Shah,
> > > Gunnar Morling, Guozhang Wang, huxi, Ismael Juma, Ivan Yurchenko, Jason
> > > Gustafson, John Roesler, Konstantine Karantasis, Lev Zemlyanov, Manikumar
> > > Reddy, Matthew Wong, Matthias J. Sax, Michael Gyarmathy, Michael Viamari,
> > > Nigel Liang, Rajini Sivaram, Randall Hauch, Tomislav, Vikas Singh,
> > > Xin Wang
> > >
> > > We welcome your help and feedback. For more information on how to
> > > report problems, and to get involved, visit the project website at
> > > https://kafka.apache.org/
> > >
> > > Thank you!
> > >
> > >
> > > Regards,
> > >
> > > Bill Bejeck
> > >
> >
> >
>
> --
> -- Guozhang


Re: Add a customized logo for Kafka Streams

2020-03-13 Thread Becket Qin
I also like this one!

Jiangjie (Becket) Qin

On Fri, Mar 13, 2020 at 9:07 AM Matthias J. Sax  wrote:

> I personally love it!
>
> -Matthias
>
> On 3/12/20 11:31 AM, Sophie Blee-Goldman wrote:
> > How reasonable of it. Let's try this again:
> > Streams Logo option 2
> > <https://docs.google.com/drawings/d/1WoWn0kF3E7dbL1FGYT8_-bIeT2IjfYORAm6gg8Ils4k/edit?usp=sharing>
> >
> > On Thu, Mar 12, 2020 at 9:34 AM Guozhang Wang  wrote:
> >
> >> Hi Sophie,
> >>
> >> I cannot find the attachment from your previous email --- in fact, the
> >> ASF mailing list usually blocks all attachments for security reasons. If
> >> you can share a link to the image (Google Drawings, etc.) in your email,
> >> that would be great.
> >>
> >> Guozhang
> >>
> >> On Wed, Mar 11, 2020 at 1:02 PM Sophie Blee-Goldman
> >> <sop...@confluent.io> wrote:
> >>
> >>> Just to throw another proposal out there and inspire some debate,
> >>> here's a similar-but-different idea (inspired by John + some
> >>> sketches I found on Google):
> >>>
> >>> *~~ See attachment, inlined image is too large for the mailing list ~~*
> >>>
> >>> This one's definitely more fun; my only concern is that it doesn't
> >>> really scale well. At the lower end of sizes the otters will be
> >>> pretty difficult to see, and I had to stretch out the Kafka circles
> >>> even at the larger end just to fit them through.
> >>>
> >>> But maybe with a cleaner drawing and some color it'll still look
> >>> good and be recognizable + distinct enough when small.
> >>>
> >>> Any thoughts? Any binding and/or non-binding votes?
> >>>
> >>> On Sun, Mar 8, 2020 at 1:00 AM Sophie Blee-Goldman
> >>> <sop...@confluent.io> wrote:
> >>>
> >>>> Seems the mailing list may have filtered the inlined prototype logo,
> >>>> attaching it here instead.
> >>>>
> >>>> On Sat, Mar 7, 2020 at 11:54 PM Sophie Blee-Goldman
> >>>>  wrote:
> >>>>
> > Matthias makes a good point about being careful not to position
> > Streams as outside of Apache Kafka. One obvious thing we could do is
> > just include the Kafka logo as-is in the Streams logo, somehow.
> >
> > I have some unqualified opinions on what that might look like:
> > A good logo is simple and clean, so incorporating the Kafka logo as
> > a minor detail within a more complicated image is probably not the
> > best way to get the quick and easy comprehension/recognition that
> > we're going for.
> >
> > That said, I'd throw out the idea of just attaching something to the
> > Kafka logo, perhaps a stream-dwelling animal, perhaps a (river)
> > otter? It could be "swimming" left of the Kafka logo, with its head
> > touching the upper circle and its tail touching the bottom one. Like
> > Streams, it starts with Kafka and ends with Kafka (i.e. reading input
> > topics and writing to output topics).
> >
> > Without further ado, here's my very rough prototype for the Kafka
> > Streams logo:
> >
> > [image: image.png]
> >
> > Obviously the real thing would be colored and presumably done by
> > someone with actual artist talent/experience (or at least Photoshop
> > ability).
> >
> > Thoughts?
> >
> > On Sat, Mar 7, 2020, 1:08 PM Matthias J. Sax  wrote:
> >
> > Boyang,
> >
> > thanks for starting this discussion. I like the idea in general;
> > however, we need to be a little careful IMHO -- as you mentioned,
> > Kafka is one project and thus we should avoid the impression that
> > Kafka Streams is not part of Apache Kafka.
> >
> > Besides this, many projects use animals that are often very adorable.
> > Maybe we could find a cute Streams-related mascot? :)
> >
> > I would love to hear opinions, especially from the PMC, on whether
> > having a logo for Kafka Streams is a viable thing to do.
> >
> >
> > -Matthias
> >
> > On 3/3/20 1:01 AM, Patrik Kleindl wrote:
> >>>> Hi Boyang,
> >>>> Great idea, that would help in some discussions. To throw in a
> >>>> first idea: https://imgur.com/a/UowXaMk
> >>>> Best regards, Patrik
> >>>>
> >>>> On Mon, 2 Mar 2020 at 18:23, Boyang Chen
> >>>>  wrote:
> >>>>
> > Hey Apache Kafka committers and community folks,
> >
> > over the years Kafka Streams has been widely adopted, and tons of
> > blog posts and tech talks have been trying to introduce it to
> > people with a need for stream processing. As it is part of the
> > Apache Kafka project, there is always an awkward situation where
> > Kafka Streams cannot be promoted as a standalone streaming engine,
> > which makes people confused about its relation to Kafka.
> >
> > So, do we want to introduce a customized logo just for Streams?
> > The immediate benefit is that when people are making technical
> > decisions, we could list Streams as a logo just like Flink and
> > Spark Streaming, instead of putting Kafka 

Jenkins build is back to normal : kafka-trunk-jdk11 #1233

2020-03-13 Thread Apache Jenkins Server
See