[jira] [Created] (KAFKA-10648) Add Prefix Scan support to State Stores

2020-10-26 Thread Sagar Rao (Jira)
Sagar Rao created KAFKA-10648:
-

 Summary: Add Prefix Scan support to State Stores
 Key: KAFKA-10648
 URL: https://issues.apache.org/jira/browse/KAFKA-10648
 Project: Kafka
  Issue Type: Improvement
Reporter: Sagar Rao
Assignee: Sagar Rao


This issue is related to the changes mentioned in:

[https://cwiki.apache.org/confluence/display/KAFKA/KIP-614%3A+Add+Prefix+Scan+support+for+State+Stores]

 

which seeks to add prefix scan support to state stores. Currently, only the 
RocksDB and in-memory key-value stores are supported.
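As a rough illustration of the semantics being proposed (not the actual KIP-614 API), a prefix scan over an ordered key-value store returns every entry whose key starts with a given prefix, in key order. A minimal sketch over a sorted map:

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class PrefixScanSketch {
    // Return all entries whose key starts with `prefix`, in key order.
    // For plain string keys, the half-open range [prefix, prefix + '\uffff')
    // covers exactly the keys that share the prefix.
    static SortedMap<String, String> prefixScan(TreeMap<String, String> store, String prefix) {
        return store.subMap(prefix, prefix + Character.MAX_VALUE);
    }

    public static void main(String[] args) {
        TreeMap<String, String> store = new TreeMap<>();
        store.put("user:1", "alice");
        store.put("user:2", "bob");
        store.put("order:9", "book");
        // Only the "user:" keyspace is returned.
        System.out.println(prefixScan(store, "user:"));
    }
}
```

A RocksDB-backed store can implement the same contract with a seek to the prefix followed by iteration while keys still match.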



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-10-26 Thread Satish Duggana
Hi,
The KIP has been updated with 1) the topic deletion lifecycle and its related
items, and 2) protocol changes (mainly related to ListOffsets) and other
minor changes.
Please go through them and let us know your comments.

Thanks,
Satish.

On Mon, Sep 28, 2020 at 9:10 PM Satish Duggana  wrote:
>
> Hi Dhruvil,
> Thanks for looking into the KIP and sending your comments. Sorry for
> the late reply, missed it in the mail thread.
>
> 1. Could you describe how retention would work with this KIP and which
> threads are responsible for driving this work? I believe there are 3 kinds
> of retention processes we are looking at:
>   (a) Regular retention for data in tiered storage as per configured `
> retention.ms` / `retention.bytes`.
>   (b) Local retention for data in local storage as per configured `
> local.log.retention.ms` / `local.log.retention.bytes`
>   (c) Possibly regular retention for data in local storage, if the tiering
> task is lagging or for data that is below the log start offset.
>
> Local log retention is done by the existing log cleanup tasks. Cleanup is
> skipped for segments that have not yet been copied to remote storage.
> Remote log cleanup is done by the leader partition’s RLMTask.
>
> 2. When does a segment become eligible to be tiered? Is it as soon as the
> segment is rolled and the end offset is less than the last stable offset as
> mentioned in the KIP? I wonder if we need to consider other parameters too,
> like the highwatermark so that we are guaranteed that what we are tiering
> has been committed to the log and accepted by the ISR.
>
> AFAIK, the last stable offset is always <= the high watermark. This makes
> sure we only tier message segments that have been accepted by the ISR and
> are transactionally complete.
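The eligibility rule described above can be sketched as follows; the method and parameter names are illustrative, not the actual implementation:

```java
public class TieringEligibility {
    // A rolled segment is a candidate for tiering once all of its records
    // are below the last stable offset (LSO). Because LSO <= high watermark,
    // such records are also guaranteed to have been accepted by the ISR and
    // to be transactionally complete.
    static boolean eligibleForTiering(long segmentEndOffset, long lastStableOffset, long highWatermark) {
        if (lastStableOffset > highWatermark) {
            throw new IllegalArgumentException("LSO must be <= high watermark");
        }
        return segmentEndOffset < lastStableOffset;
    }

    public static void main(String[] args) {
        System.out.println(eligibleForTiering(100L, 150L, 160L)); // segment fully below LSO
        System.out.println(eligibleForTiering(200L, 150L, 160L)); // segment end past LSO, not yet eligible
    }
}
```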
>
>
> 3. The section on "Follower Fetch Scenarios" is useful but is a bit
> difficult to parse at the moment. It would be useful to summarize the
> changes we need in the ReplicaFetcher.
>
> It may become difficult for users to read/follow if we add code changes here.
>
> 4. Related to the above, it's a bit unclear how we are planning on
> restoring the producer state for a new replica. Could you expand on that?
>
> As mentioned in the KIP, BuildingRemoteLogAuxState is introduced to build
> state such as the leader epoch sequence and producer snapshots before the
> replica starts fetching data from the leader. We will make this clearer
> in the KIP.
>
>
> 5. Similarly, it would be worth summarizing the behavior on unclean leader
> election. There are several scenarios to consider here: data loss from
> local log, data loss from remote log, data loss from metadata topic, etc.
> It's worth describing these in detail.
>
> We covered the unclean leader election cases in the follower fetch
> scenarios.
> If there are errors while fetching data from remote store or metadata
> store, it will work the same way as it works with local log. It
> returns the error back to the caller. Please let us know if I am
> missing your point here.
>
>
> 7. For a READ_COMMITTED FetchRequest, how do we retrieve and return the
> aborted transaction metadata?
>
> When a remote log segment is fetched, we fetch the aborted transactions
> along with the segment if they are not found in the local index cache.
> This covers the case where no transaction index exists for the remote
> log segment; a cache entry can therefore be empty or contain a list of
> aborted transactions.
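A minimal sketch of the cache semantics described above, where a present-but-empty entry is distinct from a cache miss (class and method names are illustrative, not the actual implementation):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class AbortedTxnCache {
    // Keyed by remote segment id. A *present* entry with an empty list means
    // "transaction index fetched, no aborted transactions"; an *absent* entry
    // means the segment's index has not been fetched into the cache yet.
    private final Map<String, List<Long>> abortedTxnsBySegment = new HashMap<>();

    Optional<List<Long>> lookup(String segmentId) {
        return Optional.ofNullable(abortedTxnsBySegment.get(segmentId));
    }

    void record(String segmentId, List<Long> abortedProducerIds) {
        abortedTxnsBySegment.put(segmentId, List.copyOf(abortedProducerIds));
    }

    public static void main(String[] args) {
        AbortedTxnCache cache = new AbortedTxnCache();
        System.out.println(cache.lookup("seg-0").isPresent()); // miss: must fetch remotely
        cache.record("seg-0", List.of());                      // fetched, none aborted
        System.out.println(cache.lookup("seg-0").map(List::size).orElse(-1));
    }
}
```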
>
>
> 8. The `LogSegmentData` class assumes that we have a log segment, offset
> index, time index, transaction index, producer snapshot and leader epoch
> index. How do we deal with cases where we do not have one or more of these?
> For example, we may not have a transaction index or producer snapshot for a
> particular segment. The former is optional, and the latter is only kept for
> up to the 3 latest segments.
>
> This is a good point; we discussed it in the last meeting.
> The transaction index is optional, and we will copy it only if it exists.
> We want to keep all producer snapshots at each log segment roll; they can
> be removed once the log copy succeeds, while still maintaining the latest
> 3 segments. On the leader, we only delete producer snapshots that have
> been copied to remote log segments. The follower will keep the log
> segments beyond those that have not been copied to remote storage. We
> will update the KIP with these details.
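One way to model the optional transaction index in `LogSegmentData` is shown below; the field names and use of `Optional` are assumptions for illustration, not the actual KIP-405 API:

```java
import java.nio.file.Path;
import java.util.Optional;

// Illustrative sketch only: the transaction index is modeled as Optional
// since it may not exist for a given segment, while the other files are
// always expected to be present at copy time.
public class LogSegmentDataSketch {
    final Path segment;
    final Path offsetIndex;
    final Path timeIndex;
    final Optional<Path> transactionIndex; // copied only when it exists locally
    final Path producerSnapshot;
    final Path leaderEpochCheckpoint;

    LogSegmentDataSketch(Path segment, Path offsetIndex, Path timeIndex,
                         Optional<Path> transactionIndex,
                         Path producerSnapshot, Path leaderEpochCheckpoint) {
        this.segment = segment;
        this.offsetIndex = offsetIndex;
        this.timeIndex = timeIndex;
        this.transactionIndex = transactionIndex;
        this.producerSnapshot = producerSnapshot;
        this.leaderEpochCheckpoint = leaderEpochCheckpoint;
    }
}
```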
>
> Thanks,
> Satish.
>
> On Thu, Sep 17, 2020 at 1:47 AM Dhruvil Shah  wrote:
> >
> > Hi Satish, Harsha,
> >
> > Thanks for the KIP. Few questions below:
> >
> > 1. Could you describe how retention would work with this KIP and which
> > threads are responsible for driving this work? I believe there are 3 kinds
> > of retention processes we are looking at:
> >   (a) Regular retention for data in tiered storage as per configured `
> > retention.ms` / `retention.bytes`.
> >   (b) Local retention for data in local storage as per configured `
> > local.log.retention.ms` / 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #182

2020-10-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9381: Fix publishing valid scaladoc for streams-scala (#9486)


--
[...truncated 3.45 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotRequireParameters[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotRequireParameters[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldForwardDeprecatedInit STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldForwardDeprecatedInit PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes STARTED

org.apache.kafka.streams.MockProcessorContextTest > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #206

2020-10-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9381: Fix publishing valid scaladoc for streams-scala (#9486)


--
Started by an SCM change
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H42 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  
 > # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 94d21e3f8a64ea09449d7c4ce0e3eb4423dec369 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 94d21e3f8a64ea09449d7c4ce0e3eb4423dec369 # timeout=10
Commit message: "KAFKA-9381: Fix publishing valid scaladoc for streams-scala 
(#9486)"
 > git rev-list --no-walk f1a7097ccd79ecba2c0640766d64ba0f1e3e313d # timeout=10
[kafka-trunk-jdk15] $ /bin/sh -xe /tmp/jenkins955672263398676046.sh
+ rm -rf 

[kafka-trunk-jdk15] $ /bin/sh -xe /tmp/jenkins8853121250783576829.sh
+ ./gradlew --no-daemon --continue -PmaxParallelForks=2 
-PtestLoggingEvents=started,passed,skipped,failed -PxmlFindBugsReport=true 
clean test -PscalaVersion=2.12
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/6.7/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing

FAILURE: Build failed with an exception.

* What went wrong:
Gradle could not start your build.
> Could not create service of type ResourceSnapshotterCacheService using 
> GradleUserHomeServices.createResourceSnapshotterCacheService().
   > Timeout waiting to lock file hash cache 
(/home/jenkins/.gradle/caches/6.7/fileHashes). It is currently in use by 
another Gradle instance.
 Owner PID: 55136
 Our PID: 47970
 Owner Operation: 
 Our operation: 
 Lock file: /home/jenkins/.gradle/caches/6.7/fileHashes/fileHashes.lock

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 1m 3s
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Not sending mail to unregistered user git...@hugo-hirsch.de


Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #39

2020-10-26 Thread Apache Jenkins Server
See 


Changes:

[Guozhang Wang] KAFKA-10616: Always call prepare-commit before suspending for 
active tasks (#9464)


--
[...truncated 3.44 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #207

2020-10-26 Thread Apache Jenkins Server
See 


Changes:


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H42 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  
 > # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 94d21e3f8a64ea09449d7c4ce0e3eb4423dec369 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 94d21e3f8a64ea09449d7c4ce0e3eb4423dec369 # timeout=10
Commit message: "KAFKA-9381: Fix publishing valid scaladoc for streams-scala 
(#9486)"
 > git rev-list --no-walk 94d21e3f8a64ea09449d7c4ce0e3eb4423dec369 # timeout=10
[kafka-trunk-jdk15] $ /bin/sh -xe /tmp/jenkins7774132497918713604.sh
+ rm -rf 

[kafka-trunk-jdk15] $ /bin/sh -xe /tmp/jenkins987415125933050850.sh
+ ./gradlew --no-daemon --continue -PmaxParallelForks=2 
-PtestLoggingEvents=started,passed,skipped,failed -PxmlFindBugsReport=true 
clean test -PscalaVersion=2.12
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/6.7/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing

FAILURE: Build failed with an exception.

* What went wrong:
Gradle could not start your build.
> Could not create service of type ResourceSnapshotterCacheService using 
> GradleUserHomeServices.createResourceSnapshotterCacheService().
   > Timeout waiting to lock file hash cache 
(/home/jenkins/.gradle/caches/6.7/fileHashes). It is currently in use by 
another Gradle instance.
 Owner PID: 55136
 Our PID: 48771
 Owner Operation: 
 Our operation: 
 Lock file: /home/jenkins/.gradle/caches/6.7/fileHashes/fileHashes.lock

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 1m 4s
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Not sending mail to unregistered user git...@hugo-hirsch.de


Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #181

2020-10-26 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #173

2020-10-26 Thread Apache Jenkins Server
See 




Re: [jira] [Resolved] (KAFKA-9381) Javadocs + Scaladocs not published on maven central

2020-10-26 Thread Mathew Morales
Unsubscribe

On Mon, Oct 26, 2020 at 5:49 PM Mathew Morales 
wrote:

> Stop
>
> On Mon, Oct 26, 2020 at 5:49 PM Bill Bejeck (Jira) 
> wrote:
>
>>
>>  [
>> https://issues.apache.org/jira/browse/KAFKA-9381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
>> ]
>>
>> Bill Bejeck resolved KAFKA-9381.
>> 
>> Resolution: Fixed
>>
>> Resolved via [https://github.com/apache/kafka/pull/9486.]
>>
>>
>>
>> Merged to trunk and cherry-picked to 2.7
>>
>> > Javadocs + Scaladocs not published on maven central
>> > ---
>> >
>> > Key: KAFKA-9381
>> > URL: https://issues.apache.org/jira/browse/KAFKA-9381
>> > Project: Kafka
>> >  Issue Type: Bug
>> >  Components: documentation, streams
>> >Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0, 2.0.1,
>> 2.1.0, 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 2.4.0, 2.3.1, 2.5.0, 2.4.1
>> >Reporter: Julien Jean Paul Sirocchi
>> >Assignee: Bill Bejeck
>> >Priority: Blocker
>> > Fix For: 2.7.0
>> >
>> >
>> > As per title, empty (aside for MANIFEST, LICENCE and NOTICE)
>> javadocs/scaladocs jars on central for any version (kafka nor scala), e.g.
>> > [
>> http://repo1.maven.org/maven2/org/apache/kafka/kafka-streams-scala_2.12/2.3.1/
>> ]
>>
>>
>>
>> --
>> This message was sent by Atlassian Jira
>> (v8.3.4#803005)
>>
> --
> Mathew Morales
> (602) 826-6582
> www.MathewMorales.com
>
> --
Mathew Morales
(602) 826-6582
www.MathewMorales.com


Re: [jira] [Resolved] (KAFKA-9381) Javadocs + Scaladocs not published on maven central

2020-10-26 Thread Mathew Morales
Stop

On Mon, Oct 26, 2020 at 5:49 PM Bill Bejeck (Jira)  wrote:

>
>  [
> https://issues.apache.org/jira/browse/KAFKA-9381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
> ]
>
> Bill Bejeck resolved KAFKA-9381.
> 
> Resolution: Fixed
>
> Resolved via [https://github.com/apache/kafka/pull/9486.]
>
>
>
> Merged to trunk and cherry-picked to 2.7
>
> > Javadocs + Scaladocs not published on maven central
> > ---
> >
> > Key: KAFKA-9381
> > URL: https://issues.apache.org/jira/browse/KAFKA-9381
> > Project: Kafka
> >  Issue Type: Bug
> >  Components: documentation, streams
> >Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0, 2.0.1,
> 2.1.0, 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 2.4.0, 2.3.1, 2.5.0, 2.4.1
> >Reporter: Julien Jean Paul Sirocchi
> >Assignee: Bill Bejeck
> >Priority: Blocker
> > Fix For: 2.7.0
> >
> >
> > As per title, empty (aside for MANIFEST, LICENCE and NOTICE)
> javadocs/scaladocs jars on central for any version (kafka nor scala), e.g.
> > [
> http://repo1.maven.org/maven2/org/apache/kafka/kafka-streams-scala_2.12/2.3.1/
> ]
>
>
>
> --
> This message was sent by Atlassian Jira
> (v8.3.4#803005)
>
-- 
Mathew Morales
(602) 826-6582
www.MathewMorales.com


[jira] [Resolved] (KAFKA-9381) Javadocs + Scaladocs not published on maven central

2020-10-26 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-9381.

Resolution: Fixed

Resolved via [https://github.com/apache/kafka/pull/9486.]

 

Merged to trunk and cherry-picked to 2.7

> Javadocs + Scaladocs not published on maven central
> ---
>
> Key: KAFKA-9381
> URL: https://issues.apache.org/jira/browse/KAFKA-9381
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation, streams
>Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0, 2.0.1, 2.1.0, 
> 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 2.4.0, 2.3.1, 2.5.0, 2.4.1
>Reporter: Julien Jean Paul Sirocchi
>Assignee: Bill Bejeck
>Priority: Blocker
> Fix For: 2.7.0
>
>
> As per title, empty (aside for MANIFEST, LICENCE and NOTICE) 
> javadocs/scaladocs jars on central for any version (kafka nor scala), e.g.
> [http://repo1.maven.org/maven2/org/apache/kafka/kafka-streams-scala_2.12/2.3.1/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-406: GlobalStreamThread should honor custom reset policy

2020-10-26 Thread Sophie Blee-Goldman
I don't believe there is any KIP yet for the state machine changes, feel
free to grab the
next KIP number.

I don't think it matters too much whether we list this one as "Under
Discussion" or "Blocked".
But it might be preferable to put it as "Blocked" so people know that there
are actual plans
to do this one, and it's just waiting on some other ongoing work.

Thanks for picking this up!


On Mon, Oct 26, 2020 at 7:57 AM Navinder Brar
 wrote:

> Hi,
>
> I have updated the KIP-406 with the discussions that we have had above. Is
> there any KIP proposed yet to change the state machine so that I can link
> it to the KIP?
>
> Also, is there any suggestion whether this KIP should be labeled as
> Under-discussion or Blocked on the KIPs page?
>
> Thanks,
> Navinder
>
> On Thursday, 8 October, 2020, 11:31:28 pm IST, Navinder Brar <
> navinder_b...@yahoo.com.invalid> wrote:
>
>  Thanks, Sophie, Guozhang, and Matthias for sharing your thoughts. I am
> glad that another meaningful KIP is coming out of this discussion. I am
> fine with parking this KIP until we can make the changes to the RESTORING
> state discussed above. I will update this KIP with the closure we
> currently have, i.e. assuming we will make the change to add global
> stores also to the RESTORING phase so that active tasks don't start
> processing while the state is RESTORING.
>
> Regards,
> Navinder
> On Wednesday, 7 October, 2020, 11:39:05 pm IST, Matthias J. Sax <
> mj...@apache.org> wrote:
>
>  I synced with John in-person and he emphasized his concerns about
> breaking code if we change the state machine. From an impl point of
> view, I am concerned that maintaining two state machines at the same
> time might be very complex. John had the idea, though, that we could
> actually do an internal translation: internally, we switch the state
> machine to the new one, but translate the new state to the old state
> before doing the callback. (We only need two separate "state enums", and
> we add a new method to register callbacks for the new state enums and
> deprecate the existing method.)
>
> However, also with regard to the work Guozhang pointed out, I am
> wondering if we should split out an independent KIP just for the state
> machine changes? It seems complex enough by itself. We would hold off
> this KIP until the state machine change is done and resume it afterwards?
>
> Thoughts?
>
> -Matthias
>
> On 10/6/20 8:55 PM, Guozhang Wang wrote:
> > Sorry I'm late to the party.
> >
> > Matthias raised a point to me regarding the recent development of moving
> > restoration from stream threads to separate restore threads and allowing
> > the stream threads to process any processible tasks even when some other
> > tasks are still being restored by the restore threads:
> >
> > https://issues.apache.org/jira/browse/KAFKA-10526
> > https://issues.apache.org/jira/browse/KAFKA-10577
> >
> > That would cause the restoration of non-global states to be more similar
> to
> > global states such that some tasks would be processed even though the
> state
> > of the stream thread is not yet in RUNNING (because today we only transit
> > to it when ALL assigned tasks have completed restoration and are
> > processible).
> >
> > Also, as Sophie already mentioned, today during REBALANCING (in stream
> > thread level, it is PARTITION_REVOKED -> PARTITION_ASSIGNED) some tasks
> may
> > still be processed, and because of KIP-429 the RUNNING ->
> PARTITION_REVOKED
> > -> PARTITION_ASSIGNED can be within a single call and hence be very
> > "transient", whereas PARTITION_ASSIGNED -> RUNNING could still take
> > time as it only does the transition when all tasks are processible.
> >
> > So I think it makes sense to add a RESTORING state at the stream client
> > level, defined as "at least one of the state stores assigned to this
> > client, either global or non-global, is still restoring", and emphasize
> > that during this state the client may still be able to process records,
> > just probably not in full-speed.
> >
> > As for REBALANCING, I think it is a bit less relevant to this KIP but
> > here's a dump of my thoughts: if we can capture the period when "some
> tasks
> > do not belong to any clients and hence processing is not full-speed" it
> > would still be valuable, but unfortunately right now since
> > onPartitionRevoked is not triggered each time on all clients, today's
> > transition would just make a lot of very short REBALANCING state period
> > which is not very useful really. So if we still want to keep that state
> > maybe we can consider the following tweak: at the thread level, we
> replace
> > PARTITION_REVOKED / PARTITION_ASSIGNED with just a single REBALANCING
> > state, and we will transit to this state upon onPartitionRevoked, but we
> > will only transit out of this state upon onAssignment when the assignor
> > decides there's no follow-up rebalance immediately (note we also schedule
> > future rebalances for workload balancing, but that 
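The internal-translation idea Matthias describes above can be sketched as follows; the state names and the RESTORING-to-REBALANCING mapping are illustrative assumptions for this discussion, not a proposed API:

```java
public class StateTranslationSketch {
    // Old, public-facing client states (simplified subset).
    enum OldState { CREATED, REBALANCING, RUNNING, ERROR }

    // Hypothetical new states that add an explicit RESTORING phase.
    enum NewState { CREATED, REBALANCING, RESTORING, RUNNING, ERROR }

    // Before invoking a listener registered against the old enum, translate
    // the internal new state to its closest legacy equivalent, so existing
    // callback code keeps working while the new state machine runs inside.
    static OldState toOldState(NewState s) {
        switch (s) {
            case CREATED:     return OldState.CREATED;
            case REBALANCING: return OldState.REBALANCING;
            case RESTORING:   return OldState.REBALANCING; // assumed closest legacy state
            case RUNNING:     return OldState.RUNNING;
            default:          return OldState.ERROR;
        }
    }

    public static void main(String[] args) {
        System.out.println(toOldState(NewState.RESTORING));
    }
}
```

Listeners registered through a new method would receive `NewState` directly; only the deprecated registration path would go through the translation.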

Re: [VOTE] KIP-665 Kafka Connect Hash SMT

2020-10-26 Thread Brandon Brown
I’ve updated the KIP with suggestions from Gunnar. I’d like to bring this up
for a vote.

Brandon Brown
> On Oct 22, 2020, at 12:53 PM, Brandon Brown  wrote:
> 
> Hey Gunnar,
> 
> Those are great questions!
> 
> 1) I went with it only selecting top-level fields since it seems like that’s 
> the way most of the out-of-the-box SMTs work; however, I could see a lot of 
> value in it supporting nested fields. 
> 2) I had not thought about adding salt but I think that would be a valid 
> option as well. 
> 
> I think I’ll update the KIP to reflect those suggestions. One more, do you 
> think this should allow a regex for fields or stick with the explicit naming 
> of the fields?
> 
> Thanks for the great feedback
> 
> Brandon Brown
> 
>> On Oct 22, 2020, at 12:40 PM, Gunnar Morling 
>>  wrote:
>> 
>> Hey Brandon,
>> 
>> I think that's an interesting idea, we got something as a built-in
>> connector feature in Debezium, too [1]. Two questions:
>> 
>> * Can "field" select nested fields, e.g. "after.email"?
>> * Did you consider an option for specifying salt for the hash functions?
>> 
>> --Gunnar
>> 
>> [1]
>> https://debezium.io/documentation/reference/connectors/mysql.html#mysql-property-column-mask-hash
>> 
>> 
>> 
>>> Am Do., 22. Okt. 2020 um 12:53 Uhr schrieb Brandon Brown <
>>> bran...@bbrownsound.com>:
>>> 
>>> Gonna give this another little bump. :)
>>> 
>>> Brandon Brown
>>> 
 On Oct 15, 2020, at 12:51 PM, Brandon Brown 
>>> wrote:
 
 
 As I mentioned in the KIP, this transformer is slightly different from
>>> the current MaskField SMT.
 
> Currently there exists a MaskField SMT but that would completely remove
>>> the value by setting it to an equivalent null value. One problem with this
>>> would be that you’d not be able to know in the case of say a password going
>>> through the mask transform it would become "" which could mean that no
>>> password was present in the message, or it was removed. However this hash
>>> transformer would remove this ambiguity if that makes sense. The proposed
>>> hash functions would be MD5, SHA1, SHA256, which are all supported via
>>> MessageDigest.
 
 Given this take on things do you still think there would be value in
>>> this smt?
 
 
 Brandon Brown
> On Oct 15, 2020, at 12:36 PM, Ning Zhang 
>>> wrote:
> 
> Hello, I think this SMT feature is parallel to
>>> https://docs.confluent.io/current/connect/transforms/index.html
> 
>>> On 2020/10/15 15:24:51, Brandon Brown 
>>> wrote:
>> Bumping this thread.
>> Please take a look at the KIP and vote or let me know if you have any
>>> feedback.
>> 
>> KIP:
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-665%3A+Kafka+Connect+Hash+SMT
>> 
>> Proposed: https://github.com/apache/kafka/pull/9057
>> 
>> Thanks
>> 
>> Brandon Brown
>> 
 On Oct 8, 2020, at 10:30 PM, Brandon Brown 
>>> wrote:
>>> 
>>> Just wanted to give another bump on this and see if anyone had any
>>> comments.
>>> 
>>> Thanks!
>>> 
>>> Brandon Brown
>>> 
 On Oct 1, 2020, at 9:11 AM, "bran...@bbrownsound.com" <
>>> bran...@bbrownsound.com> wrote:
 
 Hey Kafka Developers,
 
 I’ve created the following KIP and updated it based on feedback from
>>> Mickael. I was wondering if we could get a vote on my proposal and move
>>> forward with the proposed pr.
 
 Thanks so much!
 -Brandon
>> 
>>> 
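
The hashing approach discussed in this thread can be sketched with
java.security.MessageDigest (a minimal sketch; the field selection and salt
handling here are hypothetical illustrations, not the SMT's actual API):

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HashFieldSketch {

    // Hash a single field value with one of the MessageDigest algorithms
    // mentioned in the thread (MD5, SHA-1, SHA-256), with an optional salt.
    static String hashField(String algorithm, String salt, String value)
            throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        byte[] digest = md.digest((salt + value).getBytes(StandardCharsets.UTF_8));
        // Hex-encode so the hashed value remains a readable string field.
        return String.format("%0" + (digest.length * 2) + "x", new BigInteger(1, digest));
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // Unlike MaskField, a present-but-hashed password is distinguishable
        // from an absent one: hashing is deterministic and non-empty.
        System.out.println(hashField("SHA-256", "", "secret"));
        System.out.println(hashField("SHA-256", "pepper", "secret"));
    }
}
```

Note the salt option Gunnar suggested falls out naturally: two connectors
configured with different salts produce different hashes for the same input.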


[jira] [Resolved] (KAFKA-10616) StreamThread killed by "IllegalStateException: The processor is already closed"

2020-10-26 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-10616.
---
Resolution: Fixed

> StreamThread killed by "IllegalStateException: The processor is already 
> closed"
> ---
>
> Key: KAFKA-10616
> URL: https://issues.apache.org/jira/browse/KAFKA-10616
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 2.7.0
>
>
> Application is hitting "java.lang.IllegalStateException: The processor is 
> already closed". Over the course of about a day, this exception killed 21/100 
> of the queries (StreamThreads). The (slightly trimmed) stacktrace:
>  
> {code:java}
> java.lang.RuntimeException: Caught an exception while closing caching window 
> store for store Aggregate-Aggregate-Materialize at 
> org.apache.kafka.streams.state.internals.ExceptionUtils.throwSuppressed(ExceptionUtils.java:39)
>  at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.close(CachingWindowStore.java:432)
>  at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.close(ProcessorStateManager.java:527)
>  at 
> org.apache.kafka.streams.processor.internals.StreamTask.closeDirty(StreamTask.java:499)
>  at 
> org.apache.kafka.streams.processor.internals.TaskManager.handleLostAll(TaskManager.java:626)
>  … Caused by: java.lang.IllegalStateException: The processor is already 
> closed at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.throwIfClosed(ProcessorNode.java:172)
>  at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:178)
>  at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:273)
>  at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:252)
>  at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:214)
>  at 
> org.apache.kafka.streams.kstream.internals.TimestampedCacheFlushListener.apply(TimestampedCacheFlushListener.java:45)
>  at 
> org.apache.kafka.streams.kstream.internals.TimestampedCacheFlushListener.apply(TimestampedCacheFlushListener.java:28)
>  at 
> org.apache.kafka.streams.state.internals.MeteredWindowStore.lambda$setFlushListener$1(MeteredWindowStore.java:110)
>  at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.putAndMaybeForward(CachingWindowStore.java:118)
>  at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.lambda$initInternal$0(CachingWindowStore.java:93)
>  at 
> org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:151)
>  at 
> org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:109)
>  at 
> org.apache.kafka.streams.state.internals.ThreadCache.flush(ThreadCache.java:116)
>  at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.lambda$close$1(CachingWindowStore.java:427)
>  at 
> org.apache.kafka.streams.state.internals.ExceptionUtils.executeAll(ExceptionUtils.java:28)
>  at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.close(CachingWindowStore.java:426)
> {code}
>  
> I'm guessing we close the topology before closing the state stores, so 
> records that get flushed during the caching store's close() will run into an 
> already-closed processor. During a clean close we should always flush before 
> closing anything (during prepareCommit()), but since this was a 
> handleLostAll() we would just skip right to suspend() and close the topology.
> Presumably the right thing to do here is to flush the caches before closing 
> anything during a dirty close.
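> 
> The ordering problem can be illustrated with a minimal sketch (hypothetical
> names, not the actual Streams internals): flushing after the downstream
> processor is closed throws, so a dirty close must flush the caches first.

```java
import java.util.ArrayList;
import java.util.List;

public class DirtyCloseSketch {

    static class Processor {
        boolean closed = false;

        void process(String record) {
            // Mirrors ProcessorNode.throwIfClosed(): forwarding into a
            // closed processor is an illegal state.
            if (closed) throw new IllegalStateException("The processor is already closed");
        }
    }

    static class CachingStore {
        final List<String> cache = new ArrayList<>();
        final Processor downstream;

        CachingStore(Processor p) { this.downstream = p; }

        void put(String record) { cache.add(record); }

        // Flushing forwards dirty cache entries downstream.
        void flush() {
            for (String r : cache) downstream.process(r);
            cache.clear();
        }

        void close() { flush(); } // close() implicitly flushes, as in CachingWindowStore
    }

    public static void main(String[] args) {
        // Buggy dirty-close order: close the topology first, then the store.
        // The store's close() flushes into the closed processor and throws.
        Processor topology = new Processor();
        CachingStore store = new CachingStore(topology);
        store.put("k=v");
        topology.closed = true;
        try {
            store.close();
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }

        // Fixed order: flush the caches before closing anything.
        Processor topology2 = new Processor();
        CachingStore store2 = new CachingStore(topology2);
        store2.put("k=v");
        store2.flush();          // flush while processors are still open
        topology2.closed = true;
        store2.close();          // cache is empty, so this flush is a no-op
        System.out.println("clean dirty close");
    }
}
```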



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #204

2020-10-26 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #180

2020-10-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix NPE in KafkaAdminClient.describeUserScramCredentials (#9374)

[github] MINOR; DescribeUserScramCredentialsRequest API should handle request 
with users equals to `null` (#9504)


--
[...truncated 6.89 MB...]

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@786c4d68,
 timestamped = true, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@786c4d68,
 timestamped = true, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@38817a3a,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@38817a3a,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@52b23835,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@52b23835,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@73934268,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@73934268,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@690e467f,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@690e467f,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5aeda1ce,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5aeda1ce,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@195d4936,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@195d4936,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@7034d877,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@7034d877,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@6b9255b0,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@6b9255b0,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 

Re: [DISCUSS] Apache Kafka 2.7.0 release

2020-10-26 Thread David Jacot
Hi Bill,

We have found a small regression:
https://issues.apache.org/jira/browse/KAFKA-10647.
This was introduced while we migrated the consumer protocol to using the
auto-generated
protocol. I have opened a PR to fix it (one line):
https://github.com/apache/kafka/pull/9506.

Best,
David

On Mon, Oct 26, 2020 at 4:23 PM Bill Bejeck  wrote:

> Hi David,
>
> I agree that these small issues should be included in 2.7.
>
> Thanks,
> Bill
>
> On Mon, Oct 26, 2020 at 10:58 AM David Jacot  wrote:
>
> > Hi Bill,
> >
> > We have found two small issues related to the newly
> > introduced describeUserScramCredentials API:
> > 1) https://github.com/apache/kafka/pull/9374
> > 2) https://github.com/apache/kafka/pull/9504
> >
> > While not a regression, I'd like to get them in 2.7 if possible to avoid
> > releasing a new API with known
> > bugs.
> >
> > Best,
> > David
> >
> > On Thu, Oct 22, 2020 at 8:39 PM Bruno Cadonna 
> wrote:
> >
> > > Hi Bill,
> > >
> > > I took a second look at the git history and now it actually seems to be
> > > a regression. Probably, a change in August that extended the error
> codes
> > > introduced this bug.
> > >
> > > Best,
> > > Bruno
> > >
> > > On 22.10.20 19:50, Bill Bejeck wrote:
> > > > Hi Bruno,
> > > >
> > > > While technically it's not a regression, I think this is an important
> > fix
> > > > with a low-risk to include, so we can leave it as a blocker.
> > > >
> > > > Thanks,
> > > > Bill
> > > >
> > > >
> > > > On Thu, Oct 22, 2020 at 1:25 PM Bruno Cadonna 
> > > wrote:
> > > >
> > > >> Hi Bill,
> > > >>
> > > >> we encountered the following bug in our soak testing cluster.
> > > >>
> > > >> https://issues.apache.org/jira/browse/KAFKA-10631
> > > >>
> > > >> I classified the bug as a blocker because it caused the death of a
> > > >> stream thread. It does not seem to be a regression, though.
> > > >>
> > > >> I opened a PR to fix the bug here:
> > > >>
> > > >> https://github.com/apache/kafka/pull/9479
> > > >>
> > > >> Feel free to downgrade the priority to "Major" if you think it is
> not
> > a
> > > >> blocker.
> > > >>
> > > >> Best,
> > > >> Bruno
> > > >>
> > > >> On 22.10.20 17:49, Bill Bejeck wrote:
> > > >>> Hi All,
> > > >>>
> > > >>> We've hit code freeze.  The current status for cutting an RC is
> there
> > > is
> > > >>> one blocker issue.  It looks like there is a fix in the works, so
> > > >>> hopefully, it will get merged early next week.
> > > >>>
> > > >>> At that point, if there are no other blockers, I proceed with the
> RC
> > > >>> process.
> > > >>>
> > > >>> Thanks,
> > > >>> Bill
> > > >>>
> > > >>> On Wed, Oct 7, 2020 at 12:10 PM Bill Bejeck 
> > wrote:
> > > >>>
> > >  Hi Anna,
> > > 
> > >  I've updated the table to only show KAFKA-10023 as going into 2.7
> > > 
> > >  Thanks,
> > >  Bill
> > > 
> > >  On Tue, Oct 6, 2020 at 6:51 PM Anna Povzner 
> > > wrote:
> > > 
> > > > Hi Bill,
> > > >
> > > > Regarding KIP-612, only the first half of the KIP will get into
> 2.7
> > > > release: Broker-wide and per-listener connection rate limits,
> > > including
> > > > corresponding configs and metric (KAFKA-10023). I see that the
> > table
> > > in
> > > > the
> > > > release plan tags KAFKA-10023 as "old", not sure what it refers
> to.
> > > >> Note
> > > > that while KIP-612 was approved prior to 2.6 release, none of the
> > > > implementation went into 2.6 release.
> > > >
> > > > The second half of the KIP that adds per-IP connection rate
> > limiting
> > > >> will
> > > > need to be postponed (KAFKA-10024) till the following release.
> > > >
> > > > Thanks,
> > > > Anna
> > > >
> > > > On Tue, Oct 6, 2020 at 2:30 PM Bill Bejeck 
> > > wrote:
> > > >
> > > >> Hi Kowshik,
> > > >>
> > > >> Given that the new feature is contained in the PR and the
> tooling
> > is
> > > >> follow-on work (minor work, but that's part of the submitted
> PR),
> > I
> > > > think
> > > >> this is fine.
> > > >>
> > > >> Thanks,
> > > >> BIll
> > > >>
> > > >> On Tue, Oct 6, 2020 at 5:00 PM Kowshik Prakasam <
> > > >> kpraka...@confluent.io
> > > >>
> > > >> wrote:
> > > >>
> > > >>> Hey Bill,
> > > >>>
> > > >>> For KIP-584 ,
> we
> > > are
> > > > in
> > > >> the
> > > >>> process of reviewing/merging the write path PR into AK trunk:
> > > >>> https://github.com/apache/kafka/pull/9001 . As far as the KIP
> > > goes,
> > > > this
> > > >>> PR
> > > >>> is a major milestone. The PR merge will hopefully be done
> before
> > > EOD
> > > >>> tomorrow in time for the feature freeze. Beyond this PR, couple
> > > >> things
> > > >> are
> > > >>> left to be completed for this KIP: (1) tooling support and (2)
> > > >> implementing
> > > >>> support for feature version deprecation in the broker . In
> > > >> particular,

[jira] [Created] (KAFKA-10647) Only serialize owned partition when consumer protocol version >= 1

2020-10-26 Thread David Jacot (Jira)
David Jacot created KAFKA-10647:
---

 Summary: Only serialize owned partition when consumer protocol 
version >= 1 
 Key: KAFKA-10647
 URL: https://issues.apache.org/jira/browse/KAFKA-10647
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: David Jacot
Assignee: David Jacot


A regression got introduced by https://github.com/apache/kafka/pull/8897. The 
owned partition field must be ignored for version < 1; otherwise the 
serialization fails with an unsupported version exception.
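
The version-gating fix can be sketched as follows (a hypothetical hand-written
writer, not the actual auto-generated protocol code): a field introduced in
protocol version 1 must simply be skipped when serializing at version 0.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.List;

public class VersionGatedSerializerSketch {

    // Serialize a consumer subscription; the owned-partitions field only
    // exists at protocol version >= 1 and must be skipped at version 0.
    static byte[] serialize(short version, List<String> topics, List<Integer> ownedPartitions)
            throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeShort(version);
        out.writeInt(topics.size());
        for (String t : topics) out.writeUTF(t);
        if (version >= 1) {                    // the missing guard in the regression
            out.writeInt(ownedPartitions.size());
            for (int p : ownedPartitions) out.writeInt(p);
        }
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] v0 = serialize((short) 0, List.of("topic-a"), List.of(0, 1));
        byte[] v1 = serialize((short) 1, List.of("topic-a"), List.of(0, 1));
        System.out.println("v0 bytes: " + v0.length + ", v1 bytes: " + v1.length);
    }
}
```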



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #172

2020-10-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR; DescribeUserScramCredentialsRequest API should handle request 
with users equals to `null` (#9504)


--
[...truncated 3.42 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest 

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #171

2020-10-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix NPE in KafkaAdminClient.describeUserScramCredentials (#9374)


--
[...truncated 3.42 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest 

[jira] [Created] (KAFKA-10646) Support dynamic config for "delete.topic.enable"

2020-10-26 Thread Prateek Agarwal (Jira)
Prateek Agarwal created KAFKA-10646:
---

 Summary: Support dynamic config for "delete.topic.enable"
 Key: KAFKA-10646
 URL: https://issues.apache.org/jira/browse/KAFKA-10646
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 1.1.2, 2.5.2
Reporter: Prateek Agarwal


Topic deletion in Kafka removes data and the operation is not reversible (i.e. 
there is no "undelete" operation). Hence, we keep the flag 
"delete.topic.enable" as "false" in our configs, so that no one accidentally 
deletes a topic, which could lead to incidents.

But sometimes, there are legit client use-cases, where they deprecate a topic 
and those topics need to be deleted from the cluster. Currently, the process to 
do this operation is very cumbersome:
{code:java}
1. Change the `server.properties` config on each broker with: 
`delete.topic.enable=true`
2. Roll the cluster
3. Run Kafka Admin commands to delete the topics
4. Revert the `server.properties` config change for `delete.topic.enable`
5. Roll the cluster again{code}
There are other hacky workarounds as well, like:
{code:java}
1. Set Topic retention of the topic-to-be-deleted to a small value to flush the 
data
2. Invoke `zkcli.sh rmr /brokers/topic/topic-to-be-deleted`
3. When Topic metadata is no longer available, rm the topic dirs on the 
brokers{code}
The above process is pretty risky and can lead to unavailability of one or more 
topics if any mistake happens in the commands.

 

Proposed solution:
{code:java}
Make the `delete.topic.enable` config dynamic so that the flag can be modified 
without broker restarts {code}
 

Example runs:

1) Enable Topic Deletion Cluster wide
{code:java}
bin/kafka-configs.sh --bootstrap-server localhost:9092 \ 
 --alter --add-config 'delete.topic.enable=true' \
 --entity-default --entity-type brokers{code}
2) Enable Topic Deletion on Single Broker with broker ID: 0
{code:java}
bin/kafka-configs.sh --bootstrap-server localhost:9092 \ 
 --alter --add-config 'delete.topic.enable=true' \
 --entity-name 0 --entity-type brokers {code}
 

This will be similar to other broker dynamic configs currently available, like 
{{log.cleaner.threads}}, which can be defined cluster-wide OR at the 
per-broker level.

The precedence will look like so:
 * Dynamic per-broker config stored in ZooKeeper
 * Dynamic cluster-wide default config stored in ZooKeeper
 * Static broker config from {{server.properties}}

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 2.7.0 release

2020-10-26 Thread Bill Bejeck
Hi David,

I agree that these small issues should be included in 2.7.

Thanks,
Bill

On Mon, Oct 26, 2020 at 10:58 AM David Jacot  wrote:

> Hi Bill,
>
> We have found two small issues related to the newly
> introduced describeUserScramCredentials API:
> 1) https://github.com/apache/kafka/pull/9374
> 2) https://github.com/apache/kafka/pull/9504
>
> While not a regression, I'd like to get them in 2.7 if possible to avoid
> releasing a new API with known
> bugs.
>
> Best,
> David
>
> On Thu, Oct 22, 2020 at 8:39 PM Bruno Cadonna  wrote:
>
> > Hi Bill,
> >
> > I took a second look at the git history and now it actually seems to be
> > a regression. Probably, a change in August that extended the error codes
> > introduced this bug.
> >
> > Best,
> > Bruno
> >
> > On 22.10.20 19:50, Bill Bejeck wrote:
> > > Hi Bruno,
> > >
> > > While technically it's not a regression, I think this is an important
> fix
> > > with a low-risk to include, so we can leave it as a blocker.
> > >
> > > Thanks,
> > > Bill
> > >
> > >
> > > On Thu, Oct 22, 2020 at 1:25 PM Bruno Cadonna 
> > wrote:
> > >
> > >> Hi Bill,
> > >>
> > >> we encountered the following bug in our soak testing cluster.
> > >>
> > >> https://issues.apache.org/jira/browse/KAFKA-10631
> > >>
> > >> I classified the bug as a blocker because it caused the death of a
> > >> stream thread. It does not seem to be a regression, though.
> > >>
> > >> I opened a PR to fix the bug here:
> > >>
> > >> https://github.com/apache/kafka/pull/9479
> > >>
> > >> Feel free to downgrade the priority to "Major" if you think it is not
> a
> > >> blocker.
> > >>
> > >> Best,
> > >> Bruno
> > >>
> > >> On 22.10.20 17:49, Bill Bejeck wrote:
> > >>> Hi All,
> > >>>
> > >>> We've hit code freeze.  The current status for cutting an RC is there
> > is
> > >>> one blocker issue.  It looks like there is a fix in the works, so
> > >>> hopefully, it will get merged early next week.
> > >>>
> > >>> At that point, if there are no other blockers, I proceed with the RC
> > >>> process.
> > >>>
> > >>> Thanks,
> > >>> Bill
> > >>>
> > >>> On Wed, Oct 7, 2020 at 12:10 PM Bill Bejeck 
> wrote:
> > >>>
> >  Hi Anna,
> > 
> >  I've updated the table to only show KAFKA-10023 as going into 2.7
> > 
> >  Thanks,
> >  Bill
> > 
> >  On Tue, Oct 6, 2020 at 6:51 PM Anna Povzner 
> > wrote:
> > 
> > > Hi Bill,
> > >
> > > Regarding KIP-612, only the first half of the KIP will get into 2.7
> > > release: Broker-wide and per-listener connection rate limits,
> > including
> > > corresponding configs and metric (KAFKA-10023). I see that the
> table
> > in
> > > the
> > > release plan tags KAFKA-10023 as "old", not sure what it refers to.
> > >> Note
> > > that while KIP-612 was approved prior to 2.6 release, none of the
> > > implementation went into 2.6 release.
> > >
> > > The second half of the KIP that adds per-IP connection rate
> limiting
> > >> will
> > > need to be postponed (KAFKA-10024) till the following release.
> > >
> > > Thanks,
> > > Anna
> > >
> > > On Tue, Oct 6, 2020 at 2:30 PM Bill Bejeck 
> > wrote:
> > >
> > >> Hi Kowshik,
> > >>
> > >> Given that the new feature is contained in the PR and the tooling
> is
> > >> follow-on work (minor work, but that's part of the submitted PR),
> I
> > > think
> > >> this is fine.
> > >>
> > >> Thanks,
> > >> BIll
> > >>
> > >> On Tue, Oct 6, 2020 at 5:00 PM Kowshik Prakasam <
> > >> kpraka...@confluent.io
> > >>
> > >> wrote:
> > >>
> > >>> Hey Bill,
> > >>>
> > >>> For KIP-584 , we
> > are
> > > in
> > >> the
> > >>> process of reviewing/merging the write path PR into AK trunk:
> > >>> https://github.com/apache/kafka/pull/9001 . As far as the KIP
> > goes,
> > > this
> > >>> PR
> > >>> is a major milestone. The PR merge will hopefully be done before
> > EOD
> > >>> tomorrow in time for the feature freeze. Beyond this PR, couple
> > >> things
> > >> are
> > >>> left to be completed for this KIP: (1) tooling support and (2)
> > >> implementing
> > >>> support for feature version deprecation in the broker . In
> > >> particular,
> > >> (1)
> > >>> is important for this KIP and the code changes are external to
> the
> > > broker
> > >>> (since it is a separate tool we intend to build). As of now, we
> > won't
> > > be
> > >>> able to merge the tooling changes before feature freeze date.
> Would
> > > it be
> > >>> ok to merge the tooling changes before code freeze on 10/22? The
> > > tooling
> > >>> requirements are explained here:
> > >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584
> > >>> 
> > >>>
> > >>>
> > >>
> > >
> > >>
> >
> 

Re: [DISCUSS] Apache Kafka 2.7.0 release

2020-10-26 Thread David Jacot
Hi Bill,

We have found two small issues related to the newly
introduced describeUserScramCredentials API:
1) https://github.com/apache/kafka/pull/9374
2) https://github.com/apache/kafka/pull/9504

While not a regression, I'd like to get them into 2.7 if possible, to avoid
releasing a new API with known bugs.

Best,
David

On Thu, Oct 22, 2020 at 8:39 PM Bruno Cadonna  wrote:

> Hi Bill,
>
> I took a second look at the git history and now it actually seems to be
> a regression. Probably, a change in August that extended the error codes
> introduced this bug.
>
> Best,
> Bruno
>
> On 22.10.20 19:50, Bill Bejeck wrote:
> > Hi Bruno,
> >
> > While technically it's not a regression, I think this is an important fix
> > with low risk to include, so we can leave it as a blocker.
> >
> > Thanks,
> > Bill
> >
> >
> > On Thu, Oct 22, 2020 at 1:25 PM Bruno Cadonna 
> wrote:
> >
> >> Hi Bill,
> >>
> >> we encountered the following bug in our soak testing cluster.
> >>
> >> https://issues.apache.org/jira/browse/KAFKA-10631
> >>
> >> I classified the bug as a blocker because it caused the death of a
> >> stream thread. It does not seem to be a regression, though.
> >>
> >> I opened a PR to fix the bug here:
> >>
> >> https://github.com/apache/kafka/pull/9479
> >>
> >> Feel free to downgrade the priority to "Major" if you think it is not a
> >> blocker.
> >>
> >> Best,
> >> Bruno
> >>
> >> On 22.10.20 17:49, Bill Bejeck wrote:
> >>> Hi All,
> >>>
> >>> We've hit code freeze.  The current status for cutting an RC is that
> >>> there is one blocker issue.  It looks like there is a fix in the works, so
> >>> hopefully, it will get merged early next week.
> >>>
> >>> At that point, if there are no other blockers, I proceed with the RC
> >>> process.
> >>>
> >>> Thanks,
> >>> Bill
> >>>
> >>> On Wed, Oct 7, 2020 at 12:10 PM Bill Bejeck  wrote:
> >>>
>  Hi Anna,
> 
>  I've updated the table to only show KAFKA-10023 as going into 2.7
> 
>  Thanks,
>  Bill
> 
>  On Tue, Oct 6, 2020 at 6:51 PM Anna Povzner 
> wrote:
> 
> > Hi Bill,
> >
> > Regarding KIP-612, only the first half of the KIP will get into 2.7
> > release: Broker-wide and per-listener connection rate limits,
> including
> > corresponding configs and metric (KAFKA-10023). I see that the table
> in
> > the
> > release plan tags KAFKA-10023 as "old", not sure what it refers to.
> >> Note
> > that while KIP-612 was approved prior to 2.6 release, none of the
> > implementation went into 2.6 release.
> >
> > The second half of the KIP that adds per-IP connection rate limiting
> >> will
> > need to be postponed (KAFKA-10024) till the following release.
> >
> > Thanks,
> > Anna
> >
> > On Tue, Oct 6, 2020 at 2:30 PM Bill Bejeck 
> wrote:
> >
> >> Hi Kowshik,
> >>
> >> Given that the new feature is contained in the PR and the tooling is
> >> follow-on work (minor work, but that's part of the submitted PR), I
> > think
> >> this is fine.
> >>
> >> Thanks,
> >> BIll
> >>
> >> On Tue, Oct 6, 2020 at 5:00 PM Kowshik Prakasam <
> >> kpraka...@confluent.io
> >>
> >> wrote:
> >>
> >>> Hey Bill,
> >>>
> >>> For KIP-584 , we
> are
> > in
> >> the
> >>> process of reviewing/merging the write path PR into AK trunk:
> >>> https://github.com/apache/kafka/pull/9001 . As far as the KIP
> goes,
> > this
> >>> PR
> >>> is a major milestone. The PR merge will hopefully be done before
> EOD
> >>> tomorrow in time for the feature freeze. Beyond this PR, couple
> >> things
> >> are
> >>> left to be completed for this KIP: (1) tooling support and (2)
> >> implementing
> >>> support for feature version deprecation in the broker . In
> >> particular,
> >> (1)
> >>> is important for this KIP and the code changes are external to the
> > broker
> >>> (since it is a separate tool we intend to build). As of now, we
> won't
> > be
> >>> able to merge the tooling changes before feature freeze date. Would
> > it be
> >>> ok to merge the tooling changes before code freeze on 10/22? The
> > tooling
> >>> requirements are explained here:
> >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Toolingsupport
> >>>
> >>> I would love to hear thoughts from Boyang and Jun as well.
> >>>
> >>>
> >>> Thanks,
> >>> Kowshik
> >>>
> >>>
> >>>
> >>> On Mon, Oct 5, 2020 at 3:29 PM Bill Bejeck 
> >> wrote:
> >>>
>  Hi John,
> 
>  I've updated the list of expected KIPs for 2.7.0 with KIP-478.
> 
>  Thanks,
>  Bill
> 
>  On Mon, Oct 

Re: [DISCUSS] KIP-406: GlobalStreamThread should honor custom reset policy

2020-10-26 Thread Navinder Brar
Hi,

I have updated KIP-406 with the discussion we have had above. Is there a KIP
proposed yet for the state-machine change, so that I can link to it from this KIP?

Also, should this KIP be labeled as Under-discussion or as Blocked on the
KIPs page?

Thanks,
Navinder 

On Thursday, 8 October, 2020, 11:31:28 pm IST, Navinder Brar 
 wrote:  
 
 Thanks, Sophie, Guozhang, and Matthias, for sharing your thoughts. I am glad 
that another meaningful KIP is coming out of this discussion. I am fine with 
parking this KIP until we can make the RESTORING-state changes we have 
discussed above. I will update this KIP with the closure we currently have, 
i.e. assuming we will also add global stores to the RESTORING phase so that 
active tasks don't start processing while the state is RESTORING.

Regards,
Navinder 
    On Wednesday, 7 October, 2020, 11:39:05 pm IST, Matthias J. Sax 
 wrote:  
 
 I synced with John in person and he emphasized his concerns about
breaking code if we change the state machine. From an implementation
point of view, I am concerned that maintaining two state machines at the
same time might be very complex. John had the idea, though, that we
could do an internal translation: internally, we switch the state
machine to the new one, but translate new-state to old-state before
doing the callback. (We would only need two separate "state enums"; we
add a new method to register callbacks for the new state enums and
deprecate the existing method.)

However, also with regard to the work Guozhang pointed out, I am
wondering if we should split out an independent KIP just for the state
machine changes? It seems complex enough by itself. We would hold off
this KIP until the state machine change is done and resume it afterwards?

Thoughts?

-Matthias
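The translation idea Matthias describes above could be sketched roughly as follows. This is illustrative only: `NewState`/`OldState` are hypothetical stand-ins, not the actual `KafkaStreams.State` enum, and mapping the new RESTORING state onto the old REBALANCING state is one plausible choice, not a decided design.

```java
import java.util.EnumMap;
import java.util.Map;

// Sketch: keep one internal (new) state machine and translate back to the
// old states before invoking callbacks registered via the deprecated method.
// NewState/OldState are hypothetical names for this illustration.
class StateTranslator {

    enum NewState { CREATED, REBALANCING, RESTORING, RUNNING, PENDING_SHUTDOWN, NOT_RUNNING }
    enum OldState { CREATED, REBALANCING, RUNNING, PENDING_SHUTDOWN, NOT_RUNNING }

    private static final Map<NewState, OldState> TO_OLD = new EnumMap<>(NewState.class);
    static {
        TO_OLD.put(NewState.CREATED, OldState.CREATED);
        TO_OLD.put(NewState.REBALANCING, OldState.REBALANCING);
        // Legacy listeners never saw RESTORING; report it as REBALANCING,
        // the state they would have observed before the change (assumption).
        TO_OLD.put(NewState.RESTORING, OldState.REBALANCING);
        TO_OLD.put(NewState.RUNNING, OldState.RUNNING);
        TO_OLD.put(NewState.PENDING_SHUTDOWN, OldState.PENDING_SHUTDOWN);
        TO_OLD.put(NewState.NOT_RUNNING, OldState.NOT_RUNNING);
    }

    // Called just before firing a callback registered through the old API.
    static OldState toOldState(NewState newState) {
        return TO_OLD.get(newState);
    }
}
```

With this shape, only the callback-dispatch path needs the map; the rest of the runtime works purely in terms of the new states.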

On 10/6/20 8:55 PM, Guozhang Wang wrote:
> Sorry I'm late to the party.
> 
> Matthias raised a point to me regarding the recent development of moving
> restoration from stream threads to separate restore threads and allowing
> the stream threads to process any processible tasks even when some other
> tasks are still being restored by the restore threads:
> 
> https://issues.apache.org/jira/browse/KAFKA-10526
> https://issues.apache.org/jira/browse/KAFKA-10577
> 
> That would cause the restoration of non-global states to be more similar to
> global states such that some tasks would be processed even though the state
> of the stream thread is not yet in RUNNING (because today we only transit
> to it when ALL assigned tasks have completed restoration and are
> processible).
> 
> Also, as Sophie already mentioned, today during REBALANCING (in stream
> thread level, it is PARTITION_REVOKED -> PARTITION_ASSIGNED) some tasks may
> still be processed, and because of KIP-429 the RUNNING -> PARTITION_REVOKED
> -> PARTITION_ASSIGNED can be within a single call and hence be very
> "transient", whereas PARTITION_ASSIGNED -> RUNNING could still take time as
> it only does the transition when all tasks are processible.
> 
> So I think it makes sense to add a RESTORING state at the stream client
> level, defined as "at least one of the state stores assigned to this
> client, either global or non-global, is still restoring", and emphasize
> that during this state the client may still be able to process records,
> just probably not in full-speed.
> 
> As for REBALANCING, I think it is a bit less relevant to this KIP but
> here's a dump of my thoughts: if we can capture the period when "some tasks
> do not belong to any clients and hence processing is not full-speed" it
> would still be valuable, but unfortunately right now since
> onPartitionRevoked is not triggered each time on all clients, today's
> transition would just make a lot of very short REBALANCING state periods
> which is not very useful really. So if we still want to keep that state
> maybe we can consider the following tweak: at the thread level, we replace
> PARTITION_REVOKED / PARTITION_ASSIGNED with just a single REBALANCING
> state, and we will transit to this state upon onPartitionRevoked, but we
> will only transit out of this state upon onAssignment when the assignor
> decides there's no follow-up rebalance immediately (note we also schedule
> future rebalances for workload balancing, but that would still trigger
> transiting out of it). On the client level, we would enter REBALANCING when
> any threads enter REBALANCING and we would transit out of it when all
> transits out of it. In this case, it is possible that during a rebalance,
> only those clients that have to revoke some partition would enter the
> REBALANCING state while others that only get additional tasks would not
> enter this state at all.
> 
> With all that being said, I think the discussion around REBALANCING is less
> relevant to this KIP, and even for RESTORING I honestly think maybe we can
> make it in another KIP out of 406. It will, admittedly leave us in a
> temporary 
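The client-level RESTORING rule Guozhang proposes ("at least one of the state stores assigned to this client, either global or non-global, is still restoring") could be modeled roughly like this. The names `ClientState`/`StoreStatus` are hypothetical, for this sketch only, and not the actual Streams internals.

```java
import java.util.List;

// Sketch of the proposed client-level state rule: the client is RESTORING
// while any assigned store (global or non-global) is still restoring, even
// though some tasks may still be processed (just not at full speed).
class ClientStateSketch {

    enum ClientState { RUNNING, RESTORING }

    static final class StoreStatus {
        final String name;
        final boolean restoring;
        StoreStatus(String name, boolean restoring) {
            this.name = name;
            this.restoring = restoring;
        }
    }

    static ClientState clientState(List<StoreStatus> assignedStores) {
        for (StoreStatus store : assignedStores) {
            if (store.restoring) {
                return ClientState.RESTORING; // one restoring store is enough
            }
        }
        return ClientState.RUNNING;
    }
}
```

The same "any thread in X puts the client in X" aggregation would apply to the REBALANCING variant discussed above.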

[jira] [Created] (KAFKA-10645) Forwarding a record from a punctuator sometimes results in a NullPointerException

2020-10-26 Thread Filippo Machi (Jira)
Filippo Machi created KAFKA-10645:
-

 Summary: Forwarding a record from a punctuator sometimes 
results in a NullPointerException
 Key: KAFKA-10645
 URL: https://issues.apache.org/jira/browse/KAFKA-10645
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.5.0
Reporter: Filippo Machi


Hello,
I am working on a Java Kafka Streams application (v. 2.5.0) running on a 
Kubernetes cluster.

It's a Spring Boot application running on Java 8.

Since the upgrade to version 2.5.0, I have started to see in the logs some 
NullPointerExceptions that happen when forwarding a record from a punctuator.
This is the stacktrace of the exception:


{code:java}
Caused by: org.apache.kafka.streams.errors.StreamsException: task [2_2] Abort sending since an error caught with a previous record (timestamp 1603721062667) to topic reply-reminder-push-sender due to java.lang.NullPointerException
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:240)
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:111)
    at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:89)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:201)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:180)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:133)
    ... 24 common frames omitted
Caused by: java.lang.NullPointerException: null
    at org.apache.kafka.common.record.DefaultRecord.sizeOf(DefaultRecord.java:613)
    at org.apache.kafka.common.record.DefaultRecord.recordSizeUpperBound(DefaultRecord.java:633)
    at org.apache.kafka.common.record.DefaultRecordBatch.estimateBatchSizeUpperBound(DefaultRecordBatch.java:534)
    at org.apache.kafka.common.record.AbstractRecords.estimateSizeInBytesUpperBound(AbstractRecords.java:135)
    at org.apache.kafka.common.record.AbstractRecords.estimateSizeInBytesUpperBound(AbstractRecords.java:125)
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:914)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:862)
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:181)
    ... 29 common frames omitted
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10644) Fix VotedToUnattached test error

2020-10-26 Thread dengziming (Jira)
dengziming created KAFKA-10644:
--

 Summary: Fix VotedToUnattached test error
 Key: KAFKA-10644
 URL: https://issues.apache.org/jira/browse/KAFKA-10644
 Project: Kafka
  Issue Type: Sub-task
  Components: unit tests
Reporter: dengziming


The code of `QuorumStateTest.testVotedToUnattachedHigherEpoch` is not 
consistent with its name: the method name says VotedToUnattached, but the code 
tests UnattachedToUnattached:

```

state.initialize(new OffsetAndEpoch(0L, logEndEpoch));
state.transitionToUnattached(5);

long remainingElectionTimeMs =
    state.unattachedStateOrThrow().remainingElectionTimeMs(time.milliseconds());
time.sleep(1000);

state.transitionToUnattached(6);

```
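Presumably the fix is to transition to a voted state first and only then to unattached with a higher epoch. A toy stand-in makes the intended sequence explicit; this is NOT the real `QuorumState` (the `transitionToVoted(epoch, candidateId)` shape is only assumed to mirror the real API), just an illustration of the transition the test name promises.

```java
// Toy stand-in for QuorumState, only to illustrate the intended
// voted -> unattached transition that the test name describes.
class ToyQuorumState {

    enum Phase { UNATTACHED, VOTED }

    private Phase phase = Phase.UNATTACHED;
    private int epoch = 0;

    void transitionToVoted(int newEpoch, int candidateId) {
        if (newEpoch < epoch) {
            throw new IllegalStateException("epoch must not go backwards");
        }
        epoch = newEpoch;
        phase = Phase.VOTED;
    }

    void transitionToUnattached(int newEpoch) {
        if (newEpoch <= epoch) {
            throw new IllegalStateException("requires a strictly higher epoch");
        }
        epoch = newEpoch;
        phase = Phase.UNATTACHED;
    }

    Phase phase() { return phase; }
    int epoch() { return epoch; }
}
```

In this toy, a VotedToUnattached test would call `transitionToVoted(5, ...)` and then `transitionToUnattached(6)`, rather than calling `transitionToUnattached` twice.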



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #170

2020-10-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10092: Remove unnecessary contructor and exception in 
NioEchoServer (#8794)


--
[...truncated 6.84 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #203

2020-10-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10092: Remove unnecessary contructor and exception in 
NioEchoServer (#8794)


--
[...truncated 3.45 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #179

2020-10-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10092: Remove unnecessary contructor and exception in 
NioEchoServer (#8794)


--
[...truncated 3.45 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7eb0b825, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7eb0b825, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4bec5ee1, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4bec5ee1, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@408d3d4e, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@408d3d4e, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@580a320e, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@580a320e, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2380d8ba, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2380d8ba, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@a32817d, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@a32817d, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@23fcaa38, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@23fcaa38, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2a6f660b, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2a6f660b, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@f8f3ef4, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@f8f3ef4, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@11286f11, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@11286f11, 
timestamped = false, caching = true, logging =