[jira] [Resolved] (KAFKA-7652) Kafka Streams Session store performance degradation from 0.10.2.2 to 0.11.0.0

2019-02-12 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-7652.
--
   Resolution: Fixed
 Assignee: Guozhang Wang
Fix Version/s: 2.2.0

I think this issue should have been fixed by the three PRs listed above. 
[~jonathanpdx], I would appreciate it if you could try it out and validate 
whether that is the case.

> Kafka Streams Session store performance degradation from 0.10.2.2 to 0.11.0.0
> -
>
> Key: KAFKA-7652
> URL: https://issues.apache.org/jira/browse/KAFKA-7652
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0, 0.11.0.1, 0.11.0.2, 0.11.0.3, 1.1.1, 2.0.0, 
> 2.0.1
>Reporter: Jonathan Gordon
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 2.2.0
>
> Attachments: kafka_10_2_1_flushes.txt, kafka_11_0_3_flushes.txt
>
>
> I'm creating this issue in response to [~guozhang]'s request on the mailing 
> list:
> [https://lists.apache.org/thread.html/97d620f4fd76be070ca4e2c70e2fda53cafe051e8fc4505dbcca0321@%3Cusers.kafka.apache.org%3E]
> We are attempting to upgrade our Kafka Streams application from 0.10.2.1 but 
> are experiencing a severe performance degradation. Most of the CPU time 
> appears to be spent retrieving from the local cache. Here's an example thread 
> profile with 0.11.0.0:
> [https://i.imgur.com/l5VEsC2.png]
> When things are running smoothly, we're gated by retrieval from the state 
> store, with acceptable performance. Here's an example thread profile with 
> 0.10.2.1:
> [https://i.imgur.com/IHxC2cZ.png]
> Some investigation reveals that we appear to be performing about 3 orders of 
> magnitude more lookups on the NamedCache over a comparable time period. I've 
> attached the NamedCache flush logs for 0.10.2.1 and 0.11.0.3.
> We're using session windows and have the app configured for 
> commit.interval.ms = 30 * 1000 and cache.max.bytes.buffering = 10485760
> I'm happy to share more details if they would be helpful. Also happy to run 
> tests on our data.
> I also found this issue, which seems like it may be related:
> https://issues.apache.org/jira/browse/KAFKA-4904
>  
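
For reference, a purely illustrative sketch of a session-windowed Streams topology 
with the two settings mentioned in the report above. Topic name, serdes and window 
size are invented, and the API shown is the Duration-based style from Kafka Streams 
2.1+; this is not the reporter's actual application.

{code:java}
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.SessionWindows;

public class SessionStoreSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "session-demo");      // hypothetical id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker
        // The two settings from the report:
        props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 30 * 1000);
        props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10485760);

        StreamsBuilder builder = new StreamsBuilder();
        // Count events per key in 5-minute session windows (window size is invented).
        builder.stream("events", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey()
               .windowedBy(SessionWindows.with(Duration.ofMinutes(5)))
               .count();

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
    }
}
{code}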



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3380

2019-02-12 Thread Apache Jenkins Server
See 


Changes:

[cshapi] MINOR: Fixed a couple of typos in Config docs

--
[...truncated 4.61 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: [DISCUSS] KIP-415: Incremental Cooperative Rebalancing in Kafka Connect

2019-02-12 Thread Guozhang Wang
Hello Konstantine,

Thanks for the marvelous effort writing up this KIP! I've made a pass over
it and here's some comments:

1) For other audience to better understand the gist of this proposal, I'd
suggest we add the following context before the "Changes to Connect's
Rebalancing Process" section:

"
The core in a group rebalance protocol is to have a synchronization barrier
such that every member of the group will coordinate on, such that before
everyone hit this barrier all the states will not be changed at all. In the
current rebalance protocol, this synchronization barrier is the reception
of the JoinGroup request: coordinator will not send any responses to any
members until it determines that all JoinGroup requests have been received.
And since right after this barrier the new assignment will be made and the
assigned partitions may no longer be re-assigned to the same member (i.e.
consumer) of the group, today we have to be conservative that all members
revoke all the resources they currently own before proceeding to the
synchronization barrier.

This KIP's key idea is to postpone the synchronization barrier to the
second rebalance's JoinGroup reception, so that in the first rebalance
since we know NO new assignment will ever be executed, members do not need
to revoke anything before joining the group. In other words, we are paying
more rebalances than the naive solution (at least two rebalances will be
required), but each rebalance now could be much lighter.
"

2) In this idea, the leader needs to be able to distinguish the "first"
rebalance, where no new assignment will be executed and only revocations
are indicated, from the "second" rebalance, where some partitions (either
revoked, or left behind by a leaving member) are to be assigned. What is
not clear to me is how to distinguish these two cases, and when the leader
decides to inject the delay (i.e. it is the first rebalance) vs. not
injecting a delay. Comparing the "*Non-first new member joins*" and
"*Worker bounces*" scenarios: in the former, the leader would decide it is
the first rebalance and let W1 revoke some assignment, WITHOUT delay,
while in the latter, when W2 rejoins (in this case it rejoined as a new
member, so from the coordinator's and leader's point of view there should
be no difference compared to "W2 joins as a new member"), the leader
assigns AT2 and BC0 to W2. Also, the KIP does not illustrate consumer
failures in between rebalances: for another example, suppose in
"*Non-first new member joins*" W1 fails after revoking some partitions but
before triggering another rebalance; when the coordinator then triggers
another join based on failure detection, how would the leader assign
partitions? Would it assign all five partitions immediately to W2 and W3,
would it inject a delay and not assign any to W2 and W3, or would it
assign only the ones indicated for revocation to W2 and W3? Could you
provide some pseudo code for the leader logic, such that given the list of
subscriptions, the leader decides (a purely illustrative sketch of one
possible shape follows below):

2.a) add a delay or not;
2.b) assign new resources to some members or not;
2.c) revoke resources from some members or not.
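
For illustration only (none of these names come from KIP-415, and the actual
assignor logic is up to the KIP), one possible shape of that leader-side decision,
assuming the leader sees each member's currently-owned resources and the full set
of resources that must be assigned somewhere:

{code:java}
import java.util.*;

public class LeaderDecisionSketch {
    // Hypothetical delay used when resources are unaccounted for (e.g. a bounced worker).
    static final long SCHEDULED_DELAY_MS = 300_000L;

    // owned: each member's currently-owned resources; all: every resource that must exist somewhere.
    static String decide(Map<String, Set<String>> owned, Set<String> all) {
        Set<String> accounted = new HashSet<>();
        owned.values().forEach(accounted::addAll);
        Set<String> unaccounted = new HashSet<>(all);
        unaccounted.removeAll(accounted); // resources from failed/left members, or brand-new ones

        if (unaccounted.isEmpty()) {
            // "First" rebalance: a member joined but nothing is lost yet.
            // 2.a) no delay; 2.b) assign nothing new; 2.c) only indicate revocations.
            return "revocations only, no delay";
        }
        // Some resources have no owner: 2.b) assign them now, or
        // 2.a) start a delay of SCHEDULED_DELAY_MS hoping the previous owner rejoins.
        return "assign " + unaccounted + " (immediately or after a delay)";
    }

    public static void main(String[] args) {
        Map<String, Set<String>> owned = new HashMap<>();
        owned.put("W1", new HashSet<>(Arrays.asList("AT1", "AT2", "BC0")));
        System.out.println(decide(owned, new HashSet<>(Arrays.asList("AT1", "AT2", "BC0", "BT1"))));
    }
}
{code}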

3) About compatibility, I'm also wondering how a downgrade would be executed
here: suppose that after upgrading the Connect jar and migrating to
`cooperative` mode, users discover a bug and hence need to downgrade back to
an older version that does not support `cooperative`.

4) This is somewhat orthogonal to this KIP, but I'm also thinking about
code sharing with the future Streams incremental rebalance protocol. For
Kafka Streams, one difference is that because of state maintenance,
migrating tasks is heavier, and hence we should consider bootstrapping the
assigned task before revoking it from the old client. So far it seems the
Streams incremental rebalance protocol would be a bit different from the
Connect protocol proposed here in KIP-415. What they may share in common
are a) flatbuffer utils for encoding metadata bytes, and b) consumer
members actively triggering another rebalance by sending a join-group
request. So I'm wondering if we can push these two pieces of logic into the
AbstractCoordinator so they can be shared?

Guozhang



On Wed, Feb 6, 2019 at 9:58 PM Boyang Chen  wrote:

> Thanks Konstantine for the great summary! +1 for having a separate KIP
> discussing the trade-offs of using a new serialization format for the
> protocol encoding. We could probably discuss a wider range of options and
> benchmark the performance before reaching a final decision.
>
> Best,
> Boyang
> 
> From: Konstantine Karantasis 
> Sent: Tuesday, February 5, 2019 4:23 AM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-415: Incremental Cooperative Rebalancing in
> Kafka Connect
>
> Hi all,
>
> Thank you for your comments so far.
> Now that KIP freeze and feature freeze are behind us for version 2.2, I'd
> like to bring this thread back at the top of the email stack, with the
> 

Re: [DISCUSS] Kafka 2.2.0 in February 2018

2019-02-12 Thread Ismael Juma
Hi Matthias,

I updated a few of the issues.

Ismael

On Mon, Feb 11, 2019, 2:31 PM Matthias J. Sax wrote:

> Hello,
>
> this is a short reminder that feature freeze for the AK 2.2 release is at the
> end of this week, Friday 2/14.
>
> Currently, there are two blocker issues
>
>  - https://issues.apache.org/jira/browse/KAFKA-7909
>  - https://issues.apache.org/jira/browse/KAFKA-7481
>
> and five critical issues
>
>  - https://issues.apache.org/jira/browse/KAFKA-7915
>  - https://issues.apache.org/jira/browse/KAFKA-7565
>  - https://issues.apache.org/jira/browse/KAFKA-7556
>  - https://issues.apache.org/jira/browse/KAFKA-7304
>  - https://issues.apache.org/jira/browse/KAFKA-3955
>
> marked with "fixed version" 2.2. Please let me know if I missed any
> other blocker/critical issue that is relevant for the 2.2 release.
>
> I will start to move out all other non-closed Jiras out of the release
> after code freeze and check again on the critical issues.
>
> After code freeze, only blocker issues can be merged to 2.2 branch.
>
>
> Thanks a lot!
>
> -Matthias
>
> On 1/19/19 11:09 AM, Matthias J. Sax wrote:
> > Thanks you all!
> >
> > Added 291, 379, 389, and 420 for tracking.
> >
> >
> > -Matthias
> >
> >
> > On 1/19/19 6:32 AM, Dongjin Lee wrote:
> >> Hi Matthias,
> >>
> >> Thank you for taking the lead. KIP-389[^1] was accepted last week[^2],
> >> so it seems it should be included.
> >>
> >> Thanks,
> >> Dongjin
> >>
> >> [^1]:
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-389%3A+Introduce+a+configurable+consumer+group+size+limit
> >> [^2]:
> >>
> https://lists.apache.org/thread.html/53b84cc35c93eddbc67c8d0dd86aedb93050e45016dfe0fc7b82caaa@%3Cdev.kafka.apache.org%3E
> >>
> >> On Sat, Jan 19, 2019 at 9:04 PM Alex D 
> wrote:
> >>
> >>> KIP-379?
> >>>
> >>> On Fri, 18 Jan 2019, 22:33 Matthias J. Sax  wrote:
> >>>
>  Just a quick update on the release.
> 
> 
>  We have 22 KIP atm:
> 
>  81, 207, 258, 289, 313, 328, 331, 339, 341, 351, 359, 361, 367, 368,
>  371, 376, 377, 380, 386, 393, 394, 414
> 
>  Let me know if I missed any KIP that is targeted for 2.2 release.
> 
>  21 of those KIPs are accepted, and the vote for the last one is open
> and
>  can be closed on time.
> 
>  The KIP deadline is Jan 24, so if any late KIPs are coming in, the
> vote
>  must be started latest next Monday Jan 21, to be open for at least 72h
>  and to meet the deadline.
> 
>  Also keep the feature freeze deadline in mind (31 Jan).
> 
> 
>  Besides this, there are 91 open tickets and 41 tickets in progress. I
>  will start to go through those tickets soon to see what will make it
>  into 2.2 and what we need to defer. If you have any tickets assigned to
>  yourself that are targeted for 2.2 and you know you cannot make it, I
>  would appreciate it if you could update those tickets yourself to help
>  streamline the release process. Thanks a lot for your support!
> 
> 
>  -Matthias
> 
> 
>  On 1/8/19 7:27 PM, Ismael Juma wrote:
> > Thanks for volunteering Matthias! The plan sounds good to me.
> >
> > Ismael
> >
> > On Tue, Jan 8, 2019, 1:07 PM Matthias J. Sax   wrote:
> >
> >> Hi all,
> >>
> >> I would like to propose a release plan (with me being release
> manager)
> >> for the next time-based feature release 2.2.0 in February.
> >>
> >> The recent Kafka release history can be found at
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
> >>> .
> >> The release plan (with open issues and planned KIPs) for 2.2.0 can
> be
> >> found at
> >>
> 
> >>>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100827512
> >> .
> >>
> >>
> >> Here are the suggested dates for Apache Kafka 2.2.0 release:
> >>
> >> 1) KIP Freeze: Jan 24, 2019.
> >>
> >> (A KIP must be accepted by this date in order to be considered for
> >> this release)
> >>
> >> 2) Feature Freeze: Jan 31, 2019
> >>
> >> Major features merged & working on stabilization, minor features
> have
> >> PR, release branch cut; anything not in this state will be
> >>> automatically
> >> moved to the next release in JIRA.
> >>
> >> 3) Code Freeze: Feb 14, 2019
> >>
> >> The KIP and feature freeze date is about 2-3 weeks from now. Please
> >> plan accordingly for the features you want to push into the Apache
> >> Kafka 2.2.0 release.
> >>
> >> 4) Release Date: Feb 28, 2019 (tentative)
> >>
> >>
> >> -Matthias
> >>
> >>
> >
> 
> 
> >>>
> >>
> >>
> >
>
>


Re: [DISCUSS] Kafka 2.2.0 in February 2018

2019-02-12 Thread Srinivas Reddy
It seems KIP 374 is missing.

-
Srinivas

- Typed on tiny keys. pls ignore typos.{mobile app}

On Wed, 9 Jan, 2019, 05:07 Matthias J. Sax wrote:

> Hi all,
>
> I would like to propose a release plan (with me being release manager)
> for the next time-based feature release 2.2.0 in February.
>
> The recent Kafka release history can be found at
> https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan.
> The release plan (with open issues and planned KIPs) for 2.2.0 can be
> found at
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100827512
> .
>
>
> Here are the suggested dates for Apache Kafka 2.2.0 release:
>
> 1) KIP Freeze: Jan 24, 2019.
>
> (A KIP must be accepted by this date in order to be considered for this
> release)
>
> 2) Feature Freeze: Jan 31, 2019
>
> Major features merged & working on stabilization, minor features have
> PR, release branch cut; anything not in this state will be automatically
> moved to the next release in JIRA.
>
> 3) Code Freeze: Feb 14, 2019
>
> The KIP and feature freeze date is about 2-3 weeks from now. Please plan
> accordingly for the features you want to push into the Apache Kafka 2.2.0 release.
>
> 4) Release Date: Feb 28, 2019 (tentative)
>
>
> -Matthias
>
>


Build failed in Jenkins: kafka-2.2-jdk8 #11

2019-02-12 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Improve log messages when authentications fail: (#6250)

--
[...truncated 266.56 KB...]
org.apache.kafka.common.network.SelectorTest > testConnectException STARTED

org.apache.kafka.common.network.SelectorTest > testConnectException PASSED

org.apache.kafka.common.network.SelectorTest > registerFailure STARTED

org.apache.kafka.common.network.SelectorTest > registerFailure PASSED

org.apache.kafka.common.network.SelectorTest > testMute STARTED

org.apache.kafka.common.network.SelectorTest > testMute PASSED

org.apache.kafka.common.network.SelectorTest > testCantSendWithInProgress 
STARTED

org.apache.kafka.common.network.SelectorTest > testCantSendWithInProgress PASSED

org.apache.kafka.common.network.SelectorTest > 
testCloseConnectionInClosingState STARTED

org.apache.kafka.common.network.SelectorTest > 
testCloseConnectionInClosingState PASSED

org.apache.kafka.common.network.SelectorTest > 
testCloseOldestConnectionWithOneStagedReceive STARTED

org.apache.kafka.common.network.SelectorTest > 
testCloseOldestConnectionWithOneStagedReceive PASSED

org.apache.kafka.common.network.SelectorTest > 
testOutboundConnectionsCountInConnectionCreationMetric STARTED

org.apache.kafka.common.network.SelectorTest > 
testOutboundConnectionsCountInConnectionCreationMetric PASSED

org.apache.kafka.common.network.SelectorTest > testImmediatelyConnectedCleaned 
STARTED

org.apache.kafka.common.network.SelectorTest > testImmediatelyConnectedCleaned 
PASSED

org.apache.kafka.common.network.SelectorTest > testExistingConnectionId STARTED

org.apache.kafka.common.network.SelectorTest > testExistingConnectionId PASSED

org.apache.kafka.common.network.SelectorTest > testCantSendWithoutConnecting 
STARTED

org.apache.kafka.common.network.SelectorTest > testCantSendWithoutConnecting 
PASSED

org.apache.kafka.common.network.SelectorTest > testCloseOldestConnection STARTED

org.apache.kafka.common.network.SelectorTest > testCloseOldestConnection PASSED

org.apache.kafka.common.network.SelectorTest > testServerDisconnect STARTED

org.apache.kafka.common.network.SelectorTest > testServerDisconnect PASSED

org.apache.kafka.common.network.SelectorTest > testIdleExpiryWithoutReadyKeys 
STARTED

org.apache.kafka.common.network.SelectorTest > testIdleExpiryWithoutReadyKeys 
PASSED

org.apache.kafka.common.network.SelectorTest > 
testCloseOldestConnectionWithMultipleStagedReceives STARTED

org.apache.kafka.common.network.SelectorTest > 
testCloseOldestConnectionWithMultipleStagedReceives PASSED

org.apache.kafka.common.network.SelectorTest > 
testInboundConnectionsCountInConnectionCreationMetric STARTED
ERROR: Could not install GRADLE_4_8_1_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:881)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:483)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:692)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:657)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:400)
at hudson.scm.SCM.poll(SCM.java:417)
at hudson.model.AbstractProject._poll(AbstractProject.java:1390)
at hudson.model.AbstractProject.poll(AbstractProject.java:1293)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:603)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:649)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
ERROR: Could not install GRADLE_4_8_1_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:881)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:483)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:692)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:657)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:400)
at hudson.scm.SCM.poll(SCM.java:417)
at hudson.model.AbstractProject._poll(AbstractProject.java:1390)
at hudson.model.AbstractProject.poll(AbstractProject.java:1293)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:603)
at 

Build failed in Jenkins: kafka-trunk-jdk8 #3379

2019-02-12 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Improve PlainSaslServer error message for empty tokens (#6249)

[github] MINOR: Improve log messages when authentications fail: (#6250)

[cshapi] KAFKA-7799; Use httpcomponents-client in RestServerTest.

--
[...truncated 3.96 MB...]

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterInner[caching enabled = true] STARTED
ERROR: Could not install GRADLE_4_8_1_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:881)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:483)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:692)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:657)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:400)
at hudson.scm.SCM.poll(SCM.java:417)
at hudson.model.AbstractProject._poll(AbstractProject.java:1390)
at hudson.model.AbstractProject.poll(AbstractProject.java:1293)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:603)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:649)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = 

[jira] [Resolved] (KAFKA-7799) Fix flaky test RestServerTest.testCORSEnabled

2019-02-12 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7799.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

> Fix flaky test RestServerTest.testCORSEnabled
> -
>
> Key: KAFKA-7799
> URL: https://issues.apache.org/jira/browse/KAFKA-7799
> Project: Kafka
>  Issue Type: Test
>  Components: KafkaConnect
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.3.0
>
>
> Starting to see this failure quite a lot, locally and on jenkins:
> {code}
> org.apache.kafka.connect.runtime.rest.RestServerTest.testCORSEnabled
> Failing for the past 7 builds (Since Failed#18600 )
> Took 0.7 sec.
> Error Message
> java.lang.AssertionError: expected: but was:
> Stacktrace
> java.lang.AssertionError: expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.kafka.connect.runtime.rest.RestServerTest.checkCORSRequest(RestServerTest.java:221)
>   at 
> org.apache.kafka.connect.runtime.rest.RestServerTest.testCORSEnabled(RestServerTest.java:84)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:326)
>   at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89)
>   at 
> org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
> {code}
> If it helps, I see an uncaught exception in the stdout:
> {code}
> [2019-01-08 19:35:23,664] ERROR Uncaught exception in REST call to 
> /connector-plugins/FileStreamSource/validate 
> (org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper:61)
> javax.ws.rs.NotFoundException: HTTP 404 Not Found
>   at 
> org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:274)
>   at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
>   at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
>   at org.glassfish.jersey.internal.Errors.process(Errors.java:268)
>   at 
> org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)
>   at 
> org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)
>   at 
> org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7924) Kafka broker does not list nodes first ~ 30s after startup

2019-02-12 Thread Gert van Dijk (JIRA)
Gert van Dijk created KAFKA-7924:


 Summary: Kafka broker does not list nodes first ~ 30s after startup
 Key: KAFKA-7924
 URL: https://issues.apache.org/jira/browse/KAFKA-7924
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.1.0
 Environment: Clean Docker environment, very simple everything single 
node.
Reporter: Gert van Dijk


Steps to reproduce:
# Start single Zookeeper instance.
# Start single Kafka instance.
# Validate that the line {noformat}INFO [KafkaServer id=0] started 
(kafka.server.KafkaServer){noformat} is printed.
# Connect using a Kafka client +with debug logging enabled+, right after seeing 
that line, _do not wait_.
# Observe that it is connected to the Kafka broker just fine, but that the 
broker lists an empty list of nodes: {noformat}Updating cluster metadata to 
Cluster(id = xxx, nodes = [], [...]{noformat}
# Keep watching for about 30 seconds, seeing that log entry pop up hundreds 
of times. Nothing happens in the Kafka/Zookeeper logs in the meantime, but then 
suddenly the Kafka server starts reporting nodes to the connected client: 
{noformat}[...] nodes = [kafka:9092 (id: 0 rack: null)], [...]
o.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] 
Initiating connection to node [...]{noformat} and it connects all fine.

My expectation is that it should list itself right away, once it is listening on 
the network as a broker. A Kafka client should not have to block, waiting and 
timing out, for ~ 30 seconds first.

Use case: I'm trying to create topics in a CI/CD pipeline and spinning up a 
clean Kafka for that. Right after it's started, it should be possible to create 
topics using an AdminClient, but I'm currently getting {{TimeoutException: 
Timed out waiting for a node assignment}} errors unless I put a {{sleep 30}} 
between observing Kafka report itself as ready and starting the topic creation 
process. Not ideal.
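
A hypothetical workaround sketch for this use case (broker address, topic name and 
timeouts are invented): poll the cluster metadata until the broker reports at least 
one node, then create the topic, instead of a fixed {{sleep 30}}.

{code:java}
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class WaitForBrokerSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, 5000);
        try (AdminClient admin = AdminClient.create(props)) {
            long deadline = System.currentTimeMillis() + TimeUnit.MINUTES.toMillis(2);
            while (true) {
                try {
                    // Stop polling once the broker finally lists itself in the metadata.
                    if (!admin.describeCluster().nodes().get().isEmpty()) {
                        break;
                    }
                } catch (ExecutionException e) {
                    // e.g. a TimeoutException while the node list is still empty; keep retrying.
                }
                if (System.currentTimeMillis() > deadline) {
                    throw new IllegalStateException("Broker never reported any nodes");
                }
                Thread.sleep(1000);
            }
            admin.createTopics(Collections.singleton(new NewTopic("ci-topic", 1, (short) 1)))
                 .all().get();
        }
    }
}
{code}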



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7923) Add unit test to verify Kafka-7401 in AK versions >= 2.0

2019-02-12 Thread Anna Povzner (JIRA)
Anna Povzner created KAFKA-7923:
---

 Summary: Add unit test to verify Kafka-7401 in AK versions >= 2.0
 Key: KAFKA-7923
 URL: https://issues.apache.org/jira/browse/KAFKA-7923
 Project: Kafka
  Issue Type: Test
Affects Versions: 2.1.0, 2.0.1
Reporter: Anna Povzner
Assignee: Anna Povzner


KAFKA-7401 affected versions 1.0 and 1.1; it was fixed there and a unit test was 
added. Version 2.0 did not have that bug, because it was fixed as part of 
another change. To make sure we don't regress, we need to add a unit test 
similar to the one added as part of KAFKA-7401.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7401) Broker fails to start when recovering a segment from before the log start offset

2019-02-12 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7401.

   Resolution: Fixed
Fix Version/s: 1.1.2

> Broker fails to start when recovering a segment from before the log start 
> offset
> 
>
> Key: KAFKA-7401
> URL: https://issues.apache.org/jira/browse/KAFKA-7401
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 1.1.0, 1.1.1
>Reporter: Bob Barrett
>Assignee: Anna Povzner
>Priority: Major
> Fix For: 1.1.2
>
>
> If a segment needs to be recovered (for example, because of a missing index 
> file or uncompleted swap operation) and its base offset is less than the log 
> start offset, the broker will crash with the following error:
> Fatal error during KafkaServer startup. Prepare to shutdown 
> (kafka.server.KafkaServer)
>  java.lang.IllegalArgumentException: inconsistent range
>  at java.util.concurrent.ConcurrentSkipListMap$SubMap.<init>(Unknown Source)
>  at java.util.concurrent.ConcurrentSkipListMap.subMap(Unknown Source)
>  at java.util.concurrent.ConcurrentSkipListMap.subMap(Unknown Source)
>  at kafka.log.Log$$anonfun$12.apply(Log.scala:1579)
>  at kafka.log.Log$$anonfun$12.apply(Log.scala:1578)
>  at scala.Option.map(Option.scala:146)
>  at kafka.log.Log.logSegments(Log.scala:1578)
>  at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:358)
>  at kafka.log.Log$$anonfun$completeSwapOperations$1.apply(Log.scala:389)
>  at kafka.log.Log$$anonfun$completeSwapOperations$1.apply(Log.scala:380)
>  at scala.collection.immutable.Set$Set1.foreach(Set.scala:94)
>  at kafka.log.Log.completeSwapOperations(Log.scala:380)
>  at kafka.log.Log.loadSegments(Log.scala:408)
>  at kafka.log.Log.<init>(Log.scala:216)
>  at kafka.log.Log$.apply(Log.scala:1765)
>  at kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:260)
>  at 
> kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$11$$anonfun$apply$15$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:340)
>  at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>  at java.util.concurrent.FutureTask.run(Unknown Source)
>  at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>  at java.lang.Thread.run(Unknown Source)
> Since these segments are outside the log range, we should delete them, or at 
> least not block broker startup because of them.
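
For context, the "inconsistent range" error comes from ConcurrentSkipListMap.subMap 
when the lower bound is greater than the upper bound, which is what happens in the 
recovery path above when the segment's base offset is below the log start offset. 
A minimal, non-Kafka illustration:

{code:java}
import java.util.concurrent.ConcurrentSkipListMap;

public class InconsistentRangeDemo {
    public static void main(String[] args) {
        ConcurrentSkipListMap<Long, String> segments = new ConcurrentSkipListMap<>();
        segments.put(100L, "segment with base offset 100");
        // Pretend logStartOffset is 200 while the segment being recovered starts at 100:
        // subMap(200, true, 100, false) throws java.lang.IllegalArgumentException: inconsistent range
        segments.subMap(200L, true, 100L, false);
    }
}
{code}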



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Hackathons with funding of the EU

2019-02-12 Thread Jun Rao
Hi Guys,

I am passing along the following info from the EU. If you are interested in the
event, please contact the coordinator. Thanks.

The EU, via its EU-FOSSA 2 project, has invited a number of communities,
including Apache Kafka, to consider taking one of their 3 planned Hackathons
this year, to be held in Brussels on 6/7 April, 4/5 May and 5/6 Oct. They
will also pay for transportation, accommodation and food for roughly 35-50
individuals per event. Each event will be dedicated to one open source
community.

I am sending this email to establish the level of interest within the Kafka
community for such an event. Let me know. Here is some more background.

EU-FOSSA 2 Hackathons

The EU-FOSSA 2 project aims to improve the security of the open source software
the European institutions use, e.g. via Bug Bounties and other initiatives. The
project also aims to bring open source communities together via three
planned Hackathons in Brussels. At these Hackathons, the community
can work on any problems it feels are necessary, security-related or not.

Though Security is the main project theme, it is not a prerequisite that
the community fix bugs at the Hackathon or that they work on a specific
thing at all. The main idea is to bring them together if they think that’s
helpful to them - and ideally bring them together in Brussels with EU
institutions folk involved in that community.

Hackathon Ideas/Themes

   - hold competitions, and/or discuss other ways to benefit/strengthen the
   community, and in doing so ensure continuity and benefit for the open
   source community
   - look at say, software governance, risk management, release management,
   architecture, new features/roadmap, embracing new technologies/ideas to
   help improve the software or related subjects
   - Or any other idea on what the community needs or could benefit from

Contact: Miss Suwon Ham at s...@bemyapp.com

Jun


[DISCUSS] KIP-430 - Return Authorized Operations in Describe Responses

2019-02-12 Thread Rajini Sivaram
Hi all,

I have created a KIP to optionally request authorised operations on
resources when describing resources:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-430+-+Return+Authorized+Operations+in+Describe+Responses

This includes only information that users with Describe access can obtain
using other means and hence is consistent with our security model. It is
intended to make it easier for clients to obtain this information.
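
For example, a client-side sketch of requesting the cluster's authorized operations.
The option name below matches what this KIP eventually shipped in the AdminClient
(Kafka 2.3+); relative to this thread, treat the exact method names as assumptions,
and the bootstrap address as hypothetical.

{code:java}
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterOptions;
import org.apache.kafka.common.acl.AclOperation;

public class AuthorizedOpsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Opt in to receiving the operations this principal is authorized to perform.
            Set<AclOperation> ops = admin
                .describeCluster(new DescribeClusterOptions().includeAuthorizedOperations(true))
                .authorizedOperations()
                .get();
            System.out.println("Authorized cluster operations: " + ops);
        }
    }
}
{code}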

Feedback and suggestions welcome.

Thank you,

Rajini


[jira] [Created] (KAFKA-7922) Returned authorized operations in describe responses (KIP-430)

2019-02-12 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-7922:
-

 Summary: Returned authorized operations in describe responses 
(KIP-430)
 Key: KAFKA-7922
 URL: https://issues.apache.org/jira/browse/KAFKA-7922
 Project: Kafka
  Issue Type: New Feature
  Components: core
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram


Add an option to request authorized operations on resources when describing 
resources (topics, consumer groups and cluster).

See 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-430+-+Return+Authorized+Operations+in+Describe+Responses
 for details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-2.0-jdk8 #225

2019-02-12 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-7897; Do not write epoch start offset for older message format

--
[...truncated 439.51 KB...]
kafka.zk.ReassignPartitionsZNodeTest > testDecodeValidJson STARTED

kafka.zk.ReassignPartitionsZNodeTest > testDecodeValidJson PASSED

kafka.zk.KafkaZkClientTest > testZNodeChangeHandlerForDataChange STARTED

kafka.zk.KafkaZkClientTest > testZNodeChangeHandlerForDataChange PASSED

kafka.zk.KafkaZkClientTest > testCreateAndGetTopicPartitionStatesRaw STARTED

kafka.zk.KafkaZkClientTest > testCreateAndGetTopicPartitionStatesRaw PASSED

kafka.zk.KafkaZkClientTest > testLogDirGetters STARTED

kafka.zk.KafkaZkClientTest > testLogDirGetters PASSED

kafka.zk.KafkaZkClientTest > testSetGetAndDeletePartitionReassignment STARTED

kafka.zk.KafkaZkClientTest > testSetGetAndDeletePartitionReassignment PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndVersion STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndVersion PASSED

kafka.zk.KafkaZkClientTest > testGetChildren STARTED

kafka.zk.KafkaZkClientTest > testGetChildren PASSED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset STARTED

kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset PASSED

kafka.zk.KafkaZkClientTest > testClusterIdMethods STARTED

kafka.zk.KafkaZkClientTest > testClusterIdMethods PASSED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr STARTED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr PASSED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testCreateRecursive STARTED

kafka.zk.KafkaZkClientTest > testCreateRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData STARTED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods PASSED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw STARTED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw PASSED

kafka.zk.KafkaZkClientTest > testAclManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods STARTED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateLogDir STARTED

kafka.zk.KafkaZkClientTest > testPropagateLogDir PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress STARTED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress PASSED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths STARTED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters PASSED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetLogConfigs STARTED

kafka.zk.KafkaZkClientTest > testGetLogConfigs PASSED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods STARTED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath STARTED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode PASSED

kafka.zk.KafkaZkClientTest > testDeletePath STARTED

kafka.zk.KafkaZkClientTest > testDeletePath PASSED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods STARTED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions STARTED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions PASSED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath STARTED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath PASSED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods STARTED

kafka.zk.KafkaZkClientTest > 

Jenkins build is back to normal : kafka-trunk-jdk11 #277

2019-02-12 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #3378

2019-02-12 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Remove deprecated assertThat usage from KafkaLog4jAppenderTest

--
[...truncated 2.31 MB...]
org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldReleaseLockIfExceptionWhenLoadingCheckpoints PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldNotConvertValuesIfStoreDoesNotImplementTimestampedBytesStore STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldNotConvertValuesIfStoreDoesNotImplementTimestampedBytesStore PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldNotRemoveOffsetsOfUnUpdatedTablesDuringCheckpoint STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldNotRemoveOffsetsOfUnUpdatedTablesDuringCheckpoint PASSED

org.apache.kafka.streams.processor.internals.RecordQueueTest > testTimeTracking 
STARTED

org.apache.kafka.streams.processor.internals.RecordQueueTest > testTimeTracking 
PASSED

org.apache.kafka.streams.processor.internals.RecordQueueTest > 
shouldDropOnNegativeTimestamp STARTED

org.apache.kafka.streams.processor.internals.RecordQueueTest > 
shouldDropOnNegativeTimestamp PASSED

org.apache.kafka.streams.processor.internals.RecordQueueTest > 
shouldNotThrowStreamsExceptionWhenValueDeserializationFailsWithSkipHandler 
STARTED

org.apache.kafka.streams.processor.internals.RecordQueueTest > 
shouldNotThrowStreamsExceptionWhenValueDeserializationFailsWithSkipHandler 
PASSED

org.apache.kafka.streams.processor.internals.RecordQueueTest > 
shouldThrowOnNegativeTimestamp STARTED

org.apache.kafka.streams.processor.internals.RecordQueueTest > 
shouldThrowOnNegativeTimestamp PASSED

org.apache.kafka.streams.processor.internals.RecordQueueTest > 
shouldThrowStreamsExceptionWhenKeyDeserializationFails STARTED

org.apache.kafka.streams.processor.internals.RecordQueueTest > 
shouldThrowStreamsExceptionWhenKeyDeserializationFails PASSED

org.apache.kafka.streams.processor.internals.RecordQueueTest > 
shouldThrowStreamsExceptionWhenValueDeserializationFails STARTED

org.apache.kafka.streams.processor.internals.RecordQueueTest > 
shouldThrowStreamsExceptionWhenValueDeserializationFails PASSED

org.apache.kafka.streams.processor.internals.RecordQueueTest > 
shouldNotThrowStreamsExceptionWhenKeyDeserializationFailsWithSkipHandler STARTED

org.apache.kafka.streams.processor.internals.RecordQueueTest > 
shouldNotThrowStreamsExceptionWhenKeyDeserializationFailsWithSkipHandler PASSED

org.apache.kafka.streams.processor.internals.KeyValueStoreMaterializerTest > 
shouldCreateBuilderThatBuildsMeteredStoreWithCachingAndLoggingEnabled STARTED

org.apache.kafka.streams.processor.internals.KeyValueStoreMaterializerTest > 
shouldCreateBuilderThatBuildsMeteredStoreWithCachingAndLoggingEnabled PASSED

org.apache.kafka.streams.processor.internals.KeyValueStoreMaterializerTest > 
shouldCreateBuilderThatBuildsStoreWithCachingAndLoggingDisabled STARTED

org.apache.kafka.streams.processor.internals.KeyValueStoreMaterializerTest > 
shouldCreateBuilderThatBuildsStoreWithCachingAndLoggingDisabled PASSED

org.apache.kafka.streams.processor.internals.KeyValueStoreMaterializerTest > 
shouldCreateBuilderThatBuildsStoreWithCachingDisabled STARTED

org.apache.kafka.streams.processor.internals.KeyValueStoreMaterializerTest > 
shouldCreateBuilderThatBuildsStoreWithCachingDisabled PASSED

org.apache.kafka.streams.processor.internals.KeyValueStoreMaterializerTest > 
shouldCreateKeyValueStoreWithTheProvidedInnerStore STARTED

org.apache.kafka.streams.processor.internals.KeyValueStoreMaterializerTest > 
shouldCreateKeyValueStoreWithTheProvidedInnerStore PASSED

org.apache.kafka.streams.processor.internals.KeyValueStoreMaterializerTest > 
shouldCreateBuilderThatBuildsStoreWithLoggingDisabled STARTED

org.apache.kafka.streams.processor.internals.KeyValueStoreMaterializerTest > 
shouldCreateBuilderThatBuildsStoreWithLoggingDisabled PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
testMutiLevelSensorRemoval STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
testMutiLevelSensorRemoval PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
testThroughputMetrics STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
testThroughputMetrics PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
testTotalMetricDoesntDecrease STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
testTotalMetricDoesntDecrease PASSED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
testLatencyMetrics STARTED

org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImplTest > 
testLatencyMetrics 

[jira] [Created] (KAFKA-7921) Instable KafkaStreamsTest

2019-02-12 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-7921:
--

 Summary: Instable KafkaStreamsTest
 Key: KAFKA-7921
 URL: https://issues.apache.org/jira/browse/KAFKA-7921
 Project: Kafka
  Issue Type: Bug
  Components: streams, unit tests
Reporter: Matthias J. Sax


{{KafkaStreamsTest}} failed multiple times, e.g.,
{quote}java.lang.AssertionError: Condition not met within timeout 15000. 
Streams never started.
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
at 
org.apache.kafka.streams.KafkaStreamsTest.shouldThrowOnCleanupWhileRunning(KafkaStreamsTest.java:556){quote}
or
{quote}java.lang.AssertionError: Condition not met within timeout 15000. 
Streams never started.
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
at 
org.apache.kafka.streams.KafkaStreamsTest.testStateThreadClose(KafkaStreamsTest.java:255){quote}
 
The preserved logs are as follows:

{quote}[2019-02-12 07:02:17,198] INFO Kafka version: 2.3.0-SNAPSHOT 
(org.apache.kafka.common.utils.AppInfoParser:109)
[2019-02-12 07:02:17,198] INFO Kafka commitId: 08036fa4b1e5b822 
(org.apache.kafka.common.utils.AppInfoParser:110)
[2019-02-12 07:02:17,199] INFO stream-client [clientId] State transition from 
CREATED to REBALANCING (org.apache.kafka.streams.KafkaStreams:263)
[2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
[2019-02-12 07:02:17,200] INFO stream-client [clientId] State transition from 
REBALANCING to PENDING_SHUTDOWN (org.apache.kafka.streams.KafkaStreams:263)
[2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-239] 
Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
[2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] State 
transition from CREATED to STARTING 
(org.apache.kafka.streams.processor.internals.StreamThread:214)
[2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-239] State 
transition from CREATED to STARTING 
(org.apache.kafka.streams.processor.internals.StreamThread:214)
[2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
Informed to shut down 
(org.apache.kafka.streams.processor.internals.StreamThread:1192)
[2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-238] State 
transition from STARTING to PENDING_SHUTDOWN 
(org.apache.kafka.streams.processor.internals.StreamThread:214)
[2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-239] 
Informed to shut down 
(org.apache.kafka.streams.processor.internals.StreamThread:1192)
[2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-239] State 
transition from STARTING to PENDING_SHUTDOWN 
(org.apache.kafka.streams.processor.internals.StreamThread:214)
[2019-02-12 07:02:17,205] INFO Cluster ID: J8uJhiTKQx-Y_i9LzT0iLg 
(org.apache.kafka.clients.Metadata:365)
[2019-02-12 07:02:17,205] INFO Cluster ID: J8uJhiTKQx-Y_i9LzT0iLg 
(org.apache.kafka.clients.Metadata:365)
[2019-02-12 07:02:17,205] INFO [Consumer 
clientId=clientId-StreamThread-238-consumer, groupId=appId] Discovered group 
coordinator localhost:36122 (id: 2147483647 rack: null) 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
[2019-02-12 07:02:17,205] INFO [Consumer 
clientId=clientId-StreamThread-239-consumer, groupId=appId] Discovered group 
coordinator localhost:36122 (id: 2147483647 rack: null) 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
[2019-02-12 07:02:17,206] INFO [Consumer 
clientId=clientId-StreamThread-238-consumer, groupId=appId] Revoking previously 
assigned partitions [] 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:458)
[2019-02-12 07:02:17,206] INFO [Consumer 
clientId=clientId-StreamThread-239-consumer, groupId=appId] Revoking previously 
assigned partitions [] 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:458)
[2019-02-12 07:02:17,206] INFO [Consumer 
clientId=clientId-StreamThread-238-consumer, groupId=appId] (Re-)joining group 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
[2019-02-12 07:02:17,206] INFO [Consumer 
clientId=clientId-StreamThread-239-consumer, groupId=appId] (Re-)joining group 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
[2019-02-12 07:02:17,208] INFO [Consumer 
clientId=clientId-StreamThread-239-consumer, groupId=appId] (Re-)joining group 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
[2019-02-12 07:02:17,208] INFO [Consumer 
clientId=clientId-StreamThread-238-consumer, groupId=appId] (Re-)joining group 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
[2019-02-12 07:02:17,278] INFO Cluster ID: 

[DISCUSS] KIP-427: Add AtMinIsr topic partition category (new metric & TopicCommand option)

2019-02-12 Thread Kevin Lu
Hi All,

Getting the discussion thread started for KIP-427 in case anyone is free
right now.

I’d like to propose a new category of topic partitions, *AtMinIsr*: partitions
that only have the minimum number of in-sync replicas left in the ISR set
(as configured by min.insync.replicas).

This would add two new metrics, *ReplicaManager.AtMinIsrPartitionCount* &
*Partition.AtMinIsr*, and a new TopicCommand option,
*--at-min-isr-partitions*, to help in monitoring and alerting.
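
As a purely illustrative sketch (not broker code), the relationship between the
proposed category and the existing UnderReplicated / UnderMinIsr categories can be
expressed like this:

{code:java}
import java.util.Arrays;
import java.util.List;

public class IsrCategoriesSketch {
    static boolean underReplicated(List<Integer> replicas, List<Integer> isr) {
        return isr.size() < replicas.size();
    }
    static boolean underMinIsr(List<Integer> isr, int minInsyncReplicas) {
        return isr.size() < minInsyncReplicas;
    }
    // Proposed category: only the minimum number of in-sync replicas is left, so losing
    // one more replica would push the partition below min.insync.replicas.
    static boolean atMinIsr(List<Integer> isr, int minInsyncReplicas) {
        return isr.size() == minInsyncReplicas;
    }

    public static void main(String[] args) {
        List<Integer> replicas = Arrays.asList(1, 2, 3);
        List<Integer> isr = Arrays.asList(1, 2); // one replica has dropped out of the ISR
        int minIsr = 2;
        System.out.println(underReplicated(replicas, isr)); // true
        System.out.println(underMinIsr(isr, minIsr));        // false: acks=all producers still work
        System.out.println(atMinIsr(isr, minIsr));           // true: but there is no headroom left
    }
}
{code}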

KIP link: KIP-427: Add AtMinIsr topic partition category (new metric &
TopicCommand option)


Please take a look and let me know what you think.

Regards,
Kevin


Re: [DISCUSSION] KIP-412: Extend Admin API to support dynamic application log levels

2019-02-12 Thread Stanislav Kozlovski
Hey there everybody,

If there aren't any further comments, I will consider starting a VOTE
thread in the following days.

Best,
Stanislav

On Thu, Jan 10, 2019 at 3:31 PM Ryanne Dolan  wrote:

> Makes sense, thanks.
>
> Ryanne
>
> On Wed, Jan 9, 2019, 11:28 PM Stanislav Kozlovski  wrote:
>
> > Sorry about cutting the last message short. I meant to say that in
> > the future we would be able to introduce finer-grained logging
> > configuration (e.g. enabling debug logs for operations pertaining to this
> > topic), and that would be easier to do if we know what the target
> > resource of an IncrementalAlterConfig request is.
> >
> > Separating the resource types also allows us to not return a huge
> > DescribeConfigs response on the BROKER resource type - the logging
> > configurations can be quite verbose.
> >
> > I hope that answers your question
> >
> > Best,
> > Stanislav
> >
> > On Wed, Jan 9, 2019 at 3:26 PM Stanislav Kozlovski <
> stanis...@confluent.io
> > >
> > wrote:
> >
> > > Hey Ryanne, thanks for taking a look at the KIP!
> > >
> > > I think that it is useful to specify the distinction between a standard
> > > Kafka config and the log level configs. The log level can be looked at
> > > as a separate resource, as it does not change the behavior of the Kafka
> > > broker in any way.
> > > In terms of practical benefits, separating the two significantly eases
> > > this KIP's implementation and users' implementation of AlterConfigPolicy
> > > (e.g., deny all requests that try to alter the log level; see the sketch
> > > at the end of this message). We would also be able
> > > to introduce a
> > >
> > > On Wed, Jan 9, 2019 at 1:48 AM Ryanne Dolan wrote:
> > >
> > >> > To differentiate between the normal Kafka config settings and the
> > >> > application's log level settings, we will introduce a new resource
> > >> > type - BROKER_LOGGERS
> > >>
> > >> Stanislav, can you explain why log level wouldn't be a "normal Kafka
> > >> config setting"?
> > >>
> > >> Ryanne
> > >>
> > >> On Tue, Jan 8, 2019, 4:26 PM Stanislav Kozlovski <stanis...@confluent.io>
> > >> wrote:
> > >>
> > >> > Hey there everybody,
> > >> >
> > >> > I'd like to start a discussion about KIP-412. Please take a look at
> > >> > the KIP if you can, I would appreciate any feedback :)
> > >> >
> > >> > KIP: KIP-412
> > >> > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-412%3A+Extend+Admin+API+to+support+dynamic+application+log+levels>
> > >> > JIRA: KAFKA-7800
> > >> >
> > >> > --
> > >> > Best,
> > >> > Stanislav
> > >> >
> > >>
> > >
> > >
> > > --
> > > Best,
> > > Stanislav
> > >
> >
> >
> > --
> > Best,
> > Stanislav
> >
>


-- 
Best,
Stanislav
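
To make the two quoted points above more concrete, here is a rough sketch of
what changing a single broker logger could look like from the Java admin
client. This is not the final API: the resource type name (the KIP text calls
it BROKER_LOGGERS, BROKER_LOGGER here) and the incrementalAlterConfigs call
are assumptions based on this discussion.

import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class AlterLogLevelSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Address broker 0's logging config, not its normal broker configs.
            ConfigResource loggers =
                new ConfigResource(ConfigResource.Type.BROKER_LOGGER, "0");

            // Bump a single logger to DEBUG without touching anything else.
            AlterConfigOp setDebug = new AlterConfigOp(
                new ConfigEntry("kafka.server.ReplicaManager", "DEBUG"),
                AlterConfigOp.OpType.SET);

            admin.incrementalAlterConfigs(Map.of(loggers, List.of(setDebug)))
                 .all()
                 .get();
        }
    }
}

And an equally hypothetical AlterConfigPolicy that denies all attempts to
change log levels (assuming the policy is invoked for such requests), which is
only this simple to write because the logging configuration is addressed
through its own resource type:

import java.util.Map;

import org.apache.kafka.common.config.ConfigResource;
import org.apache.kafka.common.errors.PolicyViolationException;
import org.apache.kafka.server.policy.AlterConfigPolicy;

public class DenyLoggerChangesPolicy implements AlterConfigPolicy {

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void validate(RequestMetadata requestMetadata) {
        // Reject any alteration that targets the logging resource type.
        if (requestMetadata.resource().type() == ConfigResource.Type.BROKER_LOGGER) {
            throw new PolicyViolationException("Changing broker log levels is not allowed");
        }
    }

    @Override
    public void close() { }
}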


Build failed in Jenkins: kafka-trunk-jdk8 #3377

2019-02-12 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update JUnit to 4.13 and annotate log cleaner integration test

--
[...truncated 2.30 MB...]
> Task :streams:upgrade-system-tests-10:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-10:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:compileTestJava
> Task :streams:upgrade-system-tests-10:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:testClasses
> Task :streams:upgrade-system-tests-10:checkstyleTest
> Task :streams:upgrade-system-tests-10:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:test
> Task :streams:upgrade-system-tests-11:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-11:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-11:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-11:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-11:compileTestJava
> Task :streams:upgrade-system-tests-11:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-11:testClasses
> Task :streams:upgrade-system-tests-11:checkstyleTest
> Task :streams:upgrade-system-tests-11:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-11:test
> Task :streams:upgrade-system-tests-20:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-20:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-20:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-20:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-20:compileTestJava
> Task :streams:upgrade-system-tests-20:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-20:testClasses
> Task :streams:upgrade-system-tests-20:checkstyleTest
> Task :streams:upgrade-system-tests-20:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-20:test
> Task :streams:upgrade-system-tests-21:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-21:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-21:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-21:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-21:compileTestJava
> Task :streams:upgrade-system-tests-21:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-21:testClasses
> Task :streams:upgrade-system-tests-21:checkstyleTest
> Task :streams:upgrade-system-tests-21:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-21:test
> Task :streams:streams-scala:spotbugsMain

> Task :streams:streams-scala:test

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized 
STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized 
PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords STARTED
ERROR: Could not install GRADLE_4_8_1_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:881)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:483)
at 

Build failed in Jenkins: kafka-trunk-jdk11 #276

2019-02-12 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update JUnit to 4.13 and annotate log cleaner integration test

--
[...truncated 2.31 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Jenkins build is back to normal : kafka-trunk-jdk8 #3376

2019-02-12 Thread Apache Jenkins Server
See