Re: [RESULTS] [VOTE] Release Kafka version 2.4.0

2019-12-13 Thread Ismael Juma
Thanks for driving the release Manikumar!

Ismael

On Fri, Dec 13, 2019, 5:42 PM Manikumar  wrote:

> This vote passes with 6 +1 votes (3 binding) and no 0 or -1 votes.
>
> +1 votes
> PMC Members:
> * Gwen Shapira
> * Jun Rao
> * Guozhang Wang
>
> Committers:
> * Mickael Maison
>
> Community:
> * Adam Bellemare
> * Israel Ekpo
>
> 0 votes
> * No votes
>
> -1 votes
> * No votes
>
> Vote thread:
> https://markmail.org/message/qlira627sqbmmzz4
>
> I'll continue with the release process and the release announcement will
> follow in the next few days.
>
> Manikumar
>


Build failed in Jenkins: kafka-trunk-jdk11 #1023

2019-12-13 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-6049: Add auto-repartitioning for cogroup (#7792)


--
[...truncated 2.77 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED


Jenkins build is back to normal : kafka-trunk-jdk8 #4104

2019-12-13 Thread Apache Jenkins Server
See 




[RESULTS] [VOTE] Release Kafka version 2.4.0

2019-12-13 Thread Manikumar
This vote passes with 6 +1 votes (3 binding) and no 0 or -1 votes.

+1 votes
PMC Members:
* Gwen Shapira
* Jun Rao
* Guozhang Wang

Committers:
* Mickael Maison

Community:
* Adam Bellemare
* Israel Ekpo

0 votes
* No votes

-1 votes
* No votes

Vote thread:
https://markmail.org/message/qlira627sqbmmzz4

I'll continue with the release process and the release announcement will
follow in the next few days.

Manikumar


Build failed in Jenkins: kafka-trunk-jdk11 #1022

2019-12-13 Thread Apache Jenkins Server
See 


Changes:

[cmccabe] KAFKA-8855; Collect and Expose Client's Name and Version in the 
Brokers


--
[...truncated 5.60 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

[jira] [Created] (KAFKA-9299) Over eager optimization

2019-12-13 Thread Walker Carlson (Jira)
Walker Carlson created KAFKA-9299:
-

 Summary: Over eager optimization
 Key: KAFKA-9299
 URL: https://issues.apache.org/jira/browse/KAFKA-9299
 Project: Kafka
  Issue Type: Task
  Components: streams
Reporter: Walker Carlson


There are a few cases where the optimizer will attempt an optimization that can 
cause a copartitioning failure. Known cases are related to join and cogroup, 
but it could also affect merge or others. 

Take for example three input topics A, B and C with 2, 3 and 4 partitions 
respectively:

B' = B.map();

B'.join(A);

B'.join(C);

The optimizer will push the repartition upstream, which will cause the 
copartitioning to fail.
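The copartitioning constraint at play here can be sketched outside of Kafka Streams (Python is used for a self-contained illustration; the function name and the partition counts below are taken from the example, everything else is illustrative rather than a Kafka API):

```python
# Joins in Kafka Streams require their inputs to be copartitioned, i.e. to
# have the same number of partitions. Partition counts follow the example
# above: A=2, B=3, C=4.

def copartitioned(*partition_counts):
    """True when all inputs share one partition count (the join precondition)."""
    return len(set(partition_counts)) == 1

a, b, c = 2, 3, 4

# Without the optimization, each join gets its own repartition topic sized
# to match the other side of that join, so both joins are valid:
assert copartitioned(a, a)  # B' repartitioned to 2 partitions, joined with A
assert copartitioned(c, c)  # B' repartitioned to 4 partitions, joined with C

# With the repartition pushed upstream, one repartition topic must serve
# both joins, and no single partition count satisfies both sides:
single_topic_works = any(copartitioned(n, a) and copartitioned(n, c)
                         for n in range(1, 10))
assert not single_topic_works  # copartitioning fails, as described
```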

This can be seen with the following test:

@Test
public void shouldInsertRepartitionsTopicForCogroupsUsedTwice() {
    final StreamsBuilder builder = new StreamsBuilder();

    final Properties properties = new Properties();
    properties.setProperty(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.OPTIMIZE);

    final KStream<String, String> stream1 = builder.stream("one", stringConsumed);

    final KGroupedStream<String, String> groupedOne =
        stream1.map((k, v) -> new KeyValue<>(v, k)).groupByKey(Grouped.as("foo"));

    final CogroupedKStream<String, String> one = groupedOne.cogroup(STRING_AGGREGATOR);
    one.aggregate(STRING_INITIALIZER);
    one.aggregate(STRING_INITIALIZER);

    final String topologyDescription = builder.build(properties).describe().toString();

    System.err.println(topologyDescription);
}

Topologies:
  Sub-topology: 0
    Source: KSTREAM-SOURCE-00 (topics: [one])
      --> KSTREAM-MAP-01
    Processor: KSTREAM-MAP-01 (stores: [])
      --> foo-repartition-filter
      <-- KSTREAM-SOURCE-00
    Processor: foo-repartition-filter (stores: [])
      --> foo-repartition-sink
      <-- KSTREAM-MAP-01
    Sink: foo-repartition-sink (topic: foo-repartition)
      <-- foo-repartition-filter

  Sub-topology: 1
    Source: foo-repartition-source (topics: [foo-repartition])
      --> COGROUPKSTREAM-AGGREGATE-06, COGROUPKSTREAM-AGGREGATE-12
    Processor: COGROUPKSTREAM-AGGREGATE-06 (stores: [COGROUPKSTREAM-AGGREGATE-STATE-STORE-02])
      --> COGROUPKSTREAM-MERGE-07
      <-- foo-repartition-source
    Processor: COGROUPKSTREAM-AGGREGATE-12 (stores: [COGROUPKSTREAM-AGGREGATE-STATE-STORE-08])
      --> COGROUPKSTREAM-MERGE-13
      <-- foo-repartition-source
    Processor: COGROUPKSTREAM-MERGE-07 (stores: [])
      --> none
      <-- COGROUPKSTREAM-AGGREGATE-06
    Processor: COGROUPKSTREAM-MERGE-13 (stores: [])
      --> none
      <-- COGROUPKSTREAM-AGGREGATE-12



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9298) Reuse of a mapped stream causes an Invalid Topology

2019-12-13 Thread Walker Carlson (Jira)
Walker Carlson created KAFKA-9298:
-

 Summary: Reuse of a mapped stream causes an Invalid Topology
 Key: KAFKA-9298
 URL: https://issues.apache.org/jira/browse/KAFKA-9298
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Walker Carlson


This can be reproduced with the following test in KStreamKStreamJoinTest.java:

@Test
public void optimizerIsEager() {
    final StreamsBuilder builder = new StreamsBuilder();
    final KStream<String, String> stream1 = builder.stream("topic",
        Consumed.with(Serdes.String(), Serdes.String()));
    final KStream<String, String> stream2 = builder.stream("topic2",
        Consumed.with(Serdes.String(), Serdes.String()));
    final KStream<String, String> stream3 = builder.stream("topic3",
        Consumed.with(Serdes.String(), Serdes.String()));
    final KStream<String, String> newStream = stream1.map((k, v) -> new KeyValue<>(v, k));
    newStream.join(stream2,
        (value1, value2) -> value1 + value2,
        JoinWindows.of(ofMillis(100)),
        StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));
    newStream.join(stream3,
        (value1, value2) -> value1 + value2,
        JoinWindows.of(ofMillis(100)),
        StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));

    System.err.println(builder.build().describe().toString());
}

This results in:

Invalid topology: Topic KSTREAM-MAP-03-repartition has already been 
registered by another source.
org.apache.kafka.streams.errors.TopologyException: Invalid topology: Topic 
KSTREAM-MAP-03-repartition has already been registered by another 
source.
at 
org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.validateTopicNotAlreadyRegistered(InternalTopologyBuilder.java:578)
at 
org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.addSource(InternalTopologyBuilder.java:378)
at 
org.apache.kafka.streams.kstream.internals.graph.OptimizableRepartitionNode.writeToTopology(OptimizableRepartitionNode.java:100)
at 
org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder.buildAndOptimizeTopology(InternalStreamsBuilder.java:303)
at 
org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:562)
at 
org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:551)
at 
org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest.optimizerIsEager(KStreamKStreamJoinTest.java:136)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
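The mechanism behind the stack trace above — both optimized joins registering the same merged repartition topic as a source — can be sketched as follows (Python for a self-contained illustration; the topic names are made up, and the real check lives in InternalTopologyBuilder.validateTopicNotAlreadyRegistered):

```python
# A minimal stand-in for the builder's duplicate-source check: each source
# topic may be registered only once.

def register_sources(topics):
    """Register source topics, rejecting duplicates like the real builder does."""
    registered = set()
    for topic in topics:
        if topic in registered:
            raise ValueError(
                f"Invalid topology: Topic {topic} has already been "
                "registered by another source.")
        registered.add(topic)
    return registered

# Unoptimized: each join reads its own repartition topic, so all sources
# are distinct and registration succeeds.
register_sources(["topic2", "map-repartition-1", "topic3", "map-repartition-2"])

# Optimized: the two repartition topics are merged into one, and both joins
# then try to register it as a source, reproducing the TopologyException.
try:
    register_sources(["topic2", "map-repartition", "topic3", "map-repartition"])
    duplicate_rejected = False
except ValueError:
    duplicate_rejected = True
assert duplicate_rejected
```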

Re: [VOTE] 2.4.0 RC4

2019-12-13 Thread Guozhang Wang
Hello Manikumar,

I verified the unit tests on scala 2.13 binary, web docs and java docs. +1
(binding).


Guozhang

On Fri, Dec 13, 2019 at 10:11 AM Jun Rao  wrote:

> Hi, Manikumar,
>
> Thanks for preparing the release. Verified quickstart on scala 2.13
> binary. +1 from me.
>
> Jun
>
> On Thu, Dec 12, 2019 at 10:30 PM Gwen Shapira  wrote:
>
> > +1 (binding)
> >
> > Validated signatures, tests and ran some test workloads.
> >
> > Thank you so much for driving this. Mani.
> >
> > On Mon, Dec 9, 2019 at 9:32 AM Manikumar 
> > wrote:
> > >
> > > Hello Kafka users, developers and client-developers,
> > >
> > > This is the fifth candidate for release of Apache Kafka 2.4.0.
> > >
> > > This release includes many new features, including:
> > > - Allow consumers to fetch from closest replica
> > > - Support for incremental cooperative rebalancing to the consumer
> > rebalance
> > > protocol
> > > - MirrorMaker 2.0 (MM2), a new multi-cluster, cross-datacenter
> > replication
> > > engine
> > > - New Java authorizer Interface
> > > - Support for non-key joining in KTable
> > > - Administrative API for replica reassignment
> > > - Sticky partitioner
> > > - Return topic metadata and configs in CreateTopics response
> > > - Securing Internal connect REST endpoints
> > > - API to delete consumer offsets and expose it via the AdminClient.
> > >
> > > Release notes for the 2.4.0 release:
> > > https://home.apache.org/~manikumar/kafka-2.4.0-rc4/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Thursday, December 12, 9am PT
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > https://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > https://home.apache.org/~manikumar/kafka-2.4.0-rc4/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >
> > > * Javadoc:
> > > https://home.apache.org/~manikumar/kafka-2.4.0-rc4/javadoc/
> > >
> > > * Tag to be voted upon (off 2.4 branch) is the 2.4.0 tag:
> > > https://github.com/apache/kafka/releases/tag/2.4.0-rc4
> > >
> > > * Documentation:
> > > https://kafka.apache.org/24/documentation.html
> > >
> > > * Protocol:
> > > https://kafka.apache.org/24/protocol.html
> > >
> > > Thanks,
> > > Manikumar
> >
>


-- 
-- Guozhang


Re: [VOTE] 2.4.0 RC4

2019-12-13 Thread Jun Rao
Hi, Manikumar,

Thanks for preparing the release. Verified quickstart on scala 2.13
binary. +1 from me.

Jun

On Thu, Dec 12, 2019 at 10:30 PM Gwen Shapira  wrote:

> +1 (binding)
>
> Validated signatures, tests and ran some test workloads.
>
> Thank you so much for driving this. Mani.
>
> On Mon, Dec 9, 2019 at 9:32 AM Manikumar 
> wrote:
> >
> > Hello Kafka users, developers and client-developers,
> >
> > This is the fifth candidate for release of Apache Kafka 2.4.0.
> >
> > This release includes many new features, including:
> > - Allow consumers to fetch from closest replica
> > - Support for incremental cooperative rebalancing to the consumer
> rebalance
> > protocol
> > - MirrorMaker 2.0 (MM2), a new multi-cluster, cross-datacenter
> replication
> > engine
> > - New Java authorizer Interface
> > - Support for non-key joining in KTable
> > - Administrative API for replica reassignment
> > - Sticky partitioner
> > - Return topic metadata and configs in CreateTopics response
> > - Securing Internal connect REST endpoints
> > - API to delete consumer offsets and expose it via the AdminClient.
> >
> > Release notes for the 2.4.0 release:
> > https://home.apache.org/~manikumar/kafka-2.4.0-rc4/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Thursday, December 12, 9am PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > https://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~manikumar/kafka-2.4.0-rc4/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~manikumar/kafka-2.4.0-rc4/javadoc/
> >
> > * Tag to be voted upon (off 2.4 branch) is the 2.4.0 tag:
> > https://github.com/apache/kafka/releases/tag/2.4.0-rc4
> >
> > * Documentation:
> > https://kafka.apache.org/24/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/24/protocol.html
> >
> > Thanks,
> > Manikumar
>


Kafka Node crashes giving ERROR Uncaught exception in scheduled task 'kafka-log-retention'

2019-12-13 Thread prateek shekhar
Hello,

We have installed Confluent Kafka 5.2.1 for our 3-node Kafka cluster and
3-node Zookeeper cluster.

We have a number of topics in the Kafka cluster and processes which
continuously write data into some of these topics. Alongside these topics
we have some test topics which are rarely used. For the past couple of
months we have been seeing some weird behavior in these test topics which
leads to a Kafka node crashing. It seems that the Kafka scheduler throws an
exception and crashes the service when it cannot find a log segment to be
deleted. I do not understand why, even when there is no data, the Kafka
scheduler tries to delete a particular log segment from a topic's log
directory.

I am attaching the logged error details, with the original topic names
replaced. I have also copied and pasted the error below.
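The failure mode in the error below — the broker renaming a segment file that is no longer on disk — can be reproduced in miniature. (Python sketch; the path is illustrative. One common explanation, an assumption rather than a confirmed diagnosis, is that an OS temp cleaner removed files under /tmp out from under the broker.)

```python
# The broker deletes a segment by renaming "<offset>.log" to
# "<offset>.log.deleted"; if the file has already vanished, the rename
# raises the NoSuchFileException seen in the log (FileNotFoundError here).
import os
import tempfile

log_dir = tempfile.mkdtemp(prefix="kafka-logs-demo-")
segment = os.path.join(log_dir, "00000000000000000000.log")

try:
    # The segment was never created (or was removed externally), so the
    # rename that async deletion performs fails:
    os.rename(segment, segment + ".deleted")
    outcome = "renamed"
except FileNotFoundError:
    outcome = "missing"

os.rmdir(log_dir)
assert outcome == "missing"  # mirrors the scheduled-task failure in the log
```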

[2019-12-02 21:25:59,269] ERROR Error while deleting segments for
test-debug-0 in dir /tmp/kafka-logs (kafka.server.LogDirFailureChannel:76)
java.nio.file.NoSuchFileException:
/tmp/kafka-logs/test-debug-0/.log
at
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:409)
at
sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
at java.nio.file.Files.move(Files.java:1395)
at
org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:805)
at
org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:224)
at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:488)
at kafka.log.Log.asyncDeleteSegment(Log.scala:1924)
at kafka.log.Log.deleteSegment(Log.scala:1909)
at kafka.log.Log.$anonfun$deleteSegments$3(Log.scala:1455)
at kafka.log.Log.$anonfun$deleteSegments$3$adapted(Log.scala:1455)
at
scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at
scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1455)
at
scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:23)
at kafka.log.Log.maybeHandleIOException(Log.scala:2013)
at kafka.log.Log.deleteSegments(Log.scala:1446)
at kafka.log.Log.deleteOldSegments(Log.scala:1441)
at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1519)
at kafka.log.Log.deleteOldSegments(Log.scala:1509)
at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:913)
at
kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:910)
at scala.collection.immutable.List.foreach(List.scala:392)
at kafka.log.LogManager.cleanupLogs(LogManager.scala:910)
at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:395)
at
kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:63)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.nio.file.NoSuchFileException:
/tmp/kafka-logs/test-debug-0/.log ->
/tmp/kafka-logs/test-debug-0/.log.deleted
at
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:396)
at
sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
at java.nio.file.Files.move(Files.java:1395)
at
org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:802)
... 30 more
[2019-12-02 21:25:59,271] ERROR Uncaught exception in scheduled task
'kafka-log-retention' (kafka.utils.KafkaScheduler:76)
org.apache.kafka.common.errors.KafkaStorageException: Error while deleting
segments for test-debug-0 in dir /tmp/kafka-logs
Caused by: java.nio.file.NoSuchFileException:
/tmp/kafka-logs/test-debug-0/.log
at
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 

kafka-9202: serde in ConsoleConsumer PR available

2019-12-13 Thread Jorg Heymans
Hi,

Apologies for trying to get some attention on the dev list, but I was wondering 
if the PR linked to the above issue could be reviewed and perhaps merged? IMO it 
is rather trivial, so maybe it just slipped through the cracks.

https://issues.apache.org/jira/browse/KAFKA-9202


Thanks,
Jorg


Re: [VOTE] KIP-514 Add a bounded flush() API to Kafka Producer

2019-12-13 Thread radai
i also have a PR :-)
https://github.com/apache/kafka/pull/7569

On Thu, Dec 12, 2019 at 9:50 PM Gwen Shapira  wrote:
>
> You got 3 binding votes (Joel, Harsha, Ismael) - the vote passed on Nov 7.
>
> Happy hacking!
>
> On Thu, Dec 12, 2019 at 11:35 AM radai  wrote:
> >
> > so can we call this passed ?
> >
> > On Thu, Nov 7, 2019 at 7:43 AM Satish Duggana  
> > wrote:
> > >
> > > +1 (non-binding)
> > >
> > > On Thu, Nov 7, 2019 at 8:58 PM Ismael Juma  wrote:
> > > >
> > > > +1 (binding)
> > > >
> > > > On Thu, Oct 24, 2019 at 9:33 PM radai  
> > > > wrote:
> > > >
> > > > > Hello,
> > > > >
> > > > > I'd like to initiate a vote on KIP-514.
> > > > >
> > > > > links:
> > > > > the kip -
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-514%3A+Add+a+bounded+flush%28%29+API+to+Kafka+Producer
> > > > > the PR - https://github.com/apache/kafka/pull/7569
> > > > >
> > > > > Thank you
> > > > >
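As a rough illustration of the bounded-flush semantics the KIP proposes (this is not the KIP's actual API — see the KIP link above for the real proposal; `bounded_flush` and its signature are invented for the sketch, written in Python for self-containment):

```python
# A flush() call can block indefinitely while the producer drains its
# buffer; a bounded variant gives up after a caller-supplied timeout.
import concurrent.futures
import time

def bounded_flush(flush_fn, timeout_s):
    """Run a potentially blocking flush, waiting at most timeout_s seconds.

    Returns True if the flush finished in time, False on timeout.
    """
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        future = pool.submit(flush_fn)
        try:
            future.result(timeout=timeout_s)
            return True
        except concurrent.futures.TimeoutError:
            return False
    finally:
        # Don't wait for a stuck flush; let the caller move on.
        pool.shutdown(wait=False)

# A fast flush completes; a stuck one is bounded by the timeout instead of
# hanging the caller forever.
assert bounded_flush(lambda: None, timeout_s=1.0) is True
assert bounded_flush(lambda: time.sleep(0.5), timeout_s=0.1) is False
```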


Re: [DISCUSS] KIP-542: Partition Reassignment Throttling

2019-12-13 Thread Stanislav Kozlovski
Hey Viktor,

I intuitively think that reassignment is a form of (extra) replication, so
the non-additive version sounds more natural to me. It would be good to see
what others think.

Thanks for summing up what you changed in the KIP's wording here.

Best,
Stanislav

On Fri, Dec 13, 2019 at 3:39 PM Viktor Somogyi-Vass 
wrote:

> Hey Stan,
>
> 1. Yes.
>
> 2. Yes and no :). My earlier suggestion was exactly that. In the last reply
> to you I meant that if the replication throttle is 20 and the reassignment
> throttle is 10 then we'd still have 20 total throttle but 10 of that can be
> used for general replication and 10 again for reassignment. I think that
> either your version or this one can be good solutions, the main difference
> is how do you think about reassignment.
> If we think that reassignment is a special kind of replication then it
> might make sense to treat it as that and for me it sounds logical that we
> sort of count under the replication quota. It also protects you from
> setting too high a value as you won't be able to configure something higher
> than replication.throttled.rate (but then what's left for general
> replication). On the other side users may have to increase their
> replication.throttled.rate if they want to increase or set their
> reassignment quota. This doesn't really play when you treat them under
> non-related quotas but you have to keep your total quota
> (replication+reassignment) in mind. Also I don't usually see customers
> using replication throttling during normal operation so for them it might
> be better to use the additive version (where 20 + 10 = 30) from an
> operational perspective (less things to configure).
> I'll add this non-additive version to the rejected alternatives.
>
> Considering the mentioned drawbacks I think it's better to go with the
> additive one.
> The updated KIP:
> "The possible configuration variations are:
> - replication.throttled.rate is set but reassignment.throttled.rate isn't
> (or -1): any kind of replication (so including reassignment) can take up to
> replication.throttled.rate bytes.
> - replication.throttled.rate and reassignment.throttled.rate both set: both
> can use a bandwidth up to the configured limit and the total replication
> limit will be reassignment.throttled.rate + replication.throttled.rate
> - replication.throttled.rate is not set but reassignment.throttled.rate is
> set: in this case general replication has no bandwidth limits but
> reassignment.throttled.rate has the configured limits.
> - neither replication.throttled.rate nor reassignment.throttled.rate are
> set (or -1): no throttling is set on any replication."
>
> 3. Yea, the motivation section might be a bit poorly worded as both you and
> Ismael pointed out problems, so let me rephrase it:
> "A user is able to specify the partition and the throttle rate but it will
> be applied to all non-ISR replication traffic. This can be undesirable
> because during reassignment it also applies to non-reassignment replication
> and causes a replica to be throttled if it falls out of ISR. Also, if
> leadership changes during reassignment, the throttles have to be changed
> manually."
>
> Viktor
>
> On Tue, Dec 10, 2019 at 8:16 PM Stanislav Kozlovski <
> stanis...@confluent.io>
> wrote:
>
> > Hey Viktor,
> >
> > I like your latest idea regarding the replication/reassignment configs
> > interplay - I think it makes sense for replication to always be higher. A
> > small matrix of possibilities in the KIP may be useful to future readers
> > (users)
> > To be extra clear:
> > 1. if reassignment.throttle is -1, reassignment traffic is counted with
> > replication traffic against replication.throttle
> > 2. if replication.throttle is 20 and reassignment.throttle is 10, we
> have a
> > 30 total throttle
> > Is my understanding correct?
> >
> > Regarding the KIP - the motivation states
> >
> > > So a user is able to specify the partition and the throttle rate but it
> > will be applied to all non-ISR replication traffic. This is undesirable
> > because if a node that is being throttled falls out of ISR it would
> further
> > prevent it from catching up.
> >
> > This KIP does not solve this problem, right?
> > Or did you mean to mention the problem where reassignment replicas would
> > eat up the throttle and further limit the non-ISR "original" replicas
> from
> > catching up?
> >
> > Best,
> > Stanislav
> >
> > On Tue, Dec 10, 2019 at 9:09 AM Viktor Somogyi-Vass <
> > viktorsomo...@gmail.com>
> > wrote:
> >
> > > This config will only be applied to those replicas which are
> reassigning
> > > and not yet in ISR. When they become ISR then reassignment throttling
> > stops
> > > altogether and won't apply when they fall out of ISR. Specifically
> > > the validity of the config spans from the point when a reassignment is
> > > propagated by the adding_replicas field in the LeaderAndIsr request
> until
> > > the broker gets another LeaderAndIsr request saying that the 

Re: [DISCUSS] KIP-542: Partition Reassignment Throttling

2019-12-13 Thread Viktor Somogyi-Vass
Hey Stan,

1. Yes.

2. Yes and no :). My earlier suggestion was exactly that. In the last reply
to you I meant that if the replication throttle is 20 and the reassignment
throttle is 10, then we'd still have 20 total throttle, but 10 of that can
be used for general replication and 10 for reassignment. I think either
your version or this one can be a good solution; the main difference is how
you think about reassignment.
If we think that reassignment is a special kind of replication, then it
might make sense to treat it as such, and it sounds logical to count it
under the replication quota. It also protects you from setting too high a
value, as you won't be able to configure something higher than
replication.throttled.rate (but then what's left for general replication?).
On the other hand, users may have to increase their
replication.throttled.rate if they want to increase or set their
reassignment quota. This concern doesn't really apply when you treat them
as unrelated quotas, but then you have to keep your total quota
(replication + reassignment) in mind. Also, I don't usually see customers
using replication throttling during normal operation, so for them it might
be better to use the additive version (where 20 + 10 = 30) from an
operational perspective (fewer things to configure).
I'll add this non-additive version to the rejected alternatives.

Considering the mentioned drawbacks, I think it's better to go with the
additive one.
The updated KIP:
"The possible configuration variations are:
- replication.throttled.rate is set but reassignment.throttled.rate isn't
(or -1): any kind of replication (so including reassignment) can take up to
replication.throttled.rate bytes.
- replication.throttled.rate and reassignment.throttled.rate both set: both
can use a bandwidth up to the configured limit and the total replication
limit will be reassignment.throttled.rate + replication.throttled.rate
- replication.throttled.rate is not set but reassignment.throttled.rate is
set: in this case general replication has no bandwidth limits but
reassignment.throttled.rate has the configured limits.
- neither replication.throttled.rate nor reassignment.throttled.rate are
set (or -1): no throttling is set on any replication."

3. Yeah, the motivation section might be a bit poorly worded, as both you
and Ismael pointed out problems, so let me rephrase it:
"A user is able to specify the partition and the throttle rate, but it will
be applied to all non-ISR replication traffic. This can be undesirable
because during reassignment it also applies to non-reassignment replication
and causes a replica to be throttled if it falls out of ISR. Also, if
leadership changes during reassignment, the throttles have to be changed
manually."

Viktor

On Tue, Dec 10, 2019 at 8:16 PM Stanislav Kozlovski 
wrote:

> Hey Viktor,
>
> I like your latest idea regarding the replication/reassignment configs
> interplay - I think it makes sense for replication to always be higher. A
> small matrix of possibilities in the KIP may be useful to future readers
> (users)
> To be extra clear:
> 1. if reassignment.throttle is -1, reassignment traffic is counted with
> replication traffic against replication.throttle
> 2. if replication.throttle is 20 and reassignment.throttle is 10, we have a
> 30 total throttle
> Is my understanding correct?
>
> Regarding the KIP - the motivation states
>
> > So a user is able to specify the partition and the throttle rate but it
> will be applied to all non-ISR replication traffic. This is undesirable
> because if a node that is being throttled falls out of ISR it would further
> prevent it from catching up.
>
> This KIP does not solve this problem, right?
> Or did you mean to mention the problem where reassignment replicas would
> eat up the throttle and further limit the non-ISR "original" replicas from
> catching up?
>
> Best,
> Stanislav
>
> On Tue, Dec 10, 2019 at 9:09 AM Viktor Somogyi-Vass <
> viktorsomo...@gmail.com>
> wrote:
>
> > This config will only be applied to those replicas which are reassigning
> > and not yet in ISR. When they become ISR then reassignment throttling
> stops
> > altogether and won't apply when they fall out of ISR. Specifically
> > the validity of the config spans from the point when a reassignment is
> > propagated by the adding_replicas field in the LeaderAndIsr request until
> > the broker gets another LeaderAndIsr request saying that the new replica
> is
> > added and in ISR. Furthermore, the config will be applied only to the
> > actual leader and follower, so if the leader changes in the meantime, the
> > throttling changes with it (again based on the LeaderAndIsr requests).
> >
> > For instance when a new broker is added to offload some partitions there,
> > it will be safer to use this config instead of general fetch throttling
> for
> > this very reason: when an existing partition that is being reassigned
> falls
> > out of ISR then it will be propagated via the LeaderAndIsr 

Build failed in Jenkins: kafka-2.4-jdk8 #107

2019-12-13 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: add UPGRADE_FROM to config docs (#7825)


--
[...truncated 5.47 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #4103

2019-12-13 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: add UPGRADE_FROM to config docs (#7825)


--
[...truncated 2.77 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest 

[jira] [Created] (KAFKA-9297) CreateTopic API do not work with older version of the request/response

2019-12-13 Thread David Jacot (Jira)
David Jacot created KAFKA-9297:
--

 Summary: CreateTopic API do not work with older version of the 
request/response
 Key: KAFKA-9297
 URL: https://issues.apache.org/jira/browse/KAFKA-9297
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: David Jacot
Assignee: David Jacot


The create topic API does not work with older versions of the API. It can be 
reproduced by trying to create a topic with `kafka-topics.sh` from 2.3: it 
times out.

The latest version of the response has introduced new fields with default 
values. When those fields are not supported by the version used by the client, 
the serialization mechanism expects them to have their default values and 
throws otherwise. The current implementation in KafkaApis sets them regardless 
of the version used.

It seems that it has been introduced in KIP-525.
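The failure mode described above can be illustrated with a minimal sketch.
The field names, the table layout, and the serialize function are all
hypothetical stand-ins; Kafka's real generated protocol code is far more
involved, but the version-gating logic follows this shape:

```python
class UnsupportedVersionError(Exception):
    pass

# hypothetical response fields: (name, first supported version, default)
FIELDS = [
    ("throttle_time_ms", 2, 0),
    ("topic_config_error_code", 5, 0),  # newer field with a default value
]

def serialize(response, version):
    """Write only the fields the negotiated version supports; a field the
    version does not support must still hold its default value."""
    out = []
    for name, since, default in FIELDS:
        value = response.get(name, default)
        if version >= since:
            out.append((name, value))
        elif value != default:
            # the path that made older clients fail: the broker populated a
            # new field unconditionally, so the serializer refused it
            raise UnsupportedVersionError(
                f"{name} set to non-default {value!r} at version {version}")
    return out
```

Under this sketch, serializing a response that carries a non-default
topic_config_error_code at version 3 raises on the broker side, which is
the kind of error that left the 2.3 client waiting until its timeout.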



--
This message was sent by Atlassian Jira
(v8.3.4#803005)