Build failed in Jenkins: kafka-2.4-jdk8 #123

2020-01-07 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-9335: Fix StreamPartitionAssignor regression in repartition 
topics

[jason] KAFKA-9065; Fix endless loop when loading group/transaction metadata


--
[...truncated 5.49 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED


Jenkins build is back to normal : kafka-trunk-jdk11 #1058

2020-01-07 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-8146) WARNING: An illegal reflective access operation has occurred

2020-01-07 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-8146.
--
Resolution: Fixed

The warnings mentioned in the description are fixed in the Kafka 2.4.0 release.

> WARNING: An illegal reflective access operation has occurred
> 
>
> Key: KAFKA-8146
> URL: https://issues.apache.org/jira/browse/KAFKA-8146
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, core
>Affects Versions: 2.1.1
> Environment: Java 11
> Kafka v2.1.1
>Reporter: Abhi
>Priority: Major
>
> Hi,
> I am running Kafka v2.1.1 and see the warnings below at startup of the server 
> and the clients. What is the cause of these warnings, and how can they be 
> avoided or fixed?
> *Client side:*
> WARNING: Illegal reflective access by 
> org.apache.kafka.common.network.SaslChannelBuilder 
> (file:/local/kafka/kafka_installation/kafka_2.12-2.1.1/libs/kafka-clients-2.1.1.jar)
>  to method sun.security.krb5.Config.getInstance()
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.kafka.common.network.SaslChannelBuilder
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> *Server side:*
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by 
> org.apache.zookeeper.server.util.KerberosUtil 
> (file:/local/kafka/kafka_installation/kafka_2.12-2.1.1/libs/zookeeper-3.4.13.jar)
>  to method sun.security.krb5.Config.getInstance()
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.zookeeper.server.util.KerberosUtil
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: about default value of struct of connect api

2020-01-07 Thread Lisheng Wang
Hello guys,

To update my question: I made a test (code below). I want to set a default
value for the address struct on person.
SchemaBuilder schemaBuilder = SchemaBuilder.struct().name("address")
        .field("province", SchemaBuilder.STRING_SCHEMA)
        .field("city", SchemaBuilder.STRING_SCHEMA);
Struct defaultValue = new Struct(schemaBuilder.build())
        .put("province", "")
        .put("city", "");
Schema dataSchema = SchemaBuilder.struct().name("person")
        .field("address", schemaBuilder.defaultValue(defaultValue).build())
        .build();
Struct struct = new Struct(dataSchema);
System.out.println(struct.toString());

I got the exception below:

Exception in thread "main" org.apache.kafka.connect.errors.SchemaBuilderException: Invalid default value
    at org.apache.kafka.connect.data.SchemaBuilder.defaultValue(SchemaBuilder.java:131)
Caused by: org.apache.kafka.connect.errors.DataException: Struct schemas do not match.
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:251)
    at org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:213)
    at org.apache.kafka.connect.data.SchemaBuilder.defaultValue(SchemaBuilder.java:129)
    ... 1 more

I dug into the code of ConnectSchema.validateValue and found that when the
type is STRUCT, it checks the class of the schema, but one is a SchemaBuilder
and the other is a ConnectSchema:

case STRUCT:
    Struct struct = (Struct) value;
    if (!struct.schema().equals(schema))
        throw new DataException("Struct schemas do not match.");
    struct.validate();
    break;

The equals method is:

@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (o == null || getClass() != o.getClass()) return false;
    ConnectSchema schema = (ConnectSchema) o;
    return Objects.equals(optional, schema.optional) &&
           Objects.equals(version, schema.version) &&
           Objects.equals(name, schema.name) &&
           Objects.equals(doc, schema.doc) &&
           Objects.equals(type, schema.type) &&
           Objects.deepEquals(defaultValue, schema.defaultValue) &&
           Objects.equals(fields, schema.fields) &&
           Objects.equals(keySchema, schema.keySchema) &&
           Objects.equals(valueSchema, schema.valueSchema) &&
           Objects.equals(parameters, schema.parameters);
}
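The failure comes down to that getClass() check: SchemaBuilder.defaultValue() validates the default against the builder itself, while the default Struct was built against the ConnectSchema that build() produced, so the two can never compare equal. Here is a minimal, self-contained illustration of the pitfall using stand-in classes (not the real Connect API):

```java
// Stand-ins for ConnectSchema / SchemaBuilder to show the getClass() pitfall.
class BaseSchema {
    final String name;
    BaseSchema(String name) { this.name = name; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        // getClass() comparison: a subclass instance is never equal to a base
        // instance, even when all fields match -- mirrors ConnectSchema.equals.
        if (o == null || getClass() != o.getClass()) return false;
        return name.equals(((BaseSchema) o).name);
    }

    @Override
    public int hashCode() { return name.hashCode(); }
}

class Builder extends BaseSchema {
    Builder(String name) { super(name); }
    BaseSchema build() { return new BaseSchema(name); }
}

public class EqualsPitfall {
    public static void main(String[] args) {
        Builder builder = new Builder("address");
        BaseSchema built = builder.build();
        System.out.println(built.equals(builder));         // false: classes differ
        System.out.println(built.equals(builder.build())); // true: both BaseSchema
    }
}
```

One workaround sometimes suggested is to construct the default Struct against the SchemaBuilder instance itself, so the reference-equality branch (`this == o`) passes before the class check is reached; whether the resulting schema behaves correctly downstream is worth verifying.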

Can anyone help with how to set a default value for a "STRUCT" type with the Connect API?

Thanks

Best,
Lisheng


Lisheng Wang wrote on Mon, Jan 6, 2020 at 3:31 PM:

> Hello kafka devs,
>
> I'm facing a problem: how do I set a default value for a struct?
>
> i'm following https://docs.confluent.io/current/connect/devguide.html
>
> Schema schema = SchemaBuilder.struct().name(NAME)
>     .field("name", Schema.STRING_SCHEMA)
>     .field("age", Schema.INT32_SCHEMA)
>     .field("admin", SchemaBuilder.bool().defaultValue(false).build())
>     .build();
>
> Struct struct = new Struct(schema)
>     .put("name", "Barbara Liskov")
>     .put("age", 75);
>
> Below is my code; I don't know how to set a default value when the schema
> type is struct.
>
> Schema schema1 = SchemaBuilder.struct().name("info")
>     .field("address", Schema.STRING_SCHEMA)
>     .field("code", Schema.STRING_SCHEMA)
>     .defaultValue("").build();
>
> Thanks!
>
>
>
>
>
> Best,
> Lisheng
>


[jira] [Created] (KAFKA-9384) Loop improvements

2020-01-07 Thread highluck (Jira)
highluck created KAFKA-9384:
---

 Summary:  Loop improvements
 Key: KAFKA-9384
 URL: https://issues.apache.org/jira/browse/KAFKA-9384
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 2.4.0
Reporter: highluck
 Fix For: 2.4.0


Loop improvements

[streams/src/main/java/org/apache/kafka/streams/state/internals/AbstractSegments.java|https://github.com/apache/kafka/pull/7277/files#diff-337cba828184b5a77a6b1472697b758a]

I think we can improve this loop

 
{code:java}
// code placeholder
final long[] segmentIds = new long[list.length];
for (int i = 0; i < list.length; i++) {
    segmentIds[i] = segmentIdFromSegmentName(list[i], dir);
}

// open segments in the id order
Arrays.sort(segmentIds);
for (final long segmentId : segmentIds) {
    if (segmentId >= 0) {
        getOrCreateSegment(segmentId, context);
    }
}

{code}
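One possible tightening, sketched below with a hypothetical stand-in for segmentIdFromSegmentName and without the real getOrCreateSegment call: fold the mapping, filtering, and sorting into a single stream pipeline instead of a fill loop, a sort, and a second loop.

```java
import java.util.Arrays;

public class SegmentLoop {
    // Hypothetical stand-in for AbstractSegments.segmentIdFromSegmentName:
    // parses the id after the '.' and maps unparsable names to -1.
    static long segmentIdFromSegmentName(String name) {
        final int dot = name.indexOf('.');
        if (dot < 0) return -1L;
        try {
            return Long.parseLong(name.substring(dot + 1));
        } catch (NumberFormatException e) {
            return -1L;
        }
    }

    // Map names to ids, drop invalid (negative) ids, and sort in one pipeline.
    static long[] sortedValidIds(String[] list) {
        return Arrays.stream(list)
                .mapToLong(SegmentLoop::segmentIdFromSegmentName)
                .filter(id -> id >= 0)  // skip names that are not segments
                .sorted()               // open segments in id order
                .toArray();
    }

    public static void main(String[] args) {
        String[] list = {"store.2", "store.0", "junk", "store.1"};
        // getOrCreateSegment(id, context) would be called for each id here
        System.out.println(Arrays.toString(sortedValidIds(list))); // [0, 1, 2]
    }
}
```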
 





[jira] [Created] (KAFKA-9383) Expose Consumer Group Metadata for Transactional Producer

2020-01-07 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9383:
--

 Summary: Expose Consumer Group Metadata for Transactional Producer
 Key: KAFKA-9383
 URL: https://issues.apache.org/jira/browse/KAFKA-9383
 Project: Kafka
  Issue Type: Sub-task
Reporter: Boyang Chen
Assignee: Boyang Chen


As stated in the KIP, we need a mechanism to get the latest consumer group 
metadata for proper commit fencing.





Build failed in Jenkins: kafka-trunk-jdk11 #1057

2020-01-07 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-9330: Skip `join` when `AdminClient.close` is called in callback

[matthias] KAFKA-6614: configure internal topics with


--
[...truncated 5.66 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 

Jenkins build is back to normal : kafka-trunk-jdk8 #4137

2020-01-07 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-9382) Allow consumers to commit offset in the middle of a rebalance

2020-01-07 Thread Guozhang Wang (Jira)
Guozhang Wang created KAFKA-9382:


 Summary: Allow consumers to commit offset in the middle of a 
rebalance
 Key: KAFKA-9382
 URL: https://issues.apache.org/jira/browse/KAFKA-9382
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Reporter: Guozhang Wang
Assignee: Jason Gustafson


After we've done KAFKA-8421, we should consider letting consumers commit in 
the middle of a rebalance (at the moment this throws a non-fatal 
rebalance-in-progress exception), so that users do not need to worry about 
handling this transient error when it is unnecessary. It involves:

1) On the client side, stop checking the "REBALANCING" state and throwing 
immediately.
2) On the client side, capture and handle illegal generation when in 
"REBALANCING" and retry with the currently assigned partitions.
3) On the client side, use different connections for join/sync-group requests 
than for committing/fetching offsets.
4) On the broker side, during CompletingRebalance, accept commit requests with 
the correct generation id instead of returning the REBALANCE_IN_PROGRESS error 
code.





[jira] [Resolved] (KAFKA-9065) Loading offsets and group metadata loops forever

2020-01-07 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9065.

Fix Version/s: 2.4.1
   Resolution: Fixed

> Loading offsets and group metadata loops forever
> 
>
> Key: KAFKA-9065
> URL: https://issues.apache.org/jira/browse/KAFKA-9065
> Project: Kafka
>  Issue Type: Bug
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 2.4.1
>
>
> When the metadata manager loads the groups and the offsets of a partition of 
> the __consumer-offsets topic, `GroupMetadataManager.doLoadGroupsAndOffsets` 
> could loop forever if the start offset of the partition is smaller than the 
> end offset and no records are effectively read from the partition.
> While the conditions leading to this issue are not clear, I ran into a case 
> where a partition in a cluster had two segments that were both empty. This 
> could theoretically happen when all the tombstones in the first segment have 
> expired and the second is truncated, or when the partition is accidentally 
> corrupted.
> As a side effect, `doLoadGroupsAndOffsets` spins forever, blocking the 
> scheduler's single thread, the loading of all the groups and offsets queued 
> after it, and the expiration of offsets.
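The bail-out such a fix needs can be sketched with a self-contained toy model of the loading loop (not the actual GroupMetadataManager code): treat "no records read and no forward progress" as a signal to stop instead of retrying the same position forever.

```java
import java.util.List;

public class LoadGuard {
    // Toy stand-in for one fetch from a log: at positions listed in emptyAt,
    // the "segment" yields no records and no progress, like an empty segment.
    static final class Batch {
        final int recordCount;
        final long nextOffset;
        Batch(int recordCount, long nextOffset) {
            this.recordCount = recordCount;
            this.nextOffset = nextOffset;
        }
    }

    static Batch read(List<Long> emptyAt, long pos) {
        return emptyAt.contains(pos) ? new Batch(0, pos) : new Batch(1, pos + 1);
    }

    // Returns how many records were processed before reaching endOffset or bailing.
    static int load(long startOffset, long endOffset, List<Long> emptyAt) {
        int processed = 0;
        long pos = startOffset;
        while (pos < endOffset) {
            Batch batch = read(emptyAt, pos);
            // Guard: no records and no forward progress -> break instead of spinning.
            if (batch.recordCount == 0 && batch.nextOffset <= pos) {
                break;
            }
            processed += batch.recordCount;
            pos = batch.nextOffset;
        }
        return processed;
    }

    public static void main(String[] args) {
        System.out.println(load(0, 5, List.of()));   // normal load processes everything
        System.out.println(load(0, 5, List.of(3L))); // bails out at the empty segment
    }
}
```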





[jira] [Resolved] (KAFKA-9335) java.lang.IllegalArgumentException: Number of partitions must be at least 1.

2020-01-07 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-9335.
--
Resolution: Fixed

> java.lang.IllegalArgumentException: Number of partitions must be at least 1.
> 
>
> Key: KAFKA-9335
> URL: https://issues.apache.org/jira/browse/KAFKA-9335
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.4.0
>Reporter: Nitay Kufert
>Assignee: Boyang Chen
>Priority: Blocker
>  Labels: bug
> Fix For: 2.4.1
>
>
> Hey,
> When trying to upgrade our Kafka Streams client to 2.4.0 (from 2.3.1) we 
> encountered the following exception: 
> {code:java}
> java.lang.IllegalArgumentException: Number of partitions must be at least 1.
> {code}
> It's important to note that the exact same code works just fine on 2.3.1.
>  
> I have created a "toy" example which reproduces this exception:
> [https://gist.github.com/nitayk/50da33b7bcce19ad0a7f8244d309cb8f]
> I would love some insight into why it's happening, and ways to work around it.
>  
> Thanks





[jira] [Created] (KAFKA-9381) kafka-streams-scala: Javadocs + Scaladocs not published on maven central

2020-01-07 Thread Julien Jean Paul Sirocchi (Jira)
Julien Jean Paul Sirocchi created KAFKA-9381:


 Summary: kafka-streams-scala: Javadocs + Scaladocs not published 
on maven central
 Key: KAFKA-9381
 URL: https://issues.apache.org/jira/browse/KAFKA-9381
 Project: Kafka
  Issue Type: Bug
  Components: documentation, streams
Reporter: Julien Jean Paul Sirocchi


As per the title, the javadoc/scaladoc jars on Maven Central are empty (aside 
from MANIFEST, LICENSE, and NOTICE) for every version (any Kafka or Scala 
version), e.g.

[http://repo1.maven.org/maven2/org/apache/kafka/kafka-streams-scala_2.12/2.3.1/]





Build failed in Jenkins: kafka-trunk-jdk8 #4136

2020-01-07 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-9330: Skip `join` when `AdminClient.close` is called in callback


--
[...truncated 2.78 MB...]

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task 

[jira] [Created] (KAFKA-9380) Do Not Accept Null Values in MemberAssignment Constructor

2020-01-07 Thread David Mollitor (Jira)
David Mollitor created KAFKA-9380:
-

 Summary: Do Not Accept Null Values in MemberAssignment Constructor
 Key: KAFKA-9380
 URL: https://issues.apache.org/jira/browse/KAFKA-9380
 Project: Kafka
  Issue Type: Improvement
Reporter: David Mollitor


'nulls suck'

[https://github.com/google/guava/wiki/UsingAndAvoidingNullExplained]

Just fail fast if a null is passed into the MemberAssignment constructor; 
there is no need to handle null values gracefully. I've looked at all 
references and they correctly pass in an empty Collection instead of null.
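The fail-fast style the ticket describes can be sketched as follows (a hypothetical, simplified MemberAssignment, not the real admin-client class): reject null at construction so later code never needs a null check.

```java
import java.util.Collection;
import java.util.List;
import java.util.Objects;

// Hypothetical simplified MemberAssignment illustrating fail-fast construction.
class MemberAssignment {
    private final Collection<String> topicPartitions;

    MemberAssignment(Collection<String> topicPartitions) {
        // Throw NPE immediately at construction instead of checking for null
        // on every later access; callers should pass an empty collection.
        this.topicPartitions =
            Objects.requireNonNull(topicPartitions, "topicPartitions must not be null");
    }

    int size() { return topicPartitions.size(); }
}

public class FailFast {
    public static void main(String[] args) {
        System.out.println(new MemberAssignment(List.of("t-0", "t-1")).size()); // 2
        try {
            new MemberAssignment(null);
        } catch (NullPointerException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```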





[jira] [Resolved] (KAFKA-6614) kafka-streams to configure internal topics message.timestamp.type=CreateTime

2020-01-07 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-6614.

Fix Version/s: 2.5.0
   Resolution: Fixed

> kafka-streams to configure internal topics message.timestamp.type=CreateTime
> 
>
> Key: KAFKA-6614
> URL: https://issues.apache.org/jira/browse/KAFKA-6614
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Dmitry Vsekhvalnov
>Assignee: Sophie Blee-Goldman
>Priority: Minor
>  Labels: newbie
> Fix For: 2.5.0
>
>
> After fixing KAFKA-4785, all internal topics use the built-in 
> *RecordMetadataTimestampExtractor* to read timestamps.
> This doesn't work correctly out of the box when kafka brokers are configured 
> with *log.message.timestamp.type=LogAppendTime* and a custom message 
> timestamp extractor is used.
> Example use-case: windowed grouping + aggregation on late data:
> {code:java}
> KTable<Windowed<String>, Long> summaries = in
>    .groupBy((key, value) -> ..)
>    .windowedBy(TimeWindows.of(TimeUnit.HOURS.toMillis(1L)))
>    .count();{code}
> When processing late events:
>  # the custom timestamp extractor picks up a timestamp in the past from the 
> message (say, an hour ago)
>  # the re-partition topic written during the grouping phase is written back 
> to kafka using the timestamp from (1)
>  # the kafka broker ignores the timestamp provided in (2) in favor of 
> ingestion time
>  # the streams lib reads the re-partitioned topic back with 
> RecordMetadataTimestampExtractor
>  # and gets the ingestion timestamp from (3), which is usually close to "now"
>  # the window start/end is incorrectly set based on "now" instead of the 
> original timestamp from the payload
> I understand there are ways to configure the per-topic timestamp type on 
> kafka brokers to solve this, but it would be really nice if the kafka-streams 
> library could take care of it itself, following the "least-surprise" 
> principle: if the library relies on timestamp.type for a topic it manages, it 
> should enforce it.
> CC [~guozhang] based on the user group email discussion.
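Until the library enforces this itself, the per-topic override mentioned above can be applied by hand; a sketch (the topic name is hypothetical, since Streams derives repartition topic names from the application id):

```shell
# Force CreateTime on a Streams-managed repartition topic (hypothetical name):
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics \
  --entity-name my-app-KSTREAM-AGGREGATE-STATE-STORE-0000000001-repartition \
  --alter --add-config message.timestamp.type=CreateTime
```

Kafka Streams can also pass per-topic configs to the internal topics it creates via `StreamsConfig.topicPrefix(...)` properties; the exact property name should be checked against the Streams configuration docs.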





[jira] [Created] (KAFKA-9379) Flaky Test TopicCommandWithAdminClientTest.testCreateAlterTopicWithRackAware

2020-01-07 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-9379:
--

 Summary: Flaky Test 
TopicCommandWithAdminClientTest.testCreateAlterTopicWithRackAware
 Key: KAFKA-9379
 URL: https://issues.apache.org/jira/browse/KAFKA-9379
 Project: Kafka
  Issue Type: Bug
  Components: admin, core, unit tests
Reporter: Matthias J. Sax


[https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/116/testReport/junit/kafka.admin/TopicCommandWithAdminClientTest/testCreateAlterTopicWithRackAware/]
{quote}java.lang.IllegalArgumentException: Topic 
'testCreateAlterTopicWithRackAware-1Ski7jYwdP' does not exist as expected
 at kafka.admin.TopicCommand$.kafka$admin$TopicCommand$$ensureTopicExists(TopicCommand.scala:503)
 at kafka.admin.TopicCommand$AdminClientTopicService.alterTopic(TopicCommand.scala:257)
 at kafka.admin.TopicCommandWithAdminClientTest.testCreateAlterTopicWithRackAware(TopicCommandWithAdminClientTest.scala:476){quote}
STDOUT
{quote}[2020-01-07 17:50:17,384] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition testAlterPartitionCount-MixWA3fYA3-1 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-01-07 17:50:17,631] ERROR [ReplicaFetcher replicaId=4, leaderId=1, fetcherId=0] Error for partition testAlterPartitionCount-MixWA3fYA3-2 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-01-07 17:50:24,498] ERROR [ReplicaFetcher replicaId=0, leaderId=3, fetcherId=0] Error for partition testDescribeAtMinIsrPartitions-7B6hJOHCF7-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-01-07 17:50:24,506] ERROR [ReplicaFetcher replicaId=1, leaderId=3, fetcherId=0] Error for partition testDescribeAtMinIsrPartitions-7B6hJOHCF7-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-01-07 17:50:24,506] ERROR [ReplicaFetcher replicaId=5, leaderId=3, fetcherId=0] Error for partition testDescribeAtMinIsrPartitions-7B6hJOHCF7-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-01-07 17:50:24,506] ERROR [ReplicaFetcher replicaId=4, leaderId=3, fetcherId=0] Error for partition testDescribeAtMinIsrPartitions-7B6hJOHCF7-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-01-07 17:50:24,507] ERROR [ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Error for partition testDescribeAtMinIsrPartitions-7B6hJOHCF7-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-01-07 17:51:29,090] ERROR [ReplicaFetcher replicaId=1, leaderId=4, fetcherId=0] Error for partition kafka.testTopic1-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-01-07 17:51:29,204] ERROR [ReplicaFetcher replicaId=1, leaderId=5, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-01-07 17:51:29,255] ERROR [ReplicaFetcher replicaId=4, leaderId=0, fetcherId=0] Error for partition __consumer_offsets-1 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-01-07 17:52:24,546] ERROR [ReplicaFetcher replicaId=0, leaderId=4, fetcherId=0] Error for partition testCreateAlterTopicWithRackAware-1Ski7jYwdP-2 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-01-07 17:52:24,547] ERROR [ReplicaFetcher replicaId=0, leaderId=4, fetcherId=0] Error for partition testCreateAlterTopicWithRackAware-1Ski7jYwdP-14 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2020-01-07 17:52:24,550] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition testCreateAlterTopicWithRackAware-1Ski7jYwdP-13 at offset 0 (kafka.server.ReplicaFetcherThread:76)

Build failed in Jenkins: kafka-trunk-jdk8 #4135

2020-01-07 Thread Apache Jenkins Server
See 


Changes:

[mimaison] KAFKA-9337: Simplify MirrorMaker2 sample config (#7872)


--
[...truncated 2.78 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task 

[jira] [Resolved] (KAFKA-9330) Calling AdminClient.close in the AdminClient completion callback causes deadlock

2020-01-07 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9330.

Fix Version/s: 2.5.0
   Resolution: Fixed

> Calling AdminClient.close in the AdminClient completion callback causes 
> deadlock
> 
>
> Key: KAFKA-9330
> URL: https://issues.apache.org/jira/browse/KAFKA-9330
> Project: Kafka
>  Issue Type: Bug
>Reporter: Vikas Singh
>Assignee: Vikas Singh
>Priority: Major
> Fix For: 2.5.0
>
>
> The close method calls `Thread.join` to wait for the AdminClient thread to die,
> but that never happens when the thread calling join is the AdminClient thread
> itself. The thread blocks forever, deadlocked waiting for itself to die.
> `AdminClient.close` should check whether the thread calling close is the
> AdminClient thread and, if so, skip the join. That thread will then observe the
> close condition in its main loop and exit.
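
A minimal sketch of the self-join guard the issue proposes (not Kafka's actual implementation; class and field names here are illustrative): close() sets a shutdown flag and skips Thread.join when it is invoked from the client's own I/O thread, e.g. out of a completion callback.

```java
public class SelfJoinDemo {
    // Polled by the I/O thread's main loop; set by close().
    private static volatile boolean closing = false;
    private static Thread ioThread;

    static void close() throws InterruptedException {
        closing = true;
        // Joining our own thread would wait forever for ourselves to die --
        // exactly the deadlock described above -- so skip the join in that case.
        if (Thread.currentThread() != ioThread) {
            ioThread.join();
        }
    }

    public static void main(String[] args) throws Exception {
        ioThread = new Thread(() -> {
            try {
                close(); // e.g. a completion callback closing the client from inside
            } catch (InterruptedException ignored) { }
        });
        ioThread.start();
        ioThread.join(); // returns promptly instead of deadlocking
        System.out.println("closed without deadlock");
    }
}
```

Without the `Thread.currentThread() != ioThread` check, the inner close() would block forever and main's join would never return.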



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9378) Revisit the sub-topology initialization logic

2020-01-07 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9378:
--

 Summary: Revisit the sub-topology initialization logic
 Key: KAFKA-9378
 URL: https://issues.apache.org/jira/browse/KAFKA-9378
 Project: Kafka
  Issue Type: Improvement
Reporter: Boyang Chen


In https://issues.apache.org/jira/browse/KAFKA-9335, we have seen that certain 
topologies violate the work done in PR 
[https://github.com/apache/kafka/pull/7495/]. We need to understand the root 
cause and propose a fix.



--


[jira] [Created] (KAFKA-9377) Refactor StreamsPartitionAssignor Repartition Count logic

2020-01-07 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9377:
--

 Summary: Refactor StreamsPartitionAssignor Repartition Count logic
 Key: KAFKA-9377
 URL: https://issues.apache.org/jira/browse/KAFKA-9377
 Project: Kafka
  Issue Type: Improvement
Reporter: Boyang Chen
Assignee: Boyang Chen






--


[jira] [Resolved] (KAFKA-9337) Simplifying standalone mm2-connect config

2020-01-07 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-9337.
---
Fix Version/s: 2.5.0
   Resolution: Fixed

> Simplifying standalone mm2-connect config
> -
>
> Key: KAFKA-9337
> URL: https://issues.apache.org/jira/browse/KAFKA-9337
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect, mirrormaker
>Affects Versions: 2.4.0
>Reporter: karan kumar
>Assignee: karan kumar
>Priority: Minor
> Fix For: 2.5.0
>
>
> One of the nice things about Kafka is that setting it up in a local 
> environment is really simple. I tried out the latest feature, i.e. MM2, and 
> found it took me some time to get a minimal setup running. 
> The default config provided assumes there will already be 3 brokers running, 
> due to the default replication factor of the admin topics the MM2 connector 
> creates. 
> This got me thinking that most people would follow the same approach I 
> followed: 
> 1. Start a single-broker cluster on 9092 
> 2. Start another single-broker cluster on, let's say, 10002 
> 3. Start MM2 with "./bin/connect-mirror-maker.sh 
> ./config/connect-mirror-maker.properties" 
> What happened was that I had to supply a lot more configs. 
> This jira was created following a discussion on the mailing list:
> https://lists.apache.org/thread.html/%3ccajxudh13kw3nam3ho69wrozsyovwue1nxf9hkcbawc9r-3d...@mail.gmail.com%3E
> cc [~ryannedolan]



--


Re: Kafka 2.4.0 & Mirror Maker 2.0 Error

2020-01-07 Thread Péter Sinóros-Szabó
Hi,

I just tested with 2.4.0 release, I still get the same error.
Filed the jira ticket: https://issues.apache.org/jira/browse/KAFKA-9376

Thanks,
Peter

On Mon, 6 Jan 2020 at 21:26, Ryanne Dolan  wrote:

> I just downloaded the 2.4.0 release tarball and didn't run into any issues.
> Peter, Jamie, can one of you file a jira ticket if you are still seeing
> this? Thanks!
>
> Ryanne
>
> On Fri, Dec 27, 2019 at 12:04 PM Ryanne Dolan 
> wrote:
>
> > Thanks Peter, I'll take a look.
> >
> > Ryanne
> >
> > On Fri, Dec 27, 2019, 7:48 AM Péter Sinóros-Szabó
> >  wrote:
> >
> >> Hi,
> >>
> >> I see the same.
> >> I just downloaded the Kafka zip and I run:
> >>
> >> ~/kafka-2.4.0-rc3$ ./bin/connect-mirror-maker.sh
> >> config/connect-mirror-maker.properties
> >>
> >> Peter
> >>
> >> On Mon, 16 Dec 2019 at 17:14, Ryanne Dolan 
> wrote:
> >>
> >> > Hey Jamie, are you running the MM2 connectors on an existing Connect
> cluster, or with the connect-mirror-maker.sh driver? Given your
> question
> >> > about plugin.path I'm guessing the former. Is the Connect cluster
> >> running
> >> > 2.4.0 as well? The jars should land in the Connect runtime without any
> >> need
> >> > to modify the plugin.path or copy jars around.
> >> >
> >> > Ryanne
> >> >
> >> > On Mon, Dec 16, 2019, 6:23 AM Jamie 
> >> wrote:
> >> >
> >> > > Hi All,
> >> > > I'm trying to set up mirror maker 2.0 with Kafka 2.4.0 however, I'm
> >> > > receiving the following errors on startup:
> >> > > ERROR Plugin class loader for connector
> >> > > 'org.apache.kafka.connect.mirror.MirrorSourceConnector' was not
> found.
> >> > > Returning:
> >> > >
> >>
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader@187eb9a8
> >> > > (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
> >> > > ERROR Plugin class loader for connector
> >> > > 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' was not
> >> > > found. Returning:
> >> > >
> >>
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader@187eb9a8
> >> > > (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
> >> > > ERROR Plugin class loader for connector
> >> > > 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' was not
> >> > > found. Returning:
> >> > >
> >>
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader@187eb9a8
> >> > > (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
> >> > >
> >> > > I've checked the jar file containing these class file is in the
> class
> >> > > path.
> >> > > Is there anything I need to add to plugin.path for the connect
> >> properties
> >> > > when running mirror maker?
> >> > > Many Thanks,
> >> > > Jamie
> >> >
> >>
> >>
> >> --
> >>  - Sini
> >>
> >
>


-- 
 - Sini


[jira] [Created] (KAFKA-9376) Plugin class loader not found using MM2

2020-01-07 Thread Jira
Sinóros-Szabó Péter created KAFKA-9376:
--

 Summary: Plugin class loader not found using MM2
 Key: KAFKA-9376
 URL: https://issues.apache.org/jira/browse/KAFKA-9376
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Affects Versions: 2.4.0
Reporter: Sinóros-Szabó Péter


I am using MM2 (release 2.4.0 with Scala 2.12) and I get a bunch of classloader 
errors. MM2 seems to be working, but I do not know whether all of its components 
are working as expected, as this is the first time I have used MM2.

I run MM2 with the following command:
{code:java}
./bin/connect-mirror-maker.sh config/connect-mirror-maker.properties
{code}
Errors are:
{code:java}
[2020-01-07 15:06:17,892] ERROR Plugin class loader for connector: 
'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' was not found. 
Returning: 
org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader@6ebf0f36 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:165)
[2020-01-07 15:06:17,889] ERROR Plugin class loader for connector: 
'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' was not found. 
Returning: 
org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader@6ebf0f36 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:165)
[2020-01-07 15:06:17,904] INFO ConnectorConfig values:
 config.action.reload = restart
 connector.class = org.apache.kafka.connect.mirror.MirrorHeartbeatConnector
 errors.log.enable = false
 errors.log.include.messages = false
 errors.retry.delay.max.ms = 6
 errors.retry.timeout = 0
 errors.tolerance = none
 header.converter = null
 key.converter = null
 name = MirrorHeartbeatConnector
 tasks.max = 1
 transforms = []
 value.converter = null
 (org.apache.kafka.connect.runtime.ConnectorConfig:347)
[2020-01-07 15:06:17,904] INFO EnrichedConnectorConfig values:
 config.action.reload = restart
 connector.class = org.apache.kafka.connect.mirror.MirrorHeartbeatConnector
 errors.log.enable = false
 errors.log.include.messages = false
 errors.retry.delay.max.ms = 6
 errors.retry.timeout = 0
 errors.tolerance = none
 header.converter = null
 key.converter = null
 name = MirrorHeartbeatConnector
 tasks.max = 1
 transforms = []
 value.converter = null
 (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:347)
[2020-01-07 15:06:17,905] INFO TaskConfig values:
 task.class = class org.apache.kafka.connect.mirror.MirrorHeartbeatTask
 (org.apache.kafka.connect.runtime.TaskConfig:347)
[2020-01-07 15:06:17,905] INFO Instantiated task MirrorHeartbeatConnector-0 
with version 1 of type org.apache.kafka.connect.mirror.MirrorHeartbeatTask 
(org.apache.kafka.connect.runtime.Worker:434){code}
After a while, these errors are not logged any more.

Config is:
{code:java}
clusters = eucmain, euwbackup
eucmain.bootstrap.servers = kafka1:9092,kafka2:9092
euwbackup.bootstrap.servers = 172.30.197.203:9092,172.30.213.104:9092
eucmain->euwbackup.enabled = true
eucmain->euwbackup.topics = .*
eucmain->euwbackup.topics.blacklist = ^(kafka|kmf|__|pricing).*
eucmain->euwbackup.rename.topics = false
rename.topics = false
eucmain->euwbackup.sync.topic.acls.enabled = false
sync.topic.acls.enabled = false{code}
Using OpenJDK 8 or 11, I get the same error.

 



--


Build failed in Jenkins: kafka-2.4-jdk8 #122

2020-01-07 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] HOTFIX: fix system test race condition (#7836)


--
[...truncated 5.48 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED


Re: [DISCUSS] KIP-553: Enable TLSv1.3 by default and disable all protocols except [TLSV1.2, TLSV1.3]

2020-01-07 Thread Rajini Sivaram
Hi Nikolay,

There are a couple of things you could do:

1) Run all system tests that use SSL with TLSv1.3. I had run a subset, but
it will be good to run all of them. You can do this locally using docker
with JDK 11 by updating the files in tests/docker. You will need to update
tests/kafkatest/services/security/security_config.py to enable only
TLSv1.3. Instructions for running system tests using docker are in
https://github.com/apache/kafka/blob/trunk/tests/README.md.
2) For integration tests, we run a small number of tests using TLSv1.3 if
the tests are run using JDK 11 and above. We need to do this for system
tests as well. There is an open JIRA:
https://issues.apache.org/jira/browse/KAFKA-9319. Feel free to assign this
to yourself if you have time to do this.
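
For context, restricting brokers and clients to TLSv1.3 comes down to the two standard Kafka SSL protocol settings (a sketch; TLSv1.3 negotiation requires JDK 11+):

```properties
# Accept and negotiate only TLSv1.3; TLSv1.2 and earlier are left disabled
ssl.enabled.protocols=TLSv1.3
ssl.protocol=TLSv1.3
```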

Regards,

Rajini


On Tue, Jan 7, 2020 at 5:15 AM Николай Ижиков  wrote:

> Hello, Rajini.
>
> Can you please clarify what should be done?
> I can try to run the tests myself.
>
> > On 6 Jan 2020, at 21:29, Rajini Sivaram 
> > wrote:
> >
> > Hi Brajesh.
> >
> > No one is working on this yet, but I will follow up with the Confluent
> tools
> > team to see when this can be done.
> >
> > On Mon, Jan 6, 2020 at 3:29 PM Brajesh Kumar 
> wrote:
> >
> >> Hello Rajini,
> >>
> >> What is the plan to run system tests using JDK 11? Is someone working on
> >> this?
> >>
> >> On Mon, Jan 6, 2020 at 3:00 PM Rajini Sivaram 
> >> wrote:
> >>
> >>> Hi Nikolay,
> >>>
> >>> We can leave the KIP open and restart the discussion once system tests
> >> are
> >>> running.
> >>>
> >>> Thanks,
> >>>
> >>> Rajini
> >>>
> >>> On Mon, Jan 6, 2020 at 2:46 PM Николай Ижиков 
> >> wrote:
> >>>
>  Hello, Rajini.
> 
>  Thanks, for the feedback.
> 
>  Should I mark this KIP as declined?
> Or just wait for the system test results?
> 
> > On 6 Jan 2020, at 17:26, Rajini Sivaram 
>  wrote:
> >
> > Hi Nikolay,
> >
> > Thanks for the KIP. We currently run system tests using JDK 8 and
> >> hence
>  we
> > don't yet have full system test results with TLS 1.3 which requires
> >> JDK
>  11.
> > We should wait until that is done before enabling TLS1.3 by default.
> >
> > Regards,
> >
> > Rajini
> >
> >
> > On Mon, Dec 30, 2019 at 5:36 AM Николай Ижиков 
>  wrote:
> >
> >> Hello, Team.
> >>
> >> Any feedback on this KIP?
> >> Do we need this in Kafka?
> >>
> >>> On 24 Dec 2019, at 18:28, Nikolay Izhikov 
> >> wrote:
> >>>
> >>> Hello,
> >>>
> >>> I'd like to start a discussion of KIP.
> >>> Its goal is to enable TLSv1.3 and disable obsolete versions by
> >>> default.
> >>>
> >>>
> >>
> 
> >>>
> >>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=142641956
> >>>
> >>> Your comments and suggestions are welcome.
> >>>
> >>
> >>
> 
> 
> >>>
> >>
> >>
> >> --
> >> Regards,
> >> Brajesh Kumar
> >>
>
>


[jira] [Resolved] (KAFKA-8135) Kafka Producer deadlocked on flush call with intermittent broker unavailability

2020-01-07 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-8135.
---
Resolution: Duplicate

> Kafka Producer deadlocked on flush call with intermittent broker 
> unavailability
> ---
>
> Key: KAFKA-8135
> URL: https://issues.apache.org/jira/browse/KAFKA-8135
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.1.0
>Reporter: Guozhang Wang
>Assignee: Rajini Sivaram
>Priority: Major
>
> In KIP-91 we added the config {{delivery.timeout.ms}} to replace {{retries}}, 
> and the value defaults to 2 minutes. We've observed that when it is set to 
> MAX_VALUE (e.g. in Kafka Streams, when EOS is turned on), the 
> {{producer.flush}} call can at times remain blocked while its destination 
> brokers are undergoing some unavailability:
> {code}
> java.lang.Thread.State: WAITING (parking)
> at jdk.internal.misc.Unsafe.park(java.base@10.0.2/Native Method)
> - parking to wait for  <0x0006aeb21a00> (a 
> java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.park(java.base@10.0.2/Unknown 
> Source)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(java.base@10.0.2/Unknown
>  Source)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(java.base@10.0.2/Unknown
>  Source)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(java.base@10.0.2/Unknown
>  Source)
> at java.util.concurrent.CountDownLatch.await(java.base@10.0.2/Unknown 
> Source)
> at 
> org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:76)
> at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:693)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:1066)
> at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.flush(RecordCollectorImpl.java:259)
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.flushState(StreamTask.java:520)
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:470)
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:458)
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:286)
> at 
> org.apache.kafka.streams.processor.internals.TaskManager.commitAll(TaskManager.java:412)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:1057)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:911)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:805)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:774)
> {code}
> Even after the broker went back to normal, producers would still be 
> blocked. One suspicion is that when the broker is not able to handle a request 
> in time, the responses are dropped somehow inside the Sender, and hence 
> whoever is waiting on the response would be blocked forever. 
> We've observed such scenarios when 1) the broker failed transiently for a 
> while, 2) the network partitioned transiently, and 3) a bad broker config 
> (e.g. ACLs) caused it to be unable to handle requests for a while.
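
As context, the indefinite blocking above is tied to the unbounded timeout; with a bounded delivery timeout, flush() eventually fails the pending batches instead of waiting forever. A sketch of the relevant producer settings (the values shown are the documented defaults, used here for illustration):

```properties
# Bounded end-to-end delivery: a record fails after 2 minutes instead of retrying forever
delivery.timeout.ms=120000
# Per-request timeout; delivery.timeout.ms should be >= request.timeout.ms + linger.ms
request.timeout.ms=30000
```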



--


[jira] [Created] (KAFKA-9375) Add thread names to kafka connect threads

2020-01-07 Thread karan kumar (Jira)
karan kumar created KAFKA-9375:
--

 Summary: Add thread names to kafka connect threads
 Key: KAFKA-9375
 URL: https://issues.apache.org/jira/browse/KAFKA-9375
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 2.3.1, 2.4.0
Reporter: karan kumar






--