Build failed in Jenkins: kafka-trunk-jdk8 #4065

2019-11-22 Thread Apache Jenkins Server


Changes:

[github] MINOR: Fix producer timeouts in log divergence test (#7728)

[vvcephei] MINOR: Updated StreamTableJoinIntegrationTest to use TTD (#7722)

[github] KAFKA-9123 Test a large number of replicas (#7621)


--
[...truncated 2.75 MB...]
org.apache.kafka.connect.converters.LongConverterTest > 
testConvertingSamplesToAndFromBytes STARTED

org.apache.kafka.connect.converters.LongConverterTest > 
testConvertingSamplesToAndFromBytes PASSED

> Task :generator:test

org.apache.kafka.message.StructRegistryTest > testDuplicateCommonStructError 
STARTED

org.apache.kafka.message.StructRegistryTest > testDuplicateCommonStructError 
PASSED

org.apache.kafka.message.StructRegistryTest > testReSpecifiedCommonStructError 
STARTED

org.apache.kafka.message.StructRegistryTest > testReSpecifiedCommonStructError 
PASSED

org.apache.kafka.message.StructRegistryTest > testCommonStructs STARTED

org.apache.kafka.message.StructRegistryTest > testCommonStructs PASSED

org.apache.kafka.message.MessageDataGeneratorTest > 
testInvalidTaggedVersionsNotASubetOfVersions STARTED

org.apache.kafka.message.MessageDataGeneratorTest > 
testInvalidTaggedVersionsNotASubetOfVersions PASSED

org.apache.kafka.message.MessageDataGeneratorTest > 
testInvalidNullDefaultForPotentiallyNonNullableArray STARTED

org.apache.kafka.message.MessageDataGeneratorTest > 
testInvalidNullDefaultForPotentiallyNonNullableArray PASSED

org.apache.kafka.message.MessageDataGeneratorTest > testInvalidFieldName STARTED

org.apache.kafka.message.MessageDataGeneratorTest > testInvalidFieldName PASSED

org.apache.kafka.message.MessageDataGeneratorTest > 
testInvalidTaggedVersionsWithoutTag STARTED

org.apache.kafka.message.MessageDataGeneratorTest > 
testInvalidTaggedVersionsWithoutTag PASSED

org.apache.kafka.message.MessageDataGeneratorTest > 
testInvalidFlexibleVersionsRange STARTED

org.apache.kafka.message.MessageDataGeneratorTest > 
testInvalidFlexibleVersionsRange PASSED

org.apache.kafka.message.MessageDataGeneratorTest > 
testInvalidNullDefaultForInt STARTED

org.apache.kafka.message.MessageDataGeneratorTest > 
testInvalidNullDefaultForInt PASSED

org.apache.kafka.message.MessageDataGeneratorTest > testDuplicateTags STARTED

org.apache.kafka.message.MessageDataGeneratorTest > testDuplicateTags PASSED

org.apache.kafka.message.MessageDataGeneratorTest > 
testInvalidTaggedVersionsRange STARTED

org.apache.kafka.message.MessageDataGeneratorTest > 
testInvalidTaggedVersionsRange PASSED

org.apache.kafka.message.MessageDataGeneratorTest > testNullDefaults STARTED

org.apache.kafka.message.MessageDataGeneratorTest > testNullDefaults PASSED

org.apache.kafka.message.MessageDataGeneratorTest > 
testInvalidTagWithoutTaggedVersions STARTED

org.apache.kafka.message.MessageDataGeneratorTest > 
testInvalidTagWithoutTaggedVersions PASSED

org.apache.kafka.message.MessageDataGeneratorTest > testInvalidNegativeTag 
STARTED

org.apache.kafka.message.MessageDataGeneratorTest > testInvalidNegativeTag 
PASSED

org.apache.kafka.message.MessageDataGeneratorTest > 
testInvalidSometimesNullableTaggedField STARTED

org.apache.kafka.message.MessageDataGeneratorTest > 
testInvalidSometimesNullableTaggedField PASSED

org.apache.kafka.message.IsNullConditionalTest > testNotNullCheck STARTED

org.apache.kafka.message.IsNullConditionalTest > testNotNullCheck PASSED

org.apache.kafka.message.IsNullConditionalTest > testNeverNullWithBlockScope 
STARTED

org.apache.kafka.message.IsNullConditionalTest > testNeverNullWithBlockScope 
PASSED

org.apache.kafka.message.IsNullConditionalTest > testNeverNull STARTED

org.apache.kafka.message.IsNullConditionalTest > testNeverNull PASSED

org.apache.kafka.message.IsNullConditionalTest > testAnotherNullCheck STARTED

org.apache.kafka.message.IsNullConditionalTest > testAnotherNullCheck PASSED

org.apache.kafka.message.IsNullConditionalTest > testNullCheck STARTED

org.apache.kafka.message.IsNullConditionalTest > testNullCheck PASSED

org.apache.kafka.message.EntityTypeTest > testVerifyTypeMismatches STARTED

org.apache.kafka.message.EntityTypeTest > testVerifyTypeMismatches PASSED

org.apache.kafka.message.EntityTypeTest > testVerifyTypeMatches STARTED

org.apache.kafka.message.EntityTypeTest > testVerifyTypeMatches PASSED

org.apache.kafka.message.EntityTypeTest > testUnknownEntityType STARTED

org.apache.kafka.message.EntityTypeTest > testUnknownEntityType PASSED

org.apache.kafka.message.CodeBufferTest > testWrite STARTED

org.apache.kafka.message.CodeBufferTest > testWrite PASSED

org.apache.kafka.message.CodeBufferTest > testIndentMustBeNonNegative STARTED

org.apache.kafka.message.CodeBufferTest > testIndentMustBeNonNegative PASSED

org.apache.kafka.message.CodeBufferTest > testEquals STARTED

org.apache.kafka.message.CodeBufferTest > testEquals PASSED

org.apache.kafka.message.VersionsTest > testIntersections 

Build failed in Jenkins: kafka-trunk-jdk11 #981

2019-11-22 Thread Apache Jenkins Server


Changes:

[github] KAFKA-9123 Test a large number of replicas (#7621)


--
[...truncated 2.75 MB...]

org.apache.kafka.connect.transforms.RegexRouterTest > identity PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > addPrefix STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > addPrefix PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > addSuffix STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > addSuffix PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > slice STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > slice PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > staticReplacement STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > staticReplacement PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion PASSED

org.apache.kafka.connect.transforms

Build failed in Jenkins: kafka-2.4-jdk8 #85

2019-11-22 Thread Apache Jenkins Server


Changes:

[rhauch] KAFKA-9051: Prematurely complete source offset read requests for 
stopped


--
[...truncated 2.69 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

Jenkins build is back to normal : kafka-trunk-jdk8 #4064

2019-11-22 Thread Apache Jenkins Server




Re: [DISCUSS] KIP-548 Add Option to enforce rack-aware custom partition reassignment execution

2019-11-22 Thread Satish Bellapu
Hi Vahid,
After rethinking this, I have the following updates to the KIP, aligning it 
with the other options of ReassignPartitionsCommand:

The --execute command will take rack awareness into consideration by default. 
If the custom-generated reassignment plan conflicts with the rack constraints, 
it will throw an error message stating the reason and listing the conflicting 
partitions along with their rack info. Users need to explicitly pass the 
--disable-rack-aware option if they want to ignore rack awareness.

With this change, the option usage of --execute will be aligned with 
--generate: rack awareness will be considered by default for both --generate 
and --execute unless --disable-rack-aware is explicitly set.
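To make the conflict check concrete, here is a minimal sketch (hypothetical helper names, not the actual ReassignPartitionsCommand code) of the kind of rack-awareness validation described above: a plan conflicts if a partition's replicas span fewer racks than they could.

```python
# Hypothetical sketch of a rack-awareness check on a reassignment plan.
# A partition conflicts when its replicas occupy fewer distinct racks than
# min(#replicas, #racks available in the cluster).
def find_rack_conflicts(plan, broker_racks):
    """plan: {partition: [broker ids]}; broker_racks: {broker id: rack}."""
    total_racks = len(set(broker_racks.values()))
    conflicts = {}
    for partition, replicas in plan.items():
        racks = [broker_racks[b] for b in replicas]
        ideal_span = min(len(replicas), total_racks)
        if len(set(racks)) < ideal_span:
            conflicts[partition] = racks  # report the offending rack layout
    return conflicts

plan = {"topic-0": [1, 2], "topic-1": [1, 3]}
racks = {1: "r1", 2: "r1", 3: "r2"}
print(find_rack_conflicts(plan, racks))  # {'topic-0': ['r1', 'r1']}
```

Under this sketch, --execute would refuse the plan and print the conflicting partitions (topic-0 keeps both replicas on rack r1) unless --disable-rack-aware is passed.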

Let me know your thoughts on this.

--sbellapu

On 2019/11/22 16:32:08, Vahid Hashemian  wrote: 
> Thanks Satish for drafting the KIP. It looks good overall. I would suggest
> emphasizing the default behavior of the --disable-rack-aware option when it
> is used with --execute.
> It would also be worth emphasizing that the new format for
> --disable-rack-aware (which now takes a true/false value) would not impact
> existing usages (e.g. with --generate) that did not require a value for the
> option.
> 
> Viktor, to answer your first question, in my experience the assignment json
> file is not always created by the same command (through the --generate
> option):
> 
>- Sometimes, when a broker is not healthy, we manually update the existing
>assignment to change partition replicas and reduce load on the degraded
>broker.
>- When generating a full partition assignment plan, we also want to use a
>custom assignment strategy to have more control over partition placements,
>rather than the default strategy used by Kafka.
> 
> In these scenarios, it would be very helpful to have the option of
> enforcing rack awareness with the command's --execute option.
> 
> Regards,
> --Vahid
> 
> On Fri, Nov 22, 2019 at 2:57 AM Viktor Somogyi-Vass 
> wrote:
> 
> > Hi Satish,
> >
> > Couple of questions/suggestions:
> > 1. You say that when you execute the planned reassignment, it would
> > throw an error if the generated reassignment doesn't comply with the
> > rack-awareness requirement. Instead, why not have the --generate
> > option produce a rack-aware reassignment plan? That way users won't
> > have to do the extra round trip.
> > 2. Please move your KIP under
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> > ,
> > people will have a hard time finding it if it's under KIP-36.
> > (@Stan fyi:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-548+Add+Option+to+enforce+rack-aware+custom+partition+reassignment+execution
> > )
> >
> > Thanks,
> > Viktor
> >
> > On Fri, Nov 22, 2019 at 11:37 AM Stanislav Kozlovski <
> > stanis...@confluent.io>
> > wrote:
> >
> > > Hello Satish,
> > >
> > > Could you provide a link to the KIP? I am unable to find it in the KIP
> > > parent page
> > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> > >
> > > Thanks,
> > > Stanislav
> > >
> > > On Fri, Nov 22, 2019 at 8:21 AM Satish Bellapu 
> > > wrote:
> > >
> > > > Hi All,
> > > >
> > > > This KIP (KIP-548) basically extends the capabilities of the
> > > > "kafka-reassign-partitions" tool by adding a rack-aware verification
> > > > option for use with a custom or manually generated reassignment plan
> > > > in the --execute scenario.
> > > >
> > > > @sbellapu.
> > > >
> > >
> > >
> > > --
> > > Best,
> > > Stanislav
> > >
> >
> 
> 
> -- 
> 
> Thanks!
> --Vahid
> 


[jira] [Created] (KAFKA-9228) Reconfigured converters and clients may not be propagated to connector tasks

2019-11-22 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-9228:


 Summary: Reconfigured converters and clients may not be propagated 
to connector tasks
 Key: KAFKA-9228
 URL: https://issues.apache.org/jira/browse/KAFKA-9228
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.3.0, 2.4.0, 2.3.1, 2.3.2
Reporter: Chris Egerton
Assignee: Chris Egerton


If an existing connector is reconfigured but the only changes are to its 
converters and/or Kafka clients (enabled as of 
[KIP-458|https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy]),
 the changes will not propagate to its tasks unless the connector also 
generates task configs that differ from the existing task configs. Even after 
this point, if the connector tasks are reconfigured, they will still not pick 
up on the new converter and/or Kafka client configs.

This is because the {{DistributedHerder}} only writes new task configurations 
to the connect config topic [if the connector-provided task configs differ from 
the task configs already in the config 
topic|https://github.com/apache/kafka/blob/e499c960e4f9cfc462f1a05a110d79ffa1c5b322/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L1285-L1332],
 and neither of those contain converter or Kafka client configs.
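The failure mode can be illustrated with a minimal sketch (the names below are hypothetical, not the actual DistributedHerder API): the herder decides whether to publish new task configs by diffing them against what is already in the config topic, but converter and client overrides live in the connector config, not the task configs, so a change to only those never triggers a write.

```python
# Illustrative sketch of the bug: the publish decision compares task configs
# only, so converter/client changes in the connector config are invisible.
def should_publish_task_configs(existing_task_configs, new_task_configs):
    return existing_task_configs != new_task_configs

existing_tasks = [{"file": "/tmp/in.txt"}]
new_tasks = [{"file": "/tmp/in.txt"}]  # connector regenerated identical tasks

old_connector = {"value.converter": "JsonConverter"}
new_connector = {"value.converter": "AvroConverter"}  # only converter changed

# The converter change never reaches the comparison, so no new task configs
# are written and running tasks keep the old converter:
print(should_publish_task_configs(existing_tasks, new_tasks))  # False
```

A fix would presumably have to make the publish decision (or the task restart decision) sensitive to the relevant connector-level config keys as well.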



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #980

2019-11-22 Thread Apache Jenkins Server


Changes:

[github] MINOR: Fix producer timeouts in log divergence test (#7728)

[vvcephei] MINOR: Updated StreamTableJoinIntegrationTest to use TTD (#7722)


--
[...truncated 2.76 MB...]
org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfMergerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeCount STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeCount PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeAggregated STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeAggregated PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldCountSessionWindowedWithCachingEnabled STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldCountSessionWindowedWithCachingEnabled PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnCountIfMaterializedIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnCountIfMaterializedIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfAggregatorIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfAggregatorIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnReduceIfReducerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnReduceIfReducerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldAggregateSessionWindowed STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldAggregateSessionWindowed PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfAggregatorIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfAggregatorIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldCountSessionWindowedWithCachingDisabled STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldCountSessionWindowedWithCachingDisabled PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfInitializerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfInitializerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfMergerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfMergerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedReduceIfMaterializedIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedReduceIfMaterializedIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeReduced STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeReduced PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowTest > 
shouldOverlapIfOtherWindowIsWithinThisWindow STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowTest > 
shouldOverlapIfOtherWindowIsWithinThisWindow PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowTest > 
shouldNotOverlapIfOtherWindowIsBeforeThisWindow STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowTest > 
shouldNotOverlapIfOtherWindowIsBeforeThisWindow PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowTest > 
cannotCompareSessionWindowWithDifferentWindowType STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowTest > 
cannotCompareSessionWindowWithDifferentWindowType PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowTest > 
shouldOverlapIfOtherWindowContainsThisWindow STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowTest > 
shouldOverlapIfOtherWindowContainsThisWindow PASSED

[jira] [Resolved] (KAFKA-9051) Source task source offset reads can block graceful shutdown

2019-11-22 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9051.
--
Resolution: Fixed

> Source task source offset reads can block graceful shutdown
> ---
>
> Key: KAFKA-9051
> URL: https://issues.apache.org/jira/browse/KAFKA-9051
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.2, 1.1.1, 2.0.1, 2.1.1, 2.3.0, 2.2.1, 2.4.0, 2.5.0
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.0.2, 2.1.2, 2.2.3, 2.5.0, 2.3.2, 2.4.1
>
>
> When source tasks request source offsets from the framework, this results in 
> a call to 
> [Future.get()|https://github.com/apache/kafka/blob/8966d066bd2f80c6d8f270423e7e9982097f97b9/connect/runtime/src/main/java/org/apache/kafka/connect/storage/OffsetStorageReaderImpl.java#L79]
>  with no timeout. In distributed workers, the future is blocked on a 
> successful [read to the 
> end|https://github.com/apache/kafka/blob/8966d066bd2f80c6d8f270423e7e9982097f97b9/connect/runtime/src/main/java/org/apache/kafka/connect/storage/KafkaOffsetBackingStore.java#L136]
>  of the source offsets topic, which in turn will [poll that topic 
> indefinitely|https://github.com/apache/kafka/blob/8966d066bd2f80c6d8f270423e7e9982097f97b9/connect/runtime/src/main/java/org/apache/kafka/connect/util/KafkaBasedLog.java#L287]
>  until the latest messages for every partition of that topic have been 
> consumed.
> This normally completes in a reasonable amount of time. However, if the 
> connectivity between the Connect worker and the Kafka cluster is degraded or 
> dropped in the middle of one of these reads, it will block until connectivity 
> is restored and the request completes successfully.
> If a task is stopped (due to a manual restart via the REST API, a rebalance, 
> worker shutdown, etc.) while blocked on a read of source offsets during its 
> {{start}} method, not only will it fail to gracefully stop, but the framework 
> [will not even invoke its stop 
> method|https://github.com/apache/kafka/blob/8966d066bd2f80c6d8f270423e7e9982097f97b9/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java#L183]
>  until its {{start}} method (and, as a result, the source offset read 
> request) [has 
> completed|https://github.com/apache/kafka/blob/8966d066bd2f80c6d8f270423e7e9982097f97b9/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java#L202-L206].
>  This prevents the task from being able to clean up any resources it has 
> allocated and can lead to OOM errors, excessive thread creation, and other 
> problems.
>  
> I've confirmed that this affects every release of Connect back through 1.0 at 
> least; I've tagged the most recent bug fix release of every major/minor 
> version from then on in the {{Affects Version/s}} field to avoid just putting 
> every version in that field.
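The blocking behavior described above can be sketched in a few lines (hypothetical names, not the actual Connect OffsetStorageReaderImpl): a `Future.get()` with no timeout hangs forever if the backing read never completes, whereas a bounded wait lets the shutdown path proceed.

```python
# Sketch of the failure mode: an offset read blocked on an unfinished future.
import concurrent.futures
import threading

def read_offsets(done_event):
    done_event.wait()            # stands in for "read to the end of the topic"
    return {"position": 42}

never_done = threading.Event()   # connectivity lost: the read never completes
with concurrent.futures.ThreadPoolExecutor() as pool:
    future = pool.submit(read_offsets, never_done)
    try:
        # An unbounded future.result() would hang here indefinitely; a bounded
        # wait raises TimeoutError instead, so stop/cleanup can continue.
        offsets = future.result(timeout=0.1)
    except concurrent.futures.TimeoutError:
        offsets = None
    never_done.set()             # unblock the worker so the pool can shut down
print(offsets)  # None: the read timed out instead of blocking shutdown
```

The fix referenced in the resolution presumably bounds or interrupts the offset read so that a stopped task is not stuck inside its start method.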





Build failed in Jenkins: kafka-trunk-jdk11 #979

2019-11-22 Thread Apache Jenkins Server


Changes:

[github] KAFKA-8509; Add downgrade system test (#7724)


--
[...truncated 2.75 MB...]

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToNumericJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > decimalToNumericJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > numericDecimalToConnect 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > numericDecimalToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToJsonWithoutSchema 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > decimalToJsonWithoutSchema 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonStringKeys STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > arrayToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > testStringHeaderToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > testStringHeaderToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToConnectOptional 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > decimalToConnectOptional 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > structSchemaIdentical STARTED

org.apache.kafka.connect.json.JsonConverterTest > structSchemaIdentical 

[VOTE] KIP-533: Add default api timeout to AdminClient

2019-11-22 Thread Jason Gustafson
I'd like to start a vote on KIP-533:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-533%3A+Add+default+api+timeout+to+AdminClient
.

+1 from me

Thanks,
Jason
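
For readers following the vote: the KIP proposes a client-level default timeout that applies whenever a per-request timeout is not given. A minimal sketch of how a user might configure it, assuming the `default.api.timeout.ms` name implied by the KIP's title (value here is illustrative):

```properties
# adminclient.properties -- illustrative sketch, not a definitive config
bootstrap.servers=localhost:9092
# Default timeout applied to AdminClient API calls that do not
# specify their own timeout via per-request options.
default.api.timeout.ms=60000
```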


Re: [DISCUSS] KIP-548 Add Option to enforce rack-aware custom partition reassignment execution

2019-11-22 Thread Satish Bellapu
Hi Viktor,
1. @Vahid already provided the info.
2. Moved the KIP under the KIP parent page instead of KIP-36; thanks for spotting that.

--sbellapu

On 2019/11/22 10:57:35, Viktor Somogyi-Vass  wrote: 
> Hi Satish,
> 
> Couple of questions/suggestions:
> 1. You say that when you execute the planned reassignment then it would
> throw an error if the generated reassignment doesn't comply with the
> rack-aware requirement. Opposed to this: why don't you have the --generate
> option to generate a rack-aware reassignment plan? This way users won't
> have to do the extra round.
> 2. Please move your KIP under
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals 
> ,
> people will have a hard time finding it if it's under KIP-36.
> (@Stan fyi:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-548+Add+Option+to+enforce+rack-aware+custom+partition+reassignment+execution
> )
> 
> Thanks,
> Viktor
> 
> On Fri, Nov 22, 2019 at 11:37 AM Stanislav Kozlovski 
> wrote:
> 
> > Hello Satish,
> >
> > Could you provide a link to the KIP? I am unable to find it in the KIP
> > parent page
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> >
> > Thanks,
> > Stanislav
> >
> > On Fri, Nov 22, 2019 at 8:21 AM Satish Bellapu 
> > wrote:
> >
> > > Hi All,
> > >
> > > This [KIP-548] is basically extending the capabilities of
> > > "kafka-reassign-partitions" tool by adding rack-aware verification option
> > > when used along with custom or manually generated reassignment planner
> > with
> > > --execute scenario.
> > >
> > > @sbellapu.
> > >
> >
> >
> > --
> > Best,
> > Stanislav
> >
> 


Re: [DISCUSS] KIP-548 Add Option to enforce rack-aware custom partition reassignment execution

2019-11-22 Thread Satish Bellapu
Here is the link for the [KIP-548] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-548+Add+Option+to+enforce+rack-aware+custom+partition+reassignment+execution

On 2019/11/22 10:37:07, Stanislav Kozlovski  wrote: 
> Hello Satish,
> 
> Could you provide a link to the KIP? I am unable to find it in the KIP
> parent page
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> 
> Thanks,
> Stanislav
> 
> On Fri, Nov 22, 2019 at 8:21 AM Satish Bellapu 
> wrote:
> 
> > Hi All,
> >
> > This [KIP-548] is basically extending the capabilities of
> > "kafka-reassign-partitions" tool by adding rack-aware verification option
> > when used along with custom or manually generated reassignment planner with
> > --execute scenario.
> >
> > @sbellapu.
> >
> 
> 
> -- 
> Best,
> Stanislav
> 


[jira] [Resolved] (KAFKA-8509) Add downgrade system tests

2019-11-22 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8509.

Resolution: Fixed

> Add downgrade system tests
> --
>
> Key: KAFKA-8509
> URL: https://issues.apache.org/jira/browse/KAFKA-8509
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> We've been bitten a few times by downgrade incompatibilities. It should be 
> straightforward to adapt our current upgrade system tests to support 
> downgrades as well. The basic procedure should be: 
>  # Roll the cluster with the updated binary, keep IBP on old version
>  # Verify produce/consume
>  # Roll the cluster with the old binary, keep IBP the same.
>  # Verify produce/consume
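
The IBP pinning that steps 1 and 3 rely on is controlled by a broker config; a minimal sketch (version numbers chosen for illustration):

```properties
# server.properties during the first roll: new binary, old protocol.
# Keeping inter.broker.protocol.version at the old version means the
# new binary never writes anything the old binary cannot read back,
# so step 3 can safely roll back to the old binary.
inter.broker.protocol.version=2.3
```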



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9227) Broker restart/snapshot times increase after upgrade from 1.1.0 to 2.3.1

2019-11-22 Thread Nicholas Feinberg (Jira)
Nicholas Feinberg created KAFKA-9227:


 Summary: Broker restart/snapshot times increase after upgrade from 
1.1.0 to 2.3.1
 Key: KAFKA-9227
 URL: https://issues.apache.org/jira/browse/KAFKA-9227
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.3.1
 Environment: Ubuntu 18, EC2, d2.8xlarge
Reporter: Nicholas Feinberg


I've been looking at upgrading my cluster from 1.1.0 to 2.3.1. While testing, 
I've noticed that shutting brokers down seems to take consistently longer on 
2.3.1. Specifically, the process of 'creating snapshots' seems to take several 
times longer than it did on 1.1.0. On a small testing setup, the time needed to 
create snapshots and shut down goes from ~20s to ~120s; with production-scale 
data, it goes from ~2min to ~30min.

The test hosts run about 384 partitions each (7 topics * 128 partitions each * 
3x replication / 7 brokers). The largest prod cluster has about 1344 
partitions/broker; the smallest and slowest has 2560.

In our largest prod cluster (16 d2.8xlarge broker cluster, 200k msg/s, 300 
MB/s), our restart cycles take about 3 minutes on 1.1.0 (counting ISR-rejoin 
time) and about 30 minutes on 2.3.1. The only other change we made between 
versions was increasing heap size from 8G to 16G.

To allow myself to roll back, I'm still using the 1.1 versions of the 
inter-broker protocol and the message format - is it possible that those could 
slow things down in 2.3.1? If not, any ideas what else could be at fault, or 
what I could do to narrow down the issue further?





Re: [DISCUSS] KIP-548 Add Option to enforce rack-aware custom partition reassignment execution

2019-11-22 Thread Vahid Hashemian
Thanks Satish for drafting the KIP. It looks good overall. I would suggest
emphasizing the default value of the --disable-rack-aware option when it is
used with the --execute option.
It would also be great to emphasize that the new format for
--disable-rack-aware (which now takes a true/false value) does not impact
existing usages (e.g. with the --generate option) that did not require a
value for the option.

Viktor, to answer your first question, in my experience the assignment json
file is not always created by the same command (through the --generate option):

   - Sometimes, when a broker is not healthy, we manually update the existing
   assignment to change partition replicas and reduce load on the degraded
   broker.
   - When generating a full partition assignment plan, we also want to use a
   custom assignment strategy to have more control over partition placements,
   rather than the default strategy used by Kafka.

In these scenarios, it would be very helpful to have the option of
enforcing rack awareness with the command's --execute option.
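
The kind of check the KIP proposes for --execute can be illustrated with a short sketch. The broker-to-rack map and topic names below are hypothetical, and a real tool would read rack info from the brokers' broker.rack configs; the JSON layout is the kafka-reassign-partitions --reassignment-json-file format:

```python
import json

# Hypothetical broker -> rack map for illustration.
BROKER_RACKS = {1: "rack-a", 2: "rack-a", 3: "rack-b", 4: "rack-b"}

def is_rack_aware(assignment, broker_racks):
    """True iff every partition's replica set spans more than one rack."""
    return all(
        len({broker_racks[b] for b in p["replicas"]}) > 1
        for p in assignment["partitions"]
    )

# A manually edited plan in the kafka-reassign-partitions JSON format.
plan = {
    "version": 1,
    "partitions": [
        {"topic": "clicks", "partition": 0, "replicas": [1, 3]},
        {"topic": "clicks", "partition": 1, "replicas": [1, 2]},  # both rack-a
    ],
}

print(json.dumps(plan))  # what would be passed via --reassignment-json-file
print(is_rack_aware(plan, BROKER_RACKS))  # False: partition 1 is single-rack
```

Under the proposal, executing this plan without --disable-rack-aware would fail with an error, since partition 1 places both replicas on rack-a.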

Regards,
--Vahid

On Fri, Nov 22, 2019 at 2:57 AM Viktor Somogyi-Vass 
wrote:

> Hi Satish,
>
> Couple of questions/suggestions:
> 1. You say that when you execute the planned reassignment then it would
> throw an error if the generated reassignment doesn't comply with the
> rack-aware requirement. Opposed to this: why don't you have the --generate
> option to generate a rack-aware reassignment plan? This way users won't
> have to do the extra round.
> 2. Please move your KIP under
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> ,
> people will have a hard time finding it if it's under KIP-36.
> (@Stan fyi:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-548+Add+Option+to+enforce+rack-aware+custom+partition+reassignment+execution
> )
>
> Thanks,
> Viktor
>
> On Fri, Nov 22, 2019 at 11:37 AM Stanislav Kozlovski <
> stanis...@confluent.io>
> wrote:
>
> > Hello Satish,
> >
> > Could you provide a link to the KIP? I am unable to find it in the KIP
> > parent page
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> >
> > Thanks,
> > Stanislav
> >
> > On Fri, Nov 22, 2019 at 8:21 AM Satish Bellapu 
> > wrote:
> >
> > > Hi All,
> > >
> > > This [KIP-548] is basically extending the capabilities of
> > > "kafka-reassign-partitions" tool by adding rack-aware verification
> option
> > > when used along with custom or manually generated reassignment planner
> > with
> > > --execute scenario.
> > >
> > > @sbellapu.
> > >
> >
> >
> > --
> > Best,
> > Stanislav
> >
>


-- 

Thanks!
--Vahid


Build failed in Jenkins: kafka-trunk-jdk11 #978

2019-11-22 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: Add validation in MockAdminClient for replication factor 
(#7712)


--
[...truncated 2.74 MB...]
org.apache.kafka.connect.mirror.MirrorClientTest > remoteTopicsSeparatorTest 
STARTED

org.apache.kafka.connect.mirror.MirrorClientTest > remoteTopicsSeparatorTest 
PASSED

org.apache.kafka.connect.mirror.MirrorClientTest > checkpointsTopicsTest STARTED

org.apache.kafka.connect.mirror.MirrorClientTest > checkpointsTopicsTest PASSED

org.apache.kafka.connect.mirror.MirrorClientTest > replicationHopsTest STARTED

org.apache.kafka.connect.mirror.MirrorClientTest > replicationHopsTest PASSED

> Task :streams:streams-scala:test

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaProperties STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaProperties PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized 
STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized 
PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced should 
create a Produced with Serdes STARTED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Pro

[jira] [Resolved] (KAFKA-9223) RebalanceSourceConnectorsIntegrationTest disrupting builds with System::exit

2019-11-22 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9223.
--
Fix Version/s: 2.3.2
   Resolution: Fixed

Thanks, @C0urante. This PR changes one of the Connect integration tests to call 
{{maskExitProcedures(true)}}, but thanks to the previously merged 
https://github.com/apache/kafka/pull/7028 the default for this was changed to 
true on the 2.4 and trunk branches. We should have backported 
https://github.com/apache/kafka/pull/7028 to the 2.3 branch at the time, and 
that seems like the preferred approach because all Connect integration tests 
start and stop one or more Connect workers and we don't want those workers to 
fail and call {{System.exit()}}.

So, I'm closing this without merging because I just backported 
https://github.com/apache/kafka/pull/7028 to the 2.3 branch.

> RebalanceSourceConnectorsIntegrationTest disrupting builds with System::exit
> 
>
> Key: KAFKA-9223
> URL: https://issues.apache.org/jira/browse/KAFKA-9223
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.0, 2.3.2
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.3.2
>
>
> The RebalanceSourceConnectorsIntegrationTest causes builds to fail sometimes 
> by ungracefully shutting down its embedded Connect workers, which in turn 
> call System::exit.





[jira] [Created] (KAFKA-9226) Section on deletion of segment files is out of date

2019-11-22 Thread Jira
Sönke Liebau created KAFKA-9226:
---

 Summary: Section on deletion of segment files is out of date
 Key: KAFKA-9226
 URL: https://issues.apache.org/jira/browse/KAFKA-9226
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.3.0
Reporter: Sönke Liebau


The section on segment deletion in the documentation seems to be a bit out of 
date.

https://kafka.apache.org/documentation/#impl_deletes

I noticed:
* pluggable deletion policies - can't find those
* deletion of segment by file access time - that's changed to record timestamp
* future mentions of size based cleanup policies - those have been implemented





Build failed in Jenkins: kafka-trunk-jdk8 #4063

2019-11-22 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: Add validation in MockAdminClient for replication factor 
(#7712)


--
[...truncated 2.74 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafk

Re: [DISCUSS] KIP-548 Add Option to enforce rack-aware custom partition reassignment execution

2019-11-22 Thread Viktor Somogyi-Vass
Hi Satish,

Couple of questions/suggestions:
1. You say that when you execute the planned reassignment then it would
throw an error if the generated reassignment doesn't comply with the
rack-aware requirement. Opposed to this: why don't you have the --generate
option to generate a rack-aware reassignment plan? This way users won't
have to do the extra round.
2. Please move your KIP under
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals ,
people will have a hard time finding it if it's under KIP-36.
(@Stan fyi:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-548+Add+Option+to+enforce+rack-aware+custom+partition+reassignment+execution
)

Thanks,
Viktor

On Fri, Nov 22, 2019 at 11:37 AM Stanislav Kozlovski 
wrote:

> Hello Satish,
>
> Could you provide a link to the KIP? I am unable to find it in the KIP
> parent page
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
>
> Thanks,
> Stanislav
>
> On Fri, Nov 22, 2019 at 8:21 AM Satish Bellapu 
> wrote:
>
> > Hi All,
> >
> > This [KIP-548] is basically extending the capabilities of
> > "kafka-reassign-partitions" tool by adding rack-aware verification option
> > when used along with custom or manually generated reassignment planner
> with
> > --execute scenario.
> >
> > @sbellapu.
> >
>
>
> --
> Best,
> Stanislav
>


Re: [DISCUSS] KIP-548 Add Option to enforce rack-aware custom partition reassignment execution

2019-11-22 Thread Stanislav Kozlovski
Hello Satish,

Could you provide a link to the KIP? I am unable to find it in the KIP
parent page
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals

Thanks,
Stanislav

On Fri, Nov 22, 2019 at 8:21 AM Satish Bellapu 
wrote:

> Hi All,
>
> This [KIP-548] is basically extending the capabilities of
> "kafka-reassign-partitions" tool by adding rack-aware verification option
> when used along with custom or manually generated reassignment planner with
> --execute scenario.
>
> @sbellapu.
>


-- 
Best,
Stanislav


Build failed in Jenkins: kafka-2.3-jdk8 #141

2019-11-22 Thread Apache Jenkins Server
See 


Changes:

[rhauch] MINOR: Embedded connect cluster should mask exit procedures by default


--
[...truncated 2.95 MB...]
kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed 
STARTED

kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed 
PASSED

kafka.log.LogValidatorTest > testCompressedV1 STARTED

kafka.log.LogValidatorTest > testCompressedV1 PASSED

kafka.log.LogValidatorTest > testCompressedV2 STARTED

kafka.log.LogValidatorTest > testCompressedV2 PASSED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
STARTED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed PASSED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed STARTED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed PASSED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion STARTED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted STARTED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 PASSED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients STARTED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed PASSED

kafka.log.LogValidatorTest > testControlRecordsNotCompressed STARTED

kafka.log.LogValidatorTest > testControlRecordsNotCompressed PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV2 STARTED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV2 PASSED

kafka.log.LogValidatorTest > testCompressedBatchWithoutRecordsNotAllowed STARTED

kafka.log.LogValidatorTest > testCompressedBatchWithoutRecordsNotAllowed PASSED

kafka.log.LogValidatorTest > testInvalidInnerMagicVersion STARTED

kafka.log.LogValidatorTest > test

Jenkins build is back to normal : kafka-1.1-jdk8 #282

2019-11-22 Thread Apache Jenkins Server
See 



[jira] [Created] (KAFKA-9225) kafka fail to run on linux-aarch64

2019-11-22 Thread jiamei xie (Jira)
jiamei xie created KAFKA-9225:
-

 Summary: kafka fail to run on linux-aarch64
 Key: KAFKA-9225
 URL: https://issues.apache.org/jira/browse/KAFKA-9225
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: jiamei xie
 Attachments: compat_report.html

Steps to reproduce:

1. Download the latest Kafka source code

2. Build it with Gradle

3. Run the streams quickstart 
(https://kafka.apache.org/23/documentation/streams/quickstart)

When running

bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo

it crashed with the following error message:

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in 
[jar:file:/home/xjm/kafka/core/build/dependant-libs-2.12.10/slf4j-log4j12-1.7.28.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in 
[jar:file:/home/xjm/kafka/tools/build/dependant-libs-2.12.10/slf4j-log4j12-1.7.28.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in 
[jar:file:/home/xjm/kafka/connect/api/build/dependant-libs/slf4j-log4j12-1.7.28.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in 
[jar:file:/home/xjm/kafka/connect/transforms/build/dependant-libs/slf4j-log4j12-1.7.28.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in 
[jar:file:/home/xjm/kafka/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.28.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in 
[jar:file:/home/xjm/kafka/connect/file/build/dependant-libs/slf4j-log4j12-1.7.28.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in 
[jar:file:/home/xjm/kafka/connect/mirror/build/dependant-libs/slf4j-log4j12-1.7.28.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in 
[jar:file:/home/xjm/kafka/connect/mirror-client/build/dependant-libs/slf4j-log4j12-1.7.28.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in 
[jar:file:/home/xjm/kafka/connect/json/build/dependant-libs/slf4j-log4j12-1.7.28.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in 
[jar:file:/home/xjm/kafka/connect/basic-auth-extension/build/dependant-libs/slf4j-log4j12-1.7.28.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an 
explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

[2019-11-19 15:42:23,277] WARN The configuration 'admin.retries' was supplied 
but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig)

[2019-11-19 15:42:23,278] WARN The configuration 'admin.retry.backoff.ms' was 
supplied but isn't a known config. 
(org.apache.kafka.clients.consumer.ConsumerConfig)

[2019-11-19 15:42:24,278] ERROR stream-client 
[streams-wordcount-0f3cf88b-e2c4-4fb6-b7a3-9754fad5cd48] All stream threads 
have died. The instance will be in error state and should be closed. 
(org.apache.kafka.streams.KafkaStreams)

Exception in thread 
"streams-wordcount-0f3cf88b-e2c4-4fb6-b7a3-9754fad5cd48-StreamThread-1" 
java.lang.UnsatisfiedLinkError: /tmp/librocksdbjni1377754636857652484.so: 
/tmp/librocksdbjni1377754636857652484.so: 
cannot open shared object file: No such file or directory (Possible cause: 
can't load AMD 64-bit .so on a AARCH64-bit platform)

Analysis:

This issue is caused by rocksdbjni-5.18.3.jar, which does not ship aarch64 
native libraries. Replacing rocksdbjni-5.18.3.jar with rocksdbjni-6.3.6.jar from 
[https://mvnrepository.com/artifact/org.rocksdb/rocksdbjni/6.3.6] fixes the 
problem.
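
The mismatch above can be confirmed before touching any jars. As a minimal sketch (the class name and messages are illustrative, not part of Kafka), checking the JVM's reported architecture tells you whether the x86_64-only natives bundled in rocksdbjni 5.18.3 can load on the host:

```java
// Sketch: detect the architecture mismatch behind the
// UnsatisfiedLinkError above. rocksdbjni 5.18.3 bundles x86_64 Linux
// natives only, so on an aarch64 host the extracted .so cannot load.
public class ArchCheck {
    public static void main(String[] args) {
        String arch = System.getProperty("os.arch");
        System.out.println("JVM architecture: " + arch);
        if ("aarch64".equals(arch)) {
            System.out.println(
                "aarch64 host: rocksdbjni < 6.x ships no matching natives");
        }
    }
}
```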

Attached is the binary compatibility report of rocksdbjni.jar between 5.18.3 
and 6.3.6; the result is 81.8% compatible. Would it be possible to upgrade 
rocksdbjni to 6.3.6 upstream? If there are any tests that should be run, please 
point me to them. Thanks a lot.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.0-jdk8 #305

2019-11-22 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Update to Gradle 4.10.3


--
[...truncated 433.99 KB...]

kafka.controller.ReplicaStateMachineTest > 
testInvalidOfflineReplicaToNonexistentReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToReplicaDeletionIneligibleTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToReplicaDeletionIneligibleTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToReplicaDeletionStartedTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionSuccessfulTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionSuccessfulTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToReplicaDeletionStartedTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToOfflineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToOfflineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionStartedTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testOnlineReplicaToOnlineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testOnlineReplicaToOnlineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionIneligibleTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionIneligibleTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToOfflineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToOfflineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionStartedToReplicaDeletionSuccessfulTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionStartedToReplicaDeletionSuccessfulTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToOnlineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToOnlineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToNewReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToNewReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionStartedToReplicaDeletionIneligibleTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionStartedToReplicaDeletionIneligibleTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToOfflineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToOfflineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToNewReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToNewReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionIneligibleToOnlineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionIneligibleToOnlineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToNewReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToNewReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToReplicaDeletionSuccessfulTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToReplicaDeletionSuccessfulTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testNewReplicaToOnlineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testNewReplicaToOnlineReplicaTransition PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException PASSED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
STARTED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
PASSED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent STARTED

kafka.controller.ControllerEventManagerTest > testSuccessfulE

Build failed in Jenkins: kafka-trunk-jdk8 #4062

2019-11-22 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Controller should process events without rate metrics (#7732)


--
[...truncated 2.76 MB...]
org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConifgsOnRestoreConsumer STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConifgsOnRestoreConsumer PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfGlobalConsumerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfGlobalConsumerConfig PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptExactlyOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptExactlyOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
testGetGlobalConsumerConfigsWithGlobalConsumerOverridenPrefix STARTED

org.apache.kafka.streams.StreamsConfigTest > 
testGetGlobalConsumerConfigsWithGlobalConsumerOverridenPrefix PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfMaxInFlightRequestsGreaterThanFiveIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfMaxInFlightRequestsGreaterThanFiveIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetInternalLeaveGroupOnCloseConfigToFalseInConsumer STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetInternalLeaveGroupOnCloseConfigToFalseInConsumer PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfConsumerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfConsumerConfig PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldForwardCustomConfigsWithNoPrefixToAllClients STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldForwardCustomConfigsWithNoPrefixToAllClients PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfRestoreConsumerAutoCommitIsOverridden STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfRestoreConsumerAutoCommitIsOverridden PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSpecifyCorrectValueSerdeClassOnError STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSpecifyCorrectValueSerdeClassOnError PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowToSpecifyMaxInFlightRequestsPerConnectionAsStringIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowToSpecifyMaxInFlightRequestsPerConnectionAsStringIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfGlobalConsumerAutoCommitIsOverridden STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfGlobalConsumerAutoCommitIsOverridden PASSED

org.apache.kafka.streams.StreamsConfigTest > testInvalidSocketReceiveBufferSize 
STARTED

org.apache.kafka.streams.StreamsConfigTest > testInvalidSocketReceiveBufferSize 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetGroupInstanceIdConfigs 
STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetGroupInstanceIdConfigs 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerIsolationLevelIsOverriddenIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerIsolationLevelIsOverriddenIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldBeSupportNonPrefixedGlobalConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldBeSupportNonPrefixedGlobalConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportNonPrefixedAdminConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportNonPrefixedAdminConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigCommitIntervalMsIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigCommitIntervalMsIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldLogWarningWhenPartitionGrouperIsUsed STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldLogWarningWhenPartitionGrouperIsUsed PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportMultipleBootstrapServers STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportMultipleBootstrapServers PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSpecifyNoOptimizationWhenNotExplicitlyAddedToConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSpecifyNoOptimizationWhenNotExplicitlyAddedToConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowConfigExceptionWhenOptimizationConfigNotValueInRange STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowConfigExceptionWhenOptimizationConfigNotValueInRange PASSED


Jenkins build is back to normal : kafka-1.0-jdk8 #286

2019-11-22 Thread Apache Jenkins Server
See 



[DISCUSS] KIP-548 Add Option to enforce rack-aware custom partition reassignment execution

2019-11-22 Thread Satish Bellapu
Hi All,

[KIP-548] extends the capabilities of the "kafka-reassign-partitions" tool by 
adding a rack-aware verification option for use with custom or manually 
generated reassignment plans in the --execute scenario.

@sbellapu.
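
The verification the KIP describes could look roughly like the sketch below: given the broker-to-rack mapping, a candidate replica list passes only if its replicas are not confined to a single rack. The class name, method, and the exact acceptance rule are assumptions for illustration, not the KIP's final design.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class RackAwareCheck {
    // A replica list passes if its replicas span more than one rack
    // (single-replica partitions trivially pass).
    static boolean isRackAware(List<Integer> replicas,
                               Map<Integer, String> brokerRack) {
        Set<String> racks = new HashSet<>();
        for (int broker : replicas) {
            racks.add(brokerRack.get(broker));
        }
        return replicas.size() <= 1 || racks.size() > 1;
    }

    public static void main(String[] args) {
        Map<Integer, String> brokerRack = new HashMap<>();
        brokerRack.put(0, "rack-a");
        brokerRack.put(1, "rack-a");
        brokerRack.put(2, "rack-b");
        // Replicas on two racks pass; replicas on one rack fail.
        System.out.println(isRackAware(Arrays.asList(0, 2), brokerRack)); // true
        System.out.println(isRackAware(Arrays.asList(0, 1), brokerRack)); // false
    }
}
```

With --execute, a plan failing this check for any partition would be rejected before the reassignment is submitted.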