Build failed in Jenkins: kafka-trunk-jdk8 #4473

2020-04-27 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: document how to escape json parameters to ducktape tests (#8546)

[github] MINOR: Improve producer test BufferPoolTest#testCloseNotifyWaiters.


--
[...truncated 6.22 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

Build failed in Jenkins: kafka-2.5-jdk8 #101

2020-04-27 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-9839; Broker should accept control requests with newer broker


--
[...truncated 5.90 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 

[DISCUSS] KIP-601: Configurable socket connection timeout

2020-04-27 Thread Cheng Tan
Hi developers,


I’m proposing KIP-601 to support configurable socket connection timeout. 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-601%3A+Configurable+socket+connection+timeout
 


Currently, the initial socket connection timeout depends on the system setting 
tcp_syn_retries. The actual timeout value is 2 ^ (tcp_syn_retries + 1) - 1 
seconds. For the reasons below, we want to control the client-side socket 
timeout directly through client configuration. 
• The default value of tcp_syn_retries is 6, which means the default 
timeout is 127 seconds, too long in some scenarios. For example, when the 
user specifies a list of N bootstrap servers and no connection has been 
established between the client and the servers, the least-loaded-node 
provider will poll all the server nodes specified by the user. If M servers 
in the bootstrap-servers list are offline, the client may take (127 * M) 
seconds to connect to the cluster. In the worst case, when M = N - 1, the 
wait time can be several minutes (see the sketch after this list).
• Though we could set the default value of tcp_syn_retries to a smaller 
value, that would change system-level network behavior, which might cause 
other issues.
• Applications depending on KafkaAdminClient may want to reliably know and 
control the initial socket connection timeout, which helps them throw 
corresponding exceptions in their own layer.
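
For illustration, a minimal sketch of the arithmetic above; the class and 
helper names are hypothetical, not part of the KIP:

// Worst-case connect time when the OS-level TCP timeout applies.
public final class ConnectTimeoutMath {

    // Timeout in seconds for a given tcp_syn_retries: 2^(retries + 1) - 1.
    static long tcpConnectTimeoutSeconds(int tcpSynRetries) {
        return (1L << (tcpSynRetries + 1)) - 1;
    }

    public static void main(String[] args) {
        long perServer = tcpConnectTimeoutSeconds(6); // default: 127 seconds
        int offline = 3;                              // M offline bootstrap servers
        System.out.println(perServer + " s per dead server, up to "
                + perServer * offline + " s before reaching a live node");
    }
}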

Please let me know if you have any thoughts or suggestions. Thanks.


Best, - Cheng Tan



Build failed in Jenkins: kafka-trunk-jdk14 #27

2020-04-27 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Improve producer test BufferPoolTest#testCloseNotifyWaiters.


--
[...truncated 1.51 MB...]

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless PASSED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType STARTED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt16 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt16 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema PASSED

org.apache.kafka.connect.transforms.CastTest > castLogicalToString STARTED

org.apache.kafka.connect.transforms.CastTest > castLogicalToString PASSED

org.apache.kafka.connect.transforms.InsertFieldTest > 
insertConfiguredFieldsIntoTombstoneEventWithSchemaLeavesValueUnchanged STARTED

org.apache.kafka.connect.transforms.InsertFieldTest > 
insertConfiguredFieldsIntoTombstoneEventWithSchemaLeavesValueUnchanged PASSED

org.apache.kafka.connect.transforms.InsertFieldTest > 
schemalessInsertConfiguredFields STARTED

org.apache.kafka.connect.transforms.InsertFieldTest > 
schemalessInsertConfiguredFields PASSED

org.apache.kafka.connect.transforms.InsertFieldTest > 
insertConfiguredFieldsIntoTombstoneEventWithoutSchemaLeavesValueUnchanged 
STARTED

org.apache.kafka.connect.transforms.InsertFieldTest > 
insertConfiguredFieldsIntoTombstoneEventWithoutSchemaLeavesValueUnchanged PASSED

org.apache.kafka.connect.transforms.InsertFieldTest > topLevelStructRequired 
STARTED

org.apache.kafka.connect.transforms.InsertFieldTest > topLevelStructRequired 
PASSED

org.apache.kafka.connect.transforms.InsertFieldTest > 
insertKeyFieldsIntoTombstoneEvent STARTED

org.apache.kafka.connect.transforms.InsertFieldTest > 
insertKeyFieldsIntoTombstoneEvent PASSED

org.apache.kafka.connect.transforms.InsertFieldTest > 
insertIntoNullKeyLeavesRecordUnchanged STARTED

org.apache.kafka.connect.transforms.InsertFieldTest > 
insertIntoNullKeyLeavesRecordUnchanged PASSED

org.apache.kafka.connect.transforms.InsertFieldTest > 
copySchemaAndInsertConfiguredFields STARTED

org.apache.kafka.connect.transforms.InsertFieldTest > 
copySchemaAndInsertConfiguredFields PASSED

> Task :spotlessScalaCheck
> Task :streams:streams-scala:processResources NO-SOURCE
> Task 

[DISCUSS] KIP-602 - Change default value for client.dns.lookup

2020-04-27 Thread Badai Aqrandista
Hi everyone

I have opened this KIP to change the default value of client.dns.lookup to
"use_all_dns_ips".

https://cwiki.apache.org/confluence/display/KAFKA/KIP-602%3A+Change+default+value+for+client.dns.lookup
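
For context, a minimal sketch of what a client has to do today to opt in 
explicitly; the bootstrap address is a placeholder:

import java.util.Properties;

public class DnsLookupConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9092"); // placeholder address
        // KIP-602 proposes making this value the default; today each client
        // has to opt in explicitly.
        props.put("client.dns.lookup", "use_all_dns_ips");
    }
}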

Feedback appreciated.

PS: I'm new here, so please let me know if I've missed anything.

-- 
Thanks,
Badai


Re: KIP-599: Throttle Create Topic, Create Partition and Delete Topic Operations

2020-04-27 Thread Jun Rao
Hi, David,

Thanks for the reply. A few more comments.

1. I am actually not sure if a quota based on request rate is easier for
the users. For context, in KIP-124 we started with a request rate quota,
but ended up not choosing it. The main issues are (a) requests are not
equal: some are more expensive than others; and (b) users typically don't
know how expensive each type of request is. For example, a big part of
the controller cost is ZK writes. Creating a new topic with 1 partition
takes 4 ZK writes (1 for each segment
in /brokers/topics/[topic]/partitions/[partitionId]/state). Adding one
partition to an existing topic requires 2 ZK writes, and deleting a topic
with 1 partition requires 6 to 7 ZK writes. It's unlikely that a user knows
the exact cost associated with those implementation details, and if users
don't know the cost, it's not clear that they can set the rate properly.

2. I think that depends on the goal. To me, the common problem is that you
have many applications running on a shared Kafka cluster and one of the
applications abuses the broker by issuing too many requests. In this case,
a global quota will end up throttling every application. However, what we
really want in this case is to throttle only the application that causes
the problem. A user-level quota solves this problem more effectively. We
may still need some sort of global quota for when the total usage from all
applications exceeds the broker's resources, but that seems secondary
since it's uncommon for all applications' usage to go up at the same time.
We can also solve that problem by reducing the per-user quota for every
application, if there is a user-level quota.

3. I am not sure that I fully understand the difference in burst handling.
The current throttling logic works as follows. Each quota is measured over
a number of time windows. Suppose the quota is X/sec. If time passes and
the quota is not being used, we accumulate credit at the rate of X/sec. If
the quota is being used, we reduce that credit based on the usage. The
credit expires when the corresponding window rolls out. The maximum credit
that can be accumulated is X * number of windows * window size. So, in some
sense, the current logic also supports a burst and a way to cap it. Could
you explain the difference with the Token Bucket a bit more? Also, the
current quota system always executes the current request even if it's
being throttled; it just informs the client to back off for the throttle
time before sending another request.
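
For reference, a minimal token-bucket sketch; this is illustrative only, not 
the implementation proposed in the KIP. Tokens refill at rate R up to an 
explicit burst cap B, and a request is admitted only if enough tokens are 
available:

// Minimal token bucket: refill rate R tokens/sec, explicit burst cap B.
final class TokenBucket {
    private final double rate;   // R: refill rate in tokens per second
    private final double burst;  // B: maximum tokens that can accumulate
    private double tokens;
    private long lastRefillNanos;

    TokenBucket(double rate, double burst) {
        this.rate = rate;
        this.burst = burst;
        this.tokens = burst;                    // start full
        this.lastRefillNanos = System.nanoTime();
    }

    synchronized boolean tryAcquire(double n) {
        long now = System.nanoTime();
        tokens = Math.min(burst, tokens + rate * (now - lastRefillNanos) / 1e9);
        lastRefillNanos = now;
        if (tokens >= n) {   // a burst of up to B tokens can pass at once
            tokens -= n;
            return true;
        }
        return false;        // caller should throttle and retry later
    }
}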

Jun



On Mon, Apr 27, 2020 at 5:15 AM David Jacot  wrote:

> Hi Jun,
>
> Thank you for the feedback.
>
> 1. You are right. In the end, we do care about the percentage of time that
> an operation ties up the controller thread. I thought about this but I was
> not entirely convinced by it, for the following reasons:
>
> 1.1. While I do agree that setting a rate and a burst is a bit harder for
> the administrator of the cluster than allocating a percentage, I believe
> that a rate and a burst are much easier for the users of the cluster to
> understand.
>
> 1.2. Measuring the time that a request ties up the controller thread is not
> as straightforward as it sounds, because the controller reacts to ZK
> TopicChange and TopicDeletion events instead of handling requests directly.
> These events carry neither the client id nor the user information, so the
> best option would be to refactor the controller to accept requests instead
> of reacting to the events. This will become possible with KIP-590. It
> obviously has other side effects in the controller (e.g. batching).
>
> I leaned towards the current proposal mainly due to 1.1., as 1.2. can be (or
> will be) fixed. Does 1.1. sound like a reasonable trade-off to you?
>
> 2. It is not in the current proposal. I thought that a global quota would
> be
> enough to start with. We can definitely make it work like the other quotas.
>
> 3. The main difference is that the Token Bucket algorithm defines an
> explicit burst B while guaranteeing an average rate R, whereas our existing
> quota guarantees an average rate R as well but starts to throttle as soon
> as the rate goes above the defined quota.
>
> Creating and deleting topics is bursty by nature. Applications create or
> delete topics occasionally, usually by sending one request with multiple
> topics. The reasoning behind allowing a burst is to let such requests of a
> reasonable size pass without being throttled, whereas our current quota
> mechanism would reject topics as soon as the rate goes above the quota,
> requiring applications to send subsequent requests to create or delete all
> the topics.
>
> Best,
> David
>
>
> On Fri, Apr 24, 2020 at 9:03 PM Jun Rao  wrote:
>
> > Hi, David,
> >
> > Thanks for the KIP. A few quick comments.
> >
> > 1. About quota.partition.mutations.rate. I am not sure if it's very easy
> > for the user to set the quota as a rate. For example, each partition
> > 

Build failed in Jenkins: kafka-trunk-jdk11 #1396

2020-04-27 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Improve producer test BufferPoolTest#testCloseNotifyWaiters.


--
[...truncated 1.51 MB...]

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigEmpty STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigEmpty PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt16 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt16 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless PASSED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType STARTED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt16 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt16 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema PASSED

org.apache.kafka.connect.transforms.CastTest > castLogicalToString STARTED

org.apache.kafka.connect.transforms.CastTest > castLogicalToString PASSED

> Task :spotlessScalaCheck
> Task :streams:streams-scala:processResources NO-SOURCE
> Task :streams:streams-scala:processTestResources
> Task :streams:test-utils:processTestResources
> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task 

Build failed in Jenkins: kafka-trunk-jdk14 #26

2020-04-27 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: document how to escape json parameters to ducktape tests (#8546)


--
[...truncated 1.51 MB...]

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneSchemaless 
STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneSchemaless 
PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneWithSchema 
STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneWithSchema 
PASSED

> Task 

Build failed in Jenkins: kafka-trunk-jdk11 #1395

2020-04-27 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: document how to escape json parameters to ducktape tests (#8546)


--
[...truncated 1.52 MB...]

org.apache.kafka.connect.transforms.FlattenTest > testOptionalStruct PASSED

org.apache.kafka.connect.transforms.FlattenTest > testNestedStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaNameUpdate 
STARTED

org.apache.kafka.connect.transforms.FlattenTest > testNestedStruct PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaNameUpdate 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdateWithStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdateWithStruct PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testNullList STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfStruct STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testNullList PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testEmptyList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testEmptyList PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testValidList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testValidList PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfStruct PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdate STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
schemaNameAndVersionUpdate PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > updateSchemaOfNull 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > updateSchemaOfNull 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaVersionUpdate 
STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > schemaVersionUpdate 
PASSED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfNonStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfNonStruct PASSED

org.apache.kafka.connect.transforms.HoistFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.HoistFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.HoistFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.ValueToKeyTest > withSchema STARTED

org.apache.kafka.connect.transforms.HoistFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ValueToKeyTest > withSchema PASSED

org.apache.kafka.connect.transforms.ValueToKeyTest > nonExistingField STARTED

org.apache.kafka.connect.transforms.ValueToKeyTest > nonExistingField PASSED

org.apache.kafka.connect.transforms.ValueToKeyTest > schemaless STARTED

org.apache.kafka.connect.transforms.ValueToKeyTest > schemaless PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > doesntMatch STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > doesntMatch PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > identity STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > identity PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > addPrefix STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > addPrefix PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > addSuffix STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > addSuffix PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > slice STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > slice PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > staticReplacement STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > staticReplacement PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > schemaless PASSED

> Task :spotlessScalaCheck
> Task :streams:streams-scala:processResources NO-SOURCE
> Task :streams:streams-scala:processTestResources
> Task :streams:test-utils:processTestResources
> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task 

Re: [DISCUSS] KIP-595: A Raft Protocol for the Metadata Quorum

2020-04-27 Thread Jason Gustafson
As promised, here is a link to the current prototype:
https://github.com/confluentinc/kafka/tree/kafka-raft.

Thanks,
Jason

On Mon, Apr 20, 2020 at 10:53 AM Jason Gustafson  wrote:

> Hi Deng,
>
> Thanks for the question. I mentioned this in the rejected alternatives
> section. The current proposal is only for metadata, but I am definitely in
> favor of using Raft for partition replication in the long term as well.
> There are some interesting tradeoffs in terms of fault tolerance, latency,
> and batching compared with the current replication protocol. I consider
> this a good candidate for the next big architectural change once KIP-500
> nears completion.
>
> -Jason
>
> On Sun, Apr 19, 2020 at 7:16 PM deng ziming 
> wrote:
>
>> Big +1 for your initiative, and I have a question: are we implementing the
>> Raft protocol only to manage the metadata currently kept in ZooKeeper, or
>> will we also use it to replace the current logic for managing log replicas,
>> since the algorithm we use to manage log replicas is analogous to Raft?
>>
>>
>> On Fri, Apr 17, 2020 at 7:45 AM Jason Gustafson 
>> wrote:
>>
>> > Hi All,
>> >
>> > I'd like to start a discussion on KIP-595:
>> >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-595%3A+A+Raft+Protocol+for+the+Metadata+Quorum
>> > .
>> > This proposal specifies a Raft protocol to ultimately replace Zookeeper
>> as
>> > documented in KIP-500. Please take a look and share your thoughts.
>> >
>> > A few minor notes to set the stage a little bit:
>> >
>> > - This KIP does not specify the structure of the messages used to
>> represent
>> > metadata in Kafka, nor does it specify the internal API that will be
>> used
>> > by the controller. Expect these to come in later proposals. Here we are
>> > primarily concerned with the replication protocol and basic operational
>> > mechanics.
>> > - We expect many details to change as we get closer to integration with
>> > the controller. Any changes we make will be made either as amendments to
>> > this KIP or, in the case of larger changes, as new proposals.
>> > - We have a prototype implementation which I will put online within the
>> > next week which may help in understanding some details. It has diverged
>> a
>> > little bit from our proposal, so I am taking a little time to bring it
>> in
>> > line. I'll post an update to this thread when it is available for
>> review.
>> >
>> > Finally, I want to mention that this proposal was drafted by myself,
>> Boyang
>> > Chen, and Guozhang Wang.
>> >
>> > Thanks,
>> > Jason
>> >
>>
>


Build failed in Jenkins: kafka-trunk-jdk11 #1394

2020-04-27 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9885; Evict last members of a group when the maximum allowed is


--
[...truncated 1.51 MB...]

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullValueToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString PASSED


Build failed in Jenkins: kafka-trunk-jdk14 #25

2020-04-27 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9885; Evict last members of a group when the maximum allowed is


--
[...truncated 1.51 MB...]

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneSchemaless 
STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneSchemaless 
PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneWithSchema 
STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneWithSchema 
PASSED

> Task 

Jenkins build is back to normal : kafka-trunk-jdk8 #4471

2020-04-27 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-9885) Evict last members of a group when the maximum allowed is reached

2020-04-27 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9885.

Fix Version/s: 2.6.0
   Resolution: Fixed

> Evict last members of a group when the maximum allowed is reached
> -
>
> Key: KAFKA-9885
> URL: https://issues.apache.org/jira/browse/KAFKA-9885
> Project: Kafka
>  Issue Type: Bug
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 2.6.0
>
>
> While analysing https://issues.apache.org/jira/browse/KAFKA-7965, we found 
> that multiple members of a group can be evicted from a group if the leader of 
> the consumer offsets partition changes before the group is persisted. This 
> happens because the current eviction logic always evicts the first member 
> that rejoins the group.
> We would like to change the eviction logic so that the last members to rejoin 
> the group are kicked out instead.
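>
> A minimal sketch of the intended ordering; the names are illustrative, not
> the actual GroupCoordinator code. Keep the first maxSize members in rejoin
> order and evict the rest:
>
> import java.util.List;
>
> final class EvictionSketch {
>     // rejoinOrder: member ids in the order they rejoined the group.
>     static List<String> membersToEvict(List<String> rejoinOrder, int maxSize) {
>         if (rejoinOrder.size() <= maxSize) return List.of();
>         // Evict the last members to rejoin, keeping the first maxSize.
>         return rejoinOrder.subList(maxSize, rejoinOrder.size());
>     }
> }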
> Here is an example of what happens when the leader changes:
> {noformat}
> // Group is loaded in GroupCoordinator 0
> // A rebalance is triggered because the group is over capacity
> [2020-04-02 11:14:33,393] INFO [GroupMetadataManager brokerId=0] Scheduling 
> loading of offsets and group metadata from __consumer_offsets-0 
> (kafka.coordinator.group.GroupMetadataManager:66)
> [2020-04-02 11:14:33,406] INFO [Consumer clientId=ConsumerTestConsumer, 
> groupId=group-max-size-test] Discovered group coordinator localhost:40071 
> (id: 2147483647 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:794)
> [2020-04-02 11:14:33,409] INFO Static member 
> MemberMetadata(memberId=ConsumerTestConsumer-ceaab707-69af-4a65-8275-cb7db7fb66b3,
>  groupInstanceId=Some(null), clientId=ConsumerTestConsumer, 
> clientHost=/127.0.0.1, sessionTimeoutMs=1, rebalanceTimeoutMs=6, 
> supportedProtocols=List(range), ).groupInstanceId of group 
> group-max-size-test loaded with member id 
> ConsumerTestConsumer-ceaab707-69af-4a65-8275-cb7db7fb66b3 at generation 1. 
> (kafka.coordinator.group.GroupMetadata$:126)
> [2020-04-02 11:14:33,410] INFO Static member 
> MemberMetadata(memberId=ConsumerTestConsumer-07077ca2-30e9-45cd-b363-30672281bacb,
>  groupInstanceId=Some(null), clientId=ConsumerTestConsumer, 
> clientHost=/127.0.0.1, sessionTimeoutMs=1, rebalanceTimeoutMs=6, 
> supportedProtocols=List(range), ).groupInstanceId of group 
> group-max-size-test loaded with member id 
> ConsumerTestConsumer-07077ca2-30e9-45cd-b363-30672281bacb at generation 1. 
> (kafka.coordinator.group.GroupMetadata$:126)
> [2020-04-02 11:14:33,412] INFO Static member 
> MemberMetadata(memberId=ConsumerTestConsumer-5d359e65-1f11-43ce-874e-fddf55c0b49d,
>  groupInstanceId=Some(null), clientId=ConsumerTestConsumer, 
> clientHost=/127.0.0.1, sessionTimeoutMs=1, rebalanceTimeoutMs=6, 
> supportedProtocols=List(range), ).groupInstanceId of group 
> group-max-size-test loaded with member id 
> ConsumerTestConsumer-5d359e65-1f11-43ce-874e-fddf55c0b49d at generation 1. 
> (kafka.coordinator.group.GroupMetadata$:126)
> [2020-04-02 11:14:33,413] INFO [GroupCoordinator 0]: Loading group metadata 
> for group-max-size-test with generation 1 
> (kafka.coordinator.group.GroupCoordinator:66)
> [2020-04-02 11:14:33,413] INFO [GroupCoordinator 0]: Preparing to rebalance 
> group group-max-size-test in state PreparingRebalance with old generation 1 
> (__consumer_offsets-0) (reason: Freshly-loaded group is over capacity 
> (GroupConfig(10,180,2,0).groupMaxSize). Rebalacing in order to give a 
> chance for consumers to commit offsets) 
> (kafka.coordinator.group.GroupCoordinator:66)
> [2020-04-02 11:14:33,431] INFO [GroupMetadataManager brokerId=0] Finished 
> loading offsets and group metadata from __consumer_offsets-0 in 28 
> milliseconds, of which 0 milliseconds was spent in the scheduler. 
> (kafka.coordinator.group.GroupMetadataManager:66)
> // A first consumer is kicked out of the group while trying to re-join
> [2020-04-02 11:14:33,449] ERROR [Consumer clientId=ConsumerTestConsumer, 
> groupId=group-max-size-test] Attempt to join group failed due to fatal error: 
> The consumer group has reached its max size. 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:627)
> [2020-04-02 11:14:33,451] ERROR [daemon-consumer-assignment-2]: Error due to 
> (kafka.api.AbstractConsumerTest$ConsumerAssignmentPoller:76)
> org.apache.kafka.common.errors.GroupMaxSizeReachedException: Consumer group 
> group-max-size-test already has the configured maximum number of members.
> [2020-04-02 11:14:33,451] INFO [daemon-consumer-assignment-2]: Stopped 
> (kafka.api.AbstractConsumerTest$ConsumerAssignmentPoller:66)
> // Before the rebalance is completed, a preferred replica leader election 
> kicks 

Build failed in Jenkins: kafka-trunk-jdk11 #1393

2020-04-27 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9866: Avoid election for topics where preferred leader is not in


--
[...truncated 1.51 MB...]

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigEmpty STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigEmpty PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt16 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt16 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless PASSED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType STARTED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt16 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt16 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema PASSED

org.apache.kafka.connect.transforms.CastTest > castLogicalToString STARTED

org.apache.kafka.connect.transforms.CastTest > castLogicalToString PASSED

> Task :spotlessScalaCheck
> Task :streams:streams-scala:processResources NO-SOURCE
> Task :streams:streams-scala:processTestResources
> Task :streams:test-utils:processTestResources
> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task 

Build failed in Jenkins: kafka-trunk-jdk14 #24

2020-04-27 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9866: Avoid election for topics where preferred leader is not in


--
[...truncated 1.51 MB...]

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneSchemaless 
STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneSchemaless 
PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneWithSchema 
STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneWithSchema 
PASSED

> Task 

[jira] [Resolved] (KAFKA-9866) Do not attempt to elect preferred leader replicas which are outside ISR

2020-04-27 Thread Jun Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-9866.

Fix Version/s: 2.6.0
   Resolution: Fixed

Merged the PR to trunk.

> Do not attempt to elect preferred leader replicas which are outside ISR
> ---
>
> Key: KAFKA-9866
> URL: https://issues.apache.org/jira/browse/KAFKA-9866
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Stanislav Kozlovski
>Assignee: Wang Ge
>Priority: Minor
> Fix For: 2.6.0
>
>
> The controller automatically triggers a preferred leader election every N 
> minutes. It tries to elect the preferred leader of every partition without 
> performing basic checks, such as whether that leader is in sync.
> This leads to a multitude of errors which cause confusion:
> {code:java}
> April 14th 2020, 17:01:11.015 [Controller id=0] Partition TOPIC-9 failed to 
> complete preferred replica leader election to 1. Leader is still 0{code}
> {code:java}
> April 14th 2020, 17:01:11.002 [Controller id=0] Error completing replica 
> leader election (PREFERRED) for partition TOPIC-9
> kafka.common.StateChangeFailedException: Failed to elect leader for partition 
> TOPIC-9 under strategy PreferredReplicaPartitionLeaderElectionStrategy {code}
> It would be better if the controller filtered out such elections, did not 
> attempt them at all, and logged a single aggregate INFO-level message instead.
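
A minimal sketch of the ISR filter being proposed, under assumed names (the real controller logic lives in Scala inside KafkaController; the types below are illustrative stand-ins, not Kafka APIs):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustrative only: PartitionState is a hypothetical stand-in for the
// controller's internal partition metadata, not a real Kafka class.
final class PreferredLeaderElectionFilter {

    static final class PartitionState {
        final String topicPartition;
        final int preferredReplica;   // first replica in the assignment
        final List<Integer> isr;      // current in-sync replica set

        PartitionState(String tp, int preferredReplica, List<Integer> isr) {
            this.topicPartition = tp;
            this.preferredReplica = preferredReplica;
            this.isr = isr;
        }
    }

    // Keep only partitions whose preferred replica is in the ISR; report the
    // rest in one aggregate INFO-style line instead of per-partition errors.
    static List<PartitionState> electable(List<PartitionState> candidates) {
        List<PartitionState> eligible = new ArrayList<>();
        List<String> skipped = new ArrayList<>();
        for (PartitionState p : candidates) {
            if (p.isr.contains(p.preferredReplica)) {
                eligible.add(p);
            } else {
                skipped.add(p.topicPartition);
            }
        }
        if (!skipped.isEmpty()) {
            System.out.printf("Skipping preferred leader election for %d partition(s) " +
                    "whose preferred replica is not in the ISR: %s%n",
                    skipped.size(), skipped);
        }
        return eligible;
    }
}
{code}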



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1392

2020-04-27 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9839; Broker should accept control requests with newer broker


--
[...truncated 1.51 MB...]
org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneSchemaless 
STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneSchemaless 
PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneWithSchema 
STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneWithSchema 
PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.MaskFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.MaskFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testNullList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testNullList PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testEmptyList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testEmptyList PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testValidList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testValidList PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > 
nonExistentFieldWithSchemaShouldFail STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > 
nonExistentFieldWithSchemaShouldFail PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > 
nonExistentFieldSchemalessShouldReturnNull STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > 
nonExistentFieldSchemalessShouldReturnNull PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless PASSED

> Task :spotlessScalaCheck
> Task :streams:streams-scala:processResources NO-SOURCE
> Task :streams:streams-scala:processTestResources
> Task :streams:test-utils:processTestResources
> Task 

[jira] [Resolved] (KAFKA-9839) IllegalStateException on metadata update when broker learns about its new epoch after the controller

2020-04-27 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9839.

Fix Version/s: 2.5.1
   Resolution: Fixed

> IllegalStateException on metadata update when broker learns about its new 
> epoch after the controller
> 
>
> Key: KAFKA-9839
> URL: https://issues.apache.org/jira/browse/KAFKA-9839
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, core
>Affects Versions: 2.3.1
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>Priority: Critical
> Fix For: 2.5.1
>
>
> Broker throws "java.lang.IllegalStateException: Epoch XXX larger than current 
> broker epoch YYY" on UPDATE_METADATA when the controller learns about the new 
> broker epoch and sends UPDATE_METADATA before KafkaZkClient.registerBroker 
> completes (i.e., before the broker learns about its own new epoch).
> Here is the scenario we observed in more detail:
> 1. ZK session expires on broker 1
> 2. Broker 1 establishes new session to ZK and creates znode
> 3. Controller learns about broker 1 and assigns epoch
> 4. Broker 1 receives UPDATE_METADATA from controller, but it does not know 
> about its new epoch yet, so we get an exception:
> ERROR [KafkaApi-3] Error when handling request: clientId=1, correlationId=0, 
> api=UPDATE_METADATA, body={
> .
> java.lang.IllegalStateException: Epoch XXX larger than current broker epoch 
> YYY at kafka.server.KafkaApis.isBrokerEpochStale(KafkaApis.scala:2725) at 
> kafka.server.KafkaApis.handleUpdateMetadataRequest(KafkaApis.scala:320) at 
> kafka.server.KafkaApis.handle(KafkaApis.scala:139) at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69) at 
> java.lang.Thread.run(Thread.java:748)
> 5. KafkaZkClient.registerBroker completes on broker 1: "INFO Stat of the 
> created znode at /brokers/ids/1"
> The result is that the broker has stale metadata for some time.
> Possible solutions:
> 1. Broker returns a more specific error and the controller retries UPDATE_METADATA
> 2. Broker accepts UPDATE_METADATA with larger broker epoch.
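
A hedged sketch of solution 2, which the commit message in the build reports above ("Broker should accept control requests with newer broker epoch") suggests was the direction taken; the method name mirrors the stack trace, but this is an illustration, not the actual Scala code in KafkaApis:

{code:java}
// Illustrative stale-epoch check: only reject a control request when its
// broker epoch is provably older than what the broker already knows. An
// epoch the broker has not yet learned about (the race described above)
// is accepted instead of triggering IllegalStateException.
final class BrokerEpochCheck {

    static boolean isBrokerEpochStale(long currentBrokerEpoch, long requestBrokerEpoch) {
        return requestBrokerEpoch < currentBrokerEpoch;
    }

    public static void main(String[] args) {
        long current = 100L;
        System.out.println(isBrokerEpochStale(current, 99L));  // true: reject stale request
        System.out.println(isBrokerEpochStale(current, 100L)); // false: accept
        System.out.println(isBrokerEpochStale(current, 101L)); // false: accept newer epoch
    }
}
{code}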



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk14 #23

2020-04-27 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9839; Broker should accept control requests with newer broker


--
[...truncated 1.51 MB...]
org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneSchemaless 
STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneSchemaless 
PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneWithSchema 
STARTED


Re: [VOTE] KIP-584: Versioning scheme for features

2020-04-27 Thread Colin McCabe
Thanks, Kowshik.  +1 (binding)

best,
Colin

On Sat, Apr 25, 2020, at 13:20, Kowshik Prakasam wrote:
> Hi Colin,
> 
> Thanks for the explanation! I agree with you, and I have updated the 
> KIP.
> Here is a link to relevant section:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Controller:ZKnodebootstrapwithdefaultvalues
> 
> 
> Cheers,
> Kowshik
> 
> On Fri, Apr 24, 2020 at 8:50 PM Colin McCabe  wrote:
> 
> > On Fri, Apr 24, 2020, at 00:01, Kowshik Prakasam wrote:
> > > (Kowshik): Great point! However for case #1, I'm not sure why we need to
> > > create a '/features' ZK node with disabled features. Instead, do you see
> > > any drawback if we just do not create it? i.e. if IBP is less than 2.6,
> > the
> > > controller treats the case as though the versioning system is completely
> > > disabled, and would not create a non-existing '/features' node.
> >
> > Hi Kowshik,
> >
> > When the IBP is less than 2.6, but the software has been upgraded to a
> > state where it supports this KIP, that
> >  means the user is upgrading from an earlier version of the software.  In
> > this case, we want to start with all the features disabled and allow the
> > user to enable them when they are ready.
> >
> > Enabling all the possible features immediately after an upgrade could be
> > harmful to the cluster.  On the other hand, for a new cluster, we do want
> > to enable all the possible features immediately. I was proposing this as a
> > way to distinguish the two cases (since the new cluster will never be
> > started with an old IBP).
> >
> > > Colin McCabe wrote:
> > > > And now, something a little bit bigger (sorry).  For finalized
> > features,
> > > > why do we need both min_version_level and max_version_level?  Assuming
> > that
> > > > we want all the brokers to be on the same feature version level, we
> > really only care
> > > > about three numbers for each feature, right?  The minimum supported
> > version
> > > > level, the maximum supported version level, and the current active
> > version level.
> > >
> > > > We don't actually want different brokers to be on different versions of
> > > > the same feature, right?  So we can just have one number for current
> > > > version level, rather than two.  At least that's what I was thinking
> > -- let
> > > > me know if I missed something.
> > >
> > > (Kowshik): It is my understanding that the "current active version level"
> > > you have mentioned is the "max_version_level". But we still
> > > maintain/publish both min and max version levels because the detail
> > about
> > > the min level is useful to external clients. This is described below.
> > >
> > > For any feature F, think of the closed range: [min_version_level,
> > > max_version_level] as the range of finalized versions that is guaranteed
> > to
> > > be supported by all brokers in the cluster.
> > >  - "max_version_level" is the finalized highest common version among all
> > > brokers,
> > >  - "min_version_level" is the finalized lowest common version among all
> > > brokers.
> > >
> > > Next, think of "client" here as the "user of the new feature versions
> > > system". Imagine that such a client learns about finalized feature
> > > versions, and exercises some logic based on the version. These clients
> > can
> > > be of 2 types:
> > > 1. Some part of the broker code itself could behave like a client trying
> > to
> > > use some feature that's "internal" to the broker cluster. Such a client
> > > would learn the latest finalized features via ZK.
> > > 2. An external system (ex: Streams) could behave like a client, trying to
> > > use some "external" facing feature. Such a client would learn latest
> > > finalized features via ApiVersionsRequest. Ex: group_coordinator feature
> > > described in the KIP.
> > >
> > > Next, imagine that for F, the max_version_level is successfully bumped by
> > > +1 (via Controller API). Now it is guaranteed that all brokers (i.e.
> > > internal clients) understand max_version_level + 1. However, it is still
> > > not guaranteed that all external clients have support for (or have
> > > activated) the logic for the newer version. Why? Because this is
> > > subjective, as explained next:
> > >
> > > 1. On one hand, imagine F as an internal feature only relevant to
> > Brokers.
> > > The binary for the internal client logic is controlled by Broker cluster
> > > deployments. When shipping a new Broker release, we wouldn't bump max
> > > "supported" feature version for F by 1, unless we have introduced some
> > new
> > > logic (with a potentially breaking change) in the Broker. Furthermore,
> > such
> > > feature logic in the broker should/will not be implemented in a way that
> > it
> > > would activate logic for an older feature version after it has migrated
> > to
> > > using the logic for a newer feature version (because this could break the
> > > cluster!). For these cases, 
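
The digest cuts the thread off here. To make the [min_version_level, max_version_level] contract described above concrete, here is a hedged sketch of the range check a client of the feature versioning system might perform; the class and method names are illustrative, not the KIP's actual API:

{code:java}
// Illustrative model of one finalized feature's version range, as discussed above.
final class FinalizedFeature {
    final String name;
    final short minVersionLevel; // lowest version guaranteed by every broker
    final short maxVersionLevel; // highest version finalized by every broker

    FinalizedFeature(String name, short minVersionLevel, short maxVersionLevel) {
        this.name = name;
        this.minVersionLevel = minVersionLevel;
        this.maxVersionLevel = maxVersionLevel;
    }

    // A client may activate logic for version v only if v falls inside the
    // closed range guaranteed to be supported by all brokers in the cluster.
    boolean isVersionUsable(short v) {
        return v >= minVersionLevel && v <= maxVersionLevel;
    }

    public static void main(String[] args) {
        FinalizedFeature gc = new FinalizedFeature("group_coordinator", (short) 1, (short) 3);
        System.out.println(gc.isVersionUsable((short) 2)); // true
        System.out.println(gc.isVersionUsable((short) 4)); // false: not finalized cluster-wide
    }
}
{code}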

[VOTE] KIP-586: Deprecate commit records without record metadata

2020-04-27 Thread Mario Molina
Hi all,

I'd like to start a vote for KIP-586. You can find the link for this KIP
here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-586%3A+Deprecate+commit+records+without+record+metadata

Thanks!
Mario


Build failed in Jenkins: kafka-trunk-jdk14 #22

2020-04-27 Thread Apache Jenkins Server
See 


Changes:

[manikumar] KAKFA-9612: Add an option to kafka-configs.sh to add configs from a 
prop


--
[...truncated 1.51 MB...]

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToDate PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testNullList STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessNullValueToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testNullList PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testEmptyList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testEmptyList PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testValidList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testValidList PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaNullFieldToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneSchemaless 
STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneSchemaless 
PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneWithSchema 
STARTED

org.apache.kafka.connect.transforms.ReplaceFieldTest > tombstoneWithSchema 
PASSED

> Task :spotlessScalaCheck
> Task :streams:streams-scala:processResources NO-SOURCE
> Task 

Build failed in Jenkins: kafka-trunk-jdk11 #1391

2020-04-27 Thread Apache Jenkins Server
See 


Changes:

[manikumar] KAKFA-9612: Add an option to kafka-configs.sh to add configs from a 
prop


--
[...truncated 1.51 MB...]

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigEmpty STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigEmpty PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt16 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt16 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless PASSED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType STARTED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt16 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt16 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema PASSED

org.apache.kafka.connect.transforms.CastTest > castLogicalToString STARTED

org.apache.kafka.connect.transforms.CastTest > castLogicalToString PASSED

> Task :spotlessScalaCheck
> Task :streams:streams-scala:processResources NO-SOURCE
> Task :streams:streams-scala:processTestResources
> Task :streams:test-utils:processTestResources
> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> 

[jira] [Created] (KAFKA-9923) Join window store duplicates can be compacted in changelog

2020-04-27 Thread Sophie Blee-Goldman (Jira)
Sophie Blee-Goldman created KAFKA-9923:
--

 Summary: Join window store duplicates can be compacted in 
changelog 
 Key: KAFKA-9923
 URL: https://issues.apache.org/jira/browse/KAFKA-9923
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Sophie Blee-Goldman


Stream-stream joins use the regular `WindowStore` implementation but with 
`retainDuplicates` set to true. To allow for duplicates while using the same 
unique-key underlying stores, we wrap the key with an incrementing sequence 
number before inserting it.

This wrapping occurs at the innermost layer of the store hierarchy, which means 
the duplicates must first pass through the changelogging layer. At this point 
the keys are still identical. So, we end up sending the records to the 
changelog without distinct keys and therefore may lose the older of the 
duplicates during compaction.
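
A hedged sketch of the wrapping described above (illustrative; the actual logic lives in the window store internals of Kafka Streams). The point is that the sequence number is appended below the changelogging layer, so two inserts of the same raw key still reach the changelog topic with byte-identical keys, and compaction may then drop the older duplicate:

{code:java}
import java.nio.ByteBuffer;

// Illustrative duplicate-retaining key wrapper: appends a 4-byte incrementing
// sequence number so duplicates become distinct keys in the innermost store.
final class DuplicateKeyWrapper {
    private int seqnum = 0;

    byte[] wrap(byte[] rawKey) {
        return ByteBuffer.allocate(rawKey.length + Integer.BYTES)
                .put(rawKey)
                .putInt(seqnum++)
                .array();
    }
}
{code}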



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9612) CLI Dynamic Configuration with file input

2020-04-27 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-9612.
--
Fix Version/s: 2.6.0
 Reviewer: Manikumar
   Resolution: Fixed

> CLI Dynamic Configuration with file input
> -
>
> Key: KAFKA-9612
> URL: https://issues.apache.org/jira/browse/KAFKA-9612
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Aneel Nazareth
>Assignee: Aneel Nazareth
>Priority: Minor
> Fix For: 2.6.0
>
>
> Add --add-config-file options to kafka-configs.sh
> More details here: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-592: MirrorMaker should replicate topics from earliest

2020-04-27 Thread Ryanne Dolan
Conversely, we could consider making MM2 use "latest" in "legacy mode", and
leave MM1 as it is? (Just thinking out loud.)

Ryanne

On Mon, Apr 27, 2020 at 12:39 PM Jeff Widman  wrote:

> Good questions:
>
>
> *I agree that `auto.offset.reset="earliest"` would be a better default.
> However, I am a little worried about backward compatibility. *
>
> Keep in mind that existing mirrormaker instances will *not* be affected for
> topics they are currently consuming because they will already have saved
> offsets. This will only affect mirrormakers that start consuming new
> topics, for which they don't have a saved offset. In those cases, they will
> stop seeing data loss when they first start consuming. My guess is the
> majority of those new topics are going to be newly-created topics anyway,
> so most of the time starting from the earliest simply prevents skipping the
> first few seconds/minutes of data written to the topic.
>
> *What I am also wondering though is, does this only affect MirrorMaker or
> also MirrorMaker 2? *
>
> I checked and MM2 already sets `auto.offset.reset = 'earliest'`
> <
> https://github.com/apache/kafka/blob/d63e0181bb7b9b4f5ed088abc00d7b32aeb0/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorConnectorConfig.java#L233
> >
> .
>
> *Also, is it worth changing MirrorMaker now that **MirrorMaker 2 is
> available?*
>
> Given that it's 1-line of code, doesn't affect existing instances, and
> prevents data loss on new regex subscriptions, I think it's worth
> setting... I basically view it as a bugfix rather than a feature change.
>
> I realize MM1 is deprecated, but there's still a lot of old mirrormakers
> running, so flipping this now will ease the future transition to MM2
> because it brings the behavior of MM1 in line with MM2.
>
> Thoughts?
>
>
>
> On Sat, Apr 11, 2020 at 11:59 AM Matthias J. Sax  wrote:
>
> > Jeff,
> >
> > thanks for the KIP. I agree that `auto.offset.reset="earliest"` would be
> > a better default. However, I am a little worried about backward
> > compatibility. And even if the current default is not ideal, users can
> > still change it.
> >
> > What I am also wondering though is, does this only affect MirrorMaker
> > or also MirrorMaker 2? Also, is it worth changing MirrorMaker now that
> > MirrorMaker 2 is available?
> >
> >
> > -Matthias
> >
> >
> > On 4/10/20 9:56 PM, Jeff Widman wrote:
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-592%3A+MirrorMaker+should+replicate+topics+from+earliest
> > >
> > > It's a relatively minor change, only one line of code. :-D
> > >
> > >
> > >
> >
> >
>
> --
>
> *Jeff Widman*
> jeffwidman.com  | 740-WIDMAN-J (943-6265)
> <><
>


Re: [DISCUSS] KIP-592: MirrorMaker should replicate topics from earliest

2020-04-27 Thread Jeff Widman
Good questions:


*I agree that `auto.offset.reset="earliest"` would be a better default.
However, I am a little worried about backward compatibility. *

Keep in mind that existing mirrormaker instances will *not* be affected for
topics they are currently consuming because they will already have saved
offsets. This will only affect mirrormakers that start consuming new
topics, for which they don't have a saved offset. In those cases, they will
stop seeing data loss when they first start consuming. My guess is the
majority of those new topics are going to be newly-created topics anyway,
so most of the time starting from the earliest simply prevents skipping the
first few seconds/minutes of data written to the topic.

*What I am also wondering though is, does this only affect MirrorMaker or
also MirrorMaker 2? *

I checked and MM2 already sets `auto.offset.reset = 'earliest'`

.
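
For reference, the whole MM1 proposal amounts to defaulting one consumer property; a minimal sketch of the consumer configuration with the KIP applied (broker and group values below are placeholders, and this is not the actual MirrorMaker code):

{code:java}
import java.util.Properties;

// Minimal sketch: the single consumer property KIP-592 proposes to default
// for MirrorMaker 1, matching what MM2's MirrorConnectorConfig already does.
final class MirrorMakerConsumerDefaults {
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "source-cluster:9092"); // placeholder
        props.put("group.id", "mirror-maker-group");           // placeholder
        props.put("auto.offset.reset", "earliest");            // the one-line change
        return props;
    }
}
{code}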

*Also, is it worth changing MirrorMaker now that **MirrorMaker 2 is
available?*

Given that it's 1-line of code, doesn't affect existing instances, and
prevents data loss on new regex subscriptions, I think it's worth
setting... I basically view it as a bugfix rather than a feature change.

I realize MM1 is deprecated, but there's still a lot of old mirrormakers
running, so flipping this now will ease the future transition to MM2
because it brings the behavior of MM1 in line with MM2.

Thoughts?



On Sat, Apr 11, 2020 at 11:59 AM Matthias J. Sax  wrote:

> Jeff,
>
> thanks for the KIP. I agree that `auto.offset.reset="earliest"` would be
> a better default. However, I am a little worried about backward
> compatibility. And even if the current default is not ideal, users can
> still change it.
>
> What I am also wondering though is, does this only affect MirrorMaker
> or also MirrorMaker 2? Also, is it worth changing MirrorMaker now that
> MirrorMaker 2 is available?
>
>
> -Matthias
>
>
> On 4/10/20 9:56 PM, Jeff Widman wrote:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-592%3A+MirrorMaker+should+replicate+topics+from+earliest
> >
> > It's a relatively minor change, only one line of code. :-D
> >
> >
> >
>
>

-- 

*Jeff Widman*
jeffwidman.com  | 740-WIDMAN-J (943-6265)
<><


[jira] [Reopened] (KAFKA-9176) Flaky test failure: OptimizedKTableIntegrationTest.shouldApplyUpdatesToStandbyStore

2020-04-27 Thread Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sophie Blee-Goldman reopened KAFKA-9176:


Saw this fail again on a PR with the following stacktrace:

org.apache.kafka.streams.errors.InvalidStateStoreException: The state store, 
source-table, may have migrated to another instance. at 
org.apache.kafka.streams.state.internals.QueryableStoreProvider.getStore(QueryableStoreProvider.java:64)
 at org.apache.kafka.streams.KafkaStreams.store(KafkaStreams.java:1183) at 
org.apache.kafka.streams.integration.OptimizedKTableIntegrationTest.shouldApplyUpdatesToStandbyStore(OptimizedKTableIntegrationTest.java:126)

> Flaky test failure:  
> OptimizedKTableIntegrationTest.shouldApplyUpdatesToStandbyStore
> 
>
> Key: KAFKA-9176
> URL: https://issues.apache.org/jira/browse/KAFKA-9176
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.4.0
>Reporter: Manikumar
>Assignee: Guozhang Wang
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.6.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.4-jdk8/detail/kafka-2.4-jdk8/65/tests]
> Error:
> org.apache.kafka.streams.errors.InvalidStateStoreException: Cannot get state 
> store source-table because the stream thread is PARTITIONS_ASSIGNED, not 
> RUNNING
> Stacktrace:
> org.apache.kafka.streams.errors.InvalidStateStoreException: Cannot get state 
> store source-table because the stream thread is PARTITIONS_ASSIGNED, not 
> RUNNING
>  at 
> org.apache.kafka.streams.state.internals.StreamThreadStateStoreProvider.stores(StreamThreadStateStoreProvider.java:51)
>  at 
> org.apache.kafka.streams.state.internals.QueryableStoreProvider.getStore(QueryableStoreProvider.java:59)
>  at org.apache.kafka.streams.KafkaStreams.store(KafkaStreams.java:1129)
>  at 
> org.apache.kafka.streams.integration.OptimizedKTableIntegrationTest.shouldApplyUpdatesToStandbyStore(OptimizedKTableIntegrationTest.java:157)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
>  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
>  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
>  at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
>  at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>  at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>  at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>  at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>  at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>  at 
> 
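
The stack trace is truncated above. One common way such integration tests are hardened against this rebalance race is to retry the store lookup until the instance leaves PARTITIONS_ASSIGNED; a hedged sketch using the public Streams API (not the actual test fix):

{code:java}
import java.time.Duration;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

// Hedged sketch: retry the store lookup instead of failing on the
// InvalidStateStoreException thrown while a rebalance is in progress.
final class StoreLookupRetry {
    static ReadOnlyKeyValueStore<String, Long> waitForStore(
            KafkaStreams streams, String storeName, Duration timeout)
            throws InterruptedException {
        long deadlineMs = System.currentTimeMillis() + timeout.toMillis();
        while (true) {
            try {
                return streams.store(StoreQueryParameters.fromNameAndType(
                        storeName, QueryableStoreTypes.<String, Long>keyValueStore()));
            } catch (InvalidStateStoreException e) {
                if (System.currentTimeMillis() > deadlineMs) {
                    throw e; // still not queryable after the timeout
                }
                Thread.sleep(100); // store may have migrated or thread still rebalancing
            }
        }
    }
}
{code}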

[jira] [Reopened] (KAFKA-9798) Flaky test: org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldAllowConcurrentAccesses

2020-04-27 Thread Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sophie Blee-Goldman reopened KAFKA-9798:

  Assignee: (was: Guozhang Wang)

> Flaky test: 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldAllowConcurrentAccesses
> 
>
> Key: KAFKA-9798
> URL: https://issues.apache.org/jira/browse/KAFKA-9798
> Project: Kafka
>  Issue Type: Test
>  Components: streams, unit tests
>Reporter: Bill Bejeck
>Priority: Major
>  Labels: flaky-test, test
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [Vote] KIP-588: Allow producers to recover gracefully from transaction timeouts

2020-04-27 Thread Boyang Chen
Hey all,

we want to piggyback one more protocol change onto this KIP: the
InitProducerId request. Previously we overlooked the case where InitPid
returns an error code of INVALID_PRODUCER_EPOCH instead of
PRODUCER_FENCED. To be consistent, we will bump this protocol version and
always return PRODUCER_FENCED for new request versions.

Let me know if you have any questions, the KIP is already updated.

Boyang

On Fri, Apr 24, 2020 at 8:44 PM Boyang Chen 
wrote:

> Thanks a lot John for giving a review on Friday night! And thank you
> Guozhang, and Jason for your votes as well :)
>
> Now that we have collected 3 binding votes (Guozhang, Jason, John), I will
> close the voting thread and mark the KIP as approved. Still feel free to
> raise any question on the mailing list for sure!
>
> Best,
> Boyang
>
> On Fri, Apr 24, 2020 at 8:29 PM John Roesler  wrote:
>
>> Hi Boyang,
>>
>> Thanks for the KIP! I've just read it over and caught up on all the prior
>> discussions.
>> The current version of the KIP looks good to me, and I think the
>> decisions you've
>> made are reasonable.
>>
>> I'm +1 (binding)
>>
>> Thanks,
>> -John
>>
>> On Wed, Apr 22, 2020, at 12:12, Boyang Chen wrote:
>> > Hey Jason,
>> >
>> > thanks for the suggestions! Addressed in the KIP.
>> >
>> > On Wed, Apr 22, 2020 at 9:21 AM Jason Gustafson 
>> wrote:
>> >
>> > > +1 Just a couple small comments:
>> > >
>> > > 1. My comment about `initTransactions()` usage in the javadoc above
>> appears
>> > > not to have been addressed.
>> > > 2. For the handling of INVALID_PRODUCER_EPOCH in the produce response,
>> > > would we only try to abort if the broker supports the newer protocol
>> > > version? I guess it would be simpler in the implementation if the
>> producer
>> > > did it in any case even if it might not be useful for older versions.
>> > >
>> > > -Jason
>> > >
>> > > On Fri, Apr 17, 2020 at 3:55 PM Guozhang Wang 
>> wrote:
>> > >
>> > > > Sounds good to me. Thanks Boyang.
>> > > >
>> > > > On Fri, Apr 17, 2020 at 3:32 PM Boyang Chen <
>> reluctanthero...@gmail.com>
>> > > > wrote:
>> > > >
>> > > > > Thanks Guozhang,
>> > > > >
>> > > > > I think most of the complexity comes from our intention to benefit
>> > > older
>> > > > > clients. After a second thought, I think the add-on complexity
>> > > > counteracts
>> > > > > the gain here as only 2.5 client is getting a slice of the
>> resilience
>> > > > > improvement, not for many older versions.
>> > > > >
>> > > > > So I decide to drop the UNKNOWN_PRODUCER_ID path, by just
>> claiming that
>> > > > > this change would only benefit 2.6 Producer clients. So the only
>> path
>> > > > that
>> > > > > needs version detection is the new transaction coordinator
>> handling
>> > > > > transactional requests. If the Producer is 2.6+, we pick
>> > > > > PRODUCER_FENCED(new error code) or TRANSACTION_TIMED_OUT as the
>> > > response;
>> > > > > otherwise  we return INVALID_PRODUCE_EPOCH to be consistent with
>> older
>> > > > > clients.
>> > > > >
>> > > > > Does this sound like a better plan? I already updated the KIP with
>> > > > > simplifications.
>> > > > >
>> > > > >
>> > > > > On Fri, Apr 17, 2020 at 12:02 PM Guozhang Wang <
>> wangg...@gmail.com>
>> > > > wrote:
>> > > > >
>> > > > > > Hi Boyang,
>> > > > > >
>> > > > > > Your reply to 3) seems conflicting with your other answers
>> which is a
>> > > > bit
>> > > > > > confusing to me. Following your other answers, it seems you
>> suggest
>> > > > > > returning UNKNOWN_PRODUCER_ID so that 2.5 clients can trigger
>> retry
>> > > > logic
>> > > > > > as well?
>> > > > > >
>> > > > > > To complete my reasoning here as a complete picture:
>> > > > > >
>> > > > > > a) post KIP-360 (2.5+) the partition leader broker does not
>> return
>> > > > > > UNKNOWN_PRODUCER_ID any more.
>> > > > > > b) upon seeing an old epoch, partition leader cannot tell if it
>> is
>> > > due
>> > > > to
>> > > > > > fencing or timeout; so it could only return
>> INVALID_PRODUCER_EPOCH.
>> > > > > >
>> > > > > > So the basic idea is to let the clients ask the transaction
>> > > coordinator
>> > > > > for
>> > > > > > the source of truth:
>> > > > > >
>> > > > > > 1) 2.5+ client would handle UNKNOWN_PRODUCER_ID (which could
>> only be
>> > > > > > returned from old brokers) by trying to re-initialize with the
>> > > > > transaction
>> > > > > > coordinator; the coordinator would then tell it whether it is
>> > > > > > PRODUCER_FENCED or TXN_TIMEOUT. And for old brokers, it would
>> always
>> > > > > return
>> > > > > > PRODUCER_FENCED anyways.
>> > > > > > 2) 2.6+ client would also handle INVALID_PRODUCER_EPOCH with the
>> > > retry
>> > > > > > initializing logic; and similarly the transaction coordinator
>> would
>> > > > > > return PRODUCER_FENCED or TXN_TIMEOUT if it is new or always
>> > > > > > return PRODUCER_FENCED if it is old.
>> > > > > >
>> > > > > > The open question is whether
>> > > > > >
>> > > > > > * 3) the new broker should return UNKNOWN_PRODUCER_ID 
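
The quoted discussion is truncated here. A hedged sketch of the version-gated error mapping being agreed on above; the cut-over request version below is an assumed placeholder, not a value from the KIP:

{code:java}
// Illustrative mapping per the discussion: clients on new request versions
// get the precise error, older clients keep seeing INVALID_PRODUCER_EPOCH.
enum TxnError { PRODUCER_FENCED, TRANSACTION_TIMED_OUT, INVALID_PRODUCER_EPOCH }

final class TxnErrorMapper {
    // Assumed placeholder: first request version sent by 2.6+ producers.
    static final short FIRST_VERSION_WITH_NEW_ERRORS = 4;

    static TxnError map(short requestVersion, boolean causedByTimeout) {
        if (requestVersion >= FIRST_VERSION_WITH_NEW_ERRORS) {
            return causedByTimeout ? TxnError.TRANSACTION_TIMED_OUT
                                   : TxnError.PRODUCER_FENCED;
        }
        return TxnError.INVALID_PRODUCER_EPOCH; // keep old behavior for old clients
    }
}
{code}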

Re: Request for permission to create KIP

2020-04-27 Thread Bill Bejeck
Done.

Thanks,
Bill

On Mon, Apr 27, 2020 at 11:05 AM Wang (Leonard) Ge  wrote:

> Hi,
>
> I'd like to request permission to create KIP. My email address is:
> w...@confluent.io.
>
> Thanks!
>
> --
> Leonard Ge
> Software Engineer Intern - Confluent
>


Request for permission to create KIP

2020-04-27 Thread Wang (Leonard) Ge
Hi,

I'd like to request permission to create KIP. My email address is:
w...@confluent.io.

Thanks!

-- 
Leonard Ge
Software Engineer Intern - Confluent


Re: [DISCUSS] KIP-553: Enable TLSv1.3 by default and disable all protocols except [TLSV1.2, TLSV1.3]

2020-04-27 Thread Ismael Juma
Yes, a PR would be great.

Ismael

On Mon, Apr 27, 2020, 2:10 AM Nikolay Izhikov  wrote:

> Hello, Ismael.
>
> AFAIK we don’t run tests with TLSv1.3 by default.
> Are you suggesting we do so?
> I can create a PR for it.
>
> > On 24 Apr 2020, at 17:34, Ismael Juma wrote:
> >
> > Right, some companies run them nightly. What I meant to ask is if we
> > changed the configuration so that TLS 1.3 is exercised in the system
> tests
> > by default.
> >
> > Ismael
> >
> > On Fri, Apr 24, 2020 at 7:32 AM Nikolay Izhikov 
> wrote:
> >
> >> Hello, Ismael.
> >>
> >> AFAIK we don’t run system tests nightly.
> >> Do we have resources to run system tests periodically?
> >>
> >> When I did the testing I used servers my employer gave me.
> >>
> >>> On 24 Apr 2020, at 08:05, Ismael Juma wrote:
> >>>
> >>> Hi Nikolay,
> >>>
> >>> Seems like we have been able to run the system tests with TLS 1.3. Do
> we
> >>> run them nightly?
> >>>
> >>> Ismael
> >>>
> >>> On Fri, Feb 14, 2020 at 4:17 AM Nikolay Izhikov 
> >> wrote:
> >>>
>  Hello, Kafka team.
> 
>  I ran the system tests that use SSL with TLSv1.3.
>  You can find the results of the tests in the Jira ticket [1], [2],
> [3],
>  [4].
> 
>  I also need changes [5] in `security_config.py` to execute the system
> >> tests
>  with TLSv1.3 (more info in the PR description).
>  Please, take a look.
> 
>  Test environment:
>    • openjdk11
>    • trunk + changes from my PR [5].
> 
>  The full system test results take up 15 GB.
>  Should I share the full logs with you?
> 
>  What else should be done before we can enable TLSv1.3 by default?
> 
>  [1]
> 
> >>
> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036927=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036927
> 
>  [2]
> 
> >>
> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036928=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036928
> 
>  [3]
> 
> >>
> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036929=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036929
> 
>  [4]
> 
> >>
> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036930&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036930
> 
>  [5]
> 
> >>
> https://github.com/apache/kafka/pull/8106/files#diff-6dd015b94706f6920d9de524c355ddd8R51
> 
> > On 29 Jan 2020, at 15:27, Nikolay Izhikov wrote:
> >
> > Hello, Rajini.
> >
> > Thanks for the feedback.
> >
> > I’ve searched the tests for the «ssl» keyword and found the following
> > tests:
> >
> > ./test/kafkatest/services/kafka_log4j_appender.py
> > ./test/kafkatest/services/listener_security_config.py
> > ./test/kafkatest/services/security/security_config.py
> > ./test/kafkatest/tests/core/security_test.py
> >
> > Are these all the tests that need to be run with TLSv1.3 to ensure we
> > can enable it by default?
> >
> >> On 28 Jan 2020, at 14:58, Rajini Sivaram wrote:
> >>
> >> Hi Nikolay,
> >>
> >> Not sure of the total space required. But you can run a collection of
> >> tests at a time instead of running them all together. That way, you
> >> could just run all the tests that enable SSL. Details of running a
> >> subset of tests are in the README in tests.
> >>
> >> On Mon, Jan 27, 2020 at 6:29 PM Nikolay Izhikov <
> nizhi...@apache.org>
>  wrote:
> >> Hello, Rajini.
> >>
> >> I tried to run all the system tests but have not succeeded so far.
> >> It turns out that the system tests generate a lot of logs.
> >> I had 250GB of free space, but it was all consumed by the logs from
> >> half of the system tests.
> >>
> >> Do you have any idea how much total disk space I need to run all the
> >> system tests?
> >>
> >>> On 7 Jan 2020, at 14:49, Rajini Sivaram wrote:
> >>>
> >>> Hi Nikolay,
> >>>
> >>> There are a couple of things you could do:
> >>>
> >>> 1) Run all system tests that use SSL with TLSv1.3. I had run a subset,
> >>> but it will be good to run all of them. You can do this locally using
> >>> docker with JDK 11 by updating the files in tests/docker. You will
> >>> need to update tests/kafkatest/services/security/security_config.py to
> >>> enable only TLSv1.3 (see the sketch after this list). Instructions for
> >>> running system tests using docker are in
> >>> https://github.com/apache/kafka/blob/trunk/tests/README.md.
> >>> 2) For integration tests, we run a small number of tests using TLSv1.3
> >>> if the tests are run using JDK 11 and above. We need to do this for
> >>> system tests as well. There is an open JIRA:
> >>> https://issues.apache.org/jira/browse/KAFKA-9319.
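For illustration, here is a minimal Python sketch of what pinning the system tests to TLSv1.3 could look like. The structure below is an assumption for illustration only, not the actual contents of tests/kafkatest/services/security/security_config.py; only the two Kafka config keys are real.

# Hypothetical sketch: restrict the system tests' SSL setup to TLSv1.3.
# The dict/function are illustrative; only the two Kafka config keys
# ("ssl.protocol" and "ssl.enabled.protocols") are real.
TLS13_ONLY = {
    "ssl.protocol": "TLSv1.3",           # protocol used for the handshake
    "ssl.enabled.protocols": "TLSv1.3",  # refuse anything but TLSv1.3
}

def apply_tls13_only(properties):
    """Merge the TLSv1.3-only settings into a test's client/broker
    properties dict, overriding any previously enabled protocols."""
    merged = dict(properties)
    merged.update(TLS13_ONLY)
    return merged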

[jira] [Resolved] (KAFKA-9549) Local storage implementations for RSM which can be used in tests

2020-04-27 Thread Alexandre Dupriez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Dupriez resolved KAFKA-9549.
--
Resolution: Fixed

> Local storage implementations for RSM which can be used in tests
> 
>
> Key: KAFKA-9549
> URL: https://issues.apache.org/jira/browse/KAFKA-9549
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Satish Duggana
>Assignee: Alexandre Dupriez
>Priority: Major
>
> The goal of this task is to implement a straightforward file-system based 
> implementation of the {{RemoteStorageManager}} defined as part of the SPI for 
> Tiered Storage.
> It is intended to be used in single-host integration tests where the remote 
> storage can be exercised. 
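As an illustration of that idea, below is a minimal Python sketch of a file-system backed "remote" store of the kind such tests can exercise. The real RemoteStorageManager SPI is a Java interface defined by the Tiered Storage work, so every name in this sketch is an assumption, not the actual SPI.

import shutil
from pathlib import Path

class LocalRemoteStorage:
    """Hypothetical stand-in for a file-system based remote store:
    'remote' segments simply live under a local directory, which is
    all a single-host integration test needs."""

    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def copy_segment(self, topic_partition, segment_path):
        # "Upload": copy a local log segment into the remote root.
        dest = self.root / topic_partition / Path(segment_path).name
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(segment_path, dest)
        return dest

    def fetch_segment(self, topic_partition, name):
        # "Download": read a stored segment back.
        return (self.root / topic_partition / name).read_bytes()

    def delete_segment(self, topic_partition, name):
        # Remove a segment, e.g. when remote retention expires.
        (self.root / topic_partition / name).unlink()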



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: KIP-599: Throttle Create Topic, Create Partition and Delete Topic Operations

2020-04-27 Thread David Jacot
Hi Jun,

Thank you for the feedback.

1. You are right. In the end, we do care about the percentage of time that
an operation ties up the controller thread. I thought about this but I was
not entirely convinced by it, for the following reasons:

1.1. While I do agree that setting up a rate and a burst is a bit harder
for the administrator of the cluster than allocating a percentage, I
believe that a rate and a burst are far easier for the users of the
cluster to understand.

1.2. Measuring the time that a request ties up the controller thread is not
as straightforward as it sounds, because the controller reacts to ZK
TopicChange and TopicDeletion events instead of handling requests directly.
These events carry neither the client id nor the user information, so the
best approach would be to refactor the controller to accept requests
instead of reacting to the events. This will be possible with KIP-590. It
obviously has other side effects in the controller (e.g. batching).

I leaned towards the current proposal mainly due to 1.1., as 1.2. can be
(or will be) fixed. Does 1.1. sound like a reasonable trade-off to you?

2. It is not in the current proposal. I thought that a global quota would be
enough to start with. We can definitely make it work like the other quotas.

3. The main difference is that the Token Bucket algorithm defines an
explicit burst B while guaranteeing an average rate R, whereas our existing
quota mechanism also guarantees an average rate R but starts to throttle as
soon as the rate goes above the defined quota.

Creating and deleting topics is bursty by nature: applications create or
delete topics occasionally, usually by sending one request with multiple
topics. The reasoning behind allowing a burst is to let such requests of a
reasonable size pass without being throttled, whereas our current quota
mechanism would reject topics as soon as the rate goes above the quota,
requiring the applications to send subsequent requests to create or delete
the remaining topics.
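
For illustration, here is a minimal Python sketch of the Token Bucket semantics described above (average rate R, burst B). The names and the reject-on-empty behaviour are illustrative assumptions, not the KIP's actual implementation.

import time

class TokenBucket:
    """Illustrative token bucket: holds at most `burst` tokens and
    refills at `rate` tokens per second. A request for N mutations is
    admitted only if N tokens are available, so one large (but
    reasonable) request can pass at once, while sustained traffic is
    capped at the average rate."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst              # start full: allow an initial burst
        self.last_refill = time.monotonic()

    def try_acquire(self, n):
        now = time.monotonic()
        # Refill in proportion to elapsed time, never above the burst cap.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= n:
            self.tokens -= n             # admit the mutation
            return True
        return False                     # caller should throttle or retry

# E.g. average rate R = 10 partition mutations/sec, burst B = 100:
bucket = TokenBucket(rate=10, burst=100)
admitted = bucket.try_acquire(50)        # one request creating 50 partitions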

Best,
David


On Fri, Apr 24, 2020 at 9:03 PM Jun Rao  wrote:

> Hi, David,
>
> Thanks for the KIP. A few quick comments.
>
> 1. About quota.partition.mutations.rate. I am not sure if it's very easy
> for the user to set the quota as a rate. For example, each partition
> mutation could take a different number of ZK operations (depending on
> things like retries). The time to process each ZK operation may also vary
> from cluster to cluster. An alternative way to model this is to do
> something similar to the request (CPU) quota, which exposes the quota as a
> percentage of the server threads that can be used. The current request
> quota doesn't include the controller thread. We could add something that
> measures/exposes the percentage of time that a request ties up the
> controller thread, which seems to be what we really care about (a sketch
> follows after this list).
>
> 2. Is the new quota per user? Intuitively, we want to only penalize
> applications that overuse the broker resources, but not others. Also, in
> existing types of quotas (request, bandwidth), there is a hierarchy among
> clientId vs user and default vs customized (see
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-55%3A+Secure+Quotas+for+Authenticated+Users
> ). Does the new quota fit into the existing hierarchy?
>
> 3. It seems that you are proposing a new quota mechanism based on the
> Token Bucket algorithm. Could you describe its trade-off with the existing
> quota mechanism? Ideally, it would be better if we had a single quota
> mechanism within Kafka.
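
For illustration, a minimal Python sketch of the thread-time alternative described in point 1 above. The class and the single-window accounting are assumptions for illustration, not Kafka's actual request-quota code.

import time

class ControllerTimeQuota:
    """Hypothetical sketch: charge each principal for the wall-clock
    time its operations occupy the controller thread, and throttle once
    usage in the current window exceeds a configured percentage."""

    def __init__(self, max_percentage, window_secs=1.0):
        self.max_busy = max_percentage / 100.0 * window_secs
        self.window_secs = window_secs
        self.busy = {}                   # principal -> busy seconds in window
        self.window_start = time.monotonic()

    def record(self, principal, start, end):
        # Called after the controller thread finishes an operation.
        self._maybe_roll()
        self.busy[principal] = self.busy.get(principal, 0.0) + (end - start)

    def should_throttle(self, principal):
        self._maybe_roll()
        return self.busy.get(principal, 0.0) > self.max_busy

    def _maybe_roll(self):
        # Crude single-window reset; real quotas use sliding windows.
        now = time.monotonic()
        if now - self.window_start >= self.window_secs:
            self.busy.clear()
            self.window_start = now

# E.g. allow each principal 10% of the controller thread per 1s window:
quota = ControllerTimeQuota(max_percentage=10.0)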
>
> Jun
>
>
>
>
> On Fri, Apr 24, 2020 at 9:52 AM David Jacot  wrote:
>
> > Hi folks,
> >
> > I'd like to start the discussion for KIP-599:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-599%3A+Throttle+Create+Topic%2C+Create+Partition+and+Delete+Topic+Operations
> >
> > It proposes to introduce quotas for the create topics, create partitions
> > and delete topics operations. Let me know what you think, thanks.
> >
> > Best,
> > David
> >
>


Re: [DISCUSS] KIP-553: Enable TLSv1.3 by default and disable all protocols except [TLSV1.2, TLSV1.3]

2020-04-27 Thread Nikolay Izhikov
Hello, Ismael.

AFAIK we don’t run tests with TLSv1.3 by default.
Are you suggesting to do it?
I can create a PR for it.

> On 24 Apr 2020, at 17:34, Ismael Juma wrote:
> 
> Right, some companies run them nightly. What I meant to ask is if we
> changed the configuration so that TLS 1.3 is exercised in the system tests
> by default.
> 
> Ismael
> 
> On Fri, Apr 24, 2020 at 7:32 AM Nikolay Izhikov  wrote:
> 
>> Hello, Ismael.
>> 
>> AFAIK we don’t run system tests nightly.
>> Do we have resources to run system tests periodically?
>> 
>> When I did the testing I used servers my employer gave me.
>> 
>>> On 24 Apr 2020, at 08:05, Ismael Juma wrote:
>>> 
>>> Hi Nikolay,
>>> 
>>> Seems like we have been able to run the system tests with TLS 1.3. Do we
>>> run them nightly?
>>> 
>>> Ismael
>>> 
>>> On Fri, Feb 14, 2020 at 4:17 AM Nikolay Izhikov 
>> wrote:
>>> 
 Hello, Kafka team.
 
 I ran the system tests that use SSL with TLSv1.3.
 You can find the results of the tests in the Jira ticket [1], [2], [3],
 [4].
 
 I also need the changes [5] in `security_config.py` to execute the
 system tests with TLSv1.3 (more info in the PR description).
 Please, take a look.
 
 Test environment:
   • openjdk11
   • trunk + changes from my PR [5].
 
 The full system test results take up 15 GB.
 Should I share the full logs with you?
 
 What else should be done before we can enable TLSv1.3 by default?
 
 [1]
 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036927&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036927
 
 [2]
 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036928&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036928
 
 [3]
 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036929&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036929
 
 [4]
 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036930&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036930
 
 [5]
 
>> https://github.com/apache/kafka/pull/8106/files#diff-6dd015b94706f6920d9de524c355ddd8R51
 
> On 29 Jan 2020, at 15:27, Nikolay Izhikov wrote:
> 
> Hello, Rajini.
> 
> Thanks for the feedback.
> 
> I’ve searched the tests for the «ssl» keyword and found the following tests:
> 
> ./test/kafkatest/services/kafka_log4j_appender.py
> ./test/kafkatest/services/listener_security_config.py
> ./test/kafkatest/services/security/security_config.py
> ./test/kafkatest/tests/core/security_test.py
> 
> Are these all the tests that need to be run with TLSv1.3 to ensure we can
> enable it by default?
> 
>> On 28 Jan 2020, at 14:58, Rajini Sivaram wrote:
>> 
>> Hi Nikolay,
>> 
>> Not sure of the total space required. But you can run a collection of
>> tests at a time instead of running them all together. That way, you
>> could just run all the tests that enable SSL. Details of running a
>> subset of tests are in the README in tests.
>> 
>> On Mon, Jan 27, 2020 at 6:29 PM Nikolay Izhikov 
 wrote:
>> Hello, Rajini.
>> 
>> I tried to run all the system tests but have not succeeded so far.
>> It turns out that the system tests generate a lot of logs.
>> I had 250GB of free space, but it was all consumed by the logs from
>> half of the system tests.
>> 
>> Do you have any idea how much total disk space I need to run all the
>> system tests?
>> 
>>> On 7 Jan 2020, at 14:49, Rajini Sivaram wrote:
>>> 
>>> Hi Nikolay,
>>> 
>>> There are a couple of things you could do:
>>> 
>>> 1) Run all system tests that use SSL with TLSv1.3. I had run a subset,
>>> but it will be good to run all of them. You can do this locally using
>>> docker with JDK 11 by updating the files in tests/docker. You will need
>>> to update tests/kafkatest/services/security/security_config.py to enable
>>> only TLSv1.3. Instructions for running system tests using docker are in
>>> https://github.com/apache/kafka/blob/trunk/tests/README.md.
>>> 2) For integration tests, we run a small number of tests using TLSv1.3
>>> if the tests are run using JDK 11 and above. We need to do this for
>>> system tests as well. There is an open JIRA:
>>> https://issues.apache.org/jira/browse/KAFKA-9319. Feel free to assign
>>> this to yourself if you have time to do this.
>>> 
>>> Regards,
>>> 
>>> Rajini
>>> 
>>> 
 On Tue, Jan 7, 2020 at 5:15 AM Nikolay Izhikov 
 wrote:
>>> 
 Hello, Rajini.
 
 Can you please clarify what should be done?
 I can try to do tests by 

[jira] [Created] (KAFKA-9922) Update examples README

2020-04-27 Thread jiamei xie (Jira)
jiamei xie created KAFKA-9922:
-

 Summary: Update examples README
 Key: KAFKA-9922
 URL: https://issues.apache.org/jira/browse/KAFKA-9922
 Project: Kafka
  Issue Type: Bug
  Components: consumer, documentation
Reporter: jiamei xie
Assignee: jiamei xie


Class kafka.examples.SimpleConsumerDemo was removed, but
java-simple-consumer-demo.sh was not removed and the README was not updated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)