Build failed in Jenkins: kafka-trunk-jdk11 #1431

2020-05-06 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9768: Fix handling of rest.advertised.listener config (#8360)


--
[...truncated 3.08 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Jenkins build is back to normal : kafka-2.3-jdk8 #201

2020-05-06 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #4508

2020-05-06 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9768: Fix handling of rest.advertised.listener config (#8360)


--
[...truncated 3.06 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 

Re: [DISCUSS] KIP-554: Add Broker-side SCRAM Config API

2020-05-06 Thread Colin McCabe
On Tue, May 5, 2020, at 08:12, Tom Bentley wrote:
> Hi Colin,
> 
> SCRAM is better than SASL PLAIN because it doesn't send the password over
> the wire in the clear. Presumably this property is important for some users
> who have chosen to use SCRAM. This proposal does send the password in the
> clear when setting the password. That doesn't mean it can't be used
> securely (e.g. connect over TLS–if available–when setting or changing a
> password, or connect to the broker from the same machine over localhost),
> but won't this just result in some CVE against Kafka? It's a tricky problem
> to solve in a cluster without TLS (you basically just end up reinventing
> TLS).
>

Hi Tom,

Thanks for the thoughtful reply.

If you don't set up SSL, we currently do send passwords in the clear over the 
wire.  There's just no other option-- as you yourself said, we're not going to 
reinvent TLS from first principles.  So this KIP isn't changing our policy 
about this.

One example of this: if your ZooKeeper connection is not encrypted, your SCRAM 
password currently goes over the wire in the clear when you run the 
kafka-configs.sh command.  Another example: if you have one plaintext Kafka 
endpoint and one SSL Kafka endpoint, you can send the keystore password for the 
SSL endpoint in cleartext over the plaintext endpoint.  

>
> I know you're not a fan of the ever-growing list of configs, but when
> I wrote KIP-506 I suggested some configs which could have been used to at
> least make it secure by default.
>

I think if we want to add a configuration like that, it should be done in a 
separate KIP, because it affects more than just SCRAM.  We would also have to 
disallow setting any "sensitive" configuration over IncrementalAlterConfigs / 
AlterConfigs.

Although I haven't thought about it that much, I doubt that such a KIP would be 
successful.  Think about who still uses plaintext mode.  Developers use it 
for testing things locally.  They don't want additional restrictions on what 
they can do.  Sysadmins who are really convinced that their network is secure 
(I know, I know...) or who are setting up a proof-of-concept might use 
plaintext mode.  They don't want restrictions either.

If the network is insecure and you're using plaintext, then we shouldn't allow 
you to send or receive messages either, since they could contain sensitive 
data.  So I think it's impossible to follow this logic very far before you 
arrive at plaintext delenda est.  And indeed, there have been people who have 
said we should remove the option to use plaintext mode from Kafka.  But so far, 
we're not ready to do that.

> 
> You mentioned on the discussion for KIP-595 that there's a bootstrapping
> problem to be solved in this area. Maybe KIP-595 is the better place for
> that, but I wondered if you had any thoughts about it. I thought about
> using a broker CLI option to read a password from stdin (`--scram-user tom`
> would prompt for the password for user 'tom' on boot), that way the
> password doesn't have to be on the command line arguments or in a file. In
> fact this could be a solution to both the bootstrap problem and plaintext
> password problem in the absence of TLS.
> 

Yeah, I think this would be a good improvement.  The ability to read a password 
from stdin without echoing it to the terminal would be nice.  But it also 
deserves its own separate KIP, and should also apply to the other stuff you can 
do with kafka-configs.sh (SSL passwords, etc.)
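
For illustration only (this is not part of any existing Kafka tool or KIP), a
minimal Java sketch of such a no-echo prompt, with the user name standing in for
a hypothetical --scram-user flag:

import java.io.Console;
import java.util.Arrays;

public class ScramPasswordPrompt {
    public static void main(String[] args) {
        // Hypothetical user name, e.g. taken from a flag like --scram-user.
        String user = args.length > 0 ? args[0] : "tom";

        Console console = System.console();
        if (console == null) {
            // No interactive terminal (e.g. piped input); a real tool would fall back or fail here.
            System.err.println("No console available; cannot prompt without echo.");
            return;
        }

        // readPassword disables echo, so the password never shows up on screen or in shell history.
        char[] password = console.readPassword("Password for SCRAM user '%s': ", user);
        try {
            // A real tool would hand the char[] to the credential-setting code here.
            System.out.printf("Read %d characters for user %s%n", password.length, user);
        } finally {
            // Wipe the password from memory as soon as it is no longer needed.
            Arrays.fill(password, '\0');
        }
    }
}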

best,
Colin

>
> Kind regards,
> 
> Tom
> 
> Cheers,
> 
> Tom
> 
> On Tue, May 5, 2020 at 12:52 AM Guozhang Wang  wrote:
> 
> > Cool, that makes sense.
> >
> > Guozhang
> >
> >
> > On Mon, May 4, 2020 at 2:50 PM Colin McCabe  wrote:
> >
> > > I think once something becomes more complex than just key = value it's
> > > time to consider an official Kafka API, rather than trying to fit it into
> > > AlterConfigs.  For example, for client quotas, we have KIP-546.
> > >
> > > There are just so many reasons.  Real Kafka APIs have well-defined
> > > compatibility policies, Java types defined that make them easy to use,
> > and
> > > APIs that can return partial results rather than needing to do the
> > > filtering on the client side.
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Mon, May 4, 2020, at 14:30, Guozhang Wang wrote:
> > > > Got it.
> > > >
> > > > Besides SCRAM, are there other scenarios that we may have such
> > > > "hierarchical" (I know the term may not be very accurate here :P)
> > configs
> > > > such as "config1=[key1=value1, key2=value2]" compared with most common
> > > > pattern of "config1=value1" or "config1=value1,config2=value2"? For
> > > example
> > > > I know that quotas may be specified in the former pattern as well. If
> > we
> > > > believe that such hierarchical configuration may be more common in the
> > > > future, I'm wondering should we just consider support it more natively
> > in
> > > > alter/describe config patterns.
> > > >

Re: [DISCUSS] Kafka 3.0

2020-05-06 Thread Ryanne Dolan
> This will allow us to get an "alpha" version of the KIP-500 mode out
early for people to experiment with

I think this is a non-sequitur. It's not a requirement that KIP-500 be
merged to master and released in order for people to experiment with it.
Nor does it sound great to call for a major release (3.0) in order to get
an "alpha version ... out early".

Ryanne

On Wed, May 6, 2020 at 10:52 PM Colin McCabe  wrote:

> On Mon, May 4, 2020, at 17:12, Ryanne Dolan wrote:
> > Hey Colin, I think we should wait until after KIP-500's "bridge
> > release" so there is a clean break from Zookeeper after 3.0. The
> > bridge release by definition is an attempt to not break anything, so
> > it theoretically doesn't warrant a major release.
>
> Hi Ryanne,
>
> I think it's important to clarify this a little bit.  The bridge release
> (really, releases, plural) allow you to upgrade from a cluster that is
> using ZooKeeper to one that is not using ZooKeeper.  But, that doesn't
> imply that the bridge release itself doesn't break anything.  Upgrading
> to the bridge release itself might involve some minor incompatibility.
>
> Kafka does occasionally have incompatible changes.  In those cases, we
> bump the major version number.  One example is that when we went from
> Kafka 1.x to Kafka 2.0, we dropped support for JDK7.  This is an
> incompatible change.
>
> In fact, we know that the bridge release will involve at least one
> incompatible change.  We will need to drop support for the --zookeeper
> flags in the command-line tools.
>
> We've been preparing for this change for a long time.  People have spent
> a lot of effort designing new APIs that can be used instead of the old
> zookeeper-based code that some of the command-line tools used.  We have
> also deprecated the old ZK-based flags.  But at the end of the day, it
> is still an incompatible change.  So it's unfortunately not possible for
> the
> bridge release to be a 2.x release.
>
> > If that's not the case (i.e. if a single "bridge release" turns out to
> > be impractical), we should consider forking 3.0 while maintaining a
> > line of Zookeeper-dependent Kafka in 2.x. That way 3.x can evolve
> > dramatically without breaking the 2.x line. In particular, anything
> > related to removing Zookeeper could land in pre-3.0 while every other
> > feature targets 2.6.
>
> Just to be super clear about this, what we want to do here is support
> operating in __either__ KIP-500 mode and legacy mode for a while.  So the
> same branch will have support for both the old way and the new way of
> managing metadata.
>
> This will allow us to get an "alpha" version of the KIP-500 mode out early
> for people to experiment with.  It also greatly reduces the number of Kafka
> releases we have to make, and the amount of backporting we have to do.
>
> >
> > If you are proposing 2.6 should be the "bridge release", I think this
> > is premature given Kafka's time-based release schedule. If the bridge
> > features happen to be merged before 2.6's feature freeze, then sure --
> > let's make that the bridge release in retrospect. And if we get all
> > the post-Zookeeper features merged before 2.7, I'm onboard with naming
> > it "3.0" instead.
> >
> > That said, we should aim to remove legacy MirrorMaker before 3.0 as
> > well. I'm happy to drive that additional breaking change. Maybe 2.6
> > can be the "bridge" for MM2 as well.
>
> I don't have a strong opinion either way about this, but if we want to
> remove the original MirrorMaker, we have to deprecate it first, right?  Are
> we ready to do that?
>
> best,
> Colin
>
> >
> > Ryanne
> >
> > On Mon, May 4, 2020, 5:05 PM Colin McCabe  wrote:
> >
> > > Hi all,
> > >
> > > We've had a few proposals recently for incompatible changes.  One of
> > > them is my KIP-604: Remove ZooKeeper Flags from the Administrative
> > > Tools.  The other is Boyang's KIP-590: Redirect ZK Mutation
> > > Protocols to the Controller.  I think it's time to start thinking
> > > about Kafka 3.0. Specifically, I think we should move to 3.0 after
> > > the 2.6 release.
> > >
> > > From the perspective of KIP-500, in Kafka 3.x we'd like to make
> > > running in a ZooKeeper-less mode possible (but not yet the default.)
> > > This is the motivation behind KIP-590 and KIP-604, as well as some
> > > of the other KIPs we've done recently.  Since it will take some time
> > > to stabilize the new ZooKeeper-free Kafka code, we will hide it
> > > behind an option initially. (We'll have a KIP describing this all in
> > > detail soon.)
> > >
> > > What does everyone think about having Kafka 3.0 come up next after
> > > 2.6? Are there any other things we should change in the 2.6 -> 3.0
> > > transition?
> > >
> > > best, Colin
> > >
> >
>


Re: [DISCUSS] Kafka 3.0

2020-05-06 Thread Ryanne Dolan
> In fact, we know that the bridge release will involve at least one
> incompatible change.  We will need to drop support for the --zookeeper
> flags in the command-line tools.

If the bridge release(s) and the subsequent post-ZK release are _both_
breaking changes, I think we only have one option: the 3.x line is the
bridge release(s), and ZK is removed in 4.0, as suggested by Andrew
Schofield.

Specifically:
- in order to _remove_ (not merely deprecate) the --zookeeper args, we will
need a major release.
- in order to drop support for ZK entirely (e.g. break a bunch of external
tooling like Cruise Control), we will need a major release.

I count two major releases.

Ryanne

-

On Wed, May 6, 2020 at 10:52 PM Colin McCabe  wrote:

> On Mon, May 4, 2020, at 17:12, Ryanne Dolan wrote:
> > Hey Colin, I think we should wait until after KIP-500's "bridge
> > release" so there is a clean break from Zookeeper after 3.0. The
> > bridge release by definition is an attempt to not break anything, so
> > it theoretically doesn't warrant a major release.
>
> Hi Ryanne,
>
> I think it's important to clarify this a little bit.  The bridge release
> (really, releases, plural) allow you to upgrade from a cluster that is
> using ZooKeeper to one that is not using ZooKeeper.  But, that doesn't
> imply that the bridge release itself doesn't break anything.  Upgrading
> to the bridge release itself might involve some minor incompatibility.
>
> Kafka does occasionally have incompatible changes.  In those cases, we
> bump the major version number.  One example is that when we went from
> Kafka 1.x to Kafka 2.0, we dropped support for JDK7.  This is an
> incompatible change.
>
> In fact, we know that the bridge release will involve at least one
> incompatible change.  We will need to drop support for the --zookeeper
> flags in the command-line tools.
>
> We've been preparing for this change for a long time.  People have spent
> a lot of effort designing new APIs that can be used instead of the old
> zookeeper-based code that some of the command-line tools used.  We have
> also deprecated the old ZK-based flags.  But at the end of the day, it
> is still an incompatible change.  So it's unfortunately not possible for
> the
> bridge release to be a 2.x release.
>
> > If that's not the case (i.e. if a single "bridge release" turns out to
> > be impractical), we should consider forking 3.0 while maintaining a
> > line of Zookeeper-dependent Kafka in 2.x. That way 3.x can evolve
> > dramatically without breaking the 2.x line. In particular, anything
> > related to removing Zookeeper could land in pre-3.0 while every other
> > feature targets 2.6.
>
> Just to be super clear about this, what we want to do here is support
> operating in __either__ KIP-500 mode and legacy mode for a while.  So the
> same branch will have support for both the old way and the new way of
> managing metadata.
>
> This will allow us to get an "alpha" version of the KIP-500 mode out early
> for people to experiment with.  It also greatly reduces the number of Kafka
> releases we have to make, and the amount of backporting we have to do.
>
> >
> > If you are proposing 2.6 should be the "bridge release", I think this
> > is premature given Kafka's time-based release schedule. If the bridge
> > features happen to be merged before 2.6's feature freeze, then sure --
> > let's make that the bridge release in retrospect. And if we get all
> > the post-Zookeeper features merged before 2.7, I'm onboard with naming
> > it "3.0" instead.
> >
> > That said, we should aim to remove legacy MirrorMaker before 3.0 as
> > well. I'm happy to drive that additional breaking change. Maybe 2.6
> > can be the "bridge" for MM2 as well.
>
> I don't have a strong opinion either way about this, but if we want to
> remove the original MirrorMaker, we have to deprecate it first, right?  Are
> we ready to do that?
>
> best,
> Colin
>
> >
> > Ryanne
> >
> > On Mon, May 4, 2020, 5:05 PM Colin McCabe  wrote:
> >
> > > Hi all,
> > >
> > > We've had a few proposals recently for incompatible changes.  One of
> > > them is my KIP-604: Remove ZooKeeper Flags from the Administrative
> > > Tools.  The other is Boyang's KIP-590: Redirect ZK Mutation
> > > Protocols to the Controller.  I think it's time to start thinking
> > > about Kafka 3.0. Specifically, I think we should move to 3.0 after
> > > the 2.6 release.
> > >
> > > From the perspective of KIP-500, in Kafka 3.x we'd like to make
> > > running in a ZooKeeper-less mode possible (but not yet the default.)
> > > This is the motivation behind KIP-590 and KIP-604, as well as some
> > > of the other KIPs we've done recently.  Since it will take some time
> > > to stabilize the new ZooKeeper-free Kafka code, we will hide it
> > > behind an option initially. (We'll have a KIP describing this all in
> > > detail soon.)
> > >
> > > What does everyone think about having Kafka 3.0 come up next after
> > > 2.6? Are there any other things we should change 

Build failed in Jenkins: kafka-trunk-jdk14 #62

2020-05-06 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9419: Fix possible integer overflow in CircularIterator (#7950)

[github] KAFKA-9768: Fix handling of rest.advertised.listener config (#8360)


--
[...truncated 3.08 MB...]
org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildIndex STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildIndex PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureApplicationAndRecordMetadata STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureApplicationAndRecordMetadata PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureRecordsOutputToChildByName STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureRecordsOutputToChildByName PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCapturePunctuator 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCapturePunctuator 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED


Re: [DISCUSS] Kafka 3.0

2020-05-06 Thread Colin McCabe
On Tue, May 5, 2020, at 12:36, Ryanne Dolan wrote:
> > In 3.0 it sounds like nothing is breaking and our big change won't be
> > complete... so, what's the motivation for the major release?
> 
> Exactly. Why would 3.1 be the breaking release? No one would expect
> everything to break going from 3.0 to 3.1
>

Hi Ryanne,

That's right.  Kafka uses Semantic Versioning.  See https://semver.org/.  The 
short summary is that we can't make incompatible changes in minor releases.

So the decision to move from 2.x to 3.0 isn't really about the number of new 
features or changes, but just about whether the new release is 100% compatible 
with the old.

Most users never encounter the incompatibilities, but it's important to set 
expectations that moving to a new 2.x release won't cause a compatibility 
break, but moving to a new major release might... IF you are using a deprecated 
feature.  Or sometimes, if you are using an old JVM, you will need to upgrade it 
in order to move to a new major release.

best,
Colin

> 
> On Tue, May 5, 2020 at 2:34 PM Gwen Shapira  wrote:
> 
> > It sounds like the decision to make the next release 3.0 is a bit arbitrary
> > then?
> >
> > With Exactly Once, we announced 1.0 as one release after the one where EOS
> > shipped, when we felt it was "ready" (little did we know... but that's
> > another story).
> > 2.0 was breaking due to us dropping Java 7.
> >
> > In 3.0 it sounds like nothing is breaking and our big change won't be
> > complete... so, what's the motivation for the major release?
> >
> > On Tue, May 5, 2020 at 12:12 PM Guozhang Wang  wrote:
> >
> > > I think there's a confusion regarding the "bridge release" proposed in
> > > KIP-500: should it be release "3.0" or be release "2.X" (i.e. the last
> > > minor release before 3.0).
> > >
> > > My understanding is that "3.0" would be the bridge release, i.e. it would
> > > not break any compatibility, but 3.1 potentially would, so an upgrade
> > from
> > > 2.5 to 3.1 would need to first upgrade to 3.0, and then to 3.1. For
> > > clients, all broker-client compatibility is still maintained in 3.1+ so
> > that
> > > 2.x producer / consumer clients could still talk to 3.1+ brokers, only
> > > those scripts from older versions that rely on "--zookeeper" would not work with
> > 3.1+
> > > brokers anymore since there are no zookeepers.
> > >
> > >
> > > Guozhang
> > >
> > > On Mon, May 4, 2020 at 5:33 PM Gwen Shapira  wrote:
> > >
> > > > +1 for removing MM 1.0 when we cut a breaking release. It is sad to see
> > > > that we are still investing in it (I just saw a KIP toward improving
> > its
> > > > reset policy).
> > > >
> > > > My understanding was that KIP-590 is not breaking compatibility, I
> > think
> > > > Guozhang said that in response to my question on the discussion thread.
> > > >
> > > > Overall, since Kafka has time-based releases, we can make the call on
> > 3.0
> > > > vs 2.7 when we are at "KIP freeze date" and can see which features are
> > > > likely to make it.
> > > >
> > > >
> > > > On Mon, May 4, 2020 at 5:13 PM Ryanne Dolan 
> > > wrote:
> > > >
> > > > > Hey Colin, I think we should wait until after KIP-500's "bridge
> > > release"
> > > > so
> > > > > there is a clean break from Zookeeper after 3.0. The bridge release
> > by
> > > > > definition is an attempt to not break anything, so it theoretically
> > > > doesn't
> > > > > warrant a major release. If that's not the case (i.e. if a single
> > > "bridge
> > > > > release" turns out to be impractical), we should consider forking 3.0
> > > > while
> > > > > maintaining a line of Zookeeper-dependent Kafka in 2.x. That way 3.x
> > > can
> > > > > evolve dramatically without breaking the 2.x line. In particular,
> > > > anything
> > > > > related to removing Zookeeper could land in pre-3.0 while every other
> > > > > feature targets 2.6.
> > > > >
> > > > > If you are proposing 2.6 should be the "bridge release", I think this
> > > is
> > > > > premature given Kafka's time-based release schedule. If the bridge
> > > > features
> > > > > happen to be merged before 2.6's feature freeze, then sure -- let's
> > > make
> > > > > that the bridge release in retrospect. And if we get all the
> > > > post-Zookeeper
> > > > > features merged before 2.7, I'm onboard with naming it "3.0" instead.
> > > > >
> > > > > That said, we should aim to remove legacy MirrorMaker before 3.0 as
> > > well.
> > > > > I'm happy to drive that additional breaking change. Maybe 2.6 can be
> > > the
> > > > > "bridge" for MM2 as well.
> > > > >
> > > > > Ryanne
> > > > >
> > > > > On Mon, May 4, 2020, 5:05 PM Colin McCabe 
> > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > We've had a few proposals recently for incompatible changes.  One
> > of
> > > > them
> > > > > > is my KIP-604: Remove ZooKeeper Flags from the Administrative
> > Tools.
> > > > The
> > > > > > other is Boyang's KIP-590: Redirect ZK Mutation Protocols to the
> > > > > > Controller.  I think it's time to start thinking about Kafka 3.0.
> 

[jira] [Created] (KAFKA-9967) SASL PLAIN authentication with custom callback handler

2020-05-06 Thread indira (Jira)
indira created KAFKA-9967:
-

 Summary: SASL PLAIN authentication with custom callback handler
 Key: KAFKA-9967
 URL: https://issues.apache.org/jira/browse/KAFKA-9967
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.5.0
Reporter: indira


I'm trying to add a custom handler for SASL PLAIN authentication. I followed the 
Kafka documentation, which says to add 
"listener.name.sasl_ssl.plain.sasl.server.callback.handler.class" with the custom 
class name to the server config. But this custom class is never picked up; the 
broker always falls back to the default PlainServerCallbackHandler.

While debugging the kafka-clients code, I observed that the 
SaslChannelBuilder#createServerCallbackHandlers method reads the config property 
as "plain.sasl.server.callback.handler.class", which is different from the one 
mentioned in the docs. I changed the property name in my config file and tried 
again, but it still did not work.

If there is any sample code explaining custom SASL authentication, it would help 
us. I could not find any proper sample code related to this topic.
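
For reference, below is a minimal sketch (not taken from the Kafka docs or source; 
the class name and the hard-coded credential check are purely illustrative) of a 
custom server callback handler for the PLAIN mechanism, which would be referenced 
from the listener-prefixed property quoted above:

import java.util.List;
import java.util.Map;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.UnsupportedCallbackException;
import javax.security.auth.login.AppConfigurationEntry;
import org.apache.kafka.common.security.auth.AuthenticateCallbackHandler;
import org.apache.kafka.common.security.plain.PlainAuthenticateCallback;

public class MyPlainServerCallbackHandler implements AuthenticateCallbackHandler {

    @Override
    public void configure(Map<String, ?> configs, String saslMechanism,
                          List<AppConfigurationEntry> jaasConfigEntries) {
        // Called once with the mechanism ("PLAIN") and the JAAS entries for this listener.
    }

    @Override
    public void handle(Callback[] callbacks) throws UnsupportedCallbackException {
        String username = null;
        for (Callback callback : callbacks) {
            if (callback instanceof NameCallback) {
                username = ((NameCallback) callback).getDefaultName();
            } else if (callback instanceof PlainAuthenticateCallback) {
                PlainAuthenticateCallback plain = (PlainAuthenticateCallback) callback;
                // Replace this hard-coded check with a real credential lookup (LDAP, database, ...).
                boolean ok = "alice".equals(username)
                        && new String(plain.password()).equals("alice-secret");
                plain.authenticated(ok);
            } else {
                throw new UnsupportedCallbackException(callback);
            }
        }
    }

    @Override
    public void close() {
    }
}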



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Kafka 3.0

2020-05-06 Thread Colin McCabe
On Mon, May 4, 2020, at 17:33, Gwen Shapira wrote:
> +1 for removing MM 1.0 when we cut a breaking release. It is sad to see
> that we are still investing in it (I just saw a KIP toward improving its
> reset policy).
> 
> My understanding was that KIP-590 is not breaking compatibility, I think
> Guozhang said that in response to my question on the discussion thread.

Hi Gwen,

The latest proposal for KIP-590 does break compatibility because it requires 
principals to be serializable.  So anyone implementing custom KafkaPrincipal 
subclasses would have to add support for such serialization.
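
Purely to illustrate the kind of change involved (this is not the KIP-590 
interface or wire format, just a hypothetical example), a custom principal that 
carries an extra field would need an explicit serialization hook of some form:

import org.apache.kafka.common.security.auth.KafkaPrincipal;

// Hypothetical custom principal carrying a tenant id in addition to type and name.
public class TenantPrincipal extends KafkaPrincipal {
    private final String tenantId;

    public TenantPrincipal(String name, String tenantId) {
        super(KafkaPrincipal.USER_TYPE, name);
        this.tenantId = tenantId;
    }

    public String tenantId() {
        return tenantId;
    }

    // Toy round-trip serialization; under the proposal, owners of custom principals
    // would have to provide something equivalent so the principal can be forwarded
    // between brokers.
    public String toSerializedForm() {
        return getPrincipalType() + ":" + getName() + ":" + tenantId;
    }

    public static TenantPrincipal fromSerializedForm(String serialized) {
        String[] parts = serialized.split(":", 3);
        return new TenantPrincipal(parts[1], parts[2]);
    }
}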

> 
> Overall, since Kafka has time-based releases, we can make the call on 3.0
> vs 2.7 when we are at "KIP freeze date" and can see which features are
> likely to make it.
>

The release we're discussing won't happen until October or November, so I would 
put this more in the category of mid or long-term planning rather than short 
term planning.  It would be good to get some clarity on what we're going to do 
here.  If we can't drop support for the --zookeeper flags in November then I 
think the KIP-500 work will be delayed.  Remember that there are a lot of 
downstream users who won't migrate off of the --zookeeper flags until they're 
really gone-- things like k8s integrations, puppet, chef, and ansible 
integrations, and so on.

best,
Colin

> 
> 
> On Mon, May 4, 2020 at 5:13 PM Ryanne Dolan  wrote:
> 
> > Hey Colin, I think we should wait until after KIP-500's "bridge release" so
> > there is a clean break from Zookeeper after 3.0. The bridge release by
> > definition is an attempt to not break anything, so it theoretically doesn't
> > warrant a major release. If that's not the case (i.e. if a single "bridge
> > release" turns out to be impractical), we should consider forking 3.0 while
> > maintaining a line of Zookeeper-dependent Kafka in 2.x. That way 3.x can
> > evolve dramatically without breaking the 2.x line. In particular, anything
> > related to removing Zookeeper could land in pre-3.0 while every other
> > feature targets 2.6.
> >
> > If you are proposing 2.6 should be the "bridge release", I think this is
> > premature given Kafka's time-based release schedule. If the bridge features
> > happen to be merged before 2.6's feature freeze, then sure -- let's make
> > that the bridge release in retrospect. And if we get all the post-Zookeeper
> > features merged before 2.7, I'm onboard with naming it "3.0" instead.
> >
> > That said, we should aim to remove legacy MirrorMaker before 3.0 as well.
> > I'm happy to drive that additional breaking change. Maybe 2.6 can be the
> > "bridge" for MM2 as well.
> >
> > Ryanne
> >
> > On Mon, May 4, 2020, 5:05 PM Colin McCabe  wrote:
> >
> > > Hi all,
> > >
> > > We've had a few proposals recently for incompatible changes.  One of them
> > > is my KIP-604: Remove ZooKeeper Flags from the Administrative Tools.  The
> > > other is Boyang's KIP-590: Redirect ZK Mutation Protocols to the
> > > Controller.  I think it's time to start thinking about Kafka 3.0.
> > > Specifically, I think we should move to 3.0 after the 2.6 release.
> > >
> > > From the perspective of KIP-500, in Kafka 3.x we'd like to make running
> > in
> > > a ZooKeeper-less mode possible (but not yet the default.)  This is the
> > > motivation behind KIP-590 and KIP-604, as well as some of the other KIPs
> > > we've done recently.  Since it will take some time to stabilize the new
> > > ZooKeeper-free Kafka code, we will hide it behind an option initially.
> > > (We'll have a KIP describing this all in detail soon.)
> > >
> > > What does everyone think about having Kafka 3.0 come up next after 2.6?
> > > Are there any other things we should change in the 2.6 -> 3.0 transition?
> > >
> > > best,
> > > Colin
> > >
> >
> 
> 
> -- 
> Gwen Shapira
> Engineering Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


Re: [DISCUSS] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2020-05-06 Thread Boyang Chen
Hey David,

thanks for the feedbacks!

On Wed, May 6, 2020 at 2:06 AM David Jacot  wrote:

> Hi Boyang,
>
> While re-reading the KIP, I've got few small questions/comments:
>
> 1. When auto topic creation is enabled, brokers will send a
> CreateTopicRequest
> to the controller instead of writing to ZK directly. It means that
> creation of these
> topics is subject to being rejected with an error if a CreateTopicPolicy is
> used. Today,
> it bypasses the policy entirely. I suppose that clusters allowing auto
> topic creation
> don't have a policy in place so it is not a big deal. I suggest to call
> out explicitly the
> limitation in the KIP though.
>
> That's a good idea, will add to the KIP.


> 2. In the same vein as my first point. How do you plan to handle errors
> when internal
> topics are created by a broker? Do you plan to retry retryable errors
> indefinitely?
>
> I checked a bit on the admin client handling of the create topic RPC. It
seems that
the only retriable exceptions at the moment are NOT_CONTROLLER and
REQUEST_TIMEOUT.
So I guess we just need to retry on these exceptions?
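
As a rough sketch only (not the actual broker code; createInternalTopic below is a
hypothetical stand-in for the forwarded call, and the exception types are the ones
that typically correspond to those error codes), retrying on exactly those
retriable errors could look like this:

import org.apache.kafka.common.errors.NotControllerException;
import org.apache.kafka.common.errors.TimeoutException;

public class InternalTopicCreator {

    interface TopicCreation {
        void createInternalTopic() throws Exception; // hypothetical stand-in for the forwarded call
    }

    // Retry only on the retriable errors discussed above; anything else is fatal.
    static void createWithRetries(TopicCreation creation, int maxAttempts, long backoffMs)
            throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                creation.createInternalTopic();
                return;
            } catch (NotControllerException | TimeoutException e) {
                if (attempt >= maxAttempts) {
                    throw e;
                }
                Thread.sleep(backoffMs);
            }
        }
    }
}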


> 3. Could you clarify which listener will be used for the internal requests?
> Do you plan
> to use the control plane listener or perhaps the inter-broker listener?
>
As we discussed in the KIP, the internal design for the broker->controller
channel has not been done yet, and I feel it makes sense to consolidate the
redirect RPC and the internal topic creation RPC onto this future channel.
Those details will be filled in soon; right now some controller refactoring
effort is still WIP.


> Thanks,
> David
>
> On Mon, May 4, 2020 at 9:37 AM Sönke Liebau
>  wrote:
>
> > Ah, I see, thanks for the clarification!
> >
> > Shouldn't be an issue I think. My understanding of KIPs was always that
> > they are mostly intended as a place to discuss and agree changes up
> front,
> > whereas tracking the actual releases that things go into should be
> handled
> > in Jira.
> > So maybe we just create new jiras for any subsequent work and either link
> > those or make them subtasks (even though this jira is already a subtask
> > itself), that should allow us to properly track all releases that work
> goes
> > into.
> >
> > Thanks for your work on this!!
> >
> > Best,
> > Sönke
> >
> >
> > On Sat, 2 May 2020 at 00:31, Boyang Chen 
> > wrote:
> >
> > > Sure thing Sonke, what I suggest is that usual KIPs get accepted to go
> > into
> > > next release. It could span for a couple of releases because of
> > engineering
> > > time, but no change has to be shipped in specific future releases, like
> > the
> > > backward incompatible change for KafkaPrincipal. But I guess it's not
> > > really a blocker, as long as we stated clearly in the KIP how we are
> > going
> > > to roll things out, and let it partially finish in 2.6.
> > >
> > > Boyang
> > >
> > > On Fri, May 1, 2020 at 2:32 PM Sönke Liebau
> > >  wrote:
> > >
> > > > Hi Boyang,
> > > >
> > > > thanks for the update, sounds reasonable to me. Making it a breaking
> > > change
> > > > is definitely the safer route to go.
> > > >
> > > > Just one quick question regarding your mail, I didn't fully
> understand
> > > what
> > > > you mean by "I think this is the first time we need to introduce a
> KIP
> > > > without having it
> > > > fully accepted in next release."  - could you perhaps explain that
> some
> > > > more very briefly?
> > > >
> > > > Best regards,
> > > > Sönke
> > > >
> > > >
> > > >
> > > > On Fri, 1 May 2020 at 23:03, Boyang Chen  >
> > > > wrote:
> > > >
> > > > > Hey Tom,
> > > > >
> > > > > thanks for the suggestion. As long as we could correctly serialize
> > the
> > > > > principal and embed in the Envelope, I think we could still
> leverage
> > > the
> > > > > controller to do the client request authentication. Although this
> > pays
> > > an
> > > > > extra round trip if the authorization is doomed to fail on the
> > receiver
> > > > > side, having a centralized processing unit is more favorable such
> as
> > > > > ensuring the audit log is consistent instead of scattering between
> > > > > forwarder and receiver.
> > > > >
> > > > > Boyang
> > > > >
> > > > > On Wed, Apr 29, 2020 at 9:50 AM Tom Bentley 
> > > wrote:
> > > > >
> > > > > > Hi Boyang,
> > > > > >
> > > > > > Thanks for the update. In the EnvelopeRequest handling section of
> > the
> > > > KIP
> > > > > > it might be worth saying explicitly that authorization of the
> > request
> > > > > will
> > > > > > happen as normal. Otherwise what you're proposing makes sense to
> > me.
> > > > > >
> > > > > > Thanks again,
> > > > > >
> > > > > > Tom
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Wed, Apr 29, 2020 at 5:27 PM Boyang Chen <
> > > > reluctanthero...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Thanks for the proposed idea Sonke. I reviewed it and had some
> > > > offline
> > > > > > > discussion with Colin, Rajini and Mathew.
> > > > > > >
> > > > > > > We do need to add 

Re: [DISCUSS] Kafka 3.0

2020-05-06 Thread Colin McCabe
On Mon, May 4, 2020, at 17:12, Ryanne Dolan wrote:
> Hey Colin, I think we should wait until after KIP-500's "bridge
> release" so there is a clean break from Zookeeper after 3.0. The
> bridge release by definition is an attempt to not break anything, so
> it theoretically doesn't warrant a major release.

Hi Ryanne,

I think it's important to clarify this a little bit.  The bridge release
(really, releases, plural) allow you to upgrade from a cluster that is
using ZooKeeper to one that is not using ZooKeeper.  But, that doesn't
imply that the bridge release itself doesn't break anything.  Upgrading
to the bridge release itself might involve some minor incompatibility.

Kafka does occasionally have incompatible changes.  In those cases, we
bump the major version number.  One example is that when we went from
Kafka 1.x to Kafka 2.0, we dropped support for JDK7.  This is an
incompatible change.

In fact, we know that the bridge release will involve at least one
incompatible change.  We will need to drop support for the --zookeeper
flags in the command-line tools.

We've been preparing for this change for a long time.  People have spent
a lot of effort designing new APIs that can be used instead of the old
zookeeper-based code that some of the command-line tools used.  We have
also deprecated the old ZK-based flags.  But at the end of the day, it
is still an incompatible change.  So it's unfortunately not possible for the
bridge release to be a 2.x release.

> If that's not the case (i.e. if a single "bridge release" turns out to
> be impractical), we should consider forking 3.0 while maintaining a
> line of Zookeeper-dependent Kafka in 2.x. That way 3.x can evolve
> dramatically without breaking the 2.x line. In particular, anything
> related to removing Zookeeper could land in pre-3.0 while every other
> feature targets 2.6.

Just to be super clear about this, what we want to do here is support operating 
in __either__ KIP-500 mode and legacy mode for a while.  So the same branch 
will have support for both the old way and the new way of managing metadata.

This will allow us to get an "alpha" version of the KIP-500 mode out early for 
people to experiment with.  It also greatly reduces the number of Kafka 
releases we have to make, and the amount of backporting we have to do.

>
> If you are proposing 2.6 should be the "bridge release", I think this
> is premature given Kafka's time-based release schedule. If the bridge
> features happen to be merged before 2.6's feature freeze, then sure --
> let's make that the bridge release in retrospect. And if we get all
> the post-Zookeeper features merged before 2.7, I'm onboard with naming
> it "3.0" instead.
>
> That said, we should aim to remove legacy MirrorMaker before 3.0 as
> well. I'm happy to drive that additional breaking change. Maybe 2.6
> can be the "bridge" for MM2 as well.

I don't have a strong opinion either way about this, but if we want to remove 
the original MirrorMaker, we have to deprecate it first, right?  Are we ready 
to do that?

best,
Colin

>
> Ryanne
>
> On Mon, May 4, 2020, 5:05 PM Colin McCabe  wrote:
>
> > Hi all,
> >
> > We've had a few proposals recently for incompatible changes.  One of
> > them is my KIP-604: Remove ZooKeeper Flags from the Administrative
> > Tools.  The other is Boyang's KIP-590: Redirect ZK Mutation
> > Protocols to the Controller.  I think it's time to start thinking
> > about Kafka 3.0. Specifically, I think we should move to 3.0 after
> > the 2.6 release.
> >
> > From the perspective of KIP-500, in Kafka 3.x we'd like to make
> > running in a ZooKeeper-less mode possible (but not yet the default.)
> > This is the motivation behind KIP-590 and KIP-604, as well as some
> > of the other KIPs we've done recently.  Since it will take some time
> > to stabilize the new ZooKeeper-free Kafka code, we will hide it
> > behind an option initially. (We'll have a KIP describing this all in
> > detail soon.)
> >
> > What does everyone think about having Kafka 3.0 come up next after
> > 2.6? Are there any other things we should change in the 2.6 -> 3.0
> > transition?
> >
> > best, Colin
> >
>


Jenkins build is back to normal : kafka-trunk-jdk8 #4507

2020-05-06 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk14 #61

2020-05-06 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-9768) rest.advertised.listener configuration is not handled properly by the worker

2020-05-06 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-9768.
---
Resolution: Fixed

> rest.advertised.listener configuration is not handled properly by the worker
> 
>
> Key: KAFKA-9768
> URL: https://issues.apache.org/jira/browse/KAFKA-9768
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.6.0, 2.4.2, 2.5.1
>
>
> The {{rest.advertised.listener}} config can currently be set to either "http" 
> or "https", and a listener with that protocol should be used when advertising 
> the URL of the worker to other members of the Connect cluster.
> For example, someone might configure their worker with a {{listeners}} value 
> of {{https://localhost:42069,http://localhost:4761}} and a 
> {{rest.advertised.listener}} value of {{http}}, which should cause the 
> worker to listen on port {{42069}} with TLS and port {{4761}} with plaintext, 
> and advertise the URL {{http://localhost:4761}} to other workers.
> However, the worker instead advertises the URL {{https://localhost:42069}} to 
> other workers. This 
> is because the {{RestServer}} class, which is responsible for determining 
> which URL to advertise to other workers, simply [chooses the first listener 
> whose name begins with the 
> protocol|https://github.com/apache/kafka/blob/0f48446690e42b78a9a6b8c6a9bbab9f01d84cb1/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java#L422]
>  specified in the {{rest.advertised.listener}} config.
> This breaks because "http" is a prefix of "https", so if the advertised 
> listener is "http" but the first listener that's found starts with 
> "https://;, that listener will still be chosen.
> This bug has been present since SSL support (and the 
> {{rest.advertised.listener}} config) were added via 
> [KIP-208|https://cwiki.apache.org/confluence/display/KAFKA/KIP-208%3A+Add+SSL+support+to+Kafka+Connect+REST+interface],
>  in release 1.1.0.
> This bug should only present in the case where a user has set 
> {{rest.advertised.listener}} to {{http}} but the {{listeners}} list begins 
> with a listener that uses {{https}}. A workaround can be performed by 
> changing the order of the {{listeners}} list to put the desired advertised 
> listener at the beginning.
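
To make the failure mode concrete, here is a small stand-alone sketch (not the 
actual RestServer code) contrasting the prefix match described above with an 
exact scheme comparison:

import java.net.URI;
import java.util.List;

public class AdvertisedListenerExample {
    // Buggy pattern: "http" is a prefix of "https", so an https listener can match protocol "http".
    static String pickByPrefix(List<String> listeners, String protocol) {
        return listeners.stream()
                .filter(l -> l.startsWith(protocol))
                .findFirst()
                .orElse(null);
    }

    // Safer pattern: compare the URI scheme exactly.
    static String pickByScheme(List<String> listeners, String protocol) {
        return listeners.stream()
                .filter(l -> URI.create(l).getScheme().equalsIgnoreCase(protocol))
                .findFirst()
                .orElse(null);
    }

    public static void main(String[] args) {
        List<String> listeners = List.of("https://localhost:42069", "http://localhost:4761");
        System.out.println(pickByPrefix(listeners, "http"));  // https://localhost:42069 (wrong listener)
        System.out.println(pickByScheme(listeners, "http"));  // http://localhost:4761 (expected)
    }
}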



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.4-jdk8 #196

2020-05-06 Thread Apache Jenkins Server
See 


Changes:

[konstantine] KAFKA-9830: Implement AutoCloseable in ErrorReporter and 
subclasses

[konstantine] KAFKA-9919: Add logging to KafkaBasedLog::readToLogEnd (#8554)

[konstantine] KAFKA-9633: Ensure ConfigProviders are closed (#8204)


--
[...truncated 5.60 MB...]
org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED


[jira] [Created] (KAFKA-9966) Flaky Test EosBetaUpgradeIntegrationTest#shouldUpgradeFromEosAlphaToEosBeta

2020-05-06 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-9966:
--

 Summary: Flaky Test 
EosBetaUpgradeIntegrationTest#shouldUpgradeFromEosAlphaToEosBeta
 Key: KAFKA-9966
 URL: https://issues.apache.org/jira/browse/KAFKA-9966
 Project: Kafka
  Issue Type: Improvement
  Components: streams, unit tests
Reporter: Matthias J. Sax


[https://builds.apache.org/job/kafka-pr-jdk14-scala2.13/285/testReport/junit/org.apache.kafka.streams.integration/EosBetaUpgradeIntegrationTest/shouldUpgradeFromEosAlphaToEosBeta_true_/]
{quote}java.lang.AssertionError: Condition not met within timeout 6. 
Clients did not startup and stabilize on time. Observed transitions: client-1 
transitions: [KeyValue(RUNNING, REBALANCING), KeyValue(REBALANCING, RUNNING)] 
client-2 transitions: [KeyValue(CREATED, REBALANCING), KeyValue(REBALANCING, 
RUNNING), KeyValue(RUNNING, REBALANCING), KeyValue(REBALANCING, RUNNING)] at 
org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26) at 
org.apache.kafka.test.TestUtils.lambda$waitForCondition$5(TestUtils.java:381) 
at 
org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:429) 
at 
org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:397) 
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:378) at 
org.apache.kafka.streams.integration.EosBetaUpgradeIntegrationTest.waitForStateTransition(EosBetaUpgradeIntegrationTest.java:924)
 at 
org.apache.kafka.streams.integration.EosBetaUpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosBeta(EosBetaUpgradeIntegrationTest.java:741){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : kafka-trunk-jdk11 #1429

2020-05-06 Thread Apache Jenkins Server
See 




Re: KIP-599: Throttle Create Topic, Create Partition and Delete Topic Operations

2020-05-06 Thread Anna Povzner
Hi David and Jun,

I wanted to add to the discussion about using requests/sec vs. time on
server threads (similar to the request quota) for expressing a quota for topic
ops.

I think the request quota does not protect the brokers from overload by itself;
it still requires tuning and sometimes re-tuning, because it depends on the
workload behavior of all users (such as the relative share of requests exempt
from throttling). That makes it not easy to set correctly. Let me give you more
details:

   1. The amount of work that the user can get from the request quota depends
      on the load from other users. We measure and enforce the user's clock time
      on threads (the time between two timestamps, one taken when the operation
      starts and one when it ends). If the user is the only load on the broker,
      it is less likely that their operation will be interrupted by the kernel
      to switch to another thread, and time away from the thread still counts.
      - Pro: this makes it more work-conserving; the user is less limited when
        more resources are available.
      - Con: it is harder for the user to capacity plan, and it can be confusing
        when the broker suddenly stops supporting a load it was supporting
        before.
   2. For the above reason, it makes most sense to maximize the user's quota
      and set it as a percent of the maximum thread capacity (1100 with the
      default broker config; see the note after this list).
   3. However, the actual maximum thread capacity is not really 1100:
      - Some of it will be taken by requests exempt from throttling, and the
        amount depends on the workload. We have seen cases (somewhat rare)
        where requests exempt from throttling take roughly ⅔ of the time on
        threads.
      - We have also seen cases of an overloaded cluster (full queues,
        timeouts, etc.) due to a high request rate while the time used on
        threads was well below the max (1100), such as 600 or 700 (total exempt
        plus non-exempt usage). Basically, when a broker is close to 100% CPU,
        it takes more and more time for the "unaccounted" work, such as a
        thread getting a chance to pick up a request from the queue and take a
        timestamp.
   4. As a result, some tuning is needed to decide on a safe value for total
      thread capacity, from which users can carve out their quotas. Changes in
      users' workloads may require re-tuning if, for example, they dramatically
      change the relative share of non-exempt load.
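
For reference, the 1100 figure presumably reflects the request-quota convention
of counting 100% per thread, assuming the default broker settings
num.network.threads=3 and num.io.threads=8:

   (3 network threads + 8 request handler threads) x 100% per thread = 1100%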


I think the request quota works well for client request load in the sense that
it ensures that different users get a fair/proportional share of resources
during high broker load. If a user cannot get enough resources from their
quota to support their request rate anymore, they can monitor their load
and expand the cluster if needed (or rebalance).

However, I think using time on threads for topic ops could be even more
difficult than a simple request rate (as proposed):

   1. I understand that we don't only care about topic requests tying up the
      controller thread; we also care that they do not create a large extra
      load on the cluster due to LeaderAndIsr and other related requests (this
      is more important for small clusters).
   2. For that reason, tuning a quota in terms of time on threads can be
      harder, because there is no easy way to say how the quota translates into
      a number of operations (that would depend on other broker load).


Since tuning would be required anyway, I see the following workflow if we
express the controller quota in terms of partition mutations per second:

   1. Run the topic workload in isolation (the most expensive one, e.g., create
      topic vs. add partitions) and see how much load it adds at a given
      incoming rate. Choose the quota depending on how much extra load your
      cluster can sustain in addition to its normal load.
   2. It could be useful to publish some experimental results to give ballpark
      numbers and make this sizing easier.


I am interested to see if you agree with the listed assumptions here. I may
have missed something, especially if there is an easier workflow for
setting quota based on time on threads.

Thanks,

Anna


On Thu, Apr 30, 2020 at 8:13 AM Tom Bentley  wrote:

> Hi David,
>
> Thanks for the KIP.
>
> If I understand the proposed throttling algorithm, an initial request would
> be allowed (possibly making K negative) and only subsequent requests
> (before K became positive) would receive the QUOTA_VIOLATED. That would
> mean it was still possible to block the controller from handling other
> events – you just need to do so via making one big request.
>
> While the reasons for rejecting execution throttling make sense given the
> RPCs we have today that seems to be at the cost of still allowing harm to
> the cluster, or did I misunderstand?
>
> Kind regards,
>
> Tom
>
>
>
> On Tue, Apr 28, 2020 at 1:49 AM Jun Rao  wrote:
>
> > Hi, David,
> >
> > Thanks for the reply. A few more 

[jira] [Created] (KAFKA-9965) Uneven distribution with RoundRobinPartitioner in AK 2.4+

2020-05-06 Thread Michael Bingham (Jira)
Michael Bingham created KAFKA-9965:
--

 Summary: Uneven distribution with RoundRobinPartitioner in AK 2.4+
 Key: KAFKA-9965
 URL: https://issues.apache.org/jira/browse/KAFKA-9965
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 2.4.1, 2.5.0, 2.4.0
Reporter: Michael Bingham


{{RoundRobinPartitioner}} states that it will provide equal distribution of 
records across partitions. However, with the enhancements made in KIP-480, it 
may not. In some cases, when a new batch is started, the partitioner may be 
called a second time for the same record:

[https://github.com/apache/kafka/blob/2.4/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L909]

[https://github.com/apache/kafka/blob/2.4/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L934]

Each time the partitioner is called, it increments a counter in 
{{RoundRobinPartitioner}}, so this can result in unequal distribution.

The easiest fix might be to decrement the counter in 
{{RoundRobinPartitioner#onNewBatch}}.
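
A minimal sketch of that idea (illustrative only, not an actual Kafka patch;
the class name is hypothetical and the counter handling is simplified compared
to {{RoundRobinPartitioner}}):

{code:java}
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.utils.Utils;

// Illustrative sketch: a round-robin style partitioner that undoes the extra
// increment when the producer re-invokes it for the same record via onNewBatch().
public class CompensatingRoundRobinPartitioner implements Partitioner {

    private final ConcurrentMap<String, AtomicInteger> counters = new ConcurrentHashMap<>();

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
        int next = counters.computeIfAbsent(topic, t -> new AtomicInteger(0)).getAndIncrement();
        // Simple round robin over all partitions of the topic.
        return Utils.toPositive(next) % partitions.size();
    }

    @Override
    public void onNewBatch(String topic, Cluster cluster, int prevPartition) {
        // KIP-480: the producer may call partition() again for the same record after
        // this callback, so undo one increment to keep the distribution even.
        AtomicInteger counter = counters.get(topic);
        if (counter != null)
            counter.decrementAndGet();
    }

    @Override
    public void close() { }
}
{code}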

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9964) Better description of RoundRobinPartitioner behavior for AK 2.4+

2020-05-06 Thread Michael Bingham (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Bingham resolved KAFKA-9964.

Resolution: Invalid

> Better description of RoundRobinPartitioner behavior for AK 2.4+
> 
>
> Key: KAFKA-9964
> URL: https://issues.apache.org/jira/browse/KAFKA-9964
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 2.4.0, 2.5.0, 2.4.1
>Reporter: Michael Bingham
>Priority: Minor
>
> The Javadocs for {{RoundRobinPartitioner}} currently state:
> {quote}This partitioning strategy can be used when user wants to distribute 
> the writes to all partitions equally
> {quote}
> In AK 2.4+, equal distribution is not guaranteed, even with this partitioner. 
> The enhancements to consider batching made with 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-480%3A+Sticky+Partitioner]
>  affect this partitioner as well.
> So it would be useful to add some additional Javadocs to explain that unless 
> batching is disabled, even distribution of records is not guaranteed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9419) Integer Overflow Possible with CircularIterator

2020-05-06 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-9419.
---
Resolution: Fixed

> Integer Overflow Possible with CircularIterator
> ---
>
> Key: KAFKA-9419
> URL: https://issues.apache.org/jira/browse/KAFKA-9419
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Mollitor
>Priority: Minor
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1
>
>
> Very unlikely to happen, but as someone who gets called in when something 
> goes wrong, I'd like to remove as many possibilities as possible.
>  
> [https://github.com/apache/kafka/blob/8c21fa837df6908d9147805e097407d006d95fd4/clients/src/main/java/org/apache/kafka/common/utils/CircularIterator.java#L39-L43]
>  
> Also, the current implementation will work with a LinkedList, but it won't 
> work well: the repeated calls to `get(i)` make iterating the whole list O(n^2). 
> Using an iterator instead allows other Collections to work reasonably well.
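
A minimal sketch of the iterator-based approach (illustrative only, not the
committed Kafka change; the class name is hypothetical):

{code:java}
import java.util.Collection;
import java.util.Iterator;

// Cycles forever over a non-empty collection using an Iterator rather than an
// int index plus List.get(i), so there is no counter to overflow and iteration
// stays O(1) per element even for a LinkedList.
public class SimpleCircularIterator<T> implements Iterator<T> {

    private final Collection<T> collection;
    private Iterator<T> iterator;

    public SimpleCircularIterator(Collection<T> collection) {
        if (collection.isEmpty())
            throw new IllegalArgumentException("CircularIterator can only be used on non-empty collections");
        this.collection = collection;
        this.iterator = collection.iterator();
    }

    @Override
    public boolean hasNext() {
        return true; // wraps around, so it is never exhausted
    }

    @Override
    public T next() {
        if (!iterator.hasNext())
            iterator = collection.iterator(); // wrap around without any index arithmetic
        return iterator.next();
    }
}
{code}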



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9964) Better description of RoundRobinPartitioner behavior for AK 2.4+

2020-05-06 Thread Michael Bingham (Jira)
Michael Bingham created KAFKA-9964:
--

 Summary: Better description of RoundRobinPartitioner behavior for 
AK 2.4+
 Key: KAFKA-9964
 URL: https://issues.apache.org/jira/browse/KAFKA-9964
 Project: Kafka
  Issue Type: Improvement
  Components: producer 
Affects Versions: 2.4.1, 2.5.0, 2.4.0
Reporter: Michael Bingham


The Javadocs for {{RoundRobinPartitioner}} currently state:
{quote}This partitioning strategy can be used when user wants to distribute the 
writes to all partitions equally
{quote}
In AK 2.4+, equal distribution is not guaranteed, even with this partitioner. 
The enhancements to consider batching made with 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-480%3A+Sticky+Partitioner]
 affect this partitioner as well.


So it would be useful to add some additional Javadocs to explain that unless 
batching is disabled, even distribution of records is not guaranteed.
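
One possible shape for such a Javadoc note (the wording below is only
illustrative, not a committed change):

{code:java}
// Illustrative wording only (suggested addition to the class Javadoc):
/**
 * This partitioning strategy can be used when user wants to distribute the writes to all
 * partitions equally.
 *
 * NOTE: since Apache Kafka 2.4, the producer may call the partitioner a second time for
 * the same record when a new batch is created (KIP-480). As a result, records are only
 * distributed approximately equally across partitions unless batching is disabled.
 */
{code}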



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9947) TransactionsBounceTest may leave threads running

2020-05-06 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9947.

Resolution: Fixed

> TransactionsBounceTest may leave threads running
> 
>
> Key: KAFKA-9947
> URL: https://issues.apache.org/jira/browse/KAFKA-9947
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> I saw this failure recently:
> ```
> 14:28:23 kafka.api.TransactionsBounceTest > testWithGroupId FAILED
> 14:28:23 org.scalatest.exceptions.TestFailedException: Consumed 0 records 
> before timeout instead of the expected 200 records
> 14:28:23 at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
> 14:28:23 at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
> 14:28:23 at 
> org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1389)
> 14:28:23 at org.scalatest.Assertions.fail(Assertions.scala:1091)
> 14:28:23 at org.scalatest.Assertions.fail$(Assertions.scala:1087)
> 14:28:23 at org.scalatest.Assertions$.fail(Assertions.scala:1389)
> 14:28:23 at 
> kafka.utils.TestUtils$.pollUntilAtLeastNumRecords(TestUtils.scala:843)
> 14:28:23 at 
> kafka.api.TransactionsBounceTest.testWithGroupId(TransactionsBounceTest.scala:110)
> ```
> This was followed by a bunch of test failures such as the following:
> ```
> 14:28:38 kafka.api.TransactionsBounceTest > classMethod FAILED
> 14:28:38 java.lang.AssertionError: Found unexpected threads during 
> @AfterClass, allThreads=HashSet(controller-event-thread, 
> ExpirationReaper-0-topic, ExpirationReaper-0-ElectLeader, 
> ExpirationReaper-0-Heartbeat, metrics-meter-tick-thread-2, main, 
> metrics-meter-tick-thread-1, 
> data-plane-kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-41287, 
> scala-execution-context-global-246, transaction-log-manager-0, Reference 
> Handler, scala-execution-context-global-24107, /127.0.0.1:35460 to 
> /127.0.0.1:42451 workers Thread 2, /127.0.0.1:35460 to /127.0.0.1:42451 
> workers Thread 3, kafka-log-cleaner-thread-0, ExpirationReaper-0-Fetch, 
> scala-execution-context-global-12253, ExpirationReaper-0-Rebalance, 
> Common-Cleaner, daemon-broker-bouncer-EventThread, Signal Dispatcher, 
> SensorExpiryThread, daemon-broker-bouncer-SendThread(127.0.0.1:32919), 
> kafka-scheduler-0, kafka-scheduler-3, kafka-scheduler-4, kafka-scheduler-1, 
> kafka-scheduler-2, kafka-scheduler-7, ExpirationReaper-0-DeleteRecords, 
> kafka-scheduler-8, kafka-scheduler-5, kafka-scheduler-6, 
> scala-execution-context-global-4200, kafka-scheduler-9, LogDirFailureHandler, 
> TxnMarkerSenderThread-0, /config/changes-event-process-thread, 
> ExpirationReaper-0-AlterAcls, group-metadata-manager-0, Test worker, 
> Finalizer, scala-execution-context-global-4199, 
> ThrottledChannelReaper-Produce, data-plane-kafka-request-handler-3, 
> data-plane-kafka-request-handler-2, Controller-0-to-broker-0-send-thread, 
> data-plane-kafka-request-handler-1, 
> data-plane-kafka-network-thread-0-ListenerName(PLAINTEXT)-PLAINTEXT-1, 
> data-plane-kafka-request-handler-0, 
> data-plane-kafka-network-thread-0-ListenerName(PLAINTEXT)-PLAINTEXT-0, 
> data-plane-kafka-network-thread-0-ListenerName(PLAINTEXT)-PLAINTEXT-2, 
> scala-execution-context-global-572, scala-execution-context-global-573, 
> data-plane-kafka-request-handler-7, data-plane-kafka-request-handler-6, 
> data-plane-kafka-request-handler-5, scala-execution-context-global-137, 
> data-plane-kafka-request-handler-4, ThrottledChannelReaper-Request, 
> ExpirationReaper-0-Produce, ThrottledChannelReaper-Fetch), 
> unexpected=HashSet(controller-event-thread, daemon-broker-bouncer-EventThread)
> ```
> The test case needs to ensure that `BounceScheduler` gets shut down properly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4506

2020-05-06 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6342; Remove unused workaround for JSON parsing of non-escaped


--
[...truncated 3.10 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task 

[jira] [Created] (KAFKA-9963) High CPU

2020-05-06 Thread Evan Williams (Jira)
Evan Williams created KAFKA-9963:


 Summary: High CPU
 Key: KAFKA-9963
 URL: https://issues.apache.org/jira/browse/KAFKA-9963
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.4.1
Reporter: Evan Williams


When replacing a broker with an empty data dir and the same broker ID, we are 
seeing very high CPU usage during replication, generally up to 100% for some 
time, on a 4-vCPU (EC2 R5) host. This is a 6-host cluster with approximately 
1000 topics and 3000 partitions.

There is of course traffic being served as well, as the broker catches up and 
becomes leader of partitions; however, due to the high replication CPU usage, 
clients start to have connection issues.

CPU profiling during this 'replace' scenario shows the following:

 
{code:java}

 5473000   19.43% 5473  java.util.TreeMap$PrivateEntryIterator.nextEntry
 4975000   17.66% 4975  
scala.collection.convert.Wrappers$JIteratorWrapper.hasNext
 4417000   15.68% 4417  java.util.TreeMap.successor
 17730006.29% 1773  java.util.TreeMap$ValueIterator.next
 1706.03% 1700  java.util.TreeMap$PrivateEntryIterator.hasNext
  6010002.13%  601  
scala.collection.convert.Wrappers$JIteratorWrapper.next
  5160001.83%  516  writev
--- 3885000 ns (13.79%), 3885 samples
  [ 0] java.util.TreeMap$PrivateEntryIterator.nextEntry
  [ 1] java.util.TreeMap$ValueIterator.next
  [ 2] scala.collection.convert.Wrappers$JIteratorWrapper.next
  [ 3] scala.collection.Iterator.find
  [ 4] scala.collection.Iterator.find$
  [ 5] scala.collection.AbstractIterator.find
  [ 6] scala.collection.IterableLike.find
  [ 7] scala.collection.IterableLike.find$
  [ 8] scala.collection.AbstractIterable.find
  [ 9] kafka.log.ProducerStateManager.lastStableOffset
  [10] kafka.log.Log.$anonfun$append$12
  [11] kafka.log.Log.$anonfun$append$2
  [12] kafka.log.Log.append
  [13] kafka.log.Log.appendAsFollower
  [14] 
kafka.cluster.Partition.$anonfun$doAppendRecordsToFollowerOrFutureReplica$1
  [15] kafka.cluster.Partition.doAppendRecordsToFollowerOrFutureReplica
  [16] kafka.cluster.Partition.appendRecordsToFollowerOrFutureReplica
  [17] kafka.server.ReplicaFetcherThread.processPartitionData
  [18] kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$7
  [19] kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$6
  [20] kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$6$adapted
  [21] kafka.server.AbstractFetcherThread$$Lambda$552.191789933.apply
  [22] scala.collection.mutable.ResizableArray.foreach
  [23] scala.collection.mutable.ResizableArray.foreach$
  [24] scala.collection.mutable.ArrayBuffer.foreach
  [25] kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$5
  [26] kafka.server.AbstractFetcherThread.processFetchRequest
  [27] kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3
  [28] kafka.server.AbstractFetcherThread.maybeFetch
  [29] kafka.server.AbstractFetcherThread.doWork
  [30] kafka.utils.ShutdownableThread.run--- 3632000 ns (12.89%), 3632 
samples
  [ 0] scala.collection.convert.Wrappers$JIteratorWrapper.hasNext
  [ 1] scala.collection.Iterator.find
  [ 2] scala.collection.Iterator.find$
  [ 3] scala.collection.AbstractIterator.find
  [ 4] scala.collection.IterableLike.find
  [ 5] scala.collection.IterableLike.find$
  [ 6] scala.collection.AbstractIterable.find
  [ 7] kafka.log.ProducerStateManager.lastStableOffset
  [ 8] kafka.log.Log.$anonfun$append$12
  [ 9] kafka.log.Log.$anonfun$append$2
  [10] kafka.log.Log.append
  [11] kafka.log.Log.appendAsFollower
  [12] 
kafka.cluster.Partition.$anonfun$doAppendRecordsToFollowerOrFutureReplica$1
  [13] kafka.cluster.Partition.doAppendRecordsToFollowerOrFutureReplica
  [14] kafka.cluster.Partition.appendRecordsToFollowerOrFutureReplica
  [15] kafka.server.ReplicaFetcherThread.processPartitionData
  [16] kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$7
  [17] kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$6
  [18] kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$6$adapted
  [19] kafka.server.AbstractFetcherThread$$Lambda$552.191789933.apply
  [20] scala.collection.mutable.ResizableArray.foreach
  [21] scala.collection.mutable.ResizableArray.foreach$
  [22] scala.collection.mutable.ArrayBuffer.foreach
  [23] kafka.server.AbstractFetcherThread.$anonfun$processFetchRequest$5
  [24] kafka.server.AbstractFetcherThread.processFetchRequest
  [25] kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3
  [26] kafka.server.AbstractFetcherThread.maybeFetch
  [27] kafka.server.AbstractFetcherThread.doWork
  [28] kafka.utils.ShutdownableThread.run--- 3236000 ns (11.49%), 3236 
samples
  [ 0] java.util.TreeMap.successor
  [ 1] java.util.TreeMap$PrivateEntryIterator.nextEntry
  [ 2] java.util.TreeMap$ValueIterator.next
  [ 3] 

Build failed in Jenkins: kafka-trunk-jdk14 #60

2020-05-06 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6342; Remove unused workaround for JSON parsing of non-escaped


--
[...truncated 3.08 MB...]
org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task 

Build failed in Jenkins: kafka-2.3-jdk8 #200

2020-05-06 Thread Apache Jenkins Server
See 


Changes:

[konstantine] KAFKA-9830: Implement AutoCloseable in ErrorReporter and 
subclasses

[konstantine] KAFKA-9919: Add logging to KafkaBasedLog::readToLogEnd (#8554)

[konstantine] KAFKA-9633: Ensure ConfigProviders are closed (#8204)


--
[...truncated 2.71 MB...]

kafka.log.ProducerStateManagerTest > testLoadFromCorruptSnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromCorruptSnapshotFile PASSED

kafka.log.ProducerStateManagerTest > testProducerSequenceWrapAround STARTED

kafka.log.ProducerStateManagerTest > testProducerSequenceWrapAround PASSED

kafka.log.ProducerStateManagerTest > testPidExpirationTimeout STARTED

kafka.log.ProducerStateManagerTest > testPidExpirationTimeout PASSED

kafka.log.ProducerStateManagerTest > testAcceptAppendWithSequenceGapsOnReplica 
STARTED

kafka.log.ProducerStateManagerTest > testAcceptAppendWithSequenceGapsOnReplica 
PASSED

kafka.log.ProducerStateManagerTest > testAppendTxnMarkerWithNoProducerState 
STARTED

kafka.log.ProducerStateManagerTest > testAppendTxnMarkerWithNoProducerState 
PASSED

kafka.log.ProducerStateManagerTest > testOldEpochForControlRecord STARTED

kafka.log.ProducerStateManagerTest > testOldEpochForControlRecord PASSED

kafka.log.ProducerStateManagerTest > 
testTruncateAndReloadRemovesOutOfRangeSnapshots STARTED

kafka.log.ProducerStateManagerTest > 
testTruncateAndReloadRemovesOutOfRangeSnapshots PASSED

kafka.log.ProducerStateManagerTest > testStartOffset STARTED

kafka.log.ProducerStateManagerTest > testStartOffset PASSED

kafka.log.ProducerStateManagerTest > testProducerSequenceInvalidWrapAround 
STARTED

kafka.log.ProducerStateManagerTest > testProducerSequenceInvalidWrapAround 
PASSED

kafka.log.ProducerStateManagerTest > testTruncateHead STARTED

kafka.log.ProducerStateManagerTest > testTruncateHead PASSED

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction STARTED

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction PASSED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged STARTED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] STARTED

kafka.log.BrokerCompressionTest > 

Re: [VOTE] KIP-437: Custom replacement for MaskField SMT

2020-05-06 Thread Randall Hauch
Thanks for starting the vote, Yu.

+1 (binding)

Randall

On Sat, Dec 21, 2019 at 1:22 AM Yu Watanabe  wrote:

> Thanks for the KIP.
> I really want this for my project.
>
> +1 (non-binding)
>


Re: [VOTE] KIP-586: Deprecate commit records without record metadata

2020-05-06 Thread Randall Hauch
Thanks for putting this KIP together, Mario.

+1 (binding)

Randall

On Mon, Apr 27, 2020 at 2:05 PM Mario Molina  wrote:

> Hi all,
>
> I'd like to start a vote for KIP-586. You can find the link for this KIP
> here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-586%3A+Deprecate+commit+records+without+record+metadata
>
> Thanks!
> Mario
>


Re: [Discuss] KIP-582 Add a "continue" option for Kafka Connect error handling

2020-05-06 Thread Zihan Li
Hi Chris,

Thanks a lot for the reply. 

To make error messages easy to find, I think we can add the messages to the 
headers of a broken record, just like the existing DLQ mechanism does. That way, 
even if the broken record is stored in an external system, the error messages 
and the broken record stay paired.

Currently, after a broken record is sent to the DLQ, there are usually two ways 
to analyze it. One is to use a consumer tool to examine the messages directly. 
The other is to consume the DLQ again into an external system for analysis. 
This proposal would help in the second case by eliminating the DLQ sink 
connector. In the first case, most open-source consumer tools are not as 
powerful as external tools in terms of querying, aggregating, and pattern 
finding over byte messages. Confluent KSQL is a powerful consumer tool, but it 
is not part of the open-source project. Therefore I think the proposal would 
also help in the first case, not only by flattening the learning curve of 
consumer tools but also by enabling external tools for analysis.
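
For comparison, the existing DLQ route already pairs the error context with the 
broken record when context headers are enabled; a typical sink connector 
configuration for that (property names from KIP-298; the topic name is just an 
example) looks roughly like:

    # illustrative DLQ settings for an existing sink connector
    errors.tolerance=all
    errors.deadletterqueue.topic.name=my-sink-dlq
    errors.deadletterqueue.topic.replication.factor=3
    errors.deadletterqueue.context.headers.enable=true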

Best,
Zihan

On 2020/05/03 17:36:34, Christopher Egerton  wrote: 
> Hi Zihan,
> 
> I guess I'm still unclear on exactly what form this analysis might take. If
> a converter has an issue (de)-serializing a record, for example, the first
> thing I check out is the stack trace in the worker logs that tells me what
> went wrong and where. The same goes for errors thrown during
> transformation. Can we have some concrete examples of what kind of analysis
> performed on byte arrays in external systems might be more informative,
> especially when it would either be performed without easy-to-find log
> messages or require extra effort to make those log messages easy to find
> and associate with the bytes in the external system?
> 
> Cheers,
> 
> Chris
> 
> On Thu, Apr 30, 2020 at 1:01 PM Zihan Li  wrote:
> 
> > Hi Chris and Andrew,
> >
> > Thanks a lot for your reply!
> >
> > I think in most cases it is easier to analyze broken records in an
> > external
> > system rather than in a Kafka DLQ topic. While it might be possible to
> > directly analysis broken records with Kafka, people are generally more
> > familiar with external tools, such as file systems and relational
> > databases.
> > Exporting broken records to those external systems would enable many more
> > analysis tools. Users can use those tools to audit end-to-end data flow
> > and
> > work with upstream teams to improve data quality. As a result, in many
> > cases, DLQ is consumed again by an additional connector for further
> > analysis.
> > So as Chris has mentioned, the point of this KIP is to save the user the
> > extra
> > time and effort to maintain and tune this additional DLQ sink connector.
> >
> > The expected behavior of this new error handling option should be
> > consistent
> > with DLQ. Namely, if any of key, value or header is broken, the record
> > should be sent to SinkTask.putBrokenRecord() instead of DLQ.
> >
> > Best,
> > Zihan
> >
> > On 2020/04/25 20:05:37, Christopher Egerton  wrote:
> > > Hi Zihan,
> > >
> > > Thanks for the changes and the clarifications! I agree that the
> > complexity
> > > of maintaining a second topic and a second connector is a fair amount of
> > > work; to Andrew's question, it seems less about the cost of just running
> > > another connector, and more about managing that second connector (and
> > > topic) when a lot of the logic is identical, such as topic ACLs,
> > > credentials for the connector to access the external system, and other
> > > fine-tuning.
> > >
> > > However, I'm still curious about the general use case here. For example,
> > if
> > > a converter fails to deserialize a record, it seems like the right thing
> > to
> > > do would be to examine the record, try to understand why it's failing,
> > and
> > > then find a converter that can handle it. If the raw byte array for the
> > > Kafka message gets written to the external system instead, what's the
> > > benefit to the user? Yes, they won't have to configure another connector
> > > and manage another topic, but they're still going to want to examine that
> > > data at some point; why would it be easier to deal with malformed records
> > > from an external system than it would from where they originally broke,
> > in
> > > Kafka?
> > >
> > > If we're going to add a new feature like this to the framework, I just
> > want
> > > to make sure that there's a general use case for this that isn't tied to
> > > one specific type of connector, external system, usage pattern, etc.
> > >
> > > Oh, and one other question that came to mind--what would the expected
> > > behavior be if a converter was unable to deserialize a record's key, but
> > > was able to deserialize its value?
> > >
> > > Cheers,
> > >
> > > Chris
> > >
> > > On Sat, Apr 25, 2020 at 12:27 PM Andrew Schofield <
> > andrew_schofi...@live.com>
> > > wrote:
> > >
> > > > Hi Zihan,
> > > > Thanks for the KIP. I have a question about the proposal.
> > > >
> > > > Why do 

Re: [DISCUSS] KIP-587 Suppress detailed responses for handled exceptions in security-sensitive environments

2020-05-06 Thread Connor Penhale
Hi Chris,

Apologies for the name confusion! I've been working with my customer 
sponsor over the last few weeks, and we finally have an answer regarding "only 
exceptions or all responses." This organization is really interested in 
removing stack traces from all responses, which will expand the scope of this 
KIP a bit. I'm going to update the wiki entry, and then would it be reasonable 
to call for a vote?

Thanks!
Connor
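
For readers skimming this thread: the proposal boils down to a single boolean 
worker property (the name below is the one Colin suggested in the quoted 
discussion and is not final), for example:

    # hypothetical Connect worker setting; name and default still under discussion
    enable.rest.response.stack.traces=false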

On 4/17/20, 3:53 PM, "Christopher Egerton"  wrote:

Hi Connor,

That's great, but I think you may have mistaken Colin for me :)

One more thing that should be addressed--the "public interfaces" section
isn't just for Java interfaces, it's for any changes to any public part of
Kafka that users and external developers interact with. As far as Connect
is concerned, this includes (but is not limited to) the REST API and worker
configuration properties, so it might be worth briefly summarizing the
scope of your proposed changes in that section (something like "We plan on
adding a new worker config named  that will affect the REST API under
".

Cheers,

Chris

On Wed, Apr 15, 2020 at 1:00 PM Connor Penhale 
wrote:

> Hi Chris,
>
> I can ask the customer if they can disclose any additional information. I
> provided the information around "PCI-DSS" to give the community a flavor 
of
> the type of environment the customer was operating in. The current mode is
> /not/ insecure, I would agree with this. I would be willing to agree that
> my customer has particular security audit requirements that go above and
> beyond what most environments would consider reasonable. Are you
> comfortable with that language?
>
> " enable.rest.response.stack.traces" works great for me!
>
> I created a new class in the example PR because I wanted the highest
> chance of not gunking up the works by stepping on toes in an important
> class. I figured I'd be reducing risk by creating an alternative
> implementing class. In retrospect, and now that I'm getting a first-hand
> look at Kafka's community process, that is probably unnecessary.
> Additionally, I would agree with your statement that we should modify the
> existing ExceptionMapper to avoid behavior divergence in subsequent
> releases and ensure this feature's particular scope is easy to maintain.
>
> Thanks!
> Connor
>
> On 4/15/20, 1:17 PM, "Colin McCabe"  wrote:
>
> Hi Connor,
>
> I still would like to hear more about whether this feature is required
> for PCI-DSS or any other security certification.  Nobody I talked to 
seemed
> to think that it was-- if there are certifications that would require 
this,
> it would be nice to know.  However, I don't object to implementing this as
> long as we don't imply that the current mode is insecure.
>
> What do you think about using "enable.rest.response.stack.traces" as
> the config name?  It seems like that  makes it clearer that it's a boolean
> value.
>
> It's not really necessary to describe the internal implementation in
> the KIP, but since you mentioned it, it's probably worth considering using
> the current ExceptionMapper class with a different configuration rather
> than creating a new one.
>
> best,
> Colin
>
>
> On Mon, Apr 13, 2020, at 09:04, Connor Penhale wrote:
> > Hi Chris!
> >
> > RE: SSL, indeed, the issue is not that the information is not
> > encrypted, but that there is no authorization layer.
> >
> > I'll be sure to edit the KIP as we continue discussion!
> >
> > RE: the 200 response you highlighted, great catch! I'll work with my
> > customer and get back to you on their audit team's intention! I'm
> > fairly certain I know the answer, but I need to be sure before I
> speak
> > for them.
> >
> > Thanks!
> > Connor
> >
> > On 4/8/20, 11:27 PM, "Christopher Egerton" 
> wrote:
> >
> > Hi Connor,
> >
> > Just a few more remarks!
> >
> > I noticed that you said "Kafka Connect was passing these
> exceptions without
> > authentication." For what it's worth, the Connect REST API can
> be secured
> > with TLS out-of-the-box by configuring the worker with the
> various ssl.*
> > properties, but that doesn't provide any kind of authorization
> layer to
> > provide levels of security depending who the user is. Just
> pointing out in
> > case this helps with your use case.
> >
> > As far as editing the KIP based on discussion goes--it's not 
only
> > acceptable, it's expected :) Ideally, the KIP should be kept
> up-to-date 

[jira] [Resolved] (KAFKA-9112) Combine streams `onAssignment` with `partitionsAssigned` task creation

2020-05-06 Thread Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sophie Blee-Goldman resolved KAFKA-9112.

Fix Version/s: 2.6.0
 Assignee: Guozhang Wang
   Resolution: Fixed

I think this issue was addressed as part of "The Refactor".

> Combine streams `onAssignment` with `partitionsAssigned` task creation
> --
>
> Key: KAFKA-9112
> URL: https://issues.apache.org/jira/browse/KAFKA-9112
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Boyang Chen
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 2.6.0
>
>
> Task manager needs to call `createTasks` inside the partitionsAssigned callback, 
> which runs after the assignor's `onAssignment` callback. This means that during 
> task creation we rely on status changes based on intermediate data 
> structures populated by a different callback, which is hard to reason about. 
> We should consider consolidating the logic into one of the callbacks, preferably 
> `onAssignment`, as it contains the full information needed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-6063) StreamsException is thrown after the changing `partitions`

2020-05-06 Thread Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sophie Blee-Goldman resolved KAFKA-6063.

Resolution: Not A Problem

> StreamsException is thrown after the changing `partitions`
> --
>
> Key: KAFKA-6063
> URL: https://issues.apache.org/jira/browse/KAFKA-6063
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
> Environment: macOS 10.12
> kafka 0.11.0.1
>Reporter: Akihito Nakano
>Priority: Trivial
>  Labels: user-experience
>
> Hi.
> "org.apache.kafka.streams.errors.StreamsException" is thrown in following 
> case.
> h3. Create topic
> {code:java}
> $ bin/kafka-topics.sh --create --zookeeper localhost:2181 
> --replication-factor 1 --partitions 6 --topic word-count-input
> {code}
> h3. Create Kafka Streams Application
> {code:java}
> public class WordCountApp {
> public static void main(String[] args) {
> Properties config = new Properties();
> config.put(StreamsConfig.APPLICATION_ID_CONFIG, 
> "wordcount-application");
> ...
> ...
> {code}
> h3.  Ensure that it works fine
> {code:java}
> $ java -jar wordcount.jar
> KafkaStreams processID: b4a559cb-7075-4ece-a718-5043a432900b
> StreamsThread appId: wordcount-application
> ...
> ...
> {code}
> h3.  Change "partitions"
> {code:java}
> $ bin/kafka-topics.sh --alter --zookeeper localhost:2181 --partitions 8 
> --topic word-count-input
> Adding partitions succeeded!
> {code}
> h3.  When I start Application, StreamsException is thrown
> {code:java}
> $ java -jar wordcount.jar
> KafkaStreams processID: 8a9cbf03-b841-4cb2-9d44-6456b4520522
> StreamsThread appId: wordcount-applicationn
> StreamsThread clientId: 
> wordcount-applicationn-8a9cbf03-b841-4cb2-9d44-6456b4520522
> StreamsThread threadId: 
> wordcount-applicationn-8a9cbf03-b841-4cb2-9d44-6456b4520522-StreamThread-1
> Active tasks:
> Running:
> Suspended:
> Restoring:
> New:
> Standby tasks:
> Running:
> Suspended:
> Restoring:
> New:
> Exception in thread 
> "wordcount-application-8a9cbf03-b841-4cb2-9d44-6456b4520522-StreamThread-1" 
> org.apache.kafka.streams.errors.StreamsException: Could not create internal 
> topics.
>   at 
> org.apache.kafka.streams.processor.internals.InternalTopicManager.makeReady(InternalTopicManager.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.prepareTopic(StreamPartitionAssignor.java:660)
>   at 
> org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign(StreamPartitionAssignor.java:398)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:365)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:522)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1100(AbstractCoordinator.java:93)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:472)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:455)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:808)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:788)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
>   at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:488)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:348)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:168)
>   at 
> 

[jira] [Resolved] (KAFKA-8858) Kafka Streams - Failed to Rebalance Error and stream consumer stuck for some reason

2020-05-06 Thread Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sophie Blee-Goldman resolved KAFKA-8858.

  Assignee: Guozhang Wang
Resolution: Duplicate

> Kafka Streams - Failed to Rebalance Error and stream consumer stuck for some 
> reason
> ---
>
> Key: KAFKA-8858
> URL: https://issues.apache.org/jira/browse/KAFKA-8858
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Apache Kafka 2.1.1
>Reporter: Ante B.
>Assignee: Guozhang Wang
>Priority: Major
>  Labels: Stream, consumer, corrupt, offset, rebalance, 
> transactions
>
> I have a basic Kafka Streams application that reads from a {{topic}}, 
> performs a rolling aggregate, and performs a join to publish to an 
> {{agg_topic}}. Our project hits the timeout failure below in a Kafka 2.1.1 
> environment and we don't know the reason yet.
> Our stream consumer is stuck for some reason. 
> After we changed our group ID to another one it became normal, so it seems the 
> offset data for this consumer is corrupted.
> Can you please help us resolve this problem so that we can revert to the 
> previous consumer group name, because this causes us many inconveniences.
> Ping me if you need any additional info.
> Our temporary workaround is to disable the {{exactly_once}} config, which 
> skips initializing the transactional state. We also reset the offset for the 
> corrupted partition, with no effect.
> Full problem description in the log:
> {code:java}
> [2019-08-30 14:20:02.168] [abc-streamer-StreamThread-21] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
> [abc-streamer-StreamThread-21] Error caught during partition assignment, will 
> abort the current process and re-throw at the end of rebalance: {} 
>  org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 6ms.
> [2019-08-30 14:21:35.407] [abc-streamer-StreamThread-14] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
> [abc-streamer-StreamThread-14] Error caught during partition assignment, will 
> abort the current process and re-throw at the end of rebalance: {} 
>  org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 6ms.
> [2019-08-30 14:22:58.487] [abc-streamer-StreamThread-13] ERROR 
> org.apache.kafka.streams.processor.internals.StreamThread:273 - stream-thread 
> [abc-streamer-StreamThread-13] Error caught during partition assignment, will 
> abort the current process and re-throw at the end of rebalance: {} 
>  org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 6ms.
> {code}
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8207) StickyPartitionAssignor for KStream

2020-05-06 Thread Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sophie Blee-Goldman resolved KAFKA-8207.

Resolution: Not A Problem

Closing this as "Not a Problem" since the partition assignor can't be overridden 
by design. The assignment of tasks to clients should be sticky; in particular, it 
should not return the exact same assignment in the case of a simple restart, as 
mentioned below.

> StickyPartitionAssignor for KStream
> ---
>
> Key: KAFKA-8207
> URL: https://issues.apache.org/jira/browse/KAFKA-8207
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: neeraj
>Priority: Major
>
> In KStreams I am not able to give a sticky partition assignor or my custom 
> partition assignor.
> Overriding the property while building the streams config does not work:
> props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, 
> CustomAssignor.class.getName());
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-4969) State-store workload-aware StreamsPartitionAssignor

2020-05-06 Thread Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sophie Blee-Goldman resolved KAFKA-4969.

Fix Version/s: (was: 1.1.0)
   2.6.0
   Resolution: Fixed

This has been resolved as part of KIP-441. The new assignor should distribute 
tasks of each type evenly.

> State-store workload-aware StreamsPartitionAssignor
> ---
>
> Key: KAFKA-4969
> URL: https://issues.apache.org/jira/browse/KAFKA-4969
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Major
> Fix For: 2.6.0
>
>
> Currently, {{StreamPartitionsAssigner}} does not distinguish different 
> "types" of tasks. For example, a task can be stateless or have one or multiple 
> stores.
> This can lead to a suboptimal task placement: assume there are 2 stateless 
> and 2 stateful tasks and the app is running with 2 instances. To share the 
> "store load" it would be good to place one stateless and one stateful task 
> per instance. Right now, there is no guarantee about this, and it can happen 
> that one instance processes both stateless tasks while the other processes 
> both stateful tasks.
> We should improve {{StreamPartitionAssignor}} and introduce "task types" 
> including a cost model for task placement. We should consider the following 
> parameters:
>  - number of stores
>  - number of sources/sinks
>  - number of processors
>  - regular task vs standby task
>  - in the case of standby tasks, which tasks have progressed the most with 
> respect to restoration
> This improvement should be backed by a design document in the project wiki 
> (no KIP required though) as it's a fairly complex change.
>  
> There have been some additional discussions around task assignment on a 
> related PR https://github.com/apache/kafka/pull/5390
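To make the cost-model idea concrete, here is a purely illustrative sketch; the weights 
and method are invented for this example, and the assignor that eventually shipped with 
KIP-441 takes a different, lag-based approach:

{code:java}
// Toy cost model weighing the parameters listed above; weights are arbitrary.
final class TaskPlacementCost {
    static double cost(int stores, int sourcesAndSinks, int processors, boolean standby) {
        double base = 10.0 * stores + 2.0 * sourcesAndSinks + 1.0 * processors;
        // Standby tasks only restore state, so assume they are cheaper to host.
        return standby ? 0.5 * base : base;
    }
}
{code}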



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-605 Expand Connect Worker Internal Topic Settings

2020-05-06 Thread Randall Hauch
Thanks, folks.

The KIP appears to be satisfactory, so I plan to start a vote thread
tomorrow unless there are objections.

Best regards,

Randall

On Mon, May 4, 2020 at 2:56 PM Christopher Egerton 
wrote:

> Hi Randall,
>
> Looks great! Definitely in favor of a WARN level message instead of failing
> on startup. +1 non-binding when the vote thread comes
>
> Cheers,
>
> Chris
>
> On Mon, May 4, 2020 at 12:51 PM Randall Hauch  wrote:
>
> > Thanks, Chris.
> >
> > 1. Added a use case that describes why you might want to use -1 for
> > replication factor but want to set other properties.
> > 2&3. Thanks for bringing up these special properties that the worker
> always
> > sets. I've added an "Excluded properties" column to the table of new
> > properties, and listed out the topic-specific properties that should not be
> set.
> > I've also added this paragraph:
> >
> > Note that some topic-specific properties are excluded because the
> > distributed worker always sets specific values. Therefore, if a
> distributed
> > worker configuration does set any of these excluded properties, the
> > distributed worker will issue a warning that such properties should not
> be
> > set and will be ignored.
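(For illustration only: with the properties described in this KIP, a distributed worker 
configuration might look roughly like the sketch below. Only the existing *.storage.* 
worker properties are shown; -1 means "use the broker default" per the motivation above, 
and any additional topic-level settings take the prefixed form defined in the KIP.)

  config.storage.topic=connect-configs
  config.storage.replication.factor=-1
  offset.storage.topic=connect-offsets
  offset.storage.partitions=25
  offset.storage.replication.factor=-1
  status.storage.topic=connect-status
  status.storage.replication.factor=-1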
> >
> >
> > I'm proposing that using such properties will result in a WARN log
> message,
> > but allow the worker to continue by ignoring these values. I added a
> > rejected alternative describing why I think failing is not desirable.
> >
> > 4. Added a sentence as suggested.
> >
> > Thanks!
> >
> > On Sun, May 3, 2020 at 12:55 PM Christopher Egerton  >
> > wrote:
> >
> > > Hi Randall,
> > >
> > > Thanks for the KIP! I have a few questions and suggestions but no major
> > > objections.
> > >
> > > 1. The motivation is pretty clear for altering the various
> > > "*.storage.replication.factor" properties to allow -1 as a value now.
> Are
> > > there expected use cases for allowing modification of other properties
> of
> > > these topic configs? It'd be nice to understand why we're adding this
> > extra
> > > configurability to the worker.
> > >
> > > 2. Should the "cleanup.policy" property have some additional guarding
> > logic
> > > to make sure that people don't set it to "delete" or "both"?
> > >
> > > 3. The lack of a "config.storage.partitions" property seems intentional
> > > because the config topic should only ever have one partition. Now that
> > > we're adding all of these other internal topic-related properties, do
> you
> > > think it might be helpful to users if we emit a warning message of some
> > > sort when they try to configure their worker with this property?
> > >
> > > 4. On the topic of compatibility--this is a fairly niche edge case, but
> > any
> > > time we add new configs to the worker we run the risk of overlap with
> > > existing configs for REST extensions that users may have implemented.
> > This
> > > is different from other pluggable interfaces like config providers and
> > > converters, whose properties are namespaced (presumably to avoid
> > collisions
> > > like this). Might be worth it to note this in a small paragraph or even
> > > just a single sentence.
> > >
> > > Cheers,
> > >
> > > Chris
> > >
> > > On Thu, Apr 30, 2020 at 4:32 PM Ryanne Dolan 
> > > wrote:
> > >
> > > > Much needed, thanks.
> > > >
> > > > Ryanne
> > > >
> > > > On Thu, Apr 30, 2020 at 4:59 PM Randall Hauch 
> > wrote:
> > > >
> > > > > Hello!
> > > > >
> > > > > I'd like to use this thread to discuss KIP-605, which expands some
> of
> > > the
> > > > > properties that the Connect distributed worker uses when creating
> > > > internal
> > > > > topics:
> > > > >
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-605%3A+Expand+Connect+Worker+Internal+Topic+Settings
> > > > >
> > > > > Best regards,
> > > > >
> > > > > Randall
> > > > >
> > > >
> > >
> >
>


Re: [DISCUSS] KIP-608: Add a new method to AuthorizerServerInfo Interface

2020-05-06 Thread Jeff Huang
Will update KIP.  
thanks!

On 2020/05/06 15:09:53, Rajini Sivaram  wrote: 
> Hi Jeff,
> 
> Thanks for the KIP. It looks useful since it allows authorizers to use the
> broker's metrics instance. We could perhaps use this in AclAuthorizer to
> generate authorizer metrics?
> 
> Regards,
> 
> Rajini
> 
> On Tue, May 5, 2020 at 9:04 PM Zhiguo Huang  wrote:
> 
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-608+-+Add+a+new+method+to+AuthorizerServerInfo+Interface
> >
> 


Re: [DISCUSS] KIP-608: Add a new method to AuthorizerServerInfo Interface

2020-05-06 Thread Jeff Huang
1. It is intended to return broker metrics.
2. I did some searching; there is no public API that returns the broker Metrics 
class so far, so this would be the first time. I did find that Connect also 
exposes the Metrics class, but it is not in the public API:
org.apache.kafka.connect.runtime.ConnectMetrics
/**
 * Get the {@link Metrics Kafka Metrics} that are managed by this object and
 * that should be used to add sensors and individual metrics.
 *
 * @return the Kafka Metrics instance; never null
 */
public Metrics metrics() {
    return metrics;
}
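For illustration, the addition might look roughly like the sketch below; the exact 
method name, javadoc and placement are whatever the KIP specifies, and the existing 
methods are elided:

import org.apache.kafka.common.metrics.Metrics;

public interface AuthorizerServerInfo {
    // ... existing methods (cluster resource, broker id, endpoints, ...) elided ...

    /**
     * Returns the broker's Metrics instance so that an Authorizer implementation
     * can register its own sensors and metrics.
     */
    Metrics metrics();
}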


On 2020/05/06 15:31:07, Ismael Juma  wrote: 
> Thanks for the KIP. A couple of questions:
> 
> 1. Is it intended for this method to return null or the broker metrics
> instance?
> 2. Is the Metrics class returned in any public APIs today or this the first
> time we are doing it?
> 
> Ismael
> 
> On Wed, May 6, 2020 at 8:10 AM Rajini Sivaram 
> wrote:
> 
> > Hi Jeff,
> >
> > Thanks for the KIP. It looks useful since it allows authorizers to use the
> > broker's metrics instance. We could perhaps use this in AclAuthorizer to
> > generate authorizer metrics?
> >
> > Regards,
> >
> > Rajini
> >
> > On Tue, May 5, 2020 at 9:04 PM Zhiguo Huang 
> > wrote:
> >
> > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-608+-+Add+a+new+method+to+AuthorizerServerInfo+Interface
> > >
> >
> 


[jira] [Created] (KAFKA-9962) Admin client throws UnsupportedVersion exception when talking to old broker

2020-05-06 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-9962:
-

 Summary: Admin client throws UnsupportedVersion exception when 
talking to old broker
 Key: KAFKA-9962
 URL: https://issues.apache.org/jira/browse/KAFKA-9962
 Project: Kafka
  Issue Type: Task
  Components: clients
Affects Versions: 2.4.1, 2.5.0, 2.3.1
Reporter: Jose Armando Garcia Sancio
Assignee: Jose Armando Garcia Sancio


Users are getting this error when using a client version `2.5` against a 
`1.1.0` cluster/broker.

{code}
[2020-04-28 01:09:10,663] ERROR Failed to start KSQL 
(io.confluent.ksql.rest.server.KsqlServerMain:63)

io.confluent.ksql.util.KsqlServerException: Could not get Kafka authorized 
operations!

at 
io.confluent.ksql.services.KafkaClusterUtil.isAuthorizedOperationsSupported(KafkaClusterUtil.java:51)

at 
io.confluent.ksql.security.KsqlAuthorizationValidatorFactory.create(KsqlAuthorizationValidatorFactory.java:52)

at 
io.confluent.ksql.rest.server.KsqlRestApplication.buildApplication(KsqlRestApplication.java:639)

at 
io.confluent.ksql.rest.server.KsqlRestApplication.buildApplication(KsqlRestApplication.java:567)

at 
io.confluent.ksql.rest.server.KsqlServerMain.createExecutable(KsqlServerMain.java:100)

at 
io.confluent.ksql.rest.server.KsqlServerMain.main(KsqlServerMain.java:59)

Caused by: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.UnsupportedVersionException: Attempted to write 
a non-default includeClusterAuthorizedOperations at version 5

at 
org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)

at 
org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)

at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)

at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)

at 
io.confluent.ksql.services.KafkaClusterUtil.isAuthorizedOperationsSupported(KafkaClusterUtil.java:49)

... 5 more

Caused by: org.apache.kafka.common.errors.UnsupportedVersionException: 
Attempted to write a non-default includeClusterAuthorizedOperations at version 5
{code}

Looking at `KIP-430`, it mentions that the client is supposed to handle this 
case:

# Existing clients using older versions will not request authorized operations 
in Describe requests since the default is to disable this feature. This keeps 
older clients compatible with newer brokers.
# Newer clients connecting to older brokers will use the older protocol version 
and hence will not request authorized operations.
# When the AdminClient is talking to a broker which does not support KIP-430, 
it will fill in either null or UnsupportedVersionException for the returned ACL 
operations fields in objects. For example, 
`ConsumerGroupDescription#authorizedOperations` will be null if the broker did 
not supply this information. DescribeClusterResult#authorizedOperations will 
throw an `UnsupportedVersionException` if the broker did not supply this 
information.
# When new operations are added, newer brokers may return operations that are 
not known to older clients. AdminClient will ignore any bit that is set in 
authorized_operations that is not known to the client. The Set 
created by the client from the bits returned by the broker will only include 
operations that the client knows about.

I assume that this deployment environment falls under case 2; we have this in 
the serialization code:

{code}
if (_version >= 8) {
    _writable.writeByte(includeClusterAuthorizedOperations ? (byte) 1 : (byte) 0);
} else {
    if (includeClusterAuthorizedOperations) {
        throw new UnsupportedVersionException("Attempted to write a non-default includeClusterAuthorizedOperations at version " + _version);
    }
}
{code}

It also looks like we blindly set the version independent of the broker's 
supported version:

{code}
MetadataRequest.Builder createRequest(int timeoutMs) {
    // Since this only requests node information, it's safe to pass true for
    // allowAutoTopicCreation (and it simplifies communication with older brokers)
    return new MetadataRequest.Builder(new MetadataRequestData()
            .setTopics(Collections.emptyList())
            .setAllowAutoTopicCreation(true)
            .setIncludeClusterAuthorizedOperations(options.includeAuthorizedOperations()));
}
{code}

To implement case 2, we need to make these properties ignorable.
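One possible client-side guard might look roughly like the following (illustrative only: 
`latestUsableVersion` is a made-up parameter, `options` is the same field used in the 
snippet above, and the eventual fix may instead mark the field as ignorable in the 
request schema):

{code}
MetadataRequest.Builder createRequest(int timeoutMs, short latestUsableVersion) {
    MetadataRequestData data = new MetadataRequestData()
            .setTopics(Collections.emptyList())
            .setAllowAutoTopicCreation(true);
    // Only request authorized operations when the negotiated version supports the
    // field (v8+), so older brokers never receive a non-default value they reject.
    if (latestUsableVersion >= 8) {
        data.setIncludeClusterAuthorizedOperations(options.includeAuthorizedOperations());
    }
    return new MetadataRequest.Builder(data);
}
{code}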



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSS] KIP-607: Add Metrics to Record the Memory Used by RocksDB to Kafka Streams

2020-05-06 Thread Bruno Cadonna
Hi all,

I'd like to discuss KIP-607 that aims to add RocksDB memory usage
metrics to Kafka Streams.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-607%3A+Add+Metrics+to+Record+the+Memory+Used+by+RocksDB+to+Kafka+Streams

Best,
Bruno


Re: [DISCUSS] KIP-608: Add a new method to AuthorizerServerInfo Interface

2020-05-06 Thread Ismael Juma
Thanks for the KIP. A couple of questions:

1. Is it intended for this method to return null or the broker metrics
instance?
2. Is the Metrics class returned in any public APIs today or this the first
time we are doing it?

Ismael

On Wed, May 6, 2020 at 8:10 AM Rajini Sivaram 
wrote:

> Hi Jeff,
>
> Thanks for the KIP. It looks useful since it allows authorizers to use the
> broker's metrics instance. We could perhaps use this in AclAuthorizer to
> generate authorizer metrics?
>
> Regards,
>
> Rajini
>
> On Tue, May 5, 2020 at 9:04 PM Zhiguo Huang 
> wrote:
>
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-608+-+Add+a+new+method+to+AuthorizerServerInfo+Interface
> >
>


Re: [DISCUSS] KIP-608: Add a new method to AuthorizerServerInfo Interface

2020-05-06 Thread Rajini Sivaram
Hi Jeff,

Thanks for the KIP. It looks useful since it allows authorizers to use the
broker's metrics instance. We could perhaps use this in AclAuthorizer to
generate authorizer metrics?

Regards,

Rajini

On Tue, May 5, 2020 at 9:04 PM Zhiguo Huang  wrote:

>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-608+-+Add+a+new+method+to+AuthorizerServerInfo+Interface
>


Please give me the permission to create KIPs

2020-05-06 Thread Eddi Em
My id: eddiejamsession

Eddie


Re: [VOTE] KIP-577: Allow HTTP Response Headers Configured for Kafka Connect

2020-05-06 Thread Randall Hauch
Thanks for putting this together.

+1 (binding)

On Fri, Apr 17, 2020 at 2:02 PM Aneel Nazareth  wrote:

> Thanks Jeff, this seems like it addresses a user need.
>
> +1 (non-binding)
>
> On Fri, Apr 17, 2020 at 1:28 PM Zhiguo Huang 
> wrote:
> >
> > Thanks to everyone for their input. I've incorporated the changes, and I
> > think this is ready for voting.
> >
> > To summarize, the KIP simply proposes to add a feature which allows HTTP
> > response headers configured for Kafka Connect.The KIP can be found here:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP+577%3A+Allow+HTTP+Response+Headers+to+be+Configured+for+Kafka+Connect
> >
> > Thanks!
> >
> > Jeff.
>


Warning free builds and Scala 2.13 as the default build

2020-05-06 Thread Ismael Juma
Hi all,

I would like to share a few recent changes that should make the development
experience a little better:

1. Warning free builds for Scala. We have had this when it comes to Java
code for some time and with Scala 2.13.2's support for warning suppression,
we can finally have it for Scala too.
2. Switched to the Scala 2.13 build by default. The main motivation is that
it's a requirement for warning free builds. Scala 2.12 is still supported
and we validate with it in pull requests and trunk/release branch builds.
3. Use the `-release 8` flag with Scala (we have been doing it with Java
for a while) so that the compilation output is independent of the JDK used
to build (i.e. you can build with Java 8, 11 or 14 and the compilation
output is the same). A bug fix included in Gradle 6.4 unblocked our usage
of this flag.

See the following PRs for more information:
* https://github.com/apache/kafka/pull/8537
* https://github.com/apache/kafka/pull/8429
* https://github.com/apache/kafka/pull/8538

Let me know if you have any questions or if you run into any issues.

Ismael


Build failed in Jenkins: kafka-trunk-jdk11 #1428

2020-05-06 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Pass `-release 8` to scalac and upgrade to Gradle 6.4 (#8538)


--
[...truncated 1.76 MB...]

kafka.coordinator.transaction.TransactionStateManagerTest > 
testDeleteLoadingPartition PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldNotRemoveExpiredTransactionalIdsIfLogAppendFails STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldNotRemoveExpiredTransactionalIdsIfLogAppendFails PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldNotRemovePrepareAbortTransactionalIds STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldNotRemovePrepareAbortTransactionalIds PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldRemoveCompleteCommmitExpiredTransactionalIds STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
shouldRemoveCompleteCommmitExpiredTransactionalIds PASSED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testSuccessfulReimmigration STARTED

kafka.coordinator.transaction.TransactionStateManagerTest > 
testSuccessfulReimmigration PASSED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldRetryPartitionWhenNotLeaderForPartitionError STARTED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldRetryPartitionWhenNotLeaderForPartitionError PASSED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldRemoveTopicPartitionFromWaitingSetOnUnsupportedForMessageFormat STARTED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldRemoveTopicPartitionFromWaitingSetOnUnsupportedForMessageFormat PASSED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenRecordListTooLargeError STARTED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenRecordListTooLargeError PASSED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWhenInvalidProducerEpoch STARTED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWhenInvalidProducerEpoch PASSED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenInvalidRequiredAcksError STARTED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenInvalidRequiredAcksError PASSED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldReEnqueuePartitionsWhenBrokerDisconnected STARTED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldReEnqueuePartitionsWhenBrokerDisconnected PASSED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldRetryPartitionWhenNotEnoughReplicasError STARTED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldRetryPartitionWhenNotEnoughReplicasError PASSED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWhenNoErrors STARTED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWhenNoErrors PASSED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenCorruptMessageError STARTED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenCorruptMessageError PASSED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWhenCoordinatorLoading STARTED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWhenCoordinatorLoading PASSED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWheCoordinatorEpochFenced STARTED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWheCoordinatorEpochFenced PASSED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenUnknownError STARTED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionWhenUnknownError PASSED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWhenNotCoordinator STARTED

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldCompleteDelayedOperationWhenNotCoordinator PASSED


default auto.offset.reset value is reset to none

2020-05-06 Thread Alexander Sibiryakov
Hello,

I'm facing an issue in one of our Kafka Streams applications that uses a
GlobalKTable. The idea is to have a GlobalKTable over a compacted topic and
to be able to re-read it on startup. We had a consumer group and topic some
time ago; recently I recreated the topic, which requires the consumer offsets
to be reset and the topic to be consumed from the beginning. But the
application started to fail with OffsetOutOfRangeException and the message
"Offsets out of range with no configured reset policy for partitions..". I do
have auto.offset.reset set to "latest" in my configuration, but it is
overridden to "none" for the global and restore consumers of the Streams
application.

This exception results in a shutdown loop and requires investigation to
understand what is going on.


This is the line where it is happening
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java#L1244

So the question is: is this behavior of overriding the offset reset policy
intended? If not, please confirm it is a bug, and I will submit a patch.
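For illustration, the effect of the linked line is roughly the following sketch (not the
actual StreamsConfig code; `userSuppliedConsumerProps` stands in for the consumer config
Streams derives from the application's properties):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;

class ResetPolicyOverrideSketch {
    // Streams builds the global/restore consumer config from the user's config
    // and then forces the reset policy, so a user-supplied "latest" never
    // reaches these consumers.
    static Map<String, Object> globalConsumerConfigs(Map<String, Object> userSuppliedConsumerProps) {
        Map<String, Object> props = new HashMap<>(userSuppliedConsumerProps);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");
        return props;
    }
}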

See the Streams output for detailed configs, traceback and exceptions in
attachment.

Thanks,
A.
[2020-05-06 13:03:19,360: INFO/main] (AbstractConfig.java:347) - StreamsConfig 
values:
application.id = *-green-staging
application.server =
bootstrap.servers = [*:9092]
buffered.records.per.partition = 1000
cache.max.bytes.buffering = 10485760
client.id = *-green-staging
commit.interval.ms = 3
connections.max.idle.ms = 54
default.deserialization.exception.handler = class 
org.apache.kafka.streams.errors.LogAndContinueExceptionHandler
default.key.serde = class 
org.apache.kafka.common.serialization.Serdes$StringSerde
default.production.exception.handler = class 
com.test.LogAndContinueProductionExceptionHandler
default.timestamp.extractor = class 
org.apache.kafka.streams.processor.FailOnInvalidTimestamp
default.value.serde = class 
io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
max.task.idle.ms = 0
metadata.max.age.ms = 30
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 3
num.standby.replicas = 0
num.stream.threads = 1
partition.grouper = class 
org.apache.kafka.streams.processor.DefaultPartitionGrouper
poll.ms = 100
processing.guarantee = at_least_once
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
replication.factor = 1
request.timeout.ms = 2
retries = 2147483647
retry.backoff.ms = 500
rocksdb.config.setter = null
security.protocol = SASL_SSL
send.buffer.bytes = 131072
state.cleanup.delay.ms = 60
state.dir = /mnt/storage
topology.optimization = none
upgrade.from = null
windowstore.changelog.additional.retention.ms = 8640

[2020-05-06 13:03:19,386: INFO/main] (KafkaStreams.java:686) - stream-client 
[*-green-staging] Kafka Streams version: 2.4.0
[2020-05-06 13:03:19,386: INFO/main] (KafkaStreams.java:687) - stream-client 
[*-green-staging] Kafka Streams commit ID: 77a89fcf8d7fa018
[2020-05-06 13:03:21,446: INFO/main] (AbstractConfig.java:347) - ConsumerConfig 
values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = none
bootstrap.servers = [*:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = *-green-staging-global-consumer
client.rack =
connections.max.idle.ms = 54
default.api.timeout.ms = 6
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = false
isolation.level = read_uncommitted
key.deserializer = class 
org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 30
max.poll.records = 1000
metadata.max.age.ms = 30
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 3
partition.assignment.strategy = [class 
org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 2
retry.backoff.ms = 500
sasl.client.callback.handler.class = null
sasl.jaas.config = [hidden]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 6
 

RE: [DISCUSS] Kafka 3.0

2020-05-06 Thread Edoardo Comar
Ryanne Dolan  wrote on 05/05/2020 20:36:49:

> Exactly. Why would 3.1 be the breaking release? No one would expect
> everything to break going from 3.0 to 3.1

Agree completely

> 
> Ryanne
> 
> On Tue, May 5, 2020 at 2:34 PM Gwen Shapira  wrote:
> 
> > It sounds like the decision to make the next release 3.0 is a bit 
arbitrary
> > then?
> >
> > With Exactly Once, we announced 1.0 as one release after the one where 
EOS
> > shipped, when we felt it was "ready" (little did we know... but that's
> > another story).
> > 2.0 was breaking due to us dropping Java 7.
> >
> > In 3.0 it sounds like nothing is breaking and our big change won't be
> > complete... so, what's the motivation for the major release?
> >
> > On Tue, May 5, 2020 at 12:12 PM Guozhang Wang  
wrote:
> >
> > > I think there's a confusion regarding the "bridge release" proposed 
in
> > > KIP-500: should it be release "3.0" or be release "2.X" (i.e. the 
last
> > > minor release before 3.0).
> > >
> > > My understanding is that "3.0" would be the bridge release, i.e. it 
would
> > > not break any compatibility, but 3.1 potentially would, so an 
upgrade
> > from
> > > 2.5 to 3.1 would need to first upgrade to 3.0, and then to 3.1. For
> > > clients, all broker-client compatibility is still maintained in 3.1+ 
so
> > that
> > > 2.x producer / consumer clients could still talk to 3.1+ brokers, 
only
> > > those old versioned scripts with "--zookeeper" would not work 
with
> > 3.1+
> > > brokers anymore since there are no zookeepers.
> > >
> > >
> > > Guozhang
> > >
> > > On Mon, May 4, 2020 at 5:33 PM Gwen Shapira  
wrote:
> > >
> > > > +1 for removing MM 1.0 when we cut a breaking release. It is sad 
to see
> > > > that we are still investing in it (I just saw a KIP toward 
improving
> > its
> > > > reset policy).
> > > >
> > > > My understanding was that KIP-590 is not breaking compatibility, I
> > think
> > > > Guozhang said that in response to my question on the discussion 
thread.
> > > >
> > > > Overall, since Kafka has time-based releases, we can make the call 
on
> > 3.0
> > > > vs 2.7 when we are at "KIP freeze date" and can see which features 
are
> > > > likely to make it.
> > > >
> > > >
> > > > On Mon, May 4, 2020 at 5:13 PM Ryanne Dolan 

> > > wrote:
> > > >
> > > > > Hey Colin, I think we should wait until after KIP-500's "bridge
> > > release"
> > > > so
> > > > > there is a clean break from Zookeeper after 3.0. The bridge 
release
> > by
> > > > > definition is an attempt to not break anything, so it 
theoretically
> > > > doesn't
> > > > > warrant a major release. If that's not the case (i.e. if a 
single
> > > "bridge
> > > > > release" turns out to be impractical), we should consider 
forking 3.0
> > > > while
> > > > > maintaining a line of Zookeeper-dependent Kafka in 2.x. That way 
3.x
> > > can
> > > > > evolve dramatically without breaking the 2.x line. In 
particular,
> > > > anything
> > > > > related to removing Zookeeper could land in pre-3.0 while every 
other
> > > > > feature targets 2.6.
> > > > >
> > > > > If you are proposing 2.6 should be the "bridge release", I think 
this
> > > is
> > > > > premature given Kafka's time-based release schedule. If the 
bridge
> > > > features
> > > > > happen to be merged before 2.6's feature freeze, then sure -- 
let's
> > > make
> > > > > that the bridge release in retrospect. And if we get all the
> > > > post-Zookeeper
> > > > > features merged before 2.7, I'm onboard with naming it "3.0" 
instead.
> > > > >
> > > > > That said, we should aim to remove legacy MirrorMaker before 3.0 
as
> > > well.
> > > > > I'm happy to drive that additional breaking change. Maybe 2.6 
can be
> > > the
> > > > > "bridge" for MM2 as well.
> > > > >
> > > > > Ryanne
> > > > >
> > > > > On Mon, May 4, 2020, 5:05 PM Colin McCabe 
> > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > We've had a few proposals recently for incompatible changes. 
One
> > of
> > > > them
> > > > > > is my KIP-604: Remove ZooKeeper Flags from the Administrative
> > Tools.
> > > > The
> > > > > > other is Boyang's KIP-590: Redirect ZK Mutation Protocols to 
the
> > > > > > Controller.  I think it's time to start thinking about Kafka 
3.0.
> > > > > > Specifically, I think we should move to 3.0 after the 2.6 
release.
> > > > > >
> > > > > > From the perspective of KIP-500, in Kafka 3.x we'd like to 
make
> > > running
> > > > > in
> > > > > > a ZooKeeper-less mode possible (but not yet the default.) This 
is
> > > the
> > > > > > motivation behind KIP-590 and KIP-604, as well as some of the 
other
> > > > KIPs
> > > > > > we've done recently.  Since it will take some time to 
stabilize the
> > > new
> > > > > > ZooKeeper-free Kafka code, we will hide it behind an option
> > > initially.
> > > > > > (We'll have a KIP describing this all in detail soon.)
> > > > > >
> > > > > > What does everyone think about having Kafka 3.0 come up next 
after
> > > 2.6?
> > > > > > Are there any other things we should 

Build failed in Jenkins: kafka-2.5-jdk8 #109

2020-05-06 Thread Apache Jenkins Server
See 


Changes:

[cmccabe] KAFKA-9718; Don't log passwords for AlterConfigs in request logs 
(#8294)

[cmccabe] KAFKA-9625: Fix altering and describing dynamic broker configurations


--
[...truncated 4.54 MB...]
kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures STARTED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures PASSED

kafka.api.ConsumerBounceTest > 
testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize STARTED

kafka.api.ConsumerBounceTest > 
testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize PASSED

kafka.api.ConsumerBounceTest > testSubscribeWhenTopicUnavailable STARTED

kafka.api.ConsumerBounceTest > testSubscribeWhenTopicUnavailable PASSED

kafka.api.ConsumerBounceTest > 
testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup STARTED

kafka.api.ConsumerBounceTest > 
testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures STARTED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures SKIPPED

kafka.api.SaslMultiMechanismConsumerTest > testMultipleBrokerMechanisms STARTED

kafka.api.SaslMultiMechanismConsumerTest > testMultipleBrokerMechanisms PASSED

kafka.api.SaslMultiMechanismConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslMultiMechanismConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslMultiMechanismConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslMultiMechanismConsumerTest > testSimpleConsumption PASSED

kafka.api.ApiVersionTest > testApiVersionUniqueIds STARTED

kafka.api.ApiVersionTest > testApiVersionUniqueIds PASSED

kafka.api.ApiVersionTest > testMinSupportedVersionFor STARTED

kafka.api.ApiVersionTest > testMinSupportedVersionFor PASSED

kafka.api.ApiVersionTest > testShortVersion STARTED

kafka.api.ApiVersionTest > testShortVersion PASSED

kafka.api.ApiVersionTest > testApply STARTED

kafka.api.ApiVersionTest > testApply PASSED

kafka.api.ApiVersionTest > testApiVersionValidator STARTED

kafka.api.ApiVersionTest > testApiVersionValidator PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime 
STARTED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime 
PASSED

kafka.api.SslProducerSendTest > testClose STARTED

kafka.api.SslProducerSendTest > testClose PASSED

kafka.api.SslProducerSendTest > testFlush STARTED

kafka.api.SslProducerSendTest > testFlush PASSED

kafka.api.SslProducerSendTest > testSendToPartition STARTED

kafka.api.SslProducerSendTest > testSendToPartition PASSED

kafka.api.SslProducerSendTest > testSendOffset STARTED

kafka.api.SslProducerSendTest > testSendOffset PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime STARTED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread STARTED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread STARTED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread PASSED

kafka.api.SslProducerSendTest > testSendBeforeAndAfterPartitionExpansion STARTED

kafka.api.SslProducerSendTest > testSendBeforeAndAfterPartitionExpansion PASSED

kafka.api.GroupEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.GroupEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.GroupEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.GroupEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.GroupEndToEndAuthorizationTest > testProduceConsumeWithPrefixedAcls 
STARTED

kafka.api.GroupEndToEndAuthorizationTest > testProduceConsumeWithPrefixedAcls 
PASSED

kafka.api.GroupEndToEndAuthorizationTest > testProduceConsumeViaAssign STARTED

kafka.api.GroupEndToEndAuthorizationTest > testProduceConsumeViaAssign PASSED

kafka.api.GroupEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.GroupEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.GroupEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl STARTED

kafka.api.GroupEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl PASSED

kafka.api.GroupEndToEndAuthorizationTest > testProduceConsumeWithWildcardAcls 
STARTED

kafka.api.GroupEndToEndAuthorizationTest > testProduceConsumeWithWildcardAcls 
PASSED

kafka.api.GroupEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.GroupEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.GroupEndToEndAuthorizationTest > 

Re: [DISCUSS] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2020-05-06 Thread David Jacot
Hi Boyang,

While re-reading the KIP, I've got few small questions/comments:

1. When auto topic creation is enabled, brokers will send a CreateTopicRequest
to the controller instead of writing to ZK directly. It means that the creation
of these topics is subject to being rejected with an error if a
CreateTopicPolicy is used. Today, it bypasses the policy entirely. I suppose
that clusters allowing auto topic creation don't have a policy in place, so it
is not a big deal. I suggest calling out this limitation explicitly in the KIP,
though.

2. In the same vein as my first point. How do you plan to handle errors
when internal
topics are created by a broker? Do you plan to retry retryable errors
indefinitely?

3. Could you clarify which listener will be used for the internal requests?
Do you plan
to use the control plane listener or perhaps the inter-broker listener?

Thanks,
David

On Mon, May 4, 2020 at 9:37 AM Sönke Liebau
 wrote:

> Ah, I see, thanks for the clarification!
>
> Shouldn't be an issue I think. My understanding of KIPs was always that
> they are mostly intended as a place to discuss and agree changes up front,
> whereas tracking the actual releases that things go into should be handled
> in Jira.
> So maybe we just create new jiras for any subsequent work and either link
> those or make them subtasks (even though this jira is already a subtask
> itself), that should allow us to properly track all releases that work goes
> into.
>
> Thanks for your work on this!!
>
> Best,
> Sönke
>
>
> On Sat, 2 May 2020 at 00:31, Boyang Chen 
> wrote:
>
> > Sure thing Sonke, what I suggest is that usual KIPs get accepted to go
> into
> > next release. It could span for a couple of releases because of
> engineering
> > time, but no change has to be shipped in specific future releases, like
> the
> > backward incompatible change for KafkaPrincipal. But I guess it's not
> > really a blocker, as long as we stated clearly in the KIP how we are
> going
> > to roll things out, and let it partially finish in 2.6.
> >
> > Boyang
> >
> > On Fri, May 1, 2020 at 2:32 PM Sönke Liebau
> >  wrote:
> >
> > > Hi Boyang,
> > >
> > > thanks for the update, sounds reasonable to me. Making it a breaking
> > change
> > > is definitely the safer route to go.
> > >
> > > Just one quick question regarding your mail, I didn't fully understand
> > what
> > > you mean by "I think this is the first time we need to introduce a KIP
> > > without having it
> > > fully accepted in next release."  - could you perhaps explain that some
> > > more very briefly?
> > >
> > > Best regards,
> > > Sönke
> > >
> > >
> > >
> > > On Fri, 1 May 2020 at 23:03, Boyang Chen 
> > > wrote:
> > >
> > > > Hey Tom,
> > > >
> > > > thanks for the suggestion. As long as we could correctly serialize
> the
> > > > principal and embed in the Envelope, I think we could still leverage
> > the
> > > > controller to do the client request authentication. Although this
> pays
> > an
> > > > extra round trip if the authorization is doomed to fail on the
> receiver
> > > > side, having a centralized processing unit is more favorable such as
> > > > ensuring the audit log is consistent instead of scattering between
> > > > forwarder and receiver.
> > > >
> > > > Boyang
> > > >
> > > > On Wed, Apr 29, 2020 at 9:50 AM Tom Bentley 
> > wrote:
> > > >
> > > > > Hi Boyang,
> > > > >
> > > > > Thanks for the update. In the EnvelopeRequest handling section of
> the
> > > KIP
> > > > > it might be worth saying explicitly that authorization of the
> request
> > > > will
> > > > > happen as normal. Otherwise what you're proposing makes sense to
> me.
> > > > >
> > > > > Thanks again,
> > > > >
> > > > > Tom
> > > > >
> > > > >
> > > > >
> > > > > On Wed, Apr 29, 2020 at 5:27 PM Boyang Chen <
> > > reluctanthero...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Thanks for the proposed idea Sonke. I reviewed it and had some
> > > offline
> > > > > > discussion with Colin, Rajini and Mathew.
> > > > > >
> > > > > > We do need to add serializability to the PrincipalBuilder
> > interface,
> > > > but
> > > > > we
> > > > > > should not make any default implementation which could go wrong
> and
> > > > messy
> > > > > > up with the security in a production environment if the user
> > neglects
> > > > it.
> > > > > > Instead we need to make it required and backward incompatible.
> So I
> > > > > > integrated your proposed methods and expand the Envelope RPC
> with a
> > > > > couple
> > > > > > of more fields for audit log purpose as well.
> > > > > >
> > > > > > Since the KafkaPrincipal builder serializability is a binary
> > > > incompatible
> > > > > > change, I propose (also stated in the KIP) the following
> > > implementation
> > > > > > plan:
> > > > > >
> > > > > >1. For next *2.x* release:
> > > > > >   1. Get new admin client forwarding changes
> > > > > >   2. Get the Envelope RPC implementation
> > > > > >   3. Get the forwarding path working and validate the
> 

Build failed in Jenkins: kafka-trunk-jdk8 #4505

2020-05-06 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Pass `-release 8` to scalac and upgrade to Gradle 6.4 (#8538)


--
[...truncated 3.06 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task 

Re: Avro DeSerializeation Issue in Kafka Streams

2020-05-06 Thread Suresh Chidambaram
Thanks for the info Nagendra.

Thanks
C Suresh

On Wednesday, May 6, 2020, Nagendra Korrapati 
wrote:

> When specific.avro.reader is set to true, the deserializer tries to create an
> instance of the generated class. The class name is formed by reading the
> (writer's) schema from the schema registry and concatenating the namespace and
> record name. It is trying to create that instance, and the class is not found
> on the classpath. I am not sure how it formed the name "XYZ-Table", but
> checking the namespace and name of the record in the schema registry and
> making the corresponding class available on the classpath should solve it.
> This is my understanding; I may be wrong!
>
> Nagendra
>
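For reference, the consumer/serde settings involved here might look roughly like the
sketch below (the property names are the Confluent Avro serde configs; the URL is a
placeholder):

import java.util.Properties;

class SpecificAvroConsumerSettings {
    static Properties serdeProps() {
        Properties props = new Properties();
        props.put("schema.registry.url", "http://localhost:8081"); // placeholder
        // With specific.avro.reader=true the deserializer instantiates the generated
        // SpecificRecord class named <namespace>.<record name> from the writer's
        // schema, so that class must be on the application's classpath.
        props.put("specific.avro.reader", "true");
        return props;
    }
}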
> > On May 5, 2020, at 11:12 AM, Suresh Chidambaram 
> wrote:
> >
> > Hi All,
> >
> > Currently, I'm working on a usecase wherein I have to deserialie an Avro
> > object and convert to some other format of Avro. Below is the  flow.
> >
> > DB -> Source Topic(Avro format) -> Stream Processor -> Target Topic (Avro
> > as nested object).
> >
> > When I deserialize the message from the Source Topic, the below exception
> > is thrown.
> >
> > Could someone help me resolving this issue?
> >
> > 2020-05-05 10:29:34.218  INFO 13804 --- [-StreamThread-1]
> > o.a.k.clients.consumer.KafkaConsumer : [Consumer
> > clientId=confluent-kafka-poc-client-StreamThread-1-restore-consumer,
> > groupId=null] Unsubscribed all topics or patterns and assigned partitions
> > 2020-05-05 10:29:34.218  INFO 13804 --- [-StreamThread-1]
> > o.a.k.clients.consumer.KafkaConsumer : [Consumer
> > clientId=confluent-kafka-poc-client-StreamThread-1-restore-consumer,
> > groupId=null] Unsubscribed all topics or patterns and assigned partitions
> > 2020-05-05 10:29:34.218  INFO 13804 --- [-StreamThread-1]
> > o.a.k.s.p.internals.StreamThread : stream-thread
> > [confluent-kafka-poc-client-StreamThread-1] State transition from
> > PARTITIONS_ASSIGNED to RUNNING
> > 2020-05-05 10:29:34.219  INFO 13804 --- [-StreamThread-1]
> > org.apache.kafka.streams.KafkaStreams: stream-client
> > [confluent-kafka-poc-client] State transition from REBALANCING to RUNNING
> > 2020-05-05 10:29:34.220  INFO 13804 --- [-StreamThread-1]
> > o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer
> > clientId=confluent-kafka-poc-client-StreamThread-1-consumer,
> > groupId=confluent-kafka-poc] Found no committed offset for partition
> > DEMO-poc-0
> > 2020-05-05 10:29:34.228  INFO 13804 --- [-StreamThread-1]
> > o.a.k.c.c.internals.SubscriptionState: [Consumer
> > clientId=confluent-kafka-poc-client-StreamThread-1-consumer,
> > groupId=confluent-kafka-poc] Resetting offset for partition DEMO-poc-0 to
> > offset 0.
> > 2020-05-05 10:30:12.886 ERROR 13804 --- [-StreamThread-1]
> > o.a.k.s.e.LogAndFailExceptionHandler : Exception caught during
> > Deserialization, taskId: 0_0, topic: DEMO-poc, partition: 0, offset: 0
> >
> > org.apache.kafka.common.errors.SerializationException: Error
> deserializing
> > Avro message for id 1421
> >
> > *Caused by: org.apache.kafka.common.errors.SerializationException: Could
> > not find class "XYZ-Table" specified in writer's schema whilst finding
> > reader's schema for a SpecificRecord.*
> > 2020-05-05 10:30:12.888 ERROR 13804 --- [-StreamThread-1]
> > o.a.k.s.p.internals.StreamThread : stream-thread
> > [confluent-kafka-poc-client-StreamThread-1] Encountered the following
> > unexpected Kafka exception during processing, this usually indicate
> Streams
> > internal errors:
> >
> > org.apache.kafka.streams.errors.StreamsException: Deserialization
> exception
> > handler is set to fail upon a deserialization error. If you would rather
> > have the streaming pipeline continue after a deserialization error,
> please
> > set the default.deserialization.exception.handler appropriately.
> >at
> > org.apache.kafka.streams.processor.internals.RecordDeserializer.
> deserialize(RecordDeserializer.java:80)
> > ~[kafka-streams-5.3.0-ccs.jar!/:na]
> >at
> > org.apache.kafka.streams.processor.internals.RecordQueue.updateHead(
> RecordQueue.java:158)
> > ~[kafka-streams-5.3.0-ccs.jar!/:na]
> >at
> > org.apache.kafka.streams.processor.internals.RecordQueue.addRawRecords(
> RecordQueue.java:100)
> > ~[kafka-streams-5.3.0-ccs.jar!/:na]
> >at
> > org.apache.kafka.streams.processor.internals.
> PartitionGroup.addRawRecords(PartitionGroup.java:136)
> > ~[kafka-streams-5.3.0-ccs.jar!/:na]
> >at
> > org.apache.kafka.streams.processor.internals.StreamTask.addRecords(
> StreamTask.java:746)
> > ~[kafka-streams-5.3.0-ccs.jar!/:na]
> >at
> > org.apache.kafka.streams.processor.internals.StreamThread.
> addRecordsToTasks(StreamThread.java:1023)
> > ~[kafka-streams-5.3.0-ccs.jar!/:na]
> >at
> > org.apache.kafka.streams.processor.internals.StreamThread.runOnce(
> StreamThread.java:861)
> > ~[kafka-streams-5.3.0-ccs.jar!/:na]
> >at
> > org.apache.kafka.streams.processor.internals.StreamThread.runLoop(
> 

Re: [DISCUSS] KIP-585: Conditional SMT

2020-05-06 Thread Andrew Schofield
Hi Tom,
I think that either proposal is sufficiently readable and they both have 
elements of
cryptic syntax. I remember talking to an engineering director once who said he 
didn't hire
"curly braces people" any more. This is an interface for curly braces people 
whichever
way we go here.

I slightly prefer the predicates as a top-level concept rather than as a feature
of the If transform. I also don't like the namespacing behaviour of the If 
transform which
I guess works like variable declaration so each scope has its own namespace of 
transform
aliases. I expect the following is valid but ill-advised.

transforms: t1
transforms.t1.type: If
transforms.t1.!test.type: TopicNameMatch
transforms.t1.!test.pattern: my-prefix-.*
transforms.t1.then: t1
transforms.t1.then.t1.type: ExtractField$Key
transforms.t1.then.t1.field: c1

I would love to see conditional application of transforms in SMT in Kafka 2.6.
Let's see what other people think and get a consensus view on how to spell it.

Cheers,
Andrew

On 05/05/2020, 15:20, "Tom Bentley"  wrote:

Hi,

Again, reusing the existing numbering...

3) I had exactly the same thought. The reason I didn't already rename it
was because:

* Filter is not exactly wrong (by default it just filters out all the
messages)
* Filter is a useful hint about how it's intended to be used.

But I don't have a strong opinion, so I'm happy to rename it if people
think that Drop is better.

4) Putting the negation on the use site–rather than the declaration
site–seems like a good idea, since it makes it easier to get 'else'-like
behaviour.

But, playing devil's advocate, I wondered how all this compares with the
original proposal, so I went to the trouble of writing out a few examples
to compare the two.

Example of a basic 'if'
Current proposal:
transforms: t2
transforms.t2.?predicate: !has-my-prefix
transforms.t2.type: ExtractField$Key
transforms.t2.field: c1
?predicates: has-my-prefix
?predicates.has-my-prefix.type: TopicNameMatch
?predicates.has-my-prefix.pattern: my-prefix-.*
Original (or originalish, see notes below):
transforms: t1
transforms.t1.type: If
transforms.t1.!test.type: TopicNameMatch
transforms.t1.!test.pattern: my-prefix-.*
transforms.t1.then: t2
transforms.t1.then.t2.type: ExtractField$Key
transforms.t1.then.t2.field: c1

Example of 'if/else':
Current:
transforms: t2,t3
transforms.t2.?predicate: !has-my-prefix
transforms.t2.type: ExtractField$Key
transforms.t2.field: c1
transforms.t3.?predicate: has-my-prefix
transforms.t3.type: ExtractField$Key
transforms.t3.field: c2
?predicates: has-my-prefix
?predicates.has-my-prefix.type: TopicNameMatch
?predicates.has-my-prefix.pattern: my-prefix-.*
Original:
transforms: t1
transforms.t1.type: If
transforms.t1.!test.type: TopicNameMatch
transforms.t1.!test.pattern: my-prefix-.*
transforms.t1.then: t2
transforms.t1.then.t2.type: ExtractField$Key
transforms.t1.then.t2.field: c1
transforms.t1.else: t3
transforms.t1.else.t3.type: ExtractField$Key
transforms.t1.else.t3.field: c2

Example of guarding a >1 SMT
Current:
transforms: t2,t3
transforms.t2.?predicate: !has-my-prefix
transforms.t2.type: ExtractField$Key
transforms.t2.field: c1
transforms.t3.?predicate: !has-my-prefix
transforms.t3.type: ExtractField$Value
transforms.t3.field: c2
?predicates: has-my-prefix
?predicates.has-my-prefix.type: TopicNameMatch
?predicates.has-my-prefix.pattern: my-prefix-.*
Original:
transforms: t1
transforms.t1.type: If
transforms.t1.!test.type: TopicNameMatch
transforms.t1.!test.pattern: my-prefix-.*
transforms.t1.then: t2,t3
transforms.t1.then.t2.type: ExtractField$Key
transforms.t1.then.t2.field: c1
transforms.t1.then.t3.type: ExtractField$Value
transforms.t1.then.t3.field: c2

Notes:
* I'm assuming we would negate predicates in the Current proposal by
prefixing the predicate name with !
* I'm using "test" rather than "predicate" in the Original because it looks
a bit nicer having the same number of letters as "then" and "else".
* I'm negating the test in the Original by using "!test" rather than "test".

Those notes aside, I'd make the following assertions:

1. The Original examples require the same number of keys (or one less line
in the last example).
2. The Original examples are more guessable if you'd never seen this KIP
before. The SMT name (If) and its config parameter names ('test' and 'then'
and maybe also 'else') all give quite strong clues about what's happening.
I'd also argue that using !test is less cryptic than '?predicate', since
the purpose of the '!' fits with its use in most programming languages. The
purpose of the '?' in '?predicate' is 

Build failed in Jenkins: kafka-trunk-jdk14 #59

2020-05-06 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Pass `-release 8` to scalac and upgrade to Gradle 6.4 (#8538)


--
[...truncated 3.08 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

Re: [DISCUSS] Kafka 3.0

2020-05-06 Thread Andrew Schofield
That's my view here. I think there are two ways this could work:

1) 2.x is the bridge release, and 3.0 is the one that completes the ZK removal.
2) 3.0 is the bridge release, and 4.0 is the one that completes the ZK removal.

On 06/05/2020, 05:15, "Jeff Widman"  wrote:

IMO a bridge release, or one that you have to upgrade to before upgrading
to the breaking release, should be numbered as the last of the 2.x series...

In other words, it's acceptable to say "before upgrading to 3.0, first
upgrade to 2.9" but it's very unexpected to say "before upgrading to 3.1,
first upgrade to 3.0"... no one will be expecting that.

On Tue, May 5, 2020 at 12:37 PM Ryanne Dolan  wrote:

> > In 3.0 it sounds like nothing is breaking and our big change won't be
> > complete... so, what's the motivation for the major release?
>
> Exactly. Why would 3.1 be the breaking release? No one would expect
> everything to break going from 3.0 to 3.1
>
> Ryanne
>
> On Tue, May 5, 2020 at 2:34 PM Gwen Shapira  wrote:
>
> > It sounds like the decision to make the next release 3.0 is a bit arbitrary
> > then?
> >
> > With Exactly Once, we announced 1.0 as one release after the one where EOS
> > shipped, when we felt it was "ready" (little did we know... but that's
> > another story).
> > 2.0 was breaking due to us dropping Java 7.
> >
> > In 3.0 it sounds like nothing is breaking and our big change won't be
> > complete... so, what's the motivation for the major release?
> >
> > On Tue, May 5, 2020 at 12:12 PM Guozhang Wang  wrote:
> >
> > > I think there's a confusion regarding the "bridge release" proposed in
> > > KIP-500: should it be release "3.0" or be release "2.X" (i.e. the last
> > > minor release before 3.0).
> > >
> > > My understanding is that "3.0" would be the bridge release, i.e. it would
> > > not break any compatibility, but 3.1 potentially would, so an upgrade from
> > > 2.5 to 3.1 would need to first upgrade to 3.0, and then to 3.1. For
> > > clients, all broker-client compatibility is still maintained in 3.1+, so
> > > 2.x producer / consumer clients could still talk to 3.1+ brokers; only
> > > those old versioned scripts that rely on "--zookeeper" would not work with
> > > 3.1+ brokers anymore since there are no zookeepers.
> > >
> > >
> > > Guozhang
> > >
> > > On Mon, May 4, 2020 at 5:33 PM Gwen Shapira  wrote:
> > >
> > > > +1 for removing MM 1.0 when we cut a breaking release. It is sad to see
> > > > that we are still investing in it (I just saw a KIP toward improving its
> > > > reset policy).
> > > >
> > > > My understanding was that KIP-590 is not breaking compatibility, I think
> > > > Guozhang said that in response to my question on the discussion thread.
> > > >
> > > > Overall, since Kafka has time-based releases, we can make the call on 3.0
> > > > vs 2.7 when we are at "KIP freeze date" and can see which features are
> > > > likely to make it.
> > > >
> > > >
> > > > On Mon, May 4, 2020 at 5:13 PM Ryanne Dolan  wrote:
> > > >
> > > > > Hey Colin, I think we should wait until after KIP-500's "bridge release"
> > > > > so there is a clean break from Zookeeper after 3.0. The bridge release by
> > > > > definition is an attempt to not break anything, so it theoretically
> > > > > doesn't warrant a major release. If that's not the case (i.e. if a single
> > > > > "bridge release" turns out to be impractical), we should consider forking
> > > > > 3.0 while maintaining a line of Zookeeper-dependent Kafka in 2.x. That way
> > > > > 3.x can evolve dramatically without breaking the 2.x line. In particular,
> > > > > anything related to removing Zookeeper could land in pre-3.0 while every
> > > > > other feature targets 2.6.
> > > > >
> > > > > If you are proposing 2.6 should be the "bridge release", I think this
> > > > > is premature given Kafka's time-based release schedule. If the bridge
> > > > > features happen to be merged before 2.6's feature freeze, then sure --
> > > > > let's make that the bridge release in retrospect. And if we get all the
> > > > > post-Zookeeper features merged before 2.7, I'm onboard with naming it
> > > > > "3.0" instead.
> > > > >
> > > > > That said, we should aim to remove legacy MirrorMaker before 3.0 as
> > > > > well. I'm happy to drive that additional breaking change. Maybe 2.6 can
> > > > > be the "bridge" for MM2 as well.
> > > > >
> > > > > Ryanne
> > > > >
> > > > > On