Jenkins build is back to normal : kafka-trunk-jdk8 #4429

2020-04-13 Thread Apache Jenkins Server
See 




Re: [kafka-clients] [VOTE] 2.5.0 RC3

2020-04-13 Thread Matthias J. Sax
Thanks for running the release Arthur!

- verified signatures
- build from sources
- run tests locally (some flaky tests failed, but eventually all tests
passed)
- run quickstart (core/connect/streams) using Scala 2.13 binaries


+1 (binding)


-Matthias



On 4/13/20 5:14 PM, Colin McCabe wrote:
> +1 (binding)
> 
> verified checksums
> ran unitTest
> ran check
> 
> best,
> Colin
> 
> On Tue, Apr 7, 2020, at 21:03, David Arthur wrote:
>> Hello Kafka users, developers and client-developers,
>>
>> This is the fourth candidate for release of Apache Kafka 2.5.0.
>>
>> * TLS 1.3 support (1.2 is now the default)
>> * Co-groups for Kafka Streams
>> * Incremental rebalance for Kafka Consumer
>> * New metrics for better operational insight
>> * Upgrade Zookeeper to 3.5.7
>> * Deprecate support for Scala 2.11
>>
>> Release notes for the 2.5.0 release:
>> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/RELEASE_NOTES.html
>>
>> *** Please download, test and vote by Friday April 10th 5pm PT
>>
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>> https://kafka.apache.org/KEYS
>>
>> * Release artifacts to be voted upon (source and binary):
>> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/
>>
>> * Maven artifacts to be voted upon:
>> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>>
>> * Javadoc:
>> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/javadoc/
>>
>> * Tag to be voted upon (off 2.5 branch) is the 2.5.0 tag:
>> https://github.com/apache/kafka/releases/tag/2.5.0-rc3
>>
>> * Documentation:
>> https://kafka.apache.org/25/documentation.html
>>
>> * Protocol:
>> https://kafka.apache.org/25/protocol.html
>>
>> Successful Jenkins builds to follow
>>
>> Thanks!
>> David
>>
> 
>> --
>>  You received this message because you are subscribed to the Google Groups 
>> "kafka-clients" group.
>>  To unsubscribe from this group and stop receiving emails from it, send an 
>> email to kafka-clients+unsubscr...@googlegroups.com.
>>  To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/kafka-clients/CA%2B0Ze6rUxaPRvddHb50RfVxRtHHvnJD8j9Q9ni18Okc9s-_DSQ%40mail.gmail.com
>>  
>> .
> 



signature.asc
Description: OpenPGP digital signature


Re: [DISCUSS] KIP-591: Add Kafka Streams config to set default store type

2020-04-13 Thread Sophie Blee-Goldman
Hey Matthias,

Thanks for picking this up! This'll be really nice for testing in
particular.

My only question is, do we want to make this available for use with custom
state stores as well? I'm not sure how common custom stores are in practice,
but I imagine when they *are* used, they're likely to be used all
throughout the
topology. So being able to set this one config would probably be a big win.

That said, it would be a nontrivial change given the different store types.
It's unfortunate that we can't just accept a StoreSupplier class to
configure this;
we'd need one for KV, window, and session stores each. We could just
add three configs, but that's not very appealing when it should take one.

Maybe we could define a new "store supplier"-supplier type class, which
maps the store supplier for each of the three store types? Just throwing out
ideas.

I'm actually fine with passing on the custom state stores for this feature
if
it doesn't sound worth the effort -- just wanted to put the thought out
there,
and see if anyone comes up with a more elegant solution.

Thanks for the KIP!
Sophie

On Thu, Apr 9, 2020 at 3:50 PM Matthias J. Sax  wrote:

> Hi,
>
> I would like to propose a small KIP to simplify the switch from RocksDB
> to in-memory stores in Kafka Streams:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-591%3A+Add+Kafka+Streams+config+to+set+default+store+type
>
> Looking forward to your feedback.
>
>
> -Matthias
>
>
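
A minimal sketch of what the proposed switch could look like from the application side. The config key "default.dsl.store" and its value below are placeholders based on this discussion, not a released API; today the same effect requires an explicit Materialized/Stores supplier at each stateful operator.

import java.util.Properties;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;

public class DefaultStoreTypeSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "store-type-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Hypothetical config from the KIP discussion: one switch that makes every
        // DSL state store in-memory instead of RocksDB. Name and value are assumptions.
        props.put("default.dsl.store", "in_memory");

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic")
               .groupByKey()
               .count()            // would pick up the default store type transparently
               .toStream()
               .to("counts-topic");
        Topology topology = builder.build();
        // new KafkaStreams(topology, props).start();
    }
}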


Re: [DISCUSS] KIP-588: Allow producers to recover gracefully from transaction timeouts

2020-04-13 Thread Boyang Chen
Thanks Guozhang! Will wait and see if there are more comments.

On Fri, Apr 10, 2020 at 5:17 PM Guozhang Wang  wrote:

> Thanks Boyang, the newly added example looks good to me.
>
> On Thu, Apr 9, 2020 at 2:47 PM Boyang Chen 
> wrote:
>
> > Hey Guozhang,
> >
> > I have added an example of the producer API usage under new improvements.
> > Let me know if this looks good to you.
> >
> > Boyang
> >
> > On Wed, Apr 8, 2020 at 1:38 PM Boyang Chen 
> > wrote:
> >
> > > That's a good suggestion Jason. Adding a dedicated PRODUCER_FENCED
> error
> > > should help distinguish exceptions and could safely mark
> > > INVALID_PRODUCER_EPOCH exception as non-fatal in the new code. Updated
> > the
> > > KIP.
> > >
> > > Boyang
> > >
> > > On Wed, Apr 8, 2020 at 12:18 PM Jason Gustafson 
> > > wrote:
> > >
> > >> Hey Boyang,
> > >>
> > >> Thanks for the KIP. I think the main problem we've identified here is
> > that
> > >> the current errors conflate transaction timeouts with producer
> fencing.
> > >> The
> > >> first of these ought to be recoverable, but we cannot distinguish it.
> > The
> > >> suggestion to add a new error code makes sense to me, but it leaves
> this
> > >> bit of awkwardness:
> > >>
> > >> > One extra issue that needs to be addressed is how to handle
> > >> `ProducerFenced` from Produce requests.
> > >>
> > >> In fact, the underlying error code here is INVALID_PRODUCER_EPOCH.
> It's
> > >> just that the code treats this as equivalent to `ProducerFenced`. One
> > >> thought I had is maybe PRODUCER_FENCED needs to be a separate error
> code
> > >> as
> > >> well. After all, only the transaction coordinator knows whether a
> > producer
> > >> has been fenced or not. So maybe the handling could be something like
> > the
> > >> following:
> > >>
> > >> 1. Produce requests may return INVALID_PRODUCER_EPOCH. The producer
> > >> recovers by following KIP-360 logic to see whether the epoch can be
> > >> bumped.
> > >> If it cannot because the broker version is too old, we fail.
> > >> 2. Transactional APIs may return either TRANSACTION_TIMEOUT or
> > >> PRODUCER_FENCED. In the first case, we do the same as above. We try to
> > >> recover by bumping the epoch. If the error is PRODUCER_FENCED, it is
> > >> fatal.
> > >> 3. Older brokers may return INVALID_PRODUCER_EPOCH as well from
> > >> transactional APIs. We treat this the same as 1.
> > >>
> > >> What do you think?
> > >>
> > >> Thanks,
> > >> Jason
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >> On Mon, Apr 6, 2020 at 3:41 PM Boyang Chen <
> reluctanthero...@gmail.com>
> > >> wrote:
> > >>
> > >> > Yep, updated the KIP, thanks!
> > >> >
> > >> > On Mon, Apr 6, 2020 at 3:11 PM Guozhang Wang 
> > >> wrote:
> > >> >
> > >> > > Regarding 2), sounds good, I saw UNKNOWN_PRODUCER_ID is properly
> > >> handled
> > >> > > today in produce / add-partitions-to-txn / add-offsets-to-txn /
> > >> end-txn
> > >> > > responses, so that should be well covered.
> > >> > >
> > >> > > Could you reflect this in the wiki page that the broker should be
> > >> > > responsible for using different error codes given client request
> > >> versions
> > >> > > as well?
> > >> > >
> > >> > >
> > >> > >
> > >> > > Guozhang
> > >> > >
> > >> > > On Mon, Apr 6, 2020 at 9:20 AM Boyang Chen <
> > >> reluctanthero...@gmail.com>
> > >> > > wrote:
> > >> > >
> > >> > > > Thanks Guozhang for the review!
> > >> > > >
> > >> > > > On Sun, Apr 5, 2020 at 5:47 PM Guozhang Wang <
> wangg...@gmail.com>
> > >> > wrote:
> > >> > > >
> > >> > > > > Hello Boyang,
> > >> > > > >
> > >> > > > > Thank you for the proposed KIP. Just some minor comments
> below:
> > >> > > > >
> > >> > > > > 1. Could you also describe which producer APIs could
> potentially
> > >> > throw
> > >> > > > the
> > >> > > > > new TransactionTimedOutException, and also how should callers
> > >> handle
> > >> > > them
> > >> > > > > differently (i.e. just to make your description more concrete
> as
> > >> > > > javadocs).
> > >> > > > >
> > >> > > > > Good point, I will add example java doc changes.
> > >> > > >
> > >> > > >
> > >> > > > > 2. It's straight-forward if client is on newer version while
> > >> broker's
> > >> > > on
> > >> > > > > older version; however If the client is on older version while
> > >> > broker's
> > >> > > > on
> > >> > > > > newer version, today would the internal module of producers
> > treat
> > >> it
> > >> > > as a
> > >> > > > > general fatal error or not? If not, should the broker set a
> > >> different
> > >> > > > error
> > >> > > > > code upon detecting older request versions?
> > >> > > > >
> > >> > > > > That's a good suggestion, my understanding is that the
> > >> prerequisite
> > >> > of
> > >> > > > this change is the new KIP-360 API which is going out with 2.5,
> > >> > > > so we could just return UNKNOWN_PRODUCER_ID instead of
> > >> PRODUCER_FENCED
> > >> > as
> > >> > > > it could be interpreted as abortable error
> > >> > > > in 2.5 producer and retry. 
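
A rough sketch of the application-side handling implied by the split above: the TransactionTimedOutException mentioned in the thread (a proposed, not yet released, exception) would surface as an abortable error, while ProducerFencedException stays fatal. The exact exception types and recovery behavior are defined by the KIP and may differ.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.ProducerFencedException;

public class TxnTimeoutHandlingSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("transactional.id", "demo-txn");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            boolean done = false;
            while (!done) {
                try {
                    producer.beginTransaction();
                    producer.send(new ProducerRecord<>("output-topic", "key", "value"));
                    producer.commitTransaction();
                    done = true;
                } catch (ProducerFencedException fatal) {
                    // Another producer with the same transactional.id took over: give up.
                    throw fatal;
                } catch (KafkaException abortable) {
                    // Under the KIP, a transaction timeout (TransactionTimedOutException,
                    // assumed name) would land here as an abortable error: abort and retry
                    // with the same producer, relying on the KIP-360 epoch bump internally
                    // instead of treating the error as fatal.
                    producer.abortTransaction();
                }
            }
        }
    }
}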

Re: Permission to create a KIP

2020-04-13 Thread 张祥
Thanks Guozhang.

Guozhang Wang  于2020年4月14日周二 上午12:52写道:

> I've added your id to the apache wiki space. You should be able to create
> new pages now.
>
> On Sun, Apr 12, 2020 at 10:55 PM 张祥  wrote:
>
> >  I just registered a new account with xiangzhang1...@gmail.com and my
> > username is `iamabug`, not sure which one is id.
> >
> > Guozhang Wang  于2020年4月13日周一 下午1:51写道:
> >
> > > The id is for the apache's wiki space:
> > > https://cwiki.apache.org/confluence/display/KAFKA
> > >
> > > If you already had one before, that will work; if not you can create
> one
> > > under that space.
> > >
> > >
> > > Guozhang
> > >
> > > On Sun, Apr 12, 2020 at 10:49 PM 张祥  wrote:
> > >
> > > > I am not sure that I have one, how can I find out this and how can I
> > > create
> > > > one ? Thanks.
> > > >
> > > > Guozhang Wang  于2020年4月13日周一 下午1:42写道:
> > > >
> > > > > Hello Xiang,
> > > > >
> > > > > What's your apache ID?
> > > > >
> > > > >
> > > > > Guozhang
> > > > >
> > > > > On Sun, Apr 12, 2020 at 6:08 PM 张祥 
> wrote:
> > > > >
> > > > > > Hi, I am working on a ticket which requires modifying public APIs
> > > that
> > > > > are
> > > > > > visible to users. Could somebody grant the KIP permission to me ?
> > > > Thanks.
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > -- Guozhang
> > > > >
> > > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>
>
> --
> -- Guozhang
>


Build failed in Jenkins: kafka-trunk-jdk8 #4428

2020-04-13 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: need to cleanup any tasks closed in TaskManager (#8463)


--
[...truncated 3.02 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > 

[jira] [Resolved] (KAFKA-9842) Address testing gaps in consumer OffsetsForLeaderEpochs request grouping

2020-04-13 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9842.

Resolution: Fixed

> Address testing gaps in consumer OffsetsForLeaderEpochs request grouping
> 
>
> Key: KAFKA-9842
> URL: https://issues.apache.org/jira/browse/KAFKA-9842
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> KAFKA-9583 identified an issue with the grouping of partitions in 
> OffsetsForLeaderEpoch requests sent by the consumer. We should have test 
> cases which ensure that partitions are grouped correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [kafka-clients] [VOTE] 2.5.0 RC3

2020-04-13 Thread Colin McCabe
+1 (binding)

verified checksums
ran unitTest
ran check

best,
Colin

On Tue, Apr 7, 2020, at 21:03, David Arthur wrote:
> Hello Kafka users, developers and client-developers,
> 
> This is the fourth candidate for release of Apache Kafka 2.5.0.
> 
> * TLS 1.3 support (1.2 is now the default)
> * Co-groups for Kafka Streams
> * Incremental rebalance for Kafka Consumer
> * New metrics for better operational insight
> * Upgrade Zookeeper to 3.5.7
> * Deprecate support for Scala 2.11
> 
> Release notes for the 2.5.0 release:
> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/RELEASE_NOTES.html
> 
> *** Please download, test and vote by Friday April 10th 5pm PT
> 
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
> 
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/
> 
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> 
> * Javadoc:
> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/javadoc/
> 
> * Tag to be voted upon (off 2.5 branch) is the 2.5.0 tag:
> https://github.com/apache/kafka/releases/tag/2.5.0-rc3
> 
> * Documentation:
> https://kafka.apache.org/25/documentation.html
> 
> * Protocol:
> https://kafka.apache.org/25/protocol.html
> 
> Successful Jenkins builds to follow
> 
> Thanks!
> David
> 

> --
>  You received this message because you are subscribed to the Google Groups 
> "kafka-clients" group.
>  To unsubscribe from this group and stop receiving emails from it, send an 
> email to kafka-clients+unsubscr...@googlegroups.com.
>  To view this discussion on the web visit 
> https://groups.google.com/d/msgid/kafka-clients/CA%2B0Ze6rUxaPRvddHb50RfVxRtHHvnJD8j9Q9ni18Okc9s-_DSQ%40mail.gmail.com
>  
> .


Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-13 Thread Kowshik Prakasam
Hi Jun,

Thanks for the feedback! I have updated the KIP-584 addressing your
comments.
Please find my response below.

> 100.6 You can look for the sentence "This operation requires ALTER on
> CLUSTER." in KIP-455. Also, you can check its usage in
> KafkaApis.authorize().

(Kowshik): Done. Great point! For the newly introduced UPDATE_FEATURES api,
I have added a
requirement that AclOperation.ALTER is required on ResourceType.CLUSTER.

> 110. Keeping the feature version as int is probably fine. I just felt that
> for some of the common user interactions, it's more convenient to
> relate that to a release version. For example, if a user wants to
downgrade
> to a release 2.5, it's easier for the user to use the tool like "tool
> --downgrade 2.5" instead of "tool --downgrade --feature X --version 6".

(Kowshik): Great point. Generally, maximum feature version levels are not
downgradable after they are finalized in the cluster. This is because, as a
guideline, bumping a feature version level is mainly used to convey important
breaking changes.
Despite the above, there may be some extreme/rare cases where a user wants
to downgrade
all features to a specific previous release. The user may want to do this
just
prior to rolling back a Kafka cluster to a previous release.

To support the above, I have made a change to the KIP explaining that the
CLI tool is versioned. The CLI tool internally has knowledge of a map from
features to their respective max versions supported by the broker. The tool's
knowledge of features and their version values is limited to the version of
the CLI tool itself, i.e. the information is packaged into the CLI tool when
it is released. Whenever a Kafka release introduces a new feature version, or
modifies an existing feature version, the CLI tool shall also be updated with
this information; newer versions of the CLI tool will be released as part of
the Kafka releases.

Therefore, to achieve the downgrade, the user just needs to run the version
of the CLI tool that is part of the particular previous release that he/she is
downgrading to.
To help the user with this, there is a new command added to the CLI tool
called `downgrade-all`.
This essentially downgrades max version levels of all features in the
cluster to the versions
known to the CLI tool internally.

I have explained the above in the KIP under these sections:

Tooling support (have explained that the CLI tool is versioned):
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Toolingsupport

Regular CLI tool usage (please refer to point #3, and see the tooling
example)
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-RegularCLItoolusage

> 110. Similarly, if the client library finds a feature mismatch with the
broker,
> the client likely needs to log some error message for the user to take
some
> actions. It's much more actionable if the error message is "upgrade the
> broker to release version 2.6" than just "upgrade the broker to feature
> version 7".

(Kowshik): That's a really good point! If we use ints for feature versions,
the best message the client can print for debugging is "broker doesn't support
feature version 7", and alongside that print the supported version range
returned by the broker. Then, does it sound reasonable that the user could
then reference Kafka release logs to figure out which version of the broker
release is required to be deployed, to support feature version 7? I couldn't
think of a better strategy here.

> 120. When should a developer bump up the version of a feature?

(Kowshik): Great question! In the KIP, I have added a section: 'Guidelines
on feature versions and workflows'
providing some guidelines on when to use the versioned feature flags, and
what
are the regular workflows with the CLI tool.

Link to the relevant sections:
Guidelines:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Guidelinesonfeatureversionsandworkflows

Regular CLI tool usage:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-RegularCLItoolusage

Advanced CLI tool usage:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-AdvancedCLItoolusage


Cheers,
Kowshik


On Fri, Apr 10, 2020 at 4:25 PM Jun Rao  wrote:

> Hi, Kowshik,
>
> Thanks for the reply. A few more comments.
>
> 110. Keeping the feature version as int is probably fine. I just felt that
> for some of the common user interactions, it's more convenient to
> relate that to a release version. For example, if a user wants to downgrade
> to a release 2.5, it's easier for the user to use the tool like "tool
> --downgrade 2.5" instead of "tool --downgrade --feature X --version 6".
> 
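
An illustrative sketch of the versioned-CLI-tool idea described above. Every name in it (the FeatureAdmin interface, the feature names, the version numbers) is hypothetical; the KIP specifies the behavior, not this code.

import java.util.HashMap;
import java.util.Map;

public class VersionedToolSketch {

    // Stand-in for the admin operation the KIP proposes; not a real Kafka API.
    interface FeatureAdmin {
        void downgradeMaxVersionLevel(String feature, int maxVersionLevel);
    }

    // Hypothetical: baked into the CLI tool at release time, so the tool only
    // "knows" the feature versions of the Kafka release it shipped with.
    static Map<String, Integer> maxVersionsKnownToThisTool() {
        Map<String, Integer> known = new HashMap<>();
        known.put("group_coordinator", 2);      // example feature name and version
        known.put("transaction_protocol", 1);   // example feature name and version
        return known;
    }

    // downgrade-all: lower every finalized max version level to what this (older)
    // tool knows, e.g. just before rolling the brokers back to that release.
    static void downgradeAll(FeatureAdmin admin) {
        maxVersionsKnownToThisTool().forEach(admin::downgradeMaxVersionLevel);
    }
}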

[DISCUSS] KIP-593: Enable --if-exists and --if-not-exists for AdminClient in TopicCommand

2020-04-13 Thread Cheng Tan
Hi developers,

In kafka-topics.sh, we expect to use --if-exists so that altering or deleting a 
topic only takes effect when the topic exists. Similarly, we expect to use 
--if-not-exists so that creating a topic only takes effect when the topic does 
not already exist. Currently, only ZookeeperTopicService supports these two 
options. We want to introduce them to AdminClientTopicService. Please let me 
know if you have any thoughts or ideas related to this KIP. 

Here’s the link to the KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-593%3A+Enable+--if-exists+and+--if-not-exists+for+AdminClient+in+TopicCommand
 


Thanks,

- Cheng Tan
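
A minimal sketch of the create-path semantics using the existing Admin API; this is illustrative only and not the actual AdminClientTopicService change (topic name, partition count, and replication factor below are placeholders).

import java.util.Collections;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;

public class IfNotExistsSketch {
    // Create the topic, treating "it already exists" as success when ifNotExists is
    // set, mirroring the semantics the KIP wants to bring to AdminClientTopicService.
    static void createTopic(Admin admin, String name, boolean ifNotExists)
            throws InterruptedException, ExecutionException {
        NewTopic topic = new NewTopic(name, 3, (short) 1);
        try {
            admin.createTopics(Collections.singleton(topic)).all().get();
        } catch (ExecutionException e) {
            if (ifNotExists && e.getCause() instanceof TopicExistsException) {
                return; // requested behavior: silently succeed
            }
            throw e;
        }
    }
}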

Re: [Discuss] KIP-582 Add a "continue" option for Kafka Connect error handling

2020-04-13 Thread Christopher Egerton
HI Zihan,

Thanks for the KIP! I have some questions that I'm hoping we can address to
help better understand the motivation for this proposal.

1. In the "Motivation" section it's written that "If users want to store
their broken records, they have to config a broken record queue, which is
too much work for them in some cases." Could you elaborate on what makes
this a lot of work? Ideally, users should be able to configure the dead
letter queue by specifying a value for the "
errors.deadletterqueue.topic.name" property in their sink connector config;
this doesn't seem like a lot of work on the surface.

2. If the "errors.tolerance" property is set to "continue", would sink
connectors be able to differentiate between well-formed records whose
successfully-deserialized contents are byte arrays and malformed records
whose contents are the still-serialized byte arrays of the Kafka message
from which they came?

3. I think it's somewhat implied by the KIP, but it'd be nice to see what
the schema for a malformed record would be. Null? Byte array? Optional byte
array?

4. This is somewhat covered by the first question, but it seems worth
pointing out that this exact functionality can already be achieved by using
features already provided by the framework. Configure your connector to
send malformed records to a dead letter queue topic, and configure a
separate connector to consume from that dead letter queue topic, use the
ByteArrayConverter to deserialize records, and send those records to the
destination sink. It'd be nice if this were called out in the "Rejected
Alternatives" section with a reason on why the changes proposed in the KIP
are preferable, especially since it may still work as a viable workaround
for users who are working on older versions of the Connect framework.

Looking forward to the discussion!

Cheers,

Chris

On Tue, Mar 24, 2020 at 11:50 AM Zihan Li  wrote:

> Hi,
>
> I just want to re-up this discussion thread about KIP-582 Add a "continue"
> option for Kafka Connect error handling.
>
> Wiki page: https://cwiki.apache.org/confluence/x/XRvcC
>
> JIRA: https://issues.apache.org/jira/browse/KAFKA-9740
>
> Please share your thoughts about adding this new error handling option to
> Kafka Connect.
>
> Best,
> Zihan
>
> > On Mar 18, 2020, at 12:55 PM, Zihan Li  wrote:
> >
> > Hi all,
> >
> > I'd like to use this thread to discuss KIP-582 Add a "continue" option
> for Kafka Connect error handling, please see detail at:
> > https://cwiki.apache.org/confluence/x/XRvcC
> >
> > Best,
> > Zihan Li
>
>
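
For reference, the existing dead-letter-queue route described in points 1 and 4 above needs only a couple of sink-connector properties. The errors.* keys below are existing Connect configs; the connector class and topic names are placeholders.

import java.util.HashMap;
import java.util.Map;

public class DlqConfigSketch {
    public static void main(String[] args) {
        // Existing framework feature: tolerate bad records and route them to a DLQ
        // topic, rather than the proposed "continue" option that would pass the raw
        // bytes on to the sink connector.
        Map<String, String> config = new HashMap<>();
        config.put("name", "my-sink");                                  // placeholder
        config.put("connector.class", "com.example.MySinkConnector");   // placeholder
        config.put("topics", "input-topic");                            // placeholder
        config.put("errors.tolerance", "all");
        config.put("errors.deadletterqueue.topic.name", "my-sink-dlq");
        config.put("errors.deadletterqueue.context.headers.enable", "true");
        System.out.println(config);
    }
}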


[jira] [Created] (KAFKA-9862) Enable --if-exists and --if-not-exists for AdminClient in TopicCommand

2020-04-13 Thread Cheng Tan (Jira)
Cheng Tan created KAFKA-9862:


 Summary: Enable --if-exists and --if-not-exists for AdminClient in 
TopicCommand
 Key: KAFKA-9862
 URL: https://issues.apache.org/jira/browse/KAFKA-9862
 Project: Kafka
  Issue Type: New Feature
Reporter: Cheng Tan
Assignee: Cheng Tan


In *kafka-topics.sh*, we expect to use --if-exists so that altering or deleting a 
topic only takes effect when the topic exists. Similarly, we expect to use 
--if-not-exists so that creating a topic only takes effect when the topic does 
not already exist. Currently, only *ZookeeperTopicService* supports these two 
options and we want to introduce them to *AdminClientTopicService.*



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.4-jdk8 #186

2020-04-13 Thread Apache Jenkins Server
See 


Changes:

[konstantine] [MINOR] allow additional JVM args in KafkaService (#7297)

[konstantine] Fix missing reference in kafka.py (#7715)


--
[...truncated 2.76 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED


[KAFKA-9861] Process Simplification - Community Validation of Kafka Release Candidates

2020-04-13 Thread Israel Ekpo
Hi Everyone,

I have created [KAFKA-9861] as a process improvement and I have started
work on it.
https://issues.apache.org/jira/browse/KAFKA-9861

I understand that KIPs are generally reserved for code improvements or
changes but I wanted to find out if something like this is better as a KIP
rather than a JIRA issue.

When you have a moment, please let me know and I can either submit the KIP
or add subtasks or related tasks to this current issue.

Thanks in advance for any feedback or guidance.


On Tue, Mar 17, 2020 at 11:00 AM David Arthur  wrote:

> Thanks, Israel. I agree with Gwen, this is a great list and would be
> useful to add to our release candidate boilerplate. Since we found a
> blocker bug on RC1, I'll go ahead and close voting. RC2 will be announced
> shortly.
>
> -David
>
> On Tue, Mar 17, 2020 at 10:46 AM Israel Ekpo  wrote:
>
>> Thanks for the feedback, Gwen. I will create JIRA tasks to track the items
>> shortly.
>>
>> The JIRA tasks will document the goals, expectations and relevant Kafka
>> versions for each resource.
>>
>> I will volunteer for some of them and update the JIRA tasks accordingly.
>>
>>
>> On Tue, Mar 17, 2020 at 12:51 AM Gwen Shapira  wrote:
>>
>> > Oh wow, I love this checklist. I don't think we'll have time to create
>> one
>> > for this release, but will be great to track this via JIRA and see if we
>> > can get all those contributed before 2.6...
>> >
>> > Gwen Shapira
>> > Engineering Manager | Confluent
>> > 650.450.2760 | @gwenshap
>> > Follow us: Twitter | blog
>> >
>> > On Mon, Mar 16, 2020 at 3:02 PM, Israel Ekpo < israele...@gmail.com >
>> > wrote:
>> >
>> > >
>> > >
>> > >
>> > > - Download artifacts successfully
>> > > - Verified signatures successfully
>> > > - All tests have passed so far for Scala 2.12. Have not run it on 2.13
>> > yet
>> > >
>> > >
>> > >
>> > >
>> > > +1 (non-binding) for the release
>> > >
>> > >
>> > >
>> > > I do have some feedback so I think we should include in the RC
>> > > announcement a link for how the community should test and include
>> > > information like:
>> > >
>> > >
>> > >
>> > > - How to set up test environment for unit and functional tests
>> > > - Java version(s) needed for the tests
>> > > - Scala version(s) needed for the tests
>> > > - Gradle version needed
>> > > - Sample script for running sanity checks and unit tests
>> > > - Sample Helm Charts for running all the basic components on a
>> Kubernetes
>> > > - Sample Ansible Script for running all the basic components on
>> Virtual
>> > > Machines
>> > >
>> > >
>> > >
>> > > It takes a bit of time for newcomers to investigate why the tests are
>> not
>> > > running successfully in the beginning and providing guidance for these
>> > > categories of contributors will be great. If I did not know where to
>> look
>> > > (kafka-2.5.0-src/gradle/dependencies.gradle) it would take longer to
>> > > figure out why the tests are not working/running
>> > >
>> > >
>> > >
>> > > Thanks.
>> > >
>> > >
>> > >
>> > > On Thu, Mar 12, 2020 at 11:21 AM Bill Bejeck < bbejeck@ gmail. com (
>> > > bbej...@gmail.com ) > wrote:
>> > >
>> > >
>> > >>
>> > >>
>> > >> Hi David,
>> > >>
>> > >>
>> > >>
>> > >> 1. Scanned the Javadoc, looks good
>> > >> 2. Downloaded kafka_2.12-2.5.0 and ran the quickstart and streams
>> > >> quickstart
>> > >> 3. Verified the signatures
>> > >>
>> > >>
>> > >>
>> > >> +1 (non-binding)
>> > >>
>> > >>
>> > >>
>> > >> Thanks for running the release David!
>> > >>
>> > >>
>> > >>
>> > >> -Bill
>> > >>
>> > >>
>> > >>
>> > >> On Tue, Mar 10, 2020 at 4:01 PM David Arthur < david. arthur@
>> > confluent. io
>> > >> ( david.art...@confluent.io ) > wrote:
>> > >>
>> > >>
>> > >>>
>> > >>>
>> > >>> Thanks for the test failure reports, Tom. Tracking (and fixing)
>> these
>> > is
>> > >>> important and will make future release managers have an easier time
>> :)
>> > >>>
>> > >>>
>> > >>>
>> > >>> -David
>> > >>>
>> > >>>
>> > >>>
>> > >>> On Tue, Mar 10, 2020 at 10:16 AM Tom Bentley < tbentley@ redhat.
>> com (
>> > >>> tbent...@redhat.com ) > wrote:
>> > >>>
>> > >>>
>> > 
>> > 
>> >  Hi David,
>> > 
>> > 
>> > 
>> >  I verified signatures, built the tagged branch and ran unit and
>> >  integration
>> >  tests. I found some flaky tests, as follows:
>> > 
>> > 
>> > 
>> >  https://issues.apache.org/jira/browse/KAFKA-9691 (new)
>> >  https://issues.apache.org/jira/browse/KAFKA-9692 (new)
>> >  https://issues.apache.org/jira/browse/KAFKA-9283 (already reported)
>> > 
>> > 
>> > 
>> >  Many thanks,
>> > 
>> > 
>> > 
>> >  Tom
>> > 
>> > 
>> > 
>> >  On Tue, Mar 10, 2020 at 3:28 AM David Arthur < mumrah@ gmail. com
>> (
>> >  

[jira] [Created] (KAFKA-9861) Process Simplification - Community Validation Kafka Release Candidates

2020-04-13 Thread Israel Ekpo (Jira)
Israel Ekpo created KAFKA-9861:
--

 Summary: Process Simplification - Community Validation Kafka 
Release Candidates
 Key: KAFKA-9861
 URL: https://issues.apache.org/jira/browse/KAFKA-9861
 Project: Kafka
  Issue Type: Improvement
  Components: build, documentation, system tests
Affects Versions: 2.6.0, 2.4.2, 2.5.1
 Environment: Linux, Java 8/11, Scala 2.x
Reporter: Israel Ekpo
Assignee: Israel Ekpo
 Fix For: 2.6.0, 2.4.2, 2.5.1


When new Kafka release candidates are published and there is a solicitation for 
the community to get involved in testing and verifying the release candidates, 
it would be great to have the test process thoroughly documented for newcomers 
to participate effectively.

For new contributors, this can be very daunting and it would be great to have 
this process clearly documented in a way that lowers the level of effort 
necessary to get started.

The goal of this task is to create the documentation and supporting artifacts 
that would make this goal a reality.

Going forward for future releases, it would be great to have the link to this 
documentation included in the RC announcements so that the community 
(especially end users) can help test and participate in the voting process 
effectively.

These are the items that I believe should be included in this documentation
 * How to set up test environment for unit and functional tests
 * Java version(s) needed for the tests
 * Scala version(s) needed for the tests
 * Gradle version needed
 * Sample script for running sanity checks and unit tests
 * Sample Helm Charts for running all the basic components on a Kubernetes
 * Sample Ansible Script for running all the basic components on Virtual 
Machines

The first 4 items will be part of the documentation that shows how to install 
these dependencies in a Linux VM. The 5th item is a script that will download 
PGP keys, check signatures, validate checksums and run unit/integration tests. 
The 6th item is a Helm chart with basic components necessary to validate 
critical components in the ecosystem (Zookeeper, Brokers, Streams etc) within a 
Kubernetes cluster. The last item is similar to the 6th item but installs these 
components on virtual machines instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [kafka-clients] [VOTE] 2.5.0 RC3

2020-04-13 Thread Jun Rao
Hi, David,

Thanks for running the release. Verified quickstart on the scala 2.12
binary. +1 from me.

Jun

On Tue, Apr 7, 2020 at 9:03 PM David Arthur  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the fourth candidate for release of Apache Kafka 2.5.0.
>
> * TLS 1.3 support (1.2 is now the default)
> * Co-groups for Kafka Streams
> * Incremental rebalance for Kafka Consumer
> * New metrics for better operational insight
> * Upgrade Zookeeper to 3.5.7
> * Deprecate support for Scala 2.11
>
> Release notes for the 2.5.0 release:
> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/RELEASE_NOTES.html
>
> *** Please download, test and vote by Friday April 10th 5pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/javadoc/
>
> * Tag to be voted upon (off 2.5 branch) is the 2.5.0 tag:
> https://github.com/apache/kafka/releases/tag/2.5.0-rc3
>
> * Documentation:
> https://kafka.apache.org/25/documentation.html
>
> * Protocol:
> https://kafka.apache.org/25/protocol.html
>
> Successful Jenkins builds to follow
>
> Thanks!
> David
>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CA%2B0Ze6rUxaPRvddHb50RfVxRtHHvnJD8j9Q9ni18Okc9s-_DSQ%40mail.gmail.com
> 
> .
>


[jira] [Resolved] (KAFKA-9859) kafka-streams-application-reset tool doesn't take into account topics generated by KTable foreign key join operation

2020-04-13 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-9859.

Resolution: Not A Problem

> kafka-streams-application-reset tool doesn't take into account topics 
> generated by KTable foreign key join operation
> 
>
> Key: KAFKA-9859
> URL: https://issues.apache.org/jira/browse/KAFKA-9859
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, tools
>Reporter: Levani Kokhreidze
>Priority: Major
>
> Steps to reproduce:
>  * Create Kafka Streams application which uses foreign key join operation
>  * Stop Kafka streams application
>  * Perform `kafka-topics-list` and verify that foreign key operation internal 
> topics are generated
>  * Use `kafka-streams-application-reset` to perform the cleanup of your kafka 
> streams application: `kafka-streams-application-reset --application-id 
>  --input-topics  --bootstrap-servers 
>  --to-datetime 2019-04-13T00:00:00.000`
>  * Perform `kafka-topics-list` again, you'll see that topics generated by the 
> foreign key operation are still there.
> `kafka-streams-application-reset` uses `repartition` and `changelog` suffixes 
> to determine which topics needs to be deleted, as a result topics generated 
> by the foreign key are ignored.
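
The suffix matching described above amounts to roughly the predicate below; the foreign-key-join subscription topics end in "-topic" rather than "-repartition" or "-changelog", so they match neither suffix. This is a simplified illustration (topic names are examples), not the tool's actual code.

public class ResetToolSuffixSketch {
    // Simplified version of the matching described in the ticket: internal topics
    // are identified purely by suffix, so foreign-key-join subscription topics
    // are skipped by the reset tool.
    static boolean looksLikeInternalTopic(String applicationId, String topic) {
        return topic.startsWith(applicationId + "-")
                && (topic.endsWith("-repartition") || topic.endsWith("-changelog"));
    }

    public static void main(String[] args) {
        String appId = "my-app";
        // true: a regular changelog topic
        System.out.println(looksLikeInternalTopic(appId,
                "my-app-KSTREAM-AGGREGATE-STATE-STORE-0000000003-changelog"));
        // false: an FK-join subscription topic (example name), left behind by the tool
        System.out.println(looksLikeInternalTopic(appId,
                "my-app-KTABLE-FK-JOIN-SUBSCRIPTION-REGISTRATION-0000000006-topic"));
    }
}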



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-13 Thread Kowshik Prakasam
Hi Guozhang,

Thanks for the explanation! This is a very good point. I have updated the
KIP incorporating
the proposed idea. We now maintain/serve MAX as well as MIN version levels
of finalized features.
So, the client will get to know both these values in the
ApiVersionsResponse. This serves as a solution
to the problem that you explained earlier.

One important point to note: we only allow the finalized feature MAX version
level to be increased/decreased dynamically via the controller API. In
contrast, the MIN version level cannot be mutated via the controller API. This
is because the MIN version level is usually increased only to indicate the
intent to stop support for a certain feature version. We would usually
deprecate features during broker releases, after prior announcements.
Therefore, the facility to mutate the MIN version level need not be made
available through the controller API to the cluster operator.

Instead it is sufficient if such changes can be done directly by the
controller i.e. during a certain Kafka
release we would change the controller code to mutate the '/features' ZK
node increasing the MIN version level
of one or more finalized features (this will be a planned change, as
determined by Kafka developers). Then, as
this Broker release gets rolled out to a cluster, the feature versions will
become permanently deprecated.

Here are links to the specific sub-sections with the changes including
MIN/MAX version levels:

Goals:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Goals

Non-goals (see point #2):
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Non-goals

Feature version deprecation:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Featureversiondeprecation

Admin API changes:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-AdminAPIchanges


Cheers,
Kowshik



On Mon, Apr 6, 2020 at 3:28 PM Guozhang Wang  wrote:

> Hello Kowshik,
>
> For 2) above, my motivations is more from the flexibility on client side
> instead of version deprecation: let's say a client talks to the cluster
> learned that the cluster-wide version for feature is X, while the client
> itself only knows how to execute the feature up to version Y ( < X), then
> at the moment the client has to give up leveraging that since it is not
> sure if all brokers actually supports version Y or not. This is because the
> version X is only guaranteed to be within the common overlapping range of
> all [low, high] across brokers where "low" is not always 0, so the client
> cannot safely assume that any versions smaller than X are also supported on
> the cluster.
>
> If we assume that when cluster-wide version is X, then all versions smaller
> than X are guaranteed to be supported, then it means all broker's supported
> version range is like [0, high], which I think is not realistic?
>
>
> Guozhang
>
>
>
> On Mon, Apr 6, 2020 at 12:06 PM Jun Rao  wrote:
>
> > Hi, Kowshik,
> >
> > Thanks for the reply. A few more replies below.
> >
> > 100.6 You can look for the sentence "This operation requires ALTER on
> > CLUSTER." in KIP-455 .
> Also, you can check its usage in
> > KafkaApis.authorize().
> >
> > 110. From the external client/tooling perspective, it's more natural to
> use
> > the release version for features. If we can use the same release version
> > for internal representation, it seems simpler (easier to understand, no
> > mapping overhead, etc). Is there a benefit with separate external and
> > internal versioning schemes?
> >
> > 111. To put this in context, when we had IBP, the default value is the
> > current released version. So, if you are a brand new user, you don't need
> > to configure IBP and all new features will be immediately available in
> the
> > new cluster. If you are upgrading from an old version, you do need to
> > understand and configure IBP. I see a similar pattern here for
> > features. From the ease of use perspective, ideally, we shouldn't
> require a
> > new user to have an extra step such as running a bootstrap script unless
> > it's truly necessary. If someone has a special need (all the cases you
> > mentioned seem special cases?), they can configure a mode such that
> > features are enabled/disabled manually.
> >
> > Jun
> >
> > On Fri, Apr 3, 2020 at 5:54 PM Kowshik Prakasam 
> > wrote:
> >
> > > Hi Jun,
> > >
> > > Thanks for the feedback and suggestions. Please find my response below.
> > >
> > > > 100.6 For every new request, the admin needs to control who is
> allowed
> > to
> > > > issue that request if security is enabled. So, we need to assign the
> > new
> > > > request a ResourceType and possible 
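
To make the min/max discussion concrete: once both bounds are advertised, a client can safely pick any version inside the overlap of its own supported range and the cluster's finalized range. A toy version of that check follows (method and parameter names are made up; the real ApiVersionsResponse changes are specified in the KIP).

import java.util.OptionalInt;

public class FeatureRangeSketch {
    // Pick the highest feature version both the cluster (finalized [clusterMin,
    // clusterMax], as it would be advertised under the KIP) and the client
    // (implements [clientMin, clientMax]) agree on; empty means no usable version.
    static OptionalInt pickUsableVersion(int clusterMin, int clusterMax,
                                         int clientMin, int clientMax) {
        int candidate = Math.min(clusterMax, clientMax);
        return candidate >= Math.max(clusterMin, clientMin)
                ? OptionalInt.of(candidate)
                : OptionalInt.empty();
    }

    public static void main(String[] args) {
        System.out.println(pickUsableVersion(2, 5, 1, 4)); // OptionalInt[4]
        System.out.println(pickUsableVersion(3, 5, 1, 2)); // empty: client too old
    }
}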

Re: Permission to create a KIP

2020-04-13 Thread Guozhang Wang
I've added your id to the apache wiki space. You should be able to create
new pages now.

On Sun, Apr 12, 2020 at 10:55 PM 张祥  wrote:

>  I just registered a new account with xiangzhang1...@gmail.com and my
> username is `iamabug`, not sure which one is id.
>
> Guozhang Wang  于2020年4月13日周一 下午1:51写道:
>
> > The id is for the apache's wiki space:
> > https://cwiki.apache.org/confluence/display/KAFKA
> >
> > If you already had one before, that will work; if not you can create one
> > under that space.
> >
> >
> > Guozhang
> >
> > On Sun, Apr 12, 2020 at 10:49 PM 张祥  wrote:
> >
> > > I am not sure that I have one, how can I find out this and how can I
> > create
> > > one ? Thanks.
> > >
> > > Guozhang Wang  于2020年4月13日周一 下午1:42写道:
> > >
> > > > Hello Xiang,
> > > >
> > > > What's your apache ID?
> > > >
> > > >
> > > > Guozhang
> > > >
> > > > On Sun, Apr 12, 2020 at 6:08 PM 张祥  wrote:
> > > >
> > > > > Hi, I am working on a ticket which requires modifying public APIs
> > that
> > > > are
> > > > > visible to users. Could somebody grant the KIP permission to me ?
> > > Thanks.
> > > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> >
> >
> > --
> > -- Guozhang
> >
>


-- 
-- Guozhang


Re: Kafka topology monitoring

2020-04-13 Thread Guozhang Wang
Hello Timothy,

I'm not sure I get your question: what do you mean that these metrics `do
not seem to extrapolate the above messages`?

Note that the metrics reporting has reporting levels (current there are
INFO and DEBUG) and the ones you mentioned are DEBUG level, so you'd need
to override your metrics reporting level config to DEBUG, details here:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-444%3A+Augment+metrics+for+Kafka+Streams

Guozhang


On Mon, Apr 13, 2020 at 8:14 AM  wrote:

> Good Afternoon and hope you all are well.
>
> Im trying to monitor my topologies to ensure That I can look at the
> following metrics
>
> Thread metrics
>
> - Average time for commits, poll, process operations
>
> - Tasks created per second, tasked closed per second
>
> I have seen metrics in micrometer, but they don't seem to extrapolate
> the above messages, is there a tool that I can use to get this level of
> detailed monitoring?
>
> best regards
>
> Timothy



-- 
-- Guozhang
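
A minimal sketch of the config switch referred to above (application id and bootstrap servers are placeholders); per the reply above, the thread/task metrics in question are only recorded once the recording level is raised to DEBUG.

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class DebugMetricsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "monitored-app");     // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Raise the metrics recording level so that DEBUG-level Streams metrics
        // (e.g. the per-thread commit/poll/process latencies discussed above) are
        // actually recorded and exposed to the configured metrics reporters.
        props.put(StreamsConfig.METRICS_RECORDING_LEVEL_CONFIG, "DEBUG");
        // Pass props to new KafkaStreams(topology, props) as usual.
    }
}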


Re: [DISCUSS] (KAFKA-9806) authorize cluster operation when creating internal topics

2020-04-13 Thread Paolo Moriello
Right, the problem in this case is that restoring ACLs to a correct 
configuration does not fix the problem, because the internal topics remain in 
a bad state. For instance:
1) user sets insufficient cluster level ACLs (now brokers are not able to 
communicate)
2) user consumes for the first time, consumer_offsets gets created
3) user sets correct ACLs (now brokers are able to communicate)
4) it is still impossible to consume because consumer_offsets is in a bad state

I agree that when broker ACLs are configured incorrectly, a lot of things fail. 
However, when ACLs are set back correctly we should expect things to work 
normally. This does not happen for consumers at the moment. This is why I 
believe that the source of inconsistency is in consumer_offsets creation here, 
it shouldn’t be created if we know that subsequent requests will fail.

Best,
Paolo

> On 13 Apr 2020, at 15:05, Colin McCabe  wrote:
> Hi Paolo,
> 
> If the problem is broker ACLs being configured incorrectly so that it can't 
> receive requests from the controller, a lot of things will fail.  This isn't 
> really related to anything with FindCoordinator.
> 
> best,
> Colin


Re: [DISCUSS] KIP-587 Suppress detailed responses for handled exceptions in security-sensitive environments

2020-04-13 Thread Connor Penhale
Hi Chris!

RE: SSL, indeed, the issue is not that the information is not encrypted, but 
that there is no authorization layer.

I'll be sure to edit the KIP as we continue discussion!

RE: the 200 response you highlighted, great catch! I'll work with my customer 
and get back to you on their audit team's intention! I'm fairly certain I know 
the answer, but I need to be sure before I speak for them.

Thanks!
Connor

On 4/8/20, 11:27 PM, "Christopher Egerton"  wrote:

Hi Connor,

Just a few more remarks!

I noticed that you said "Kafka Connect was passing these exceptions without
authentication." For what it's worth, the Connect REST API can be secured
with TLS out-of-the-box by configuring the worker with the various ssl.*
properties, but that doesn't provide any kind of authorization layer to
provide levels of security depending who the user is. Just pointing out in
case this helps with your use case.

As far as editing the KIP based on discussion goes--it's not only
acceptable, it's expected :) Ideally, the KIP should be kept up-to-date to
the point where, were it to be accepted at any moment, it would accurately
reflect the changes that would then be made to Kafka. This can be relaxed
if there's rapid iteration or items that are still up for discussion, but
as soon as things settle down it should be updated.

As far as item 4 goes, my question was about exceptions that aren't handled
by the ExceptionMapper, but which are returned as part of the response body
when querying the status of a connector or task that has failed by querying
the /connectors/{name}/status or /connectors/{name}/tasks/{taskId}/status
endpoints. Even if the request is successful and results in an HTTP 200
response, the body might contain a stack trace if the connector or any of
its tasks have failed.

For example, I ran an instance of the FileStreamSource connector named
"file-source" locally and instructed it to consume from a file that it
lacked permissions to read. When I queried the status of that connector by
issuing a request to /connectors/file-source/status, I got back the
following response:

{
  "name": "file-source",
  "connector": {
"state": "RUNNING",
"worker_id": "192.168.86.21:8083"
  },
  "tasks": [
{
  "id": 0,
  "state": "FAILED",
  "worker_id": "192.168.86.21:8083",
  "trace": "org.apache.kafka.connect.errors.ConnectException:
java.nio.file.AccessDeniedException: test.txt\n\tat

org.apache.kafka.connect.file.FileStreamSourceTask.poll(FileStreamSourceTask.java:116)\n\tat

org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:265)\n\tat

org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:232)\n\tat
org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)\n\tat
org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)\n\tat

java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat
java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
java.lang.Thread.run(Thread.java:748)\nCaused by:
java.nio.file.AccessDeniedException: test.txt\n\tat
sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)\n\tat
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)\n\tat
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)\n\tat

sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)\n\tat
java.nio.file.Files.newByteChannel(Files.java:361)\n\tat
java.nio.file.Files.newByteChannel(Files.java:407)\n\tat

java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)\n\tat
java.nio.file.Files.newInputStream(Files.java:152)\n\tat

org.apache.kafka.connect.file.FileStreamSourceTask.poll(FileStreamSourceTask.java:82)\n\t...
9 more\n"
}
  ],
  "type": "source"
}

Note the "trace" field in the first element of the "tasks" field of the
response: this was the stack trace for the exception that caused the task
to fail during execution, which has nothing to do with the success or
failure of the REST request I issued to the /connectors/file-source/status
endpoint.

I was wondering if you wanted to include these kinds of stack traces as
part of the KIP, as opposed to uncaught exceptions that result in a 500
error from the REST API.

Cheers,

Chris

On Wed, Apr 8, 2020 at 9:51 AM Connor Penhale  wrote:

> Hi All!
>
> Is there any additional feedback that the community can provide me on the
> KIP? Has anyone else run into requirements like 

[jira] [Created] (KAFKA-9860) Transactional Producer could add partitions by batch at the end

2020-04-13 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9860:
--

 Summary: Transactional Producer could add partitions by batch at 
the end
 Key: KAFKA-9860
 URL: https://issues.apache.org/jira/browse/KAFKA-9860
 Project: Kafka
  Issue Type: Improvement
Reporter: Boyang Chen
Assignee: Boyang Chen


As of today, the producer's transaction manager keeps track of the partitions involved 
in the current transaction. Each time it sees a new partition, it sends a request to 
add all the involved partitions to the broker, which results in multiple requests per 
transaction. If we batched this work at the end of the transaction, we would save 
unnecessary round trips.
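
For context, a minimal sketch of the client-side pattern this affects (topic name and configs are hypothetical); the batching itself would happen inside the producer's transaction manager, not in user code:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TxnPartitionBatchingSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("transactional.id", "demo-txn");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            // Today, the first send to each not-yet-registered partition triggers a
            // separate AddPartitionsToTxn round trip from the transaction manager.
            for (int partition = 0; partition < 3; partition++) {
                producer.send(new ProducerRecord<>("demo-topic", partition, "key", "value"));
            }
            // The proposal: register all touched partitions in a single request here,
            // right before the commit, instead of one request per newly seen partition.
            producer.commitTransaction();
        }
    }
}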



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Kafka topology monitoring

2020-04-13 Thread timothy
Good Afternoon and hope you all are well. 

I'm trying to monitor my topologies so that I can look at the
following metrics.

Thread metrics:

- Average time for commit, poll, and process operations

- Tasks created per second, tasks closed per second

I have seen metrics in Micrometer, but they don't seem to expose the
metrics above. Is there a tool that I can use to get this level of
detailed monitoring?
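
For context, something like the sketch below is the level of detail I'm after (assuming access to the running KafkaStreams instance; the metric group and names used here are an assumption and may differ by version):

import java.util.Map;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.streams.KafkaStreams;

public class ThreadMetricsSketch {
    // Print the per-thread latency and task lifecycle metrics from a running instance.
    static void printThreadMetrics(final KafkaStreams streams) {
        for (final Map.Entry<MetricName, ? extends Metric> entry : streams.metrics().entrySet()) {
            final MetricName name = entry.getKey();
            // Thread-level metrics are expected in the "stream-thread-metrics" group,
            // e.g. commit-latency-avg, poll-latency-avg, process-latency-avg,
            // task-created-rate and task-closed-rate.
            if ("stream-thread-metrics".equals(name.group())
                    && (name.name().endsWith("-latency-avg")
                        || name.name().startsWith("task-created")
                        || name.name().startsWith("task-closed"))) {
                System.out.printf("%s %s = %s%n", name.tags(), name.name(), entry.getValue().metricValue());
            }
        }
    }
}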

best regards 

Timothy

Re: [DISCUSS] KIP-574: CLI Dynamic Configuration with file input

2020-04-13 Thread Aneel Nazareth
Thanks to David Jacot for his suggestions. Would it be possible for a
committer to have a look at this PR? Thanks!

https://github.com/apache/kafka/pull/8184

On Wed, Apr 1, 2020 at 12:30 PM Aneel Nazareth  wrote:
>
> I have a PR that I think is ready for review here:
> https://github.com/apache/kafka/pull/8184
>
> Feedback would be most welcome!
>
> On Mon, Mar 30, 2020 at 6:57 PM Colin McCabe  wrote:
> >
> > Just as a note about whether we should support "-" as a synonym for 
> > STDIN...  I agree it's kind of inconsistent.
> >
> > It may not be that big of a deal to drop support for STDIN.  A lot of UNIX 
> > shells make it easy to create a temporary file in scripts-- for example, 
> > you could use a "here document" ( 
> > https://en.wikipedia.org/wiki/Here_document )
> >
> > best,
> > Colin
> >
> >
> > On Fri, Mar 27, 2020, at 07:46, Aneel Nazareth wrote:
> > > Update: I have simplified the KIP down to just adding the single new
> > > --add-config-file option. Thanks for your input, everyone!
> > >
> > > On Thu, Mar 26, 2020 at 10:13 AM Aneel Nazareth  
> > > wrote:
> > > >
> > > > Hi Kamal,
> > > >
> > > > Thanks for taking a look at this KIP.
> > > >
> > > > Unfortunately the user actually can't pass the arguments on the
> > > > command line using the existing --add-config option if the values are
> > > > complex structures that contain commas. --add-config assumes that
> > > > commas separate distinct configuration properties. There's a
> > > > workaround using square brackets ("[a,b,c]") for simple lists, but it
> > > > doesn't work for things like nested lists or JSON values.
> > > >
> > > > The motivation for allowing STDIN as well as files is to enable
> > > > grep/pipe workflows in scripts without creating a temporary file. I
> > > > don't know if such workflows will end up being common, and hopefully
> > > > someone with a complex enough use case to require it would also be
> > > > familiar with techniques for securely creating and cleaning up
> > > > temporary files.
> > > >
> > > > I'm okay with excluding the option to allow STDIN in the name of
> > > > consistency, if the consensus thinks that's wise. Anyone else have
> > > > opinions on this?
> > > >
> > > > On Thu, Mar 26, 2020 at 9:02 AM Kamal Chandraprakash
> > > >  wrote:
> > > > >
> > > > > Hi Colin,
> > > > >
> > > > > We should not support STDIN to maintain uniformity across scripts. If 
> > > > > the
> > > > > user wants to pass the arguments in command line,
> > > > > they can always use the existing --add-config option.
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > On Thu, Mar 26, 2020 at 7:20 PM David Jacot  
> > > > > wrote:
> > > > >
> > > > > > Rajini has made a good point. I don't feel strong for either ways 
> > > > > > but if
> > > > > > people
> > > > > > are confused by this, it is probably better without it.
> > > > > >
> > > > > > Best,
> > > > > > David
> > > > > >
> > > > > > On Thu, Mar 26, 2020 at 7:23 AM Colin McCabe  
> > > > > > wrote:
> > > > > >
> > > > > > > Hi Kamal,
> > > > > > >
> > > > > > > Are you suggesting that we not support STDIN here?  I have mixed
> > > > > > feelings.
> > > > > > >
> > > > > > > I think the ideal solution would be to support "-" in these tools
> > > > > > whenever
> > > > > > > a file argument was expected.  But that would be a bigger change 
> > > > > > > than
> > > > > > what
> > > > > > > we're talking about here.  Maybe you are right and we should keep 
> > > > > > > it
> > > > > > simple
> > > > > > > for now.
> > > > > > >
> > > > > > > best,
> > > > > > > Colin
> > > > > > >
> > > > > > > On Wed, Mar 25, 2020, at 01:24, Kamal Chandraprakash wrote:
> > > > > > > > STDIN wasn't standard practice in other scripts like
> > > > > > > > kafka-console-consumer.sh, kafka-console-producer.sh and 
> > > > > > > > kafka-acls.sh
> > > > > > > > in which the props file is accepted via consumer.config /
> > > > > > > producer.config /
> > > > > > > > command-config parameter.
> > > > > > > >
> > > > > > > > Shouldn't we have to maintain the uniformity across scripts?
> > > > > > > >
> > > > > > > > On Mon, Mar 16, 2020 at 4:13 PM David Jacot 
> > > > > > > > 
> > > > > > wrote:
> > > > > > > >
> > > > > > > > > Hi Aneel,
> > > > > > > > >
> > > > > > > > > Thanks for the updated KIP. I have made a second pass over it 
> > > > > > > > > and the
> > > > > > > > > KIP looks good to me.
> > > > > > > > >
> > > > > > > > > Best,
> > > > > > > > > David
> > > > > > > > >
> > > > > > > > > On Tue, Mar 10, 2020 at 9:39 PM Aneel Nazareth 
> > > > > > > > > 
> > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > After reading a bit more about it in the Kubernetes case, I 
> > > > > > > > > > think
> > > > > > > it's
> > > > > > > > > > reasonable to do this and be explicit that we're ignoring 
> > > > > > > > > > the
> > > > > > value,
> > > > > > > > > > just deleting all keys that appear in the file.
> > > > > > > > > >
> > > > > > > > > > I've updated the KIP wiki page to reflect 

Re: [DISCUSS] (KAFKA-9806) authorize cluster operation when creating internal topics

2020-04-13 Thread Colin McCabe
On Thu, Apr 9, 2020, at 09:36, Paolo Moriello wrote:
> Hi Colin,
> 
> Thanks again for checking this out.
> 
> Indeed you are right, a configuration problem is what leads to
> authorization failure (and consequently to the internal topics bug): i.e.
> incorrect ACLs configuration. In particular, in case of insufficient
> cluster-level ACLs, so if one does not include the broker CN required to
> allow inter-broker communication when client SSL is required:
> 1) FindCoordinator request completes successfully, and __consumer_offsets
> topic is created in zk
> 2) but subsequent UpdateMetadata and LeaderAndIsr fail. This leaves the
> internal topic in a bad state
> 
> A deeper look confirmed that the change I proposed initially does not work,
> since authorizing the user principal is not enough to prevent the issue.
> However, I believe that we should still avoid creating the internal
> topic(s) at all in case of insufficient broker ACLs (which means, make
> FindCoordinator request fail since we won't have the required metadata). A
> possibility could be to try to check the existence of brokers' ACLs before
> creating the internal topic.
> Let me know if you have any feedback.

Hi Paolo,

If the problem is broker ACLs being configured incorrectly so that it can't 
receive requests from the controller, a lot of things will fail.  This isn't 
really related to anything with FindCoordinator.
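
For what it's worth, the piece that is usually missing in that scenario is a ClusterAction ACL on the Cluster resource for the broker's principal. A rough sketch with the admin client (the broker principal below is hypothetical; kafka-acls.sh can be used to do the same thing):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class BrokerAclSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Allow the broker principal to perform ClusterAction on the cluster resource,
            // which is what inter-broker requests such as LeaderAndIsr and UpdateMetadata need.
            AclBinding binding = new AclBinding(
                new ResourcePattern(ResourceType.CLUSTER, "kafka-cluster", PatternType.LITERAL),
                new AccessControlEntry("User:CN=broker.example.com", "*",
                    AclOperation.CLUSTER_ACTION, AclPermissionType.ALLOW));
            admin.createAcls(Collections.singleton(binding)).all().get();
        }
    }
}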

best,
Colin


> 
> Thanks,
> Paolo
> 
> 
> On Tue, 7 Apr 2020 at 17:12, Colin McCabe  wrote:
> 
> > On Tue, Apr 7, 2020, at 08:08, Paolo Moriello wrote:
> > > Hi Colin,
> > >
> > > Thanks for your interest in this. I agree with you, this change could
> > break
> > > compatibility. However, changing the source principal is non trivial in
> > > this case. In fact, here the problem is not in the internal topic
> > creation
> > > - which succeeds - but in the two subsequent LeaderAndIsr and
> > > UpdateMetadata requests.
> > >
> > > When a consumer tries to consume for the first time, the creation of
> > > internal topic completes, zk-nodes are filled with the necessary
> > metadata,
> > > and this triggers a ZkPartitionStateMachine (PartitionStateMachine.scala)
> > > update which, in turn, makes the ControllerChannelManager
> > > (ControllerChannelManager.scala) send LeaderAndIsr and UpdateMetadata
> > > requests to the brokers; (I can be wrong, but I believe that this
> > requests
> > > are already being executed with broker principal). These requests fail
> > > because we authorize the cluster operation there, so the
> > __consumer_offsets
> > > topic remains in a bad state.
> >
> > I might be misunderstanding something here, but it seems to me that if
> > LeaderAndIsrRequest or UpdateMetadataRequest are failing with authorization
> > errors, then there is a configuration problem on the cluster which doesn't
> > have anything to do with the __consumer_offsets topic.
> >
> > >
> > > Is there a reason to not authorize the operation for find coordinator
> > > requests as well?
> >
> > To be clear, we can't change the authorization for FindCoordinatorRequest.
> >
> > best,
> > Colin
> >
>


[jira] [Created] (KAFKA-9859) kafka-streams-application-reset tool doesn't take into account topics generated by KTable foreign key join operation

2020-04-13 Thread Levani Kokhreidze (Jira)
Levani Kokhreidze created KAFKA-9859:


 Summary: kafka-streams-application-reset tool doesn't take into 
account topics generated by KTable foreign key join operation
 Key: KAFKA-9859
 URL: https://issues.apache.org/jira/browse/KAFKA-9859
 Project: Kafka
  Issue Type: Bug
Reporter: Levani Kokhreidze


Steps to reproduce:
 * Create a Kafka Streams application which uses the foreign key join operation
 * Stop the Kafka Streams application
 * List the topics (e.g. with `kafka-topics.sh --list`) and verify that the foreign key 
join internal topics have been generated
 * Use `kafka-streams-application-reset` to perform the cleanup of your Kafka Streams 
application: `kafka-streams-application-reset --application-id <application-id> 
--input-topics <input-topics> --bootstrap-servers <bootstrap-servers> 
--to-datetime 2019-04-13T00:00:00.000`
 * List the topics again; you'll see that the topics generated by the foreign key 
operation are still there.

`kafka-streams-application-reset` uses the `repartition` and `changelog` suffixes to 
determine which topics need to be deleted; as a result, the topics generated by the 
foreign key join are ignored.
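
Until the tool is fixed, a possible manual cleanup is to delete those topics with the admin client. A rough sketch, assuming the internal topic naming pattern used by the foreign key join in 2.4 and a hypothetical application id:

import java.util.Properties;
import java.util.Set;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.Admin;

public class FkJoinTopicCleanupSketch {
    public static void main(String[] args) throws Exception {
        final String applicationId = "my-app";  // hypothetical application.id
        final Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // The FK join internal topics are prefixed with the application id but end in
            // "-topic" rather than "-repartition"/"-changelog", so the reset tool skips them.
            final Set<String> leftovers = admin.listTopics().names().get().stream()
                .filter(name -> name.startsWith(applicationId)
                        && (name.contains("-SUBSCRIPTION-REGISTRATION-")
                            || name.contains("-SUBSCRIPTION-RESPONSE-")))
                .collect(Collectors.toSet());
            admin.deleteTopics(leftovers).all().get();
        }
    }
}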



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Need JIRA permission to self-assign tickets

2020-04-13 Thread Manikumar
Hi,

Thanks for your interest. I just added you to the the contributors list.

Thanks,

On Mon, Apr 13, 2020 at 7:13 PM Luke Chen  wrote:

> Hi devs,
> I need the JIRA permission to self-assign tickets.
> Please help grant me the permission.
>
> JIRA username: showuon
>
> Thank you very much.
> Luke
>


Need JIRA permission to self-assign tickets

2020-04-13 Thread Luke Chen
Hi devs,
I need the JIRA permission to self-assign tickets.
Please help grant me the permission.

JIRA username: showuon

Thank you very much.
Luke


[jira] [Created] (KAFKA-9858) CVE-2016-3189 Use-after-free vulnerability in bzip2recover in bzip2 1.0.6 allows remote attackers to cause a denial of service (crash) via a crafted bzip2 file, related

2020-04-13 Thread sihuanx (Jira)
sihuanx created KAFKA-9858:
--

 Summary: CVE-2016-3189  Use-after-free vulnerability in 
bzip2recover in bzip2 1.0.6 allows remote attackers to cause a denial of 
service (crash) via a crafted bzip2 file, related to block ends set to before 
the start of the block.
 Key: KAFKA-9858
 URL: https://issues.apache.org/jira/browse/KAFKA-9858
 Project: Kafka
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.1, 2.3.1, 2.2.2
Reporter: sihuanx


I'm not sure whether CVE-2016-3189 affects Kafka 2.4.1 or not. This 
vulnerability is related to rocksdbjni-5.18.3.jar, which is compiled with 
bzip2.

Is there any task or plan to fix it?

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9857) Failed to build image ducker-ak-openjdk-8 on arm

2020-04-13 Thread jiamei xie (Jira)
jiamei xie created KAFKA-9857:
-

 Summary: Failed to build image ducker-ak-openjdk-8 on arm
 Key: KAFKA-9857
 URL: https://issues.apache.org/jira/browse/KAFKA-9857
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Reporter: jiamei xie
Assignee: jiamei xie


Building the ducker-ak-openjdk-8 image fails on arm; the log is below. This 
issue is to fix it.

kafka/tests/docker$ ./run_tests.sh
Sending build context to Docker daemon  53.76kB
Step 1/43 : ARG jdk_version=openjdk:8
Step 2/43 : FROM $jdk_version
8: Pulling from library/openjdk
no matching manifest for linux/arm64/v8 in the manifest list entries
docker failed
+ die 'ducker-ak up failed'
+ echo ducker-ak up failed
ducker-ak up failed
+ exit 1



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9856) GetOffsetShell does not support SSL or Kerberos

2020-04-13 Thread Tek Kee Wang (Jira)
Tek Kee Wang created KAFKA-9856:
---

 Summary: GetOffsetShell does not support SSL or Kerberos
 Key: KAFKA-9856
 URL: https://issues.apache.org/jira/browse/KAFKA-9856
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 2.4.0
 Environment: All
Reporter: Tek Kee Wang


There is no --command-config option that would allow SSL or Kerberos 
parameters to be specified.
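
As a workaround, the same information can be fetched with a plain consumer that loads its security settings from a properties file; a minimal sketch (class name and arguments are hypothetical):

import java.io.FileInputStream;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class EndOffsetsSketch {
    public static void main(String[] args) throws Exception {
        // args[0]: client properties file containing bootstrap.servers plus SSL/Kerberos settings
        // args[1]: topic name
        Properties props = new Properties();
        props.load(new FileInputStream(args[0]));
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions = consumer.partitionsFor(args[1]).stream()
                .map(info -> new TopicPartition(info.topic(), info.partition()))
                .collect(Collectors.toList());
            // Print the latest offset of every partition of the topic.
            Map<TopicPartition, Long> endOffsets = consumer.endOffsets(partitions);
            endOffsets.forEach((tp, offset) -> System.out.println(tp + " -> " + offset));
        }
    }
}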



--
This message was sent by Atlassian Jira
(v8.3.4#803005)